Rabbit Dynamic C - not optimizing

Don't ever call the Dynamic C compiler "optimizing". Here is an example:

longval = intval;

This gets compiled to:

ld hl,0x0000
add hl,sp
push hl        ; a pointer to longval
ld hl,(sp+12)  ; hl = intval
ex de,hl       ; de = intval
ld a,d         ; a = msb of intval
rla            ; Carry = sign of intval
sbc hl,hl      ; hl = sign extension of intval
ld b,h
ld c,l         ; bc = sign extension of intval
pop hl         ; get that pointer to longval
call slong_    ; store bc:de at (hl) with interrupts disabled

The reason they use the special 32-bit store function slong_ is that they want to make sure all such stores are atomic, so that an ISR won't find an inconsistent (partially set) value. But longval was an automatic variable allocated off the stack. There was no reason to go through so much trouble for a variable not declared as volatile. They could have simply done this:

ld hl,(sp+12)  ; hl = intval
ld (sp),hl     ; lsb of longval is set
rl h           ; Carry = sign of intval
sbc hl,hl      ; hl = sign extension of intval
ld (sp+2),hl   ; msb of longval is set

I have other examples of simple local optimizations that were missed, such as failing to recognize that bit shifts of 8 and 16 bits can be accomplished by simply moving registers. Even a "non-optimizing" compiler should be expected to do these things right.
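For what it's worth, the shift-by-8 case mentioned above boils down to a byte move. Here is a minimal C model of that identity (the function name is mine, for illustration - this is not Dynamic C output):

```c
#include <stdint.h>

/* Shifting a 16-bit value left by 8 only moves the low byte into
   the high byte. On a Z80-family CPU that is "ld h,l / ld l,0"
   instead of eight single-bit shifts. This C is just a model of
   the identity, not actual compiler output. */
uint16_t shl8(uint16_t x)
{
    uint8_t lo = (uint8_t)(x & 0xFF);   /* isolate the low byte */
    return (uint16_t)(lo << 8);         /* it becomes the high byte */
}
```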

-Robert Scott Ypsilanti, Michigan (Reply through newsgroups, not by direct e-mail, as automatic reply address is fake.)

Reply to
Robert Scott

That's not terribly surprising. Many compilers, especially for embedded targets, generate unoptimized code. This appears to be a case of generating code for some sort of generalized case rather than the specific one. I know of one compiler for an embedded target that generates pretty good code until you use bit fields; it then stumbles all over itself to generate correct, but horrible, code.

The fact that it is allocated off the stack is irrelevant, because a pointer could be set to its address, allowing asynchronous access. The fact that it is not declared volatile is also irrelevant as far as Standard C is concerned: only integer types of sig_atomic_t are guaranteed atomic access. Volatile means that an access is not optimized out, but it does not guarantee atomicity.
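To illustrate the sig_atomic_t point, here is a sketch of the one pattern Standard C does bless (the flag and handler names are mine, for illustration):

```c
#include <signal.h>

/* Standard C guarantees atomic access only for objects of type
   volatile sig_atomic_t written from a signal handler; "volatile"
   on a 32-bit long does not make its store atomic on an 8/16-bit
   CPU like the Rabbit. */
static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;   /* the one store the standard blesses */
}

int install(void)
{
    return signal(SIGINT, handler) == SIG_ERR ? -1 : 0;
}
```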

Sometimes I think about writing a compiler post-processor that takes the assembly output and optimizes known patterns, such as the one you describe. Most compilers offer many such possibilities. ;-)
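Such a post-processor is essentially a peephole optimizer over the assembly text. A toy sketch of the idea in C, matching one hypothetical pattern (the pattern and the line-array representation are my assumptions, not anything Dynamic C provides):

```c
#include <string.h>

/* Toy peephole pass: scan assembly lines and delete a "push hl"
   immediately followed by "pop hl", a redundant pair. A real tool
   would take a table of such patterns; this only shows the shape
   of the idea. Returns the number of lines kept in out[]. */
static int peephole(const char *in[], int n, const char *out[])
{
    int i = 0, m = 0;
    while (i < n) {
        if (i + 1 < n &&
            strcmp(in[i], "push hl") == 0 &&
            strcmp(in[i + 1], "pop hl") == 0) {
            i += 2;              /* drop the redundant pair */
        } else {
            out[m++] = in[i++];  /* keep everything else */
        }
    }
    return m;
}
```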

Thad

Reply to
Thad Smith

Whether the local variable is allocated on the stack or in a register is not relevant - what matters is whether it can be seen from the outside or not. The compiler should know this (it's called "escape analysis") - basically, if the address of a local variable is not taken, and it is not volatile, then there is no way for it to be changed or viewed outside the function, and the compiler should optimise accordingly.
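A small C sketch of the distinction (the function names are mine): in f() the local's address is never taken, so nothing outside can observe it; in g() the address escapes, and the compiler must assume it may be accessed asynchronously.

```c
/* Local never escapes: the compiler may keep it in registers and
   needs no special (atomic) store sequence. */
long f(int x)
{
    long local = x;        /* address never taken */
    return local * 2;
}

/* Local escapes through a pointer: caching it in registers across
   the call would be unsafe in general. */
static long *escaped;
static void publish(long *p) { escaped = p; }

long g(int x)
{
    long shared = x;
    publish(&shared);      /* address escapes here */
    return shared;
}
```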

Reply to
David Brown


I have never been very impressed with Dynamic C, though that's more to do with its XMEM limitations. As you say, it produces very verbose code. I converted a piece of SPI code from C to assembler and it ran 20+ times faster. If you pay peanuts, you get a monkey compiler.

About a year ago I was writing code for a Motorola ColdFire processor and using a debugging tool. I forget whose C compiler it was, but I was very impressed with the code it produced. I couldn't have written it better in assembler myself.

Reply to
Fred

My thoughts are:

It is a pretty lousy product with a pretty huge base of ready-written code, which of course isn't tested enough and isn't bug-free.

But, as is often true, you get what you pay for - so it was pretty cheap, wasn't it?

Reply to
MArk

Robert,

As I recall, nobody ever said it was "optimizing". As far as I'm concerned, it's a good product and the price was right. Having started from scratch with a need to create an embedded TCP/IP controller board, I can't imagine getting through the learning curve any faster than I did with DC and the Rabbit products. The user support group is excellent.

When choosing a controller for a project, really, what's available in terms of compilers is often more important than the controller itself.

Mike

Reply to
Mike Turco

I don't think that this really gives the right picture. I also don't know when you had your experience with the product. My experiences so far have been pretty positive in the end. Especially for a product where you need only a couple of pieces, the Rabbit approach is quite good. The compiler is of course not that great, but if the need arises, writing critical functions in assembler is, after all, not rocket science.

As you say, the "pretty huge base" of ready-written code does contain bugs, but you get all the source, so if you run into something critical you can fix it on your own. Still way better than writing it all from scratch. In any case, the later releases are by now fairly mature and stable - at least the stuff I have used so far.

Another thing to mention is that there is also another compiler from a third party (Softools, if memory serves) which is fully ANSI-compliant and optimizing. The downside of that compiler is that it's a bit expensive for this class of product.

Markus

Reply to
Markus Zingg

That is true. But even a non-optimizing compiler might be expected to produce somewhat efficient code. The more I look at the listing file produced by this compiler, the more I am convinced it is a sloppy piece of work. Something as simple as

*++ptr = intval;

gets compiled with several clearly unnecessary instructions that may have a purpose in some other context, but not in this one. I don't expect global optimization, but I do expect any serious compiler to include the obvious local optimizations that can be done quite easily.

That being said, I do appreciate the complete TCP/IP implementation that so far has worked without any problems.

-Robert Scott Ypsilanti, Michigan (Reply through newsgroups, not by direct e-mail, as automatic reply address is fake.)

Reply to
Robert Scott

Why doesn't someone write a peephole optimiser to operate on the generated assembly code?

Reply to
Bill Davy

It was pretty strange to me how long it takes just to toggle a bit on an RCM2200 or RCM2000. I think it is a product that is not yet finished enough; they are distributing a system that is still in development (beta) and continuously improving it based on people's reports.

All in all, as I said, you get what you pay for. In this case you even get more than you paid for, but that still doesn't mean the product is great (the processor and hardware aren't bad at all, but the compiler and IDE really are .....)

Reply to
Mickey

Amazing, isn't it? The compiled code is really quite slow. (I've been using the 2200 module.)

I can't argue about the compiler, but what is wrong with the IDE? It works pretty well for me.

Mike

Reply to
Mike Turco

It worked just fine for our purpose, allowing us to create a TCP capable IO brick using the RCM3100 with a couple weeks of my development time.

No, the code probably isn't as efficient as it could be, but you know what... the damn thing works!

--
Alex Pavloff - remove BLAH to email
Software Engineer, ESA Technology
Reply to
Alex Pavloff

Well Mike, I could list a number of things that are wrong with the IDE, but the thing that bothered me the most was that when you compiled your code, all the maximized windows would revert to normal size, so you had to maximize them again. And you couldn't kill the compilation message window. Well, I won't debate this; I think the IDE looks like it was made by children.

What do you think about the debugger (pure shit)?

The only feature I can give credit to is "printf" for debugging.

Reply to
Mickey

Well, there is a reason that K&R themselves caution against using bit-fields :)

Doug

Reply to
Doug Dotson


The debugger -- you mean the break point function?

I got the whole setup for $100, dev board, cable, c-like compiler, etc. It certainly wasn't the nicest or best development environment in the world, but it was cheap and everything worked.

I'm not familiar with all that's out there, much less the newest and hottest stuff.

Dollar for dollar, given the requirements for TCP/IP, a handful of I/O, a compiler, an IDE and a debugger, what do you think would be a better deal? Had I been aware of something better at the time, I certainly would have gone for it.

Mike

Reply to
Mike Turco

I already stated that it has a very good capability/price ratio. I am only against having to debug so many bugs (a few bugs would be OK) when the product is on the market. I can't allow myself to produce devices with many bugs, because my customers would complain, and so on...

But if I had to do a device with TCP/IP I'd use the RCM module again.

I wanted to say that if they had spent a few more months debugging and making the IDE friendlier, they would have several times more customers.

I had some problems with devices made with RCM2200 modules on a TCP/IP network. It was exhausting to trace where the problem was, and it turned out to be in the server application on the PC, so I'm still under the influence of that bad experience (not a Rabbit fault, but my stupid boss's fault).

Anyway, I won't debate RCM2200, Rabbit, or Dynamic C in this thread any more. I have said all that I wanted to say.

Best regards, Mickey

Reply to
Mickey
