GNU tools for ARM Cortex development

[...]

which Compiler(s) would you recommend for the Coldfire and CM3?

Oliver

--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
Reply to
Oliver Betz

Re-ordering is done for many reasons - confusing the debugger is not an aim, but it is a side-effect! When you want accurate stepping during debugging, it can be useful to reduce optimisation to -O1 to avoid a fair amount of the re-ordering.

How much re-ordering affects the running code depends on the target. For some cpus, pipelining of instructions is important for speed, so the compiler does a lot of re-arranging. Typically you have a latency between starting an instruction and the resulting value being available in a register. If you can fit an unrelated instruction in between the first instruction and the code using that result, you can make use of that "dead" time.
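As a minimal sketch of that scheduling opportunity (the function and values are made up for illustration), the multiply below does not depend on the load, so a scheduling compiler can place it in the load's latency slot:

int scale(const int *p, int a, int b)
{
    int loaded = *p;        /* load started; result not needed right away */
    int other = a * b;      /* independent work a scheduler can move into
                               the load's "dead" latency time */
    return loaded + other;  /* first real use of the loaded value */
}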

It affects speed too, depending on how your code is structured. In particular, with LTO the compiler is able to inline functions across modules, which is a speed gain. gcc 4.5 is able to do even more fun things - if you have a function that is called often as "foo(1, x)" and "foo(2, x)", but never anything else for the first parameter, it can effectively re-factor your code into "foo1(x)" and "foo2(x)" as two functions with constant values. These constant values can then be used to optimise the implementation of the two copies of foo(). Typically (though not always), this leads to extra code space, but it can speed up some types of code.
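As a rough source-level sketch of what that re-factoring amounts to (the clone names are illustrative only - the compiler does this internally):

/* Original: 'mode' is only ever 1 or 2 at the call sites. */
int foo(int mode, int x)
{
    return (mode == 1) ? x * 3 : x + 7;
}

/* Effectively produced by the optimisation: two specialised clones with
 * the constant folded in, so the test on 'mode' disappears and each clone
 * can be optimised on its own. Calls like foo(1, x) then go to foo_1(x).
 */
static int foo_1(int x) { return x * 3; }
static int foo_2(int x) { return x + 7; }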

Yes, these are aimed at larger systems (for example, with code space of 0.5 MB - 16 MB), or embedded Linux systems.

There is a fair amount of information in the documentation - perhaps there's not much in the marketing material. But you can download the documentation if you want - you can also download the free version of the tools, as well as get an evaluation license.

I don't actually make much use of the standard C library in small systems, so I can't tell you much about CodeSourcery's implementation.

CodeSourcery gives you the sources, depending on the version of the license that you buy.

Reply to
David Brown
[...]

AFAIK not very important for the Coldfire V2, because...

...this doesn't happen.

This applies to accesses to chip peripherals in Coldfire microcontrollers. After a write access, subsequent write accesses are delayed for a certain time. If the compiler "collects" these writes (which might happen because they are usually volatile), the result is much slower than immediate (and therefore distributed) writes.

But as I wrote, I haven't yet tried whether I can construct such cases.

I see. Until now, I have done this manually for the relevant functions. Of course, it would be cleaner to have the compiler do the optimization.
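For illustration, a sketch of that manual distribution - the register names and addresses here are hypothetical placeholders, not real Coldfire peripherals:

#include <stdint.h>

#define PERIPH_REG_A (*(volatile uint8_t *)0x40000000u)  /* placeholder address */
#define PERIPH_REG_B (*(volatile uint8_t *)0x40000001u)  /* placeholder address */

void configure(uint8_t raw_a, uint8_t raw_b)
{
    uint8_t a = (uint8_t)(raw_a & 0x0Fu);
    PERIPH_REG_A = a;                      /* first peripheral write */
    uint8_t b = (uint8_t)(raw_b | 0x80u);  /* unrelated work placed between the
                                              writes, so the inter-write delay
                                              overlaps it instead of stalling
                                              the second write */
    PERIPH_REG_B = b;
}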

[...]

I did so, but the evaluation version contains the same documentation as newlib (!). The "Getting Started" document tells me: "CSLIBC is derived from Newlib but has been optimized for smaller code size on embedded targets. Additional performance improvements will be added in future releases".

Well, I had a brief look at newlib. IMO the "one for all" approach and the attempt to be compatible with every non-standard environment as well lead to rather convoluted code.

This seems to be the more efficient approach. I can likely implement the needed (trivial) functions in less time than it would take to fiddle with newlib (and its descendants), uClibc, etc.

I'm not sure about this (see earlier in this thread), but I haven't yet asked them.

Oliver

--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
Reply to
Oliver Betz

Is this a deliberate misrepresentation? Incorporating open source software in your own software, especially if you want to circumvent the spirit of the license, can be tricky.

"Making use of existing open source software" like for instance using Linux and a gcc compiler to develop software for an embedded system has in general fewer restrictions, is easier, involves less risks of breaking license counts (this happens sometimes despite due vigilance) etc. etc.

I once ported an embedded 68K system to gcc. 1] I ended up scrapping the (supposedly high quality) SUN C compiler we had bought a license for in order to build the 68K cross-compiler. (In order not to hamper other developments we wanted an extra license.) We already had a legal SUN compiler, but it was less pain to install a gcc compiler on the SUN than to get even the license manager working properly on a cluster with those two licenses. Now who is laying down a "legal minefield"? Building gcc took a fraction of the time and effort it took to get even a service engineer on the phone. The resulting gcc 68K cross-compiler generated the exact same code, but the compiler itself was not as fast. Even if a total build took 10 minutes instead of 5, who cares? (That would be a dramatic influence of a compiler on a total build process.)

Regarding quality: the 68K gcc was a dramatic improvement and plenty good enough to shrink the code by 30%, which allowed adding new features to EPROM-restricted hardware. A 10% difference in gcc's "performance" (read: speed) is a big deal in advertisements, but much less so in practice. (In this project we didn't care and didn't measure performance. The mechanics determined the speed.)

Groetjes Albert

1] There were no options other than gcc. I needed to change the C compiler to adapt it to existing assembler libraries, so only a source license would do. Oh, and I investigated how to get a source license for the original compiler. I gave up because I didn't even manage to establish who owned the rights to that compiler. Talking of "legal minefields", sheesh!

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Reply to
Albert van der Horst

I didn't think it was a misrepresentation, and if I was unclear then it certainly wasn't deliberate. Re-reading what I wrote, I can see how it could be misinterpreted, and I thank you for clarifying it. I was trying to say the same thing as you do below.

Yes, that's correct. It is clearly /possible/ to incorporate open source software in your own software. You just need to follow the license requirements. And many pieces of open source software aimed at embedded targets come with very developer-friendly licences to make it easier. However, some are more awkward - you have to check carefully.

Of course, the same thing applies when you are incorporating closed source software in your own software. While commercially licensed libraries and code seldom have restrictions on the license for code that links to them (unlike the GPL, for example), and seldom require prominent copyright notices (unlike some BSD licenses), they all have a license with legal requirements and restrictions. This might for example restrict you from selling it on to third parties as part of another library, or perhaps restrict the countries you can export your product to. There might be complicated requirements for royalties, auditing, developer PC node locking, etc. The issues are different from those of open source software, but there are issues nonetheless.

Use of open source developer programs like gcc is very seldom a problem (unless you have company management with bizarre company rules, of course). It is very difficult to violate typical open source licenses by simply /using/ the programs. As in your example below, this is in contrast to commercial programs, some of which can be particularly awkward to use legally and correctly within their licenses.

I've seen similar cases where using gcc was simply much faster and easier than using a commercial compiler. I've also seen cases where using gcc has taken more time and effort than getting a commercial tool in action. All one can say for sure is that there are no easy ways to judge which would be the best tool for a given job - a high price is no indication of quality or time-saving, just as a zero purchase price is no indication of low real-world costs.

My own experience with gcc for the 68k is similar - it's of similar code generation quality to the modern big-name commercial compiler I've compared it to (and much better than the older big-name commercial compiler I used previously). The balance between code size and code speed varies a little, and the techniques for squeezing the very best out of the code are compiler dependent, but certainly gcc is a fully appropriate compiler for good code on the 68k.

Slower run-time performance for the compiler itself doesn't come as a big surprise. gcc is built up in parts rather than a single speed-optimized tool. Part of this is from its *nix heritage - if you use gcc on a windows machine it can be noticeably slower than using it on a *nix machine, simply because process creation and communication is slower on windows.

Reply to
David Brown

Well, I am sure that some commercial compilers, especially those written by smart guys like Walter, and by CPU designers like ARM, will beat GCC. At the same time, here's an example of how x86 GCC does quite well in a contest against the Intel, Sun, Microsoft and LLVM compilers:

formatting link

It's an interesting paper in several ways - he points out that compilers are often so good that tactical optimizations don't make sense.

Reply to
Przemek Klosowski

The paper deals with a dozen or so optimizations and shows the variation in the generated code, which is quite useful. What is missing from the paper is any analysis of when a compiler should apply a specific optimization and how each of the compilers made that choice.

The paper touches on source-code ways to improve the quality of source-level debugging information. Source-level debugging is important, but in many fundamental ways it is one of the major aggravating factors in gcc. One of the fundamental ways to ship reliable code is to ship the code that was actually debugged and tested. Code motion and other simple optimizations leave GCC's source-level debug information significantly broken, forcing many developers to debug applications with much of the optimization off and then recompile later with optimization on, leaving that code largely untested.

Regards,

Walter..

-- Walter Banks Byte Craft Limited

formatting link

Reply to
Walter Banks

That wasn't really the point of the paper. I believe the author was aiming to show that it is better to write logical, legible code rather than "smart" code, because it makes the code easier to read, easier to debug, and gives the compiler a better chance to generate good code. There was a time when you had to "hand optimize" your C code to get the best results - the paper is just showing that this is no longer the case, whether you are using gcc or another compiler (for the x86 or amd64 targets at least). It was also showing that gcc is at least as smart, and often smarter, than the other compilers tested for these cases. But I did not see it as any kind of general analysis of the optimisations and code quality of gcc or other compilers - it does not make any claims about which compiler is "better". It only claims that the compiler knows more about code generation than the programmer.

I don't really agree with you here. There are three points to remember. One is that /all/ compilers that generate tight code will re-arrange and manipulate the code. This includes constant folding, strength reduction, inlining, dead-code elimination, etc., as well as re-ordering code for maximum pipeline throughput and cache effects (that applies more to bigger processors than small ones). You can't generate optimal code and expect to be able to step through your code line by line in logical order, or view (and change) all local variables. Top range debuggers will be able to fake some of this based on debugging information from the compiler, but it will be faked.
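A small made-up example of why line-by-line stepping stops matching the source once those transformations are applied:

#define SCALE 8                 /* illustration only */

int scaled(int x)
{
    int unused = x - x;         /* dead code: likely eliminated entirely */
    int y = x * SCALE;          /* strength reduction: may become x << 3 */
    if (SCALE > 100)            /* constant folding: the branch is never
                                   generated, so there is no line to step onto */
        y = 0;
    return y + unused;          /* folds to just 'return y' */
}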

I make no claims that gdb is such a "top range" debugger, and it is definitely the case that while many pre-packaged gcc toolchains include the latest and greatest compiler version, they are often lax about using newer and more powerful gdb versions. Add to that the fact that many people use a simple "-g" flag with gcc to generate debugging information, rather than flags giving more detailed debugging information (gcc can even include macro definitions in the debugging information if you ask it nicely), and you can see that people often don't use as powerful debugging tools as they might with gcc. That's a failing in the way gcc is often packaged and configured, rather than a failing in gcc or gdb.

Secondly, gcc can generate useful debugging information even when fully optimising, without affecting the quality of the generated code. Many commercial compilers I have seen give you a choice between no debug information and fast code, or good debug information and slower code. gcc gives you the additional option of reasonable debug information and fast code. I can't generalise as to how this compares to other commercial compilers - it may be that the ones I used were poor in this regard.

Thirdly, there are several types of testing and several types of debugging. When you are debugging your algorithms, you want to have easy and clear debugging, with little regard to the speed. You then have low optimisation settings, avoid inlining functions, use extra "volatile" variables, etc. When your algorithm works, you can then compile it at full speed for testing - at this point, you don't need the same kind of line-by-line debugging. But that does not mean your full-speed version is not debugged or tested! Thus you do some of your development work with a "debug" build at "-O1 -g" or even "-O0 -g", and some with a "release" build at "-Os -g" or "-O3 -g".
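As a sketch of the "extra volatile variables" idea, under an assumed DEBUG_BUILD macro (the macro and the names are made up for illustration):

unsigned int checksum(const unsigned char *data, unsigned int len)
{
    unsigned int sum = 0;
#ifdef DEBUG_BUILD
    /* Exists only so the intermediate value stays visible in the debugger,
     * even when the rest of the loop is optimised. */
    volatile unsigned int watch_sum = 0;
#endif

    for (unsigned int i = 0; i < len; i++) {
        sum += data[i];
#ifdef DEBUG_BUILD
        watch_sum = sum;        /* forced out to memory on every iteration */
#endif
    }
    return sum;
}

The "release" build simply leaves DEBUG_BUILD undefined, so the watch variable and its stores vanish and the loop compiles at full optimisation.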

mvh.,

David

Reply to
David Brown

He made that point, and I agree.

Not really. The author used very simple examples that for the most part can be implemented with little more than peephole optimizers. He also didn't claim otherwise.

Agreed - that has been true for quite a while in practically all compilers.

We can expect debugging information to tie the code being executed to the original statement. Inline code may have multiple links to the source. Code motion may execute code out of source order.

This isn't true in the commercial compilers I am familiar with.

gcc and gcc-based compilers (the ones with the copyright filed off) often recommend the approach you suggest. It is the change of optimization levels that high-reliability folks avoid. It has been a big problem for our customers who also use gcc.

Regards,

Walter..

-- Walter Banks Byte Craft Limited

formatting link

Reply to
Walter Banks

Is the paper available somewhere?

--
Grant Edwards               grant.b.edwards        Yow! I am NOT a nut....
                                  at               
Reply to
Grant Edwards

Tanenbaum once said in a lecture: "Global optimisers and symbolic debuggers are each other's arch-enemies." A moment of thought should be enough to convince oneself of the truth of this.

I fail to see how this situation is different for GCC than for any other compiler.

By the way.

- The very best code is tested but never debugged, because there is no need. (Chuck Moore, the inventor of Forth, reportedly never debugs. He checks his code and it works. Most of his subprograms are one line. That makes it easier, of course.)

- I always run tests on shipped code. Don't you?

- If you expect the outcomes of different optimisation levels to be different, you're living a dangerous life, because apparently you don't trust your code not to have undefined behaviour.
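As a standard illustration (not from this thread) of code whose result may legitimately change with the optimisation level because it relies on undefined behaviour:

/* Signed overflow is undefined, so the compiler may assume x + 1 > x is
 * always true and fold this to 'return 1'; without that assumption the
 * wrapped value can make it return 0 for INT_MAX. Both results are
 * "allowed", because the code itself is broken. */
int no_overflow(int x)
{
    return x + 1 > x;
}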

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Reply to
Albert van der Horst

I think we agree here - the paper was not aiming for an in-depth comparison of optimisation techniques (though I don't think vectorisation counts as a simple peephole).

It's been true for a while with many compilers, but far from "practically all" compilers. It's true for high-end compilers, and it's true for gcc, but it is not always the case for the "cheap and cheerful" market of development tools. These don't have the budget for advanced compiler technology, nor can they get it "for free" like gcc (where small ports like avr-gcc or msp-gcc can benefit from the work done on the more mainstream ports like x86). There are a great many compilers available for a great many different processors which /do/ need "hand optimised C code" to generate the best object code.

But the author's main point is that when targeting gcc or other sophisticated compilers, you want to write clear and simple code and let the compiler do the work - there are many developers who want to get the fastest possible code, but don't understand how to work with the compiler to get it.

If you know of any gcc-based compilers with the copyrights filed off, I'm sure the FSF would be very happy to hear about it - just as they would tell you if they knew that your copyrights had been violated.

High reliability folks will aim to write code that is correct and works regardless of optimisation levels and other settings, as well as being as independent as possible of compiler versions and other variables. Then they will fix on a particular setup and only ever qualify their resulting program for that build setup.

Code that works differently on different optimisation settings, other than in terms of speed, is broken code (or very occasionally, a broken compiler).

Of course, knowing that your code is correct and verifying and qualifying it for high reliability requirements is a very different thing, which is why you do your heavy-duty testing with the same settings as shipping versions. But that does not mean you can't use different settings during development! Suggesting that you don't change compiler and debugger settings during development is like suggesting you don't distinguish between prototype card designs and production card designs.

Reply to
David Brown

I agree on all of the above.

A few more relevant quotations:

Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it".

(I can't remember who said the following - I think it was either K or R of K&R C fame.) "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

Reply to
David Brown

formatting link

I entered the URL in firefox and got it. What is your problem?

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Reply to
42Bastian Schick

I do not see why "broken debug information" is an excuse for not testing the final version. In an ideal world, there should be no need to debug the final version ;-)

And if optimization breaks your code, it is likely your code was broken before (e.g. a missing 'volatile').
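The classic case, sketched here with made-up names - a flag set in an interrupt handler and polled in the main loop:

#include <stdint.h>

static volatile uint8_t data_ready;  /* without 'volatile', the optimiser may
                                        read the flag once and spin forever */

void uart_rx_isr(void)               /* hypothetical interrupt handler */
{
    data_ready = 1;
}

void wait_for_data(void)
{
    while (!data_ready) {
        /* busy-wait; correct only because the flag is volatile */
    }
}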

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Reply to
42Bastian Schick

formatting link

I suspect he was hoping to find a full text paper, with the transcript of the talk, rather than just the slides.

Reply to
David Brown

formatting link

I didn't find a paper. All I could find were "powerpoint" slides.

--
Grant Edwards               grant.b.edwards        Yow! Bo Derek ruined
                                  at               my life!
Reply to
Grant Edwards

formatting link

I don't really care about a transcript of the talk (nor the slides that accompanied the talk), I was just hoping to read the actual paper.

--
Grant Edwards               grant.b.edwards        Yow! And then we could sit
                                  at               on the hoods of cars at
Reply to
Grant Edwards

That isn't true ... optimizations frequently don't play well together and many combinations are impossible to reconcile on a given chip.

GCC isn't a terribly good compiler and its high optimization modes are notoriously unstable. A lot of perfectly good code is known to break under -O3, and even -O2 is dangerous in certain situations.

George

Reply to
George Neuner

GCC isn't a terribly good compiler. Nonetheless I think it is misleading to lump code motion with "simple" optimizations. The dependency analyses required to safely move any but the simplest straight-line code are quite involved.

George

Reply to
George Neuner
