GCC compiler for ARM7-TDMI

They are excellent, but too expensive for many users.

Leon

Reply to
Leon

In article , Dr Justice writes

The first two are true in my experience. For example, in this thread you say the only "trustworthy" benchmark is one from a GCC supplier, which puts GCC as the best compiler.

Other commercial ARM compiler libraries use the full Dinkumware, which is not a slimmed-down anything. The only compiler I know of that uses a slimmed-down printf is the old Keil printf. So you are comparing like with like, and some of the GCC libraries are not that good.

I was making nothing up.

As you say, let's just drop it. I would have replied offline, but you have a fake email address.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris Hills

What GCC libraries? The compiler doesn't come with a C library; you must provide one yourself.

-a

Reply to
ammonton

It may be that I shouldn't be writing this, but:

[snip]

which

[snip]

Perhaps I'm being unclear(?) I specifically said it was the one that I had found and trusted, and I have said that there may be better ones that I do not know about. I have not claimed that it is the only trustworthy benchmark that exists. Furthermore, the conclusion of the app note was more or less that ARM's compiler was best overall, GCC could just about hang with IAR, and Keil fell somewhat behind.

This may not be the case of a GCC fan, as you say, but of a GCC critic/sceptic. I note that you run a company that deals in non-GCC-based compilers. That's fine; I've stated what my impression of GCC is and you have stated yours. The irony is that I'm not in disagreement with you as such: yes, GCC is not as good as the better commercial ones. Still, I reckon it's just fine for many people and projects. I do not have any extensive opinions on the various libraries.

fake email address. Yes, I normally conceal real addresses on usenet. If you want to, you're welcome to mail to aleistad xat chello ydot no.

DJ

Reply to
Dr Justice

GCC is certainly better than the cheaper compilers, however it is significantly behind the best commercial compilers. On large benchmarks the difference is around 20-30% on both codesize and performance. This includes code that has been written for GCC, like the Linux kernel.

That is very true - and that includes the above benchmark! Most of the benchmarking efforts I've seen are fatally flawed in many aspects. Some obvious flaws in this one:

  1. What is codesize? Compilers have different code generation strategies, and some compilers inline more data in code than others. It's unfortunate there is no standard way to measure codesize from ELF images, but it's essential that it is measured correctly. I use "ROM size", i.e. everything that ends up in flash, including code, literal pools, switch tables, strings, constant data and RW-data initializers. This counts more than pure code, but it is the only reliable way to measure codesize.
  2. It's tiny: the total size without libraries is about 22KB of Thumb code. That is at least two orders of magnitude too small for a codesize benchmark... It is dangerous to draw conclusions from something so small, as generated code varies a lot depending on the source code.
  3. Measuring codesize of performance benchmarks. Benchmark code is typically small, badly written (*) and optimised for performance, so measuring the codesize of performance benchmarks is not representative. For example, table 2 shows the ARM compiler generating on average 14% smaller ARM code than GCC, yet it only manages to win 4 of the 8 benchmarks. There are large variations in codesize due to the code being very repetitive, so a single optimization can make a huge difference.

(*) I keep being amazed by how much people rely on in-house benchmarks when deciding on multi-million dollar deals. The code is typically tiny, written by someone with no programming experience (let alone experience in writing efficient code), so the score can often be improved significantly with the right compiler options, small source code changes or trivial compiler tweaks...

  4. Interworking: it appears some compilers have interworking on and some don't. Even though it is claimed the difference is small, it can have a significant effect on both codesize and performance. To be fair, it should have been turned off (or on) in all compilers tested.
  5. Inlining: for dubious reasons, inlining is disabled in the ARM compiler, but not in any of the other compilers. The ARM compiler contains a finely tuned inliner (which helps performance a lot), so turning it off while allowing GCC to inline isn't exactly fair...

Benchmarking is seriously difficult, fair benchmarking is next to impossible...

Wilco

Reply to
Wilco Dijkstra

Some good comments there.

You are probably right. And of course there will always be biases with compiler vendors testing compilers.

Still, IMO there are degrees of trustworthy here. Eg. this:

formatting link
I find to be rather implausible (never mind the KEIL buy out). Contrast with this, mentioning Keil at the end.

formatting link
Not sure what to make of that, or indeed the two together.

For reference here's IARs:

formatting link

And here's Imagecrafts take on benchmarking:

formatting link

The Raisonance benchmark was the most detailed I could find, not least in its description of the benchmarking premises. The just-a-bunch-of-numbers-on-a-webpage "benchmarks", sometimes with 'too strange' results, I find harder to put much trust in.

These are the things that are readily available to us mere mortals to judge from, in addition to comments e.g. here in c.s.a.e. It's not easy to know what to think. As asked previously: if anybody has pointers to good benchmarks, now would be a good time to post them.

DJ

Reply to
Dr Justice

Of course. There are many tricks one can pull. And then there is how you present it which provides even more possibilities to misrepresent the opposition...

Yes, it's totally bogus indeed. I'm surprised that page is still up since ARM replaced the old Keil compiler. The obvious flaws are:

Five-year-old compilers from the competition versus an unreleased compiler; the codesize methodology not explained; the compiler options not listed; the (modified) sources not available.

It also forgets to mention that 95% of the codesize and performance is from/spent in the C library. So they are really comparing 4 libraries rather than 4 compilers. Since GCC and ARM use optimised ARM assembler in the floating point libraries, much of the code is actually ARM, not Thumb. It is essential that the 128-bit flash interface in the LPC2294 was enabled, or ADS and GNU are heavily penalized; my guess is it wasn't, or ADS would have won on Whetstone. Finally, I believe CARM doesn't do double and uses float instead, which is an unfair advantage.

formatting link

Indeed. Apart from the flaws mentioned above, the benchmarks appear to be hand-picked from a much larger set, as there is only one case where ADS wins and none where GCC wins (contrast that with the Raisonance benchmarks, which show much more variation). Also, the totals graph is obviously wrong (the sum of the benchmarks should be around 500KBytes, not 350KB for Thumb), and the normalized result doesn't seem to be represented correctly in the totals graph (which I presume was chosen on purpose to be as unclear as possible).

The same library issues with math libraries and printf are at play here. Much of the benchmarks consist of libraries; even though some benchmarks are a bit larger, most are still small performance benchmarks measured for codesize, and the average size is pretty small (around 20KBytes with libraries, so perhaps 10KBytes without). The codesize benchmarking I do involves an average application size of around 250KBytes, with the largest application (actual mobile phone code) being 6 MBytes of Thumb code.

One issue not mentioned yet is that of accuracy. The C standard - continuing its great tradition of not standardizing much at all - doesn't set a standard here either, so the accuracy of math functions like sin can vary greatly between implementations. I know the IAR math functions (and I presume the Keil ones too) are not very accurate. Of course an inaccurate version is much smaller and faster...

Interesting. He is complaining about similar flaws and tricks...

I completely agree. You don't find this often, including all the source code and options used. It doesn't mean it is 100% correct, but at least you can do your own measurement if you want to. So you're right the Raisonance benchmarks are more trustworthy than all the others together.

We don't need any more benchmarks, there are already enough bad ones available! You simply can't trust most benchmark results even if they were done in good faith - flaws are likely due to incompetence or simply letting marketing do the benchmarking. The solution: don't let anyone who can't explain the advantages and disadvantages of geometric mean anywhere near a benchmark!

Official benchmarking consortiums aren't much better either - SPEC attracts a lot of complaints of benchmarks being gamed, and the quality of EEMBC benchmarks is unbelievably bad. It would be better to use Dhrystone as a standard

formatting link

My advice is that the best benchmark is the application you're planning to run. People put far too much emphasis on tiny benchmarks when they should simply test their existing code. Benchmarks are never going to be representative, even if you find one that matches the area you are interested in. Most compilers have free evaluation periods, so you can try your code on various compilers before deciding which to use.

Wilco

Reply to
Wilco Dijkstra

A correction to my own previous post:

The second URL should be

formatting link

And why not have a look at this too for fun:

formatting link

:-)) That may just be so.

The next thing would be to benchmark libraries instead, since there is non-negligible evidence indicating that they are a big performance factor and partly the culprit in current compiler benchmarking... (maybe someone somewhere is doing this).

Yes. Although for some it may not be so quick and easy to collect all the compilers, get their real-life projects compiled on the possibly code-size-crippled eval versions, then gather and interpret the stats. That is what benchmarks were meant to cover, if I'm not mistaken.

DJ

Reply to
Dr Justice

It's also usually in violation of the license agreements on commercial compilers to publish benchmark results.

-p

--
Gotch, n. A corpulent beer-jug of some strong ware.
Gotch, v. To surprise with a remark that negates or usurps a remark
Reply to
Paul Gotch

Only some of them. Quite a few don't have the restriction. As noted in the IAR set, Green Hills do have that restriction, but the others (Keil, IAR, ARM etc.) do not.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris Hills
[Snipped]

I agree totally. Another advantage of actually testing one's own code is seeing how easy it will be to port as well. Lots of code relies on hidden assumptions, which break very quickly at high optimisation levels. If it is an assumption that happens to hold for GCC, then GCC might be the best option even if the commercial compilers are better. The first time I had to port my own code from one compiler to another (exactly the same hardware), I was amazed by how many things broke. Since then I have been MUCH more aware of things one should not assume.

Regards Anton Erasmus

Reply to
Anton Erasmus

In article , Anton Erasmus writes

This is why you should use the eval versions of the commercial compilers.

That depends on the compiler. The more standard the code, the less this should happen. Obviously, when you get close to the HW, all compilers have non-standard extensions.

Why? I don't follow this logic?

What sort of thing? I am curious?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris Hills

Of course one should use all the available compilers. This is a good way to test the reasonably priced against the megabucks-priced commercial compilers.

What do you mean? Not all commercial compilers are better than GCC. GCC is being improved at a higher rate than most commercial compilers. Some commercial compilers are currently better, mostly in the libraries they provide. For some projects one needs the "best", whatever it is at the moment. For most, "good enough" is good enough.

It has been a while so it is a bit difficult to make a list, but I will try.

  1. Comparisons between unsigned and signed integers.
  2. Using unions to access bytes inside a long.
  3. Assuming structure member alignment would be on 32-bit boundaries on a 32-bit architecture. (This could be fixed with a compiler option.)
  4. Assuming that all compilers would cast and perform the various operators in the same order in an expression such as: long a, d; short b, c;

a=d+b+c;

(This is a simplified expression and might be a bad example. In the original code, one compiler did the operation between the shorts first, then the cast, and then the final operation. The other did the casts first.)

Most of the errors were the typical kind of errors lint would have warned about, especially ones relying on implicit casts.

Regards Anton Erasmus

Reply to
Anton Erasmus

Seems like this Green Hills tactic would significantly reduce their sales. Is their product so good that potential customers are not concerned about this restriction?

David

formatting link

Reply to
david.fowler

In article , snipped-for-privacy@gmail.com writes

It doesn't

Yes it is.

Have you never come across the GHS tools?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris Hills
