I posted a while back concerning help choosing an MCU/DSP and I greatly appreciated the input you guys gave me so I'm back with more questions.
I've decided to go the ARM9 way. More specifically, I went with ST's STR9 series since it had the correct processing and ADC sampling rates.
Anyhow, I ordered a kit from IAR to test things out and so far so good. My question is: should I stick with IAR, or go with something like EMBEST (they look like they have good prices for their dev kits) or GNU?
Looks like GCC is pretty well regarded but seems like it is more complex to install / use than other solutions.
GCC is well regarded by some but not by others. The people who do our compiler validation would not give it house room. But that is not the specific GCC that Rowley use. Not sure how theirs is different.
Rowley's prices have gone up somewhat of late. From their web site:-
Shared-Developer Commercial License with Sentinel Dongle. 1,555.87 GBP
Compared to IAR at 1540 GBP
That makes Rowley 15 GBP MORE EXPENSIVE than IAR.
And that is for a GCC compiler from Rowley.
The IAR compiler uses the EDG front end and the Dinkumware libraries, both of which are very highly regarded and fully tested with both Perennial and Plum Hall. IAR compilers have been validated for safety-critical use at SIL3. You can't say that about GCC.
BTW IAR have over 30 dev kits and many, many examples and BSPs. They also support a very wide range of debuggers, and RTOS awareness in the debugger.
Check the pricing.... from what I have seen the new Rowley pricing is not far off that of IAR if not the same these days. From the Rowley web site
Shared-Developer Commercial License with Sentinel Dongle. £1,521.29
The IAR system is 1400 GBP for the compiler and IDE or 1800 GBP with the debugger. (There is even an IAR version at 600 GBP)
Also, IAR supports a LOT more of the MSP430 versions than CrossWorks does.
BTW Rowley only give support to people on a paid-up maintenance contract, just like everyone else. No difference there.
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
Download the free RIDE IDE from Raisonance. It uses GNUARM as the compiler. Very easy to get going with the STR9. The free support library provided by ST is fully integrated within the Raisonance environment, and getting a project started for a specific STR7 or STR9 variant is trivial and very quick. The debugger is limited in code size in the free version.
These are the guys who actually write and maintain the gcc ports for ARM, ColdFire, and a few other targets - they don't just distribute ready-made code. They are also a lot cheaper (free if you just want the command-line stuff, $400 for more libraries, the IDE and better debugger support, and a few thousand for the full package, unlimited support, etc.).
They are also happy to run Plum Hall and other such validation suites on their tools. I don't know what results they have, or certifications, or what validation they have for the different parts of their toolkits - that's more in the realms of "professional" support.
This is part of the problem: there are many who independently maintain GCC ports, so no two GCC builds are the same. Then there are others who just distribute someone else's port with their own bits and add-ons.
So it can be very difficult to know exactly what version of the compiler you have.
So a professional package costs the same as a professional commercial compiler? Without all the traceability, history, and testing you get with a commercial compiler.
Why don't they run Plum Hall or Perennial?
It is quite difficult to do for GCC. Also, it would only be for a specific version and build; as soon as you change anything you have to re-test. And it only applies to the binary. If you release the source for someone else to build, it is not covered (because anyone could change the source or build it with a different compiler).
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
That is one of the reasons I use (and recommend) CodeSourcery as a source of embedded gcc compilers - then you *do* know who is working on it. There are certainly other people who contribute to gcc - after all, it is a modular system with dozens of major CPU targets, dozens of host OSes, and vast numbers of professional and non-professional users. Major contributors include companies like IBM, Intel, Sun, and Freescale, as well as Linux-oriented companies like Novell and Red Hat. Much of this will be for gcc front-end and middle-end development (and backends for x86, amd64, and ppc), rather than the backend for ARM, ColdFire, or other embedded targets.
The advantage of having many maintainers is that all gcc ports can benefit from this sort of development. It certainly does mean there is less coherence in the project - it can therefore take more time for improvements in different parts of gcc to make it through to tested and ready-to-use downloads (you can get them earlier if you want to patch and build yourself), and it is harder for the front-ends and back-ends to take advantage of changes - they must communicate using a common interface.
CodeSourcery are the official maintainers of the ARM and ColdFire (and a few other targets) backends, and they are also heavily involved in general gcc development. This means they have a much better view of what is going on in gcc, and what parts or changes are suitable for embedded development. They are also heavily involved in gdb development for debugging.
Have a look at their web site to see the differences. Personally, I use the free ("lite") version of their tools normally - I'm not a fan of IDEs. But I've also used the "personal" edition - it actually covers everything a professional user could want, except that there is only limited official support (there are still support forums frequented by the developers, plus installation support, bug support, and so on). The "professional" version gives you unlimited support from the engineers who write the tools:
Sourcery G++ Professional Edition customers receive unlimited support from CodeSourcery, directly from CodeSourcery's engineers. This support, provided without any per-incident fees, covers much more than just installation and basic usage. CodeSourcery will happily answer questions about porting programs from other tools, the C and C++ programming languages, using GNU features like inline assembly, and all other topics related to use of Sourcery G++. Support is provided by the same expert developers who have contributed thousands of changes to the GNU toolchain over CodeSourcery's ten-year history.
With commercial closed source tools, it's a rarity that you can get in touch with the developers directly. Often you deal with dedicated support staff who know less about the tools and target than expert users (on the other hand, they know a lot about licensing, dongles, license servers, and such like - that covers a substantial number of support issues).
I'm not sure what you mean by "traceability history" - there are certainly no problems getting older versions of gcc (google for "gcc archives" and download the source from 1988 if you want), and the CodeSourcery website keeps releases directly available for download for several years. Since all the source code is included, you have as much "history" as you could want. And if you've paid for the unlimited support, I'm sure you'll get all the help you want too.
As for testing, gcc already has substantial test suites which CodeSourcery use (amongst other test suites).
I don't know if they do or not. You could ask them if you like - I'm sure people who are happy to pay the large prices of commercial toolkits (or CodeSourcery's "professional" edition) are interested in this sort of thing, and will ask all potential tool vendors about validation before committing themselves. For example, IAR's web site says they test with Plum Hall and Perennial - they don't give any results for these tests for their compilers. I'm sure they would tell me if I ask, especially if I'm offering lots of money - and I'm sure the same applies to CodeSourcery.
Personally, I am not interested in such big-name test suites. I have no a priori reason to think that an expensive closed-source test suite is any better than an open source test suite, and plenty of reason to think that open source test suites are better in some ways (for example, if a bug is found in gcc, then a test can be added to the regression test suite to ensure that the bug is not repeated in future versions). Of course, it is always better to test with as many test suites as conveniently possible (none of them will cover everything).
Certainly there are times when it is legally important to have certifications from independent well-known third parties - but I don't think it is likely to make any realistic difference to the reliability of the end product (it is *far* more likely that any bugs are due to *my* programming, not the compiler).
CodeSourcery releases compiler builds - you download the pre-packaged binary and install it just as you would for any closed source tool. They release new versions about twice a year (with faster updates for paying customers), just like for closed source tools. They run all their internal tests and validation (whatever these may be) on these builds, just like for closed source tools.
You can also, if you like, download the source code. You can, if you like, rebuild it yourself with or without modification. As you say, any such builds will no longer be covered by whatever certifications the binaries had - but that's your own choice. By getting your gcc tools from CodeSourcery (or other serious gcc vendors), you have both options.
Have a look at this post - it explains pretty well why you don't see many gcc Plum Hall results published:
Anyone is allowed to use the free libraries. Many people have great difficulty getting started unless there is some "wizard" where one can click on an option and the library is automatically added to the link command, and the library headers to the include path. RIDE has such a wizard. The actual libraries seem to support Keil, IAR and GCC, which, given the range of available compilers, is far from everyone. How easy it is to use with a particular compiler depends on the person's skill level and the user-interface support.
I am only speaking from my understanding of the gcc organization. It is divided into various phases, terminating in a code-generation (and possible optimization) phase. This is the only phase that requires adjustment in porting. So, if fooling with the syntactical areas is avoided, the port should be relatively easy. Note that a single port covers all the languages handled by gcc, which include at least Ada, C, C++, and Fortran. GNU publishes the validation tests it runs, which should verify all of this.
[mail]: Chuck F (cbfalconer at maineline dot net)
And all the guys who want to compile for the ARMv7 architecture still use the 2007Q3 version (i.e. a one-year-old build). All later versions are broken in one way or another. Some of the builds are so broken that they can't compile a simple byte-copy loop.
I don't complain. I haven't paid anything for their GCC build.
However, after your rant and praise for the CodeSourcery packages I can't resist pointing out that their compilers are not perfect either. A bit more internal quality assurance would not hurt, for sure.
Yes, I've had a little look around with Google - it seems there is not much anyone can say except that they "test with Plum Hall". I guess Plum Hall wants interested parties to buy their own license and test themselves.
Yes, that was my mistake. I asked CodeSourcery about their testing, and they made this point as well. They actively use Plum Hall to test for language conformance, and have found and fixed issues as a result. Because of licensing issues, they can't give out details, of course.
Many of the issues that will be found using something like Plum Hall will be for unusual language uses - things that don't occur in normal real-world programming, but are nonetheless part of the language standards. That's why I don't feel these tests are of direct interest for me - if a flaw is so obscure that it is only found by such complete language tests rather than common test suites and common usage, then that flaw will not be triggered by *my* code, because I don't write obfuscated code.
If I make a system that contains a bug that leads to death, am I less responsible if I can claim that the compiler used passes Plum Hall tests? If the Plum Hall tests are considered proof that the compiler is correct, that only increases the evidence that it was *my* code that caused the failure! The only legal benefit from having the Plum Hall certification is if the fault really was in the compiler - I could claim that I didn't need to check the compiler because Plum Hall said it was OK.
I agree on that (although where do you stop? Does it only apply when the compiler binary is run on the same kind of processor as when it was tested?)
Quite. They need to eat too :-) Time and effort costs.
These test suites are not small or insignificant. The current Perennial C test suite has over 68,000 tests and the C++ suite over 124,000 (C and C++ are different languages that parted company back in the 1970s).
It does get confusing. You need to check that the thing is built right, and then that we built the right thing (verification and validation). Then you get on to testing the maths libraries, the assembler, etc. :-)
Quite... BUT it only applies to that specific binary that was tested under a set of specific conditions. When we tested an ARM binary we did 30 sets of tests on the one compiler, for 30 different ARM targets.
This is a problem... "don't normally occur": everyone has a differing view of what is normal. Any one user may only exercise about 30% of the language, but overall, between them, users cover about 99% of it. More to the point, there are a lot of things they don't know they use. How well do you know the internal workings of the library for your compiler? Do you know what the library uses? Probably best not to in some cases :-)
The problem is you either test or you don't. There is no halfway house.
If you test, you test all of it or not at all. If you test it all, you can list AND DOCUMENT the areas where you do not meet the standard. This is quite common with embedded compilers (and virtually all C99 compilers :-)): we meet the standard here, BUT in the following places we do something different. More importantly, this is what we do where we don't meet the standard.
Not true... It depends on how the compiler works internally and how it optimises. It also depends what you are doing. There are parts of C99 that the majority of compilers have not implemented; they were put in because small pressure groups got them in, and you can bet those groups use them in the one or two compilers that did implement them. But what does your compiler do when it meets those constructs?
It depends on the accident. However it does show that you have taken reasonable care to ensure the compiler meets the specification of an ISO C compiler.
Not at all. :-) It is simply that, given a set of inputs as described in the standard, you will get a set of outputs as specified in the standard. OR not, and if not, what it does instead.
I Am Not A Lawyer. Yes. However, if you are not using a tested compiler, or you roll your own from source, then you are more likely to be seen as liable, as you have used untested or unsuitable tools. How do we know they are "unsuitable" tools? Because they are not validated or tested.
You want to be hanged for one crime or guillotined for the other? :-)
Yes. However, for safety-critical use you have to show due diligence on the tools. Plum Hall (or Perennial) is only part of a validation of a compiler. However, having taken reasonable steps, if the fault is in the compiler I would think (not being a lawyer) that it would lessen your liability somewhat.
BTW the new Corporate Manslaughter Act that came into force this year says that there are fines of up to 15% of *TURNOVER* (not profit) and jail sentences for directors and responsible managers.
Not sure what you mean here. Normally you will run the compiler binary on, for example, a Windows platform and test it. If that compiler binary is distributed on various versions of Windows then there should be no problem.
For Macs I assume you would test (at least we would) on both the PPC and the Intel platforms.
If you have a compiler that you distribute on PowerPC, SPARC, Intel, Alpha, MIPS, etc. then you build and test on EACH of those.
For the embedded ARM compiler we test on 30 different targets, though in all cases the compiler binary runs on the same Windows host (XP SP2 so far).
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
You are liable only if you have been negligent. To not be negligent you have to show that you have used due diligence to identify and mitigate hazards. The compiler is a potential source of problems, so if you fail to identify this I suppose you could be considered negligent.
Once you have identified the compiler you have to do what is necessary to remove it as a potential problem. Running the compiler through a test suite is one way of showing that you are taking care, but there are others (as per my previous post in this thread) and test suites have their limitations. Somebody very high up in the FAA once said to me - "its impossible to validate a compiler as there are an infinite number of inputs" [BTW: I don't buy that statement myself]. I have spoken with people who have attempted a formal (i.e. mathematical) proof of a compiler, but this is too big a task to be viable unless you seriously restrict the inputs.
It's not proof, but it is using the "state of the art", and doing all that is practical *to show language compliance*, and therefore a worthwhile exercise if you are worried *about language compliance*. If you test your code fully from requirements to object code, then the compiler can be completely non-"standard" and your code can still be shown to be completely conformant to its specified behaviour.
On the assumption that, if your code has the potential to cause death, then you are going to test it pretty damn well, then the compiler makes little difference. Bum code generation will be picked up when a test fails.
If somebody is dead, then an investigation will find the source of the problem no matter whether you try to hide behind somebody else's unpublished results or not. At least this is the case in risk-averse, knee-jerk-reaction British society.
Out of interest, I have seen most bugs in generated code where non-standard features are used (like __interrupt qualifiers, etc.) and particularly subtle stack-frame problems that can be very reliant on particular events occurring just at the wrong moment (temporal effects) - for example, an interrupt being taken just as another interrupt is being exited, the interrupt entry/exit code being very hardware-specific. Do the Plum Hall tests pick up on that sort of thing?
I'm snipping a lot of this because it's getting a bit unwieldy - you can assume that I basically agree with your comments if I've snipped them.
First off, I don't care how my compiler reacts to these few language features put in for a small pressure group - I'm not in one of these groups, and I don't use those features.
More generally, if you code to a particular standard (say, MISRA), and you have tests and code reviews in place that enforce those standards, then you don't need to consider how your compiler deals with code outside those standards.
Consider the clichéd car analogy. If you buy a car in Britain, and only ever drive it in Britain, then you don't care if it has been tested in outside temperatures of over 45 C or under -25 C. As you won't be using it outside these parameters, you don't have to worry about them. If you know that you are always careful about checking the oil levels, then you don't care how the car reacts to a lack of oil - that corner-case situation does not apply to you. I'm sure the car manufacturer will do more testing - but *you* don't care about such tests.
Sometimes processors have bugs (like the Pentium FDIV bug), or perhaps differences in the way they handle apparently identical instructions. Do you test your compiler for compliance on all possible processors, or do you assume it works the same on each one? Sometimes the various versions of windows have differences that might affect the compiler behaviour (it's unlikely, of course, but can you be sure? Different system libraries might give different results in odd cases). Do you test them all? What about less usual circumstances, like running the windows binary under Wine, or using some sort of virtualisation software? Perhaps the compiler has a bug in its __DATE__ macro that is only apparent on 29th February - do you have to test the compiler on each day? Libraries need to be validated too - if you have different libraries, do you need to validate each compiler/library combination individually? What about compiler switches - do they also need to be considered?
My point is that testing a binary, or testing a binary/target combination, is an arbitrary boundary (albeit a reasonable choice). You could also argue that you should not consider a compiler Plum Hall validated unless you have run Plum Hall on the compiler running on *your* development PC. Or you could argue that it is fine to validate it for a particular source code / configuration combination (as long as it is then compiled with a validated compiler, of course).
How do you know that? It may use some of those constructs indirectly in the library. Which features?
Actually there are two sets of testing: one for hosted and one for freestanding compilers. However, the differences are specified in the standard. This is for compilers that target a system with an OS, versus compilers that target a system without an OS (the majority of embedded cases).
MISRA is neither a standard nor a full subset.
This is not true. It depends.... The whole point of compiler testing is that you remove the "it depends". Also, I am willing to bet that you rely on many unspecified and implementation-defined parts of the standard.
For example, there are three char types: the two integer types signed char and unsigned char, and then plain char, used for characters. Is plain char signed or unsigned? That depends on your implementation, and it has an effect across the whole standard library.
Yes you will. You cannot guarantee that the UK temperature will never go outside those limits. More to the point, you cannot guarantee that someone will not drive it to the Munich Oktoberfest. It gets below -25 C there.
Correct. However, should you knock the sump on a speed hump (and you can't miss them these days) and lose oil, then no matter how carefully you checked the oil before you travelled, you will be driving the car without oil.
Assuming the world is perfect means you don't have to test at all. However you are talking about fault conditions not does the car perform as specified.
Compiler testing is does it perform as specified not what happens if I break parts of it.
So you have a set of tests for a car to be driven in the UK and a set of tests for a car to be driven in Germany...
What if I only want to drive it in Surrey on Tuesdays? For many years people could get (unofficially and illegally) death-trap cars MOT'ed because they were only using them locally, etc.
That is what you are arguing for.
Of course the real world, and other vehicles and people, intruded into this local world.
Interesting point. Do you mean as a host or as a target? If for targets then yes we test on multiple targets. The compiler targeted for ARM processors is tested on about 30 different Arm targets.
No, we specify the host OS - normally Windows XP in our case. The compilers only run on Windows.
No. You are thinking of GCC again. The compilers we test are supplied with a single standard library. We test that. However, it does raise the point that with GCC you have multiple libraries from many sources. This makes it even more difficult to test GCC.
We do not test it under those hosts. We state a specific host OS. Emulations and virtual systems are up to you, and you would need to test on those systems for higher SILs.
Again, it makes the use of these systems far more difficult for safety-critical systems. Hence the reason why it is damned difficult to validate GCC systems.
Not every day, no. I will see if I can find out what the testing for the __DATE__ macro is since you raise it.
Yes. However most commercial compilers come with a single system library. If you are using additional libraries you have to validate them too. However that would not be part of the compiler validation.
Definitely. Tests are run for different target memory configurations etc.
This is why for C there are over 68000 tests and you run them multiple times for different configurations.
Not at all.
For SIL4 that is exactly what you do. For SIL 1-3 you can use a reference platform, i.e. WinXP SP2, as long as you are using a Windows host for development.
No. You validate the binary because the source can be altered.
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/