Any constructs used in common libraries automatically count as commonly used and well-tested features. Incorrect behaviour (either because the compiler writers misinterpreted the standard, or because the compiler does not correctly implement their interpretation) will be quickly spotted and handled by the build test suites.
My point is that there is no need for Plum Hall validation for common features and common language constructs - the compiler's standard test suites should cover all these features. Don't get me wrong - I am still glad that my compiler suppliers use Plum Hall to test and improve the tools. More testing, especially testing in different and independent ways, is always good. But I don't see any benefit for *me* to know the details of the Plum Hall validation tests.
Add to this mix the problem that most embedded compilers have non-standard additions or extensions that are essential for their use and effectively invalidate independent tests. If a compiler has an extra "flash" keyword, for example, then you must either test with the keyword disabled (which means testing with different settings from those you actually use), or with it enabled (which means the compiler is no longer compliant, as you can't use "flash" as an identifier). Even if there is a middle ground, such as spelling the keyword "__flash" in the reserved namespace, the Plum Hall tests will not cover this feature, which will be heavily used in real programs, so their worth as an independent test tool is greatly diminished.
Well-written libraries will function identically regardless of the sign of plain "char".
But it is certainly important to remember that there are parts of the C standards that are not fully specified, and are implementation-defined. Plum Hall cannot validate these (although perhaps it can report on them in some way) - with multiple correct answers, there is no pass/fail test. So source code that depends on these features may work on one Plum Hall validated compiler and fail on another. The only way around this is to avoid constructs that have such dependencies (for example, by writing "signed char" or "unsigned char" explicitly whenever the sign matters). That's much the same thing as avoiding obscure (but standards-compliant) language constructs that are unlikely to be well reviewed and tested by the compiler's standard test suites.
Perhaps I'm not making myself clear. As a compiler user, or car driver, I am concerned that the tools work as I expect them to do, when *I* use them. If I don't drive in temperatures under -25 C, then I am not concerned about how the car reacts in those circumstances. That's all there is to it - you can't start adding fantasies about how I *might* theoretically be able to go outside my assumptions. A car that explodes when you start the engine at -26 C is still within my required specifications. A C++ compiler that generates subtle bugs for a six-layer deep class hierarchy with multiple virtual inheritance and overloaded virtual "friend" operators is also within my required specifications for a compiler - it's not code that I would use or consider safe.
Again, you are misunderstanding me. I'm in favour of compiler manufacturers using Plum Hall (as CodeSourcery and other gcc testers do), and doing as much testing as reasonably practical - I just don't care about validations or certifications that are not relevant to my use of the tools. A car manufacturer will use the same tests for the UK and Germany (although they might distinguish between the Norwegian and Omani markets), but I don't care about the results outside my area of interest.
I was thinking of the host here (as you already talked about different targets). That's what's relevant when you decide that it is the compiler binary that's important.
There are dozens of variants of XP (different service packs, different languages, different choice of additional software that can give different versions of system libraries).
And what about compilers that run under different OS's? There are plenty of commercial tools that run under a variety of *nix's.
Perhaps the compilers you sell are limited in this way. Other tools may come with multiple libraries, or multiple variants of their libraries (perhaps maths libraries designed for small size, high speed, or high accuracy). If you add in support for different operating systems, you get a whole new set of library issues - even the basic C libraries can come in variants for different OS's, and with or without support for things like multi-threading.
I think it is fair to say that if you want a compiler that can claim to be third-party "validated" or "certified" in some way, you are talking about a *very* restricted set of circumstances - bare-bones with no operating system, a specific small library, specific target processors, specific compiler settings, and specific host environments (including OS, processor, and installed additional software).
It's like Windows NT's famous "C2" security levels - they are only valid on a machine that is so locked-down that it is barely usable, and don't apply in the real world.
If my company felt that Plum Hall validation was relevant to our work, there would be no possible choice except to buy the test suite and run it ourselves on our workstations. Of course, it would be useful to know that our compiler suppliers had run the tests themselves - then we would know what to expect. But only our own results would have any real weight.
Validating a software design process and a software project for SIL is *always* difficult. I haven't dealt with higher SIL levels, but I did work on a project at a lower SIL level (the software was written in assembly) - continuous and extensive functional testing, and a design and development methodology that avoids or detects flaws, are the key elements. Choice of tools is obviously important - but tool validation by Plum Hall is neither necessary nor sufficient.
Don't worry too much about it - it was just a random example.
So can the binary, if you try hard enough (or have a virus on your Windows machine, as many do).
Clearly any testing or validation on a source code bundle will only be valid as long as the source code is not modified.