ARM IDE

So you have to take "reasonable steps" to validate. Since the industry norm is to use certain test suites as part of the process, that is the de facto standard.

If you can find another test suite of similar coverage (Perennial is some 70,000 tests for a C compiler, written by known people who also help develop the standard), quite apart from the build and regression test suites etc., and can show that care has been taken in the development of the compiler (i.e. the process, specifications, documentation etc.), then that is OK.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , David Brown writes

This is true for compiler companies also. Some customers do get through to the developers (though often the support team is more appropriate).

The trouble is "everyone" thinks they are an expert.

How do you tell them apart? You have first-line support weed them out. BTW, in many companies the developers do a spell on first-line support; in others, the developers see the support calls on cc.

Usually, if you can't get past first-line support, there is a reason for that. The developers are probably aware of you and have told support they don't want to talk to you. Sorry if that hurts.

This is true. The developers usually go on these under fake names. Some of the "expert users" are the developers. :-)

Quite so.

A lot of it is where you have come from and where you want to go as much as what you want to do now.

This is my point: you can possibly validate a specific GCC binary, but not GCC per se, nor GCC for ARM, nor GCC from a particular supplier.

When we validate compilers we normally have to sign an NDA and have an internal view of their development.

This would be essential. I am sure CodeSourcery can supply this. However, you would only be validating that particular version of the binary from CodeSourcery, not GCC or the source.

"Most" that is the problem. GCC is developed asynchronously by multiple sources who may or may not feed their changes back to the FSF

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , Walter Banks writes

My point has been that there are no comparable tests done on the GCC compilers in general. One or two specific versions from specific vendors are tested well, but that only covers the specific binary tested, NOT the source code.

So, in general, GCC is not validatable as a family, just specific binaries, and only if you can get access to the company that builds that binary.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , David Brown writes

Possibly, maybe, and not normally. That said, several commercial compilers do make the library source available; it's just not FOSS.

So you have an untested and unqualified non-standard library....

The problem is that without the full regression tests the user has no idea whether he has fixed the bug without causing any other problems. See the definition of debugging :-)

Normally they don't assume the user is the problem. Asking for a complete example is not inappropriate.

Often what the user reports and what they are actually doing and what they are trying to do are three different things.

I am glad that we agree on that. You are there to USE the tools to develop your products, not to develop the tools.

I don't see how. See above.

Not really. You need to archive the actual binary you are using (possibly with a PC to run it on).

Why would you need the source here?

What were you talking about?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Why "not normally"? Why do you always assume users are incompetent? I know that (unfortunately) many are, but I don't want to get a lobotomy to be allowed to talk to support. Hence I try to give the support guy enough clues that I know what I'm talking about. The more frustrating it is to get bounced off for formal reason ("no FS image included") without anyone actually having *read* the problem.

Right. And that's the reason why I want my library bug fixes approved by a support guy. Plus, there usually are contracts that don't allow me to ship modified code.

For a FOSS library, I'd fix the thing, send the patch to the mailing list, and could in theory forget about it.

I had posted a code snippet. Can you give an example where my "fixed" version is wrong? In this case, I had the ATAPI specs on my side, which state that there is a 32-bit little-endian value. You do not assemble those from possibly signed chars.
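
(As an illustration of the pitfall Stefan describes, here is a minimal sketch in C; the function name and buffer layout are mine, not from his original snippet.)

    #include <stdint.h>

    /* Assemble a 32-bit little-endian value from a byte buffer.
     * With plain 'char' (signed on many ABIs), a byte such as 0x80
     * would sign-extend to 0xFFFFFF80 before the shift and corrupt
     * the result; unsigned char/uint8_t avoids that. */
    static uint32_t get_le32(const uint8_t *buf)
    {
        return (uint32_t)buf[0]
             | ((uint32_t)buf[1] << 8)
             | ((uint32_t)buf[2] << 16)
             | ((uint32_t)buf[3] << 24);
    }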

If I can make one, I do. In the case I used as an example, the problem was in the interoperation of a huge software stack. A complete, self-contained example would just be inappropriate: I would have a lot of work to make it, and the support guy would have a lot of work to understand it, only to find that the problem is actually in his code, not in my big example.

Take a different, simpler example. Let's assume you find an unchecked 'malloc' in library source. Under very tight memory constraints, the system could crash because 'malloc' returns 0. How would you report that? Would you build a system tweaked so much to exercise that case, or would you just say "Function build_filename() in file open.c does not check the malloc() in line 127 for return value NULL and crashes in this case".
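
(A hedged sketch of the pattern being reported; build_filename(), open.c and line 127 come from the example above, while the function body here is purely hypothetical.)

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical buggy library code: if the malloc() result were
     * used unchecked, then under tight memory conditions strcpy()
     * would dereference NULL and the system would crash. */
    char *build_filename(const char *dir, const char *name)
    {
        char *buf = malloc(strlen(dir) + strlen(name) + 2);
        /* The fix the bug report asks for is this one check: */
        if (buf == NULL)
            return NULL;            /* let the caller handle it */
        strcpy(buf, dir);
        strcat(buf, "/");
        strcat(buf, name);
        return buf;
    }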

Unfortunately, I've spent too much time "debugging" the tools. If I had tools source, I'd probably take a peep. But I don't insist on it. However, I think when I spend two days trying to make a self-contained example showing off a compiler bug, having the source *could* help.

Stefan

Reply to
Stefan Reuther

In message , Stefan Reuther writes

Exactly my point... you skipped the full documentation and full regression testing bits, let alone the language testing (e.g. Perennial).

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

What's more than annoying is to have an NDA in place with a hardware manufacturer and then not be able to make contact with anyone with knowledge of the product in question. Or contact is made and the contact never replies.

Reply to
Everett M. Greene

So the really awesome thing about closed-source compilers is that you buy a validated binary but the really awful thing about open-source compilers is that you buy a validated binary?

-a

Reply to
Anders.Montonen

This is the sort of problem with discussions like this: they get clouded by religious bigotry.

I did not mention closed or open source.

We were discussing the problems of validating GCC compilers, GCC being a very large collection of compilers from many sources that has minimal control and traceability.

The other point was that if you validate, for example, a Byte Craft compiler (since Walter is partaking in this thread), you have the complete compiler development and history in one place, the binary only comes from one place, and all the Byte Craft compilers of that version for that target are validated.

For GCC you cannot validate "GCC compilers for a specific target", just a specific variant from a specific supplier, and only if you can get the full history and the version you are validating is under similar control to the Byte Craft compiler. I.e. if you validated the GCC ARM compiler from CodeSourcery, it would have no impact on or relevance to any other GCC ARM compiler, or for that matter any CodeSourcery compiler built from source by someone else (even if it was the same version as the one tested).

SOME GCC suppliers can do this and validate their compilers; others can't. The vast majority of GCC compilers are virtually impossible to validate.

Recently I was talking to a company doing a safety-critical project who decided, after some investigation, that it would be far more cost effective in both time and money to use a commercial compiler at about 4K USD per seat for their developers rather than a "supported" GCC compiler, as the cost of validating the GCC would be far in excess of validating the commercial compiler.

This has nothing to do with open or closed source.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

I think we all agree on that - Plum Hall (and Perennial) validation is only possible, for *any* compiler and *any* company, when run on a specific binary with specific libraries. Some compiler developers make a big marketing point out of validating everything and making sure their compilers pass as many of the tests as possible. Other compiler developers use these as tools to test and improve the quality of their compilers, and will do specific tests and validations for specific compiler versions as and when required.

GCC is the "GNU Compiler Collection", and consists of a number of language front-ends, a middle-end in various versions, and a large number of official back-end targets, and a very large number of unofficial back-ends. Then there are a number of development branches and vendor-specific branches, and lots of different libraries. Add to this the large number of different host environments and it is perfectly clear that gcc is not one single compiler, and the idea of using Plum Hall to "validate gcc" is meaningless.

Reply to
David Brown

There is no need for the library to be under any kind of open source license (although it can be an advantage). I dislike any tools which don't include source for their libraries - the source is the final word in the library documentation, and it is sometimes very useful when debugging my own code.

Would you rather use a tested, qualified and buggy library or an untested, unqualified but correct library? Personally, I am far more interested in making systems that work than in trying to make sure I have someone else to blame when they don't.

Personally, I use a concept called "modular" programming. I think it's quite popular among both professional and non-professional programmers - it's even used by commercial closed-source developers. The idea is that you write parts of your code that do specific things, and you test and document these parts so that you know what they do. That way, you can change one part of your code without breaking everything else. Done well, you don't even have to *test* everything else again until you are doing final validation tests.
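
(A minimal sketch of the idea in C; the module, file names and test values are illustrative, not from the thread.)

    /* checksum.h -- the module's public contract. Callers depend only
     * on this interface, so the implementation may change freely. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t checksum8(const uint8_t *data, size_t len);

    /* checksum.c -- the implementation, testable in isolation. */
    uint8_t checksum8(const uint8_t *data, size_t len)
    {
        uint8_t sum = 0;
        while (len--)
            sum += *data++;
        return sum;
    }

    /* checksum_test.c -- a unit test pinning down the behaviour; while
     * it passes, modules built on checksum8() need not be retested. */
    #include <assert.h>

    int main(void)
    {
        const uint8_t msg[] = { 0x01, 0x02, 0x03 };
        assert(checksum8(msg, 3) == 0x06);
        assert(checksum8(msg, 0) == 0x00);
        return 0;
    }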

Yes, I can see how you've clearly explained how simple it is to change development platform without source code for the tools. You gave clear and specific instructions for how a user can simply move their binary-only tools between Windows on x86, Linux on a PPC, and Solaris on a sparc. Having source code for the compiler tools is obviously a frivolous extravagance in such circumstances.

It may be *possible* to archive the binaries (depending on the licenses, and any PITA node-locking restrictions), and it is often *useful* to archive the binaries, but it is undoubtedly useful to be able to archive the sources as well.

If a compiler company stops development on tools that I use, how can I ensure that I have access to these tools in the future? Perhaps I need to run them on another computer, and can no longer buy a license. Perhaps the tools need a small fix, but are no longer being updated. If I have the source code with an appropriate license (not necessarily open source), I have a way out - I can make these changes myself, or pay someone to make the changes. I am not locked in at the mercy of the tool developers. Many companies consider open source licenses to be a big advantage as a safety net in this way, regardless of the price they pay to get it.

I'm replying to your post.

Reply to
David Brown

Are you trying to say that "validation" of a compiler is dependent on having complete histories of all code that has ever been used in any version of that compiler? That would seem to contradict the idea that the only relevant factor is exactly how a particular binary build of the compiler handles the tests. If I am correct in assuming that the Plum Hall tests only check the binary, then the history of the code is totally irrelevant for such testing and validation.

Additionally, few software projects have such clear control of the history of their code and of the contributions to it as large open source projects do. Collaborative open source projects are carried out in public, and the people who have write access to the source code trees are all vetted by their peers around the world. All commits are discussed and reviewed. Contrary to your beliefs, there is excellent control and traceability in such projects - much more so than in many closed source projects (though there are no rules - both open source and closed source have their share of well-managed and badly-managed projects).

Reply to
David Brown

In message , David Brown writes

How do you know it is correct without testing it? A tested and validated system is better as you know EXACTLY what it will do.

With an untested/unvalidated system you have no idea if or how it works.

:-) Of course you can write modular systems; all the best ones are. However, I recall a presentation by someone who had the first validated ISO C compiler, and I can tell you it is not as simple as you are suggesting.

They can't. If you have tools for x86 that are validated, YOU cannot simply move them to PPC or SPARC without a complete retest.

Pointless, unless you are going to do a full and complete re-test.

It is not only possible, I know a lot of companies that do this, quite often also archiving at least one PC/workstation with the system (this is for safety-critical developments).

Not really, because if you rebuild from the sources you need to do a complete re-test of the new system you build. Using any compiler other than the original binary used on the original hardware is going to give you a different compiler that needs retesting.

You have the binary

Archive the system.

And fully re-test or validate the compiler... I forgot, you don't test your compilers and have no real idea whether they are performing correctly.

This is a bit of a myth really. In 30 years I have never come across this as a real problem. Compiler companies do disappear, but I have never seen it become a major problem; the times it does occur, it is usually easily worked around.

Often porting the code to a more modern compiler is a lot less hassle than trying to rebuild a compiler that is that old.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

You test the binary, but that is only part of the validation. You also have to show the bug history and look at the other tests, the development process, the documentation etc. It is a two-part process.

For safety critical systems you require qualified and experienced people.....

I do have some evidence to the contrary for GCC, but it is not in a public document (it comes from one of the GCC development places).

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Errrrm... Sorry to wake you up in your Ivory Tower: We started with the assumption that the developer has found a *bug*. In other words, he has just proven that the "tested and validated" thing is *wrong*.

Don't tell me now that the developer is incompetent and doesn't know how to use the thing. I get my salary for knowing what I'm doing, and I think I'm not too bad at it. I wouldn't have a number of confirmed compiler and library bugs to my credit otherwise. (On the other hand, I have several times convinced coworkers that their observation is not a bug; I just happen to have a little experience in compiler construction as well as in C++ standardese.)

And don't tell me either that bugs in "tested and validated" things don't happen. They happen.

I can read and write C code quite well. If I can test whether my self-made function works, I can also test someone else's function, or someone else's function after my modification.

Here's a quite simple test: compile your project with the official & approved binary; compile your project with the compiler you built from the source; compare. (Of course you don't compare the raw ELF files, just the sections that matter, i.e. the stuff that's ultimately loaded onto the target.)

You can do this at any time in advance, and you can repeat it as often as you want. Now you know that your self-made compiler produces the same thing as the official & approved one, and from your own tests you know that the produced binary is sufficiently correct. What better qualification test for a compiler can one imagine?
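
(A sketch of the comparison step in C, assuming the loadable sections have already been extracted to raw files, e.g. with objcopy -O binary; the program is illustrative, not from the thread.)

    #include <stdio.h>

    /* Compare two raw section dumps byte for byte; report the first
     * difference, or confirm the images are identical. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s dump1.bin dump2.bin\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (a == NULL || b == NULL) {
            perror("fopen");
            return 2;
        }
        long offset = 0;
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);   /* EOF on one side only also counts as a diff */
            if (ca != cb) {
                printf("outputs differ at offset %ld\n", offset);
                return 1;
            }
            offset++;
        } while (ca != EOF);
        printf("outputs identical (%ld bytes)\n", offset - 1);
        return 0;
    }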

Of course, the two compilers might deviate at some time in the future, but then you only know they differ, you don't know which one is right. It might be the official & approved one, or it might not be.

...plus a dongle plugging into an interface no-one produces any longer. Or, a binary which doesn't run on any computer one can buy (Win16 or DOS, anyone?)

Since the compiler maker's test suite doesn't guarantee that it performs correctly, that's a moot point. I have outlined a test above.

Maybe, if you buy "100% ANSI C" products. In reality, system vendors sell integrated environments where you can use the full product only if you also use their compiler. Just take things like inline assembly: everyone does it in a different way (see the sketch below). Or system-building tools: how do you generate the system image containing all your tasks, drivers, processes, etc.?
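
(A hedged illustration of how inline-assembly syntax diverges between toolchains, using "disable interrupts" on ARM as the example; the syntax shown is approximate and the vendor manuals are authoritative.)

    /* GCC / Clang: extended inline assembly. */
    static inline void irq_off_gcc(void)
    {
        __asm__ volatile ("cpsid i" ::: "memory");
    }

    /* Keil armcc (v5 era): embedded assembler block syntax. */
    #if 0   /* not valid GCC syntax; shown for comparison only */
    void irq_off_armcc(void)
    {
        __asm { CPSID I }
    }
    #endif

    /* IAR: typically done via a compiler intrinsic rather than raw asm. */
    #if 0
    #include <intrinsics.h>
    void irq_off_iar(void) { __disable_interrupt(); }
    #endif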

Stefan

Reply to
Stefan Reuther

This can happen. Then you fix it.

The choice was a bug in a validated system OR an untested and unqualified compiler which may or may not be correct. You won't know whether it is correct until you test and validate it.

I think you mean ISO 9899 not ANSI C...... but we don't need to be precise :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Maybe we're talking about different meanings of "testing". You said a commercial software vendor had the big advantage of testing against Plum Hall / Perennial, and everything else isn't worth anything. But of course I test my fixes. I test those on *my* test workloads until I'm confident that they work.

But then I'm back to "go", trying to convince the tech support guys that I'm not a clueless first-semester student, and that this source code patch actually fixes a problem...

ISO/IEC 9899:1999, to be even more nitpicky. Still, you read "100% ANSI C" quite often. Or just "Standard C", whatever that means...

Stefan

Reply to
Stefan Reuther

I did not say that. I said that Plum Hall and Perennial were two test suites that are widely recognised for testing the language. As has been pointed out (several times), they are part of a test regime, not all of it.

The authors of Plum Hall and Perennial have a provenance. Also, the commercial compiler will have all the documentation, documented processes (and these will have to be suitable processes), full histories, and bug fixes by named, qualified and experienced people. The validated compiler will also be independently tested by qualified and experienced people.

Your "confident" assurances amount to what? Not a lot really? You expect me to bet my life and the lives of others on your say so?

You have not convinced me so far.

Not at all. You are wrong. It will more probably be 9899:1990 + A1 + TC1 + TC2 + TC3.

Quite so. What are you testing to? You seem quite slipshod on some points. We are discussing validation of compilers, and you are vague about the standard you are testing to.

It hardly fills me with any confidence.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Yes, that is *exactly* what people developing safety critical systems expect you to do.

The compiler is just one of the many tools used by one of the many developers during the design of one of the many parts of any given safety critical system. Whether the particular compiler is Plum Hall tested or not is a tiny drop in the ocean of the required careful development procedures and the required testing.

You seem to be of the impression that it is a critical part, and that Plum Hall testing makes the final product safer even if the compiler and/or library has bugs!

If the safety-critical software developers are doing their jobs, then bugs in the compiler and library will be spotted during *their* testing.

Reply to
David Brown

Okay, let's make it short:

What you're saying is that I had better stop finding, fixing and reporting bugs, because I'm too incompetent for it and cannot test the fixes in a documented, proven, whatever way as Plum Hall and Perennial do; and that I had better ship a system which I know misbehaves, but which has P&P's blessing. Right?

Guess why I'm writing to tech support: to tell them there is a problem, propose a fix, and have them review and approve it! But this does not work if first-level support bounces the report back for silly, unreasonable formal reasons without even reading it.

I had posted a real example. You have chosen to ignore it. Like the tech support guy. I have then chosen to ignore the bogus product.

Stefan

Reply to
Stefan Reuther
