Certified C compilers for safety-critical embedded systems

Plain char is a character type; signed char and unsigned char are integer types.

There are no C99 compilers, bar (apparently) the Tasking TriCore compiler.

--
Chris Hills, Staffs, England
snipped-for-privacy@phaedsys.org

Reply to
Chris Hills

Rice's theorem states this. It is a consequence of the undecidability of the Halting problem, and it is also related to Gödel's incompleteness theorem. I would suggest taking a class on computation theory, but it might suffice to read a book on computation theory and work through the problems.

No, it isn't a contradiction. SPARK is not a general-purpose language; it is a special-purpose language (SPARK is not Turing-complete, hence Rice's theorem doesn't apply to it) which is designed to facilitate deep static analysis.

Reply to
Chad R. Meiners

So are you basically saying that Ada requires the function return type to be bounded and can simply check this against the array bound?

Ian

Reply to
Ian Bell

Ah, so there are run time checks too. I had not realised this. What happens when the check fails and what is the code overhead introduced by the check?

Ian

Reply to
Ian Bell

Interesting argument but a potentially flawed example. First, one Pentium and one PIC are each one component, so there is no reduction in components. More importantly, though a Pentium may have more transistors in it than a PIC and hence might be seen as more complex, it is not necessarily less reliable. Indeed, given the relative quantities manufactured, the Pentium might well be much more reliable than the PIC.

Ian

Reply to
Ian Bell

Ian Bell wrote in news:3fef5f9b snipped-for-privacy@mk-nntp-1.news.uk.worldonline.com:

Close. Ada requires the function return type to match the array index type. Ada type matching is much more rigorous than C type matching.

Examples:

type my_index is range 1..10;
type beauty_scale is range 1..10;

type my_nums is array(my_index) of integer;

foo : my_nums;

function rand_index return my_index;
function rand_beauty return beauty_scale;

I have defined two distinct integer types above. The type my_index is not synonymous with the type beauty_scale. Ada provides NO implicit conversion between the two types. This is what Ada strong typing gives us, a strong separation of data types.

It is valid to make the following call:

foo(rand_index) := 32;

This is valid because the function rand_index always returns a value of type my_index, which is also the index type for the foo variable.

It is invalid to make the following call:

foo(rand_beauty) := 32;

This is invalid because the function rand_beauty always returns a value of type beauty_scale, while the index type for the foo variable is my_index.

This error is detected by the compiler.

I am sure you have already noticed that the ranges of values for the two types are identical. For Ada this structural equivalence is meaningless. Distinct types are simply not implicitly converted. The Ada type model is based upon the type name, not the type structure.

Jim Rogers

Reply to
James Rogers

...

True in practice, and also in theory due to limits on memory sizes. But if you were to write a SPARK program that was an interpreter for a general-purpose language, verification would only prove that the interpreter worked correctly; it would say nothing about programs in the interpreted language.

Not how you would use SPARK in any case, but it is worth remembering. Just as I can write Ada programs that are not SPARK programs but can be statically proven to halt, I can also write an Ada program for which no such proof is possible. SPARK just provides a convenient way to write programs that can be proven correct, and to automate much of the proof.

In C it is much, much harder to write programs that can be proven correct, but it can be done. One way is with an Ada to C translator. If I had to produce a large safety-critical system for a chip which did not have an Ada compiler, using an Ada to C translator (several exist) and the SPARK tools might be a decent alternative to an Ada port.

In practice for the small programs that are common on 8-bit chips, I find it easier to validate the generated code to the original requirements.

--
                                           Robert I. Eachus

"The war on terror is a different kind of war, waged capture by capture, 
cell by cell, and victory by victory. Our security is assured by our 
perseverance and by our sure belief in the success of liberty." -- 
George W. Bush
Reply to
Robert I. Eachus

No. He is saying that Ada types and subtypes typically contain information about expected values that will be checked at compile time or run time as appropriate.

If the subtypes "statically match", which does not mean that the bounds are static, it can be determined at compile time that no run-time check will be necessary. And of course, that means you don't have to worry about what happens if the check fails. In fact, years ago in Ada 83, I had a nice tool that detected every place that a run-time bound was tested by the compiler. About one-third of the cases were cases where rearranging the code would allow the compiler to eliminate the check, one-third were really needed, and the rest were bugs. Today I use the SPARK analyzer or other (ASIS) tools to do essentially the same checking.

You are not required to use subtypes which are correctly bounded in Ada. But it is a big help to (static) debugging and program correctness if you do. And in another thread on comp.lang.ada we are discussing why it is a VERY useful property of Ada, not just because it makes it more likely that the program will correctly implement the requirements: it also often results in missed requirements being detected.

I have lost track of the number of times a "simple question" about the right way to declare a subtype or assign a default value in Ada has gone back to the design team to write a new requirement, as soon as they figured out what it should be.

My favorite example of this came from a project which was doing a flight control system for a fighter aircraft. They wanted to know if it was possible to implement the system as a round-robin scheduler in Ada. I sent back a template of what the final code would look like, and a set of questions that needed to be answered before the code could be written--or the question answered. A month or so later I called back and asked whether that had solved their problem. The answer was that they had resolved fifteen of the seventeen questions I had asked and they only had two left!

Eventually, they used hard-deadline scheduling because that way they could resolve the remaining questions. Notice that the questions in one sense had nothing to do with Ada. Ada had made the need for answers obvious, but the questions could be stated in terms that were actually independent of programming language and hardware.

"If the tracking of incoming missiles exceeds its time slot, should it be aborted or allowed to delay the navigation function?"

"If the aircraft state update takes longer than expected--read time-outs on status inquiries due to damage--should it restart where it left off in the next time-slot, continue execution at low-priority, or reset to the beginning for the next time slot but skip the failed check?"

These are not easy questions, and there may be no "right" answer in the sense of one that will always save the aircraft. But choosing the best answer may save some aircraft that would otherwise be lost.

--
                                           Robert I. Eachus

Reply to
Robert I. Eachus

I disagree. The basic components may have different reliability figures, but both chips are made up of various implantations, etchings, and aluminum layers, which create a whole aggregate of components of varying reliability. In both cases the user never gets to see the fundamentally flawed components, because of the manufacturer's final test. I daresay the percentage yield for PICs is considerably higher than that for Pentia, and this is reflected in the price.

You also need to consider the MTBF of components.

Contrast the up times of early vacuum tube computers, later more complex transistor/diode machines, still later SSI (7400 gates etc.) generations, and today almost everything lumped onto one or two chips. The component count keeps going up, but the number of dodgy interconnects and packages goes down.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

... snip ...

You may well have control of 'what happens' via exception raising and trapping. The overhead is likely to be well under 5%, IIRC, as shown by a study of Pascal 30 or more years ago. The point is that with close subtyping the compiler can determine which expressions require checks, and suppress most pointless checks with ease.

For example, MAX(a + b) = MAX(a) + MAX(b). If this is suitable, it need not be checked.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
Reply to
CBFalconer

I believe SPARK is a finite language. That is, you can only declare finite data structures. In general-purpose languages you can declare unbounded data structures, which are needed for Turing-completeness. Most compilers (and computers) only support subsets of general-purpose programming languages ;-)

Well, if you had some nice properties about the interpreted language, then the verification of the interpreter would imply that those properties apply to programs while they are being interpreted. I know---I am nitpicking, but I have been trained to.

I believe the SPARK examiner is written in SPARK. ;-)

I completely agree. I just get excited about SPARK because it is one of the few formal tools situated toward the end of the verification/validation chain. For instance, for a formal methods class, I used Promela and LOTOS to prove a theorem for a safety property about a set of traffic lights for a one-lane bridge. With SPARK I was able to prove that the premise of that theorem was satisfied by the traffic controller program that I wrote.

I agree. However, directly working with proofs about programs written in C would probably not be a process that you could pick up in a week to finish a class project ;-)

This really isn't writing C programs to be proven correct. It is generating them as an intermediate representation. Once the C code is compiled, you can still use the practice that you suggested below.


A wise practice, considering that compilers have bugs. However, the point of verifying that SPARK code meets validated requirements is to detect errors sooner and so simplify the later stages of development. Hopefully, once you V&V the SPARK program, your task of V&V-ing the generated code is simplified.

-CRM To reply by email, change the "hotmail" to "yahoo".

Reply to
Chad R. Meiners

In comp.lang.ada Robert I. Eachus wrote:
: Ian Bell wrote:
:
:> So are you basically saying that Ada requires the function return type
:> to be bounded and can simply check this against the array bound?
:
: No. He is saying that Ada types and subtypes typically contain
: information about expected values that will be checked at compile time
: or run time as appropriate.

I'll try one of Robert's ideas. Say I do not ignore the compiler's warning that the function used for array indexing will return values out of range for the array. I might see that the choice of two subtypes has been inappropriate and should be replaced with two different types. They will have overlapping ranges but will allow the same representation in bits. However, the program cannot be compiled any longer, using the same buffer:

procedure p is

type Red_Index is range 90 .. 105;
type Light_Red_Index is range 100 .. 120;

buffer: array(Red_Index) of Boolean;

function some_val return Light_Red_Index is separate;

begin
   if buffer(some_val) then
      null; -- do something here, in a more real program
   end if;
end p;

  1. if buffer(some_val) then
               |
>>> expected type "Red_Index" defined at line 3
>>> found type "Light_Red_Index" defined at line 4

That is, I might presume that color Red is not too different from Light_Red. I might know that the number of bits used for values in the two types will be the same (char in C terms, say). Still the compiler will not let me use Light_Red_Index's value 101 and Red_Index's value 101 interchangeably.

The range check goes away when both the function and the array use the same index type (the function is implemented to return random values from the index type's range, because the generator has been instantiated with that type). The assembly listing shows 8-bit register use on i686 when the index range permits, even with numbers ranging beyond 255 (or beyond 65535, for that matter):

type Light_Red_Index is range 92_000 .. 92_200;
for Light_Red_Index'Size use 8;

In short, convenient, checked at compile time, and efficient :-)

-- Georg

Reply to
Georg Bauhaus

He is a LINT-user :-|

Reply to
Peter Hermann
[snip]

Funny you should mention that. We are currently doing exactly this for an aviation project. The tool chain is UML to SPARK to C. Most of the verification activity will be done on the SPARK intermediate "design model". We have found that SPARK to C translation is much easier than Ada to C translation. For example, verification activities at the SPARK level eliminate the requirement to add run-time checks to the C (which is therefore very easy to compare against the SPARK source); furthermore, the use of SPARK largely eliminates the need for a run-time library, so all the C is directly traceable to user-written SPARK statements.

I have submitted a paper on this to Ada-Europe. We are also in the process of productising the process (probably in the form of a "generate C" switch for the SPARK Examiner).

regards

Peter

Reply to
Peter Amey

Except yield and component reliability are not (necessarily) related. Furthermore, the greater volume of Pentiums manufactured means that SPC will have identified process and product weaknesses and improved both yield and reliability.

Don't disagree with that at all.

ian

Reply to
Ian Bell

Well, it would be better to say that the compiler won't let you mix Light_Red_Index and Red_Index implicitly. You can always say:

if buffer(Red_Index(some_val)) then...

If the two types are represented the same, then there will be no real run-time cost for the conversion. If both Red_Index and Light_Red_Index were SUBtypes of Integer, then you could write:

if buffer(some_val) then...

and there would be a range check associated with the indexing operation. With Red_Index and Light_Red_Index as separate types, the range check is now associated with the conversion instead of the subscripting. But assuming that the representations of the types are identical, the code should be identical.

Yep, and if you do a type conversion as above, the compiler will add the bias back in (or add the net difference if there are two different biases). This means you can write your code in meaningful units and allow the compiler to remember and generate any conversions. This is real handy when you have a sensor that for example, returns a 12-bit encoding of an angle. You can write the code using units of degrees (or radians or grads if you want) and the compiler will convert your constants at compile time. Get a new sensor with a 16-bit ADC, and all you have to change is the type declaration (and of course, recompile and test).

--
                                           Robert I. Eachus

Reply to
Robert I. Eachus

I think we are agreeing violently. ;-) My point is that you can use external (from a SPARK viewpoint) storage to make SPARK Turing-Complete. For an example, consider an embedded system that includes a tape drive for data logging. The system as a whole is Turing Complete, given an operator to change tapes, but it wouldn't normally be used in that way.

So you would be able to prove that a package that used the tape drive to implement a stack was correct, but you couldn't validate programs that used the tape as an unbounded store.

Once you get into this situation, which I have run into on several projects, you have to decide where to make the cut. I remember one particular (pre-SPARK) Ada project where we validated that the program would not fail if it ran out of logging space, and that the amount of logging space available met the system spec. We just had to live with the fact that the logging requirement was somewhat arbitrary.

--
                                           Robert I. Eachus

Reply to
Robert I. Eachus
[...]

This one's pretty easy for PC-lint. Consider:

---begin included file--- C:\Dave>type lloop.c

void f(void)
{
   int foo[10];
   int i;

   for (i = 0; i < 100; ++i)
   {
      foo[i] = i;
   }
}

C:\Dave>lint-nt -u lloop.c PC-lint for C/C++ (NT) Ver. 8.00n, Copyright Gimpel Software 1985-2003

--- Module: lloop.c
_
      foo[i] = i;
lloop.c 10 Warning 662: Possible creation of out-of-bounds pointer
    (90 beyond end of data) by operator '[' [Reference: file lloop.c:
    lines 8, 10]
lloop.c 8 Info 831: Reference cited in prior message
lloop.c 10 Info 831: Reference cited in prior message
_
      foo[i] = i;
lloop.c 10 Warning 661: Possible access of out-of-bounds pointer
    (90 beyond end of data) by operator '[' [Reference: file lloop.c:
    lines 8, 10]
lloop.c 8 Info 831: Reference cited in prior message
lloop.c 10 Info 831: Reference cited in prior message
_
}
lloop.c 12 Warning 550: Symbol 'foo' (line 4) not accessed
lloop.c 4 Info 830: Location cited in prior message

C:\Dave>

---end included file---

The "-u" option tells PC-lint to do a "unit" check, i.e., that the referenced file is not a complete program, so it should suppress errors like "Function f defined but not called."

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen
[...]

Plain character is an abomination. It is particularly unsuited for character data. You can get singed. ;-)

All three are integer types. All three are 1 byte wide ("byte" does not mean "8 bits").

I'm not sure what you mean. There's no such thing as "byte." uint8_t? Not guaranteed to exist. uint_least8_t? That would work, but I fear relatively few even know it exists, let alone how to use it...

Comeau?

Regards,

-=Dave

--
Reply to
Dave Hansen

You are kidding. And you really want to use a language which cannot distinguish between a number and a character for safety-critical systems?

It does not really speak in favour of C if in 2003 there is still no major compiler that meets the 1999 standard.

But then almost nobody knows the standard anyway. Even if you are prepared to pay the $18 to actually see it, you are hard pressed to find the place where you can actually order it.

The Ada standard, however, is available to all:

formatting link

With Regards

Martin

--
mailto://krischik@users.sourceforge.net
http://www.ada.krischik.com
Reply to
Martin Krischik
