Certified C compilers for safety-critical embedded systems

Ah, so Ada is no better than C in that respect?

Ian

Reply to
Ian Bell


The Ada programmer usually uses the range attribute of the array variable for these cases.

declare
   type My_Integer_Array is array (Positive range <>) of Integer;
   -- Type of array with bounds that are of type Positive (positive integers)

   function Compute_Array return My_Integer_Array is
      -- returns an array of variable size
      ...
   end Compute_Array;

   My_Array : My_Integer_Array := Compute_Array;

   Sum : Integer := 0;
begin
   -- Summation via iteration over the array's range
   for I in My_Array'Range loop
      Sum := Sum + My_Array (I);
   end loop;
end;

If you used a literal range in this case, you would trigger a runtime Constraint_Error exception (index check failed) if your literal range is not contained within the array's range.
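For instance, a variant of the loop above written with a literal range (a sketch only; the bound 100 is invented and assumes Compute_Array can return fewer than 100 elements):

   for I in 1 .. 100 loop
      Sum := Sum + My_Array (I);   -- raises Constraint_Error (index check failed)
                                   -- whenever 100 > My_Array'Last
   end loop;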

However, there are other ways that an array range can be computed.

with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure DynamicType is

   LowerBound : Integer;
   UpperBound : Integer;

begin

   Get (LowerBound);
   Get (UpperBound);

   declare
      type New_Bounds is range LowerBound .. UpperBound;

      My_Array : array (New_Bounds) of Integer := (others => 1);

      Sum : Integer := 0;
   begin
      for I in 1 .. 10 loop
         -- This will not compile because the compiler cannot safely
         -- compute the correct type for the literal range.
         Sum := My_Array (I);
      end loop;

      Put (Sum);
   end;
end DynamicType;

In practice most Ada programmers don't use literals because arrays have attributes such as 'Range, 'First, and 'Last, which offer a better alternative to literals or even constants.
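For example, the same loop written with the attributes rather than literal bounds (equivalent to the My_Array'Range form shown earlier):

   for I in My_Array'First .. My_Array'Last loop   -- same range as My_Array'Range
      Sum := Sum + My_Array (I);
   end loop;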

Reply to
Chad R. Meiners


Again, tools like this do exist. One can argue over how well they work but they do exist. Tools like Polyspace Verifier can at times tell you statically that you have a problem in your code.

For example, the following pseudo-code fragment would likely be flagged as a potential error by Polyspace:

   A, B : Float;
   C : Integer;
   D : array (1 .. 10) of Float;

begin
   A := random_float;
   B := cos (A);
   C := Integer (B * 9.0) + 1;   -- C can be anywhere in -8 .. 10
   D (C) := 0.0;                 -- possible index out of range
end;

while code like

   A, B : Float;
   C : Integer;
   D : array (1 .. 10) of Float;

begin
   A := random_float;
   B := cos (A);
   C := Integer (B * 9.0) + 1;
   if C >= D'First and C <= D'Last then
      D (C) := 0.0;
   end if;
end;

would not be flagged.

Reply to
Jeff C,

Ada performs array bounds checking (logically) on every array reference, and raises an exception if the array index is out of range, unless the programmer disables it. Every Ada toolset I have seen defaults to enabling array bounds checking. I am not certain but I *THINK* ANSI/MIL-STD-1815A (1983) requires the toolset to work that way.

For something like this,

for num in a'range loop a(num) := num; end loop;

the compiler can determine that num cannot go out of range, and suppress the check.

For something like

a(some_function(x,y,z)) := treasure; -- as opposed to trash

the compiler probably can't make that determination, and must compile in a test.
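As a small illustrative sketch of the "unless the programmer disables it" case mentioned above (the procedure and names here are invented for the example, not taken from the post):

   procedure Demo is
      type Index_Type is range 1 .. 10;
      A : array (Index_Type) of Integer := (others => 0);

      pragma Suppress (Index_Check);   -- standard Ada; asks the compiler to omit
                                       -- index checks in this declarative region
   begin
      A (Index_Type'Last) := 42;       -- with the check suppressed, an out-of-range
                                       -- index would be erroneous rather than
                                       -- raising Constraint_Error
   end Demo;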

Reply to
John R. Strohm

Ian,

Depends on the relationship of the subtype of the function result to that of the index type of the array.

If they're of the same subtype, or of subtypes derived from the same type ("compatible"), and have the same range, the indexing operation will work because an out of range value would have raised an exception before the function result returned.

If the subtypes are derived from the same type, but have different ranges, a runtime check is performed to make sure the function result is within the stated index range.

Finally, if they're incompatible subtypes (not derived from the same ancestor type), you'll get a compile time error, unless you explicitly convert the function result to a compatible subtype of the index type (then the runtime check will be used as in the second case).

The checks can be explicitly suppressed.
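To make the three cases concrete, here is a small sketch (the type and function names are invented for illustration, not taken from the thread):

   procedure Index_Cases is
      type Count is range 0 .. 1000;
      subtype Index is Count range 1 .. 100;
      type Unrelated is range 1 .. 100;          -- not derived from Count

      Table : array (Index) of Integer := (others => 0);

      function Same_Subtype return Index is      -- case 1
      begin return 7; end Same_Subtype;

      function Wider_Range return Count is       -- case 2
      begin return 500; end Wider_Range;

      function Incompatible return Unrelated is  -- case 3
      begin return 7; end Incompatible;

      X : Integer;
   begin
      X := Table (Same_Subtype);          -- result is already within the index range
      X := Table (Wider_Range);           -- runtime check; raises Constraint_Error here
      -- X := Table (Incompatible);       -- rejected at compile time: wrong type
      X := Table (Index (Incompatible));  -- explicit conversion, then a runtime check
   end Index_Cases;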

Reply to
Ed Falis
[...]

I guess that PC-Lint _will_ detect this even if you use a pointer to an array or an upper limit as a parameter, etc. I have already seen PC-Lint warnings about possible out-of-bounds access in similar cases. If there is any doubt, I can verify it when I'm in the office again.

Lint can check the whole application, not only single modules.

Oliver

--
Oliver Betz, Muenchen
Reply to
Oliver Betz

Some of us are in comp.lang.ada.

Reply to
Larry Kilgallen

No, Ada is much better than C in that respect. The concept that Ada supports but C doesn't is best described by example:

   function Identity (N : Positive) return Matrix is
      Result : Matrix (1 .. N, 1 .. N) := (others => (others => 0.0));
   begin
      for I in Result'Range (1) loop
         Result (I, I) := 1.0;
      end loop;
      return Result;
   end Identity;

The size of the matrix returned will often be determined at run time, but both the compiler and any static checking tools can verify that there are no potential out-of-range assignments here. This sort of "statically matching" constraint, where the constraints are dynamic but can be statically determined to be identical, is formalized in RM 4.9.1. There are certain cases in Ada where statically matching subtypes are required, but in general Ada programmers tend to use statically matching subtypes even where they are not required. It not only makes for more efficient code, since the compiler can remove constraint checks, but it also means that if the code is changed elsewhere, the change does not need to be propagated. This makes code maintenance much easier.
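A small sketch of what static matching buys in practice (the Matrix declaration and the procedure are assumed here for illustration, consistent with the Identity example but not taken from the post):

   type Matrix is array (Positive range <>, Positive range <>) of Float;
   -- assumed definition, along the lines of the Identity example above

   procedure Copy_Square (N : Positive) is
      subtype Square is Matrix (1 .. N, 1 .. N);
      A : Square := (others => (others => 0.0));
      B : Square;   -- constraint comes from the same elaboration as A's,
                    -- so the two statically match (RM 4.9.1)
   begin
      B := A;       -- the compiler can omit the length check entirely
   end Copy_Square;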

--
                                           Robert I. Eachus

"The war on terror is a different kind of war, waged capture by capture, 
 Click to see the full signature
Reply to
Robert I. Eachus

In comp.lang.ada Ian Bell wrote:
:> Wouldn't that become a famous compiler that finds out, at compile
:> time, which value a certain variable is going to have? :-)
:>
:
: Ah, so Ada is no better than C in that respect?

In the dynamic Matrix example by R. Eachus a number of dynamic things can indeed be statically studied, it seems. This implies there is a measure for how much can be "found out" at compile time, given the features of language X. (In the above sentence I was only trying to say that there is _no_ real world compiler that can possibly solve the general problem of finding variables' values at runtime, for some program in any programming language, given some input :-) But that is not relevant here I think.)

Splint is an indicator too. Considering maxSet/maxRead (array bounds, at least some), require/ensure (Eiffel contracts), modifies (in, out, in out parameter modes), etc., I suddenly feel very much at home in the language that tries to look like C, though not quite. Splint programs can only be translated by a compiler using the trick of hiding the new language in C comments. I guess that the additional information in the comments is then not available to the C compiler, so the C compiler can only profit from users who try to redesign their software, but it cannot itself profit from the comments...

Will there be profit if a true Splint compiler evaluates the additional information in the comments? Would the language look like, well, not C, semantically? :-)

-- Georg

Reply to
Georg Bauhaus

In article , Larry Kilgallen writes

Repent! :-)

--
Chris Hills, Staffs, England

Reply to
Chris Hills

In comp.lang.ada Ian Bell wrote:
:> You mean as in
:>
:>    type my_index is range 0 .. some_variable;
:>
:> -- yes, Ada compilers will perform the check.
:>
:
: That was not what I had in mind and seems to me to be rather dodgy code. I
: was thinking more of the return value of some function being used as an
: index to the array.

That should depend on what the compiler can find out about the function. For example, if there is "type Index is range 0..359;", a function is specified to return values of a constrained subtype of Index ranging from 180..359, and a buffer has been allocated for another subtype of Index ranging from 0..179, the compiler will warn if you use the function's values as indices into the array, _although_ in C terms the array index values and the function return values are indistinguishable (both unsigned shorts, for example). I.e.

   type Index is range 0 .. 359;
   subtype First_Half is Index range 0 .. 179;
   subtype Second_Half is Index range 180 .. 359;

   buffer : array (First_Half) of Some_Decimal_Type;

   function some_val return Second_Half;   -- Second_Half values at random

   ...

      if buffer (some_val) > 10.0 then
                 |
   >>> warning: value not in range of type "First_Half" defined at line 6
   >>> warning: "Constraint_Error" will be raised at run time
Reply to
Georg Bauhaus

But is it possible to write a C program and an Ada program which implement the same algorithm with no residual errors?

My experience indicates that it is not. This is not saying that you can't build an Ada to C translator, or a C to Ada translator, although the first has been done several times, and the second would be much harder. The problem is that if I have a C program "in hand" and go to translate it into Ada, there are usually hundreds of questions that I end up asking. Some of them may be answered in the specification for the C program--assuming that such a specification exists.

The net result is that even if I start out to write identical programs in C and Ada, the Ada program ends up much more tightly specified. Whether these additional specifications are meaningful in terms of the intent of the program, or are just "accidental" overspecification due to the fact that Ada asks the question and it is easier to answer it than dodge it, the Ada and C programs implement two different specifications. (Or to be formal, the set of specifications that the C program satisfies will normally be a superset of those satisfied by the "equivalent" Ada program.)

Does this make the C program better? I don't think so. I am much more concerned that the set of specifications satisfied includes the "real" specification, which often includes requirements omitted from the original written specification. Ada does a much better job of identifying these implicit requirements and getting them formalized.

The best example of this problem doesn't involve Ada at all. The French developed a programming language used for the Airbus A320, and it took 5 crashes for omissions in the specification to be identified and fixed.

One problem was that the autopilot would maintain a specified height above ground between waypoints until the last waypoint before landing. Then it would put the plane into the glidepath for the destination runway. The problem occurred when the last waypoint set was not in the glide path: the 'correction' that occurred when the waypoint was passed could result in a stall, or, in the case of a landing in mountainous territory, in diving towards a glide path which was underground at the last waypoint.

Airbus originally diagnosed the cause of the crashes as the pilots selecting the 'wrong' descent mode. But as I said, it was not pilot error, but a missing specification. The glide path for the runway was assumed to be everywhere above ground. More correctly it was assumed that the last waypoint would be the outer beacon for the intended runway. But there are cases where that is not possible, and in those cases, the A320 could be deadly. And in the crash which resulted in the problem being detected, the outer marker beacon was situated on a mountain which extended into the glide path.

As far as I am concerned Airbus's contribution to the analysis of a crash near Strasbourg, France January 20, 1992 was criminal. If you look at all the "weasel words" in retrospect, they knew that it couldn't have been the pilot error they suggested. (Autopilot in wrong mode.) The "rapid" descent that left one second between the altitude warning and the crash was at over a 50% angle over descending terrain. Somehow I don't think the flight crew "overlooked" the rapid descent--other than looking through the cockpit window below their feet.

As it was, it took two more crashes before an investigation concluded that at least two previous crashes, and probably all four, were due to missing requirements in the specifications for the provably correct code. For example, in the first crash at a French airshow, a lot of attention was focused on whether the engines spooled up "quick enough" when the crew selected abort/go-around. A later crash in Japan in a similar situation, but a "real" landing attempt, showed that engine speed wasn't the problem. The Japanese plane didn't strike trees with its tail: "Finally, the engine stalled after the angle of ascent increased to 53 degrees, and the plane fell towards the ground, crashing tail-first." As any pilot can tell you, when trying to increase altitude in any flight mode, but certainly when close to the ground, pulling the nose much above level is a deadly error. Beyond a certain AoA (angle of attack) the plane will slow down, requiring more AoA (or throttle) to stay level. Go back and look at the first A320 crash, and it is clear that a human pilot would not have increased the AoA to go around, and in (more) level flight the tail would not have hit the trees.

Sorry to spend so many electrons and photons on this, but it is an issue that should not be overlooked. Formal proofs of correctness are worthless if the requirements are not complete. I like a language that asks the questions that the specification forgot.

--
                                           Robert I. Eachus

"The war on terror is a different kind of war, waged capture by capture, 
 Click to see the full signature
Reply to
Robert I. Eachus

... snip ...

And even then, 1000 such 6 nines reliable systems together can be expected to have a combined reliability of 0.999. Now consider the count of individual transistors in a moderately complex processor.
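A quick back-of-the-envelope check of that figure (not from the original post): 0.999999 ** 1000 is about e ** (-0.001), i.e. roughly 0.999.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Combined_Reliability is
      Component : constant Long_Float := 0.999_999;   -- "six nines" per component
      System    : Long_Float := 1.0;
   begin
      for I in 1 .. 1000 loop
         System := System * Component;   -- 1000 independent components in series
      end loop;
      Put_Line (Long_Float'Image (System));   -- prints approximately 0.999
   end Combined_Reliability;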

There are two basic methods of improving the end product reliability: Improve the component reliability, and reduce the number of components. This argues heavily against using a Pentium when a PIC will do.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

How do you think I got *into* comp.lang.ada?!

Me too!

Reply to
Mike Silva

Mike Silva wrote: [snip]

The study was only fully reported at a private MoD briefing. The nearest thing to a public version is an article by Andy German (of QinetiQ) in a recent issue of Crosstalk. The Andy German paper covers experiences of static analysis of a large number of air systems and shows similar (but not quite so extreme) trends to the C130J figures.

Peter

Reply to
Peter Amey

Praxis has a long-held commitment to using the best technology available and to advancing the state of the software industry as a whole. Our objection to using C for SIL 4 is that it is unsuited to that task. The majority of the proponents of the use of C in this role are C tool vendors; are they too governed only by vested interest?

Neither slide is erroneous. They are clearly dated 2002 and are factually correct for that date. We had two members on the original MISRA-C committee and attended every workshop and committee meeting until the beginning of this year. We still monitor the mailing list and will contribute again when we have something to say.

Wrong. We did. Unfortunately the rather stern view we took of what was needed to make C fully analyseable (basically, a Pascal subset in C syntax) was not seen as being compatible with the apparent aim of the committee: as much C as possible, with the minimal restrictions needed to plug the biggest holes.

The slide clearly states that this is Rod's personal view, and he (with appalling semantics) qualifies the "unique" with "almost". I think it is at least defensible to say that Praxis's experience as implementors of world-class critical systems, language designers, and tool vendors _does_ give us a different (even unique) perspective from most other contributors, none of whom (AFAIK) has done _all_ of these things.

Perhaps you can recall the context in which Les Hatton's shack/swamp comment can be interpreted in a positive way?

We don't rubbish C. We rubbish magic where logic is to be preferred. We have well-articulated reasons for saying that C is not well suited for constructing high-integrity systems. The proponents of C for this purpose never seem to present their reasons. All we ever hear are: "there are lots of C programmers around"; "we only ever employ the best people and they don't make those kinds of mistakes"; and, the falsehood, "C, with suitable support tools, is as good as anything else".

We don't take this view because we have alternative tools, we have alternative tools because we take this view!

Again, we are not trying to rubbish MISRA-C. We continue to believe that it is a useful enterprise. If people want to use C for smaller, safety-related systems then having a widely-supported coding standard is a "good thing". We will continue to rubbish the promotion of that coding standard to application domains for which it is unsuitable.

We no longer promote a MISRA-C tool because the widely-differing behaviour of our and competing tools and the high level of ambiguity in the MISRA-C rules showed that the market was too immature for tools claiming to enforce MISRA-C compliance. We preferred, in that situation, to say nothing rather than make claims that could not be defended by logic and by reasoned argument. I hope that MISRA-C2 will be better; however, my long experience of designing unambiguous and analyseable programming languages leads me to believe that it won't.

Fundamental misconception. SPARK is wholly unambiguous and therefore analyseable in a formal and mathematical sense. An un-annotated subset of C _cannot_ have this property. Analysis of such a language can, therefore, only result in the _detection_ of certain kinds of error; it cannot _prove_ that they have been eliminated.
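For readers who haven't seen it, a minimal sketch of what SPARK annotations look like (classic SPARK notation, written from memory; the type and procedure are invented for illustration):

   type Counter is range 0 .. 1000;

   procedure Increment (X : in out Counter)
   --# derives X from X;
   --# pre  X < Counter'Last;
   --# post X = X~ + 1;
   is
   begin
      X := X + 1;
   end Increment;

The annotations are formal comments, so an ordinary Ada compiler ignores them, but the SPARK Examiner uses them for flow analysis and proof, for instance to show the absence of runtime errors rather than merely to detect some of them.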

Use, even widespread use, does not imply suitability.

Agreed. Modula-2, in particular, was a nice language from which a secure, SPARK-like sublanguage could have been constructed. Pity it, unlike Ada, never achieved critical mass.

I have pooh-poohed this pooh-poohing enough already!

have a Happy New Year

Peter

Reply to
Peter Amey

In article , Peter Amey writes

Hi Peter,

I have just downloaded your paper. Interesting and well worth reading.

Regards Chris

--
Chris Hills, Staffs, England

Reply to
Chris Hills

In article , Peter Amey writes

OK

Interesting.... can you explain? Perhaps as a separate thread?

Is that a contradiction or do you mean that you need static analysis but it can never be 100%?

That's got to be from the marketing dept :-)

On its own I would agree.

Good point. However you could argue the same for Ada, hence the development of SPARK Ada. I think that all that is happening is that there is (thank god) a realisation that unfettered C *IS* dangerous and that it should be used with various tools, not least a static analyser. Who would buy a word processor these days that does not have a spell chequer?

The original committee stopped a long while ago.

That explains why I have not seen anyone from Praxis in the last year or two.

Can you send it to me? It would be useful to go over it (again?) before we lock down C2

Yes. We have to work with the language we have. Also the view was taken (generally) that if we went from C to (almost) Pascal in one go we would lose the audience. MISRA-C1 was so successful because it was user friendly. A LOT of companies are using it. Their C has improved.... MISRA-C2 is a step further. Hopefully all the current MISRA users will move to it. MISRA-C3 will be another step forward.

You don't run a marathon by running 26 miles on day one of training.

Whereas mine English is perrfek :-)

OK. Though there are others involved who would argue their corner... It would make a fun discussion over a bottle of scotch one night, but probably not for public consumption.

:-) I think he was referring to C99; he has used that analogy at several ISO C meetings. This is why we are still working with C90 at the moment.

Logical. I think part of the problem is that Ada is usually (always?) taught in a high-integrity context. C is usually taught in an appalling way.

I have had one lecturer tell me he taught C but used cin and cout instead of printf and scanf as they were too difficult, and that anyway C was simply a sub-set of C++.... (another candidate for a thread of its own :-)

Yes if they must use C at least we can improve the usage.

Fair enough.

That I can't disagree with. In fact part of my presentation at the 2002 MISRA forum was the problem that there is no MISRA-C certification of tools: anyone could interpret the rules as they saw fit and claim what they liked. Some claimed things that were not (in my view) supportable.

Yes.. things have been made a little difficult by some of the claims... "we test 100% of MISRA"*

*That is, we test 100% of the rules we think our tool can test.... :-(

Hopefully. There are a few things in the pipeline that will be announced in Feb/March

It will be better. How much better we will have to see but anything that improves the general level of embedded C programming has to be a Good Thing (tm)

Good point.

The point was that C has been used successfully in these areas. At one time it was said no HLL could be used in these areas.

I was once asked to set up a Modula-2 system... in theory the language was good, but I discovered that the compiler had been written in x86 assembler; there was most of a user manual, but when I contacted the company that wrote the compiler, the programmer had left and they had no design documentation, notes or bug fixes. No evidence of any testing etc.

In theory the language was good but the tools we had were not safe.

Cue the Blackadder Goes Forth* sketch on what happens when you don't pooh-pooh the pooh-pooh... (this will require a new NG, not just a new thread)

(Forth? even he had an opinion on languages)

Happy New Year? With you waiting to analyse every line of MISRA-C2... I shall need tranquillisers :-)

BTW Re your Correctness by Construction: Better can also be cheaper... Interesting reading. I also use the line "better is also cheaper" when discussing C programming

Regards & Happy New Year

Chris

--
Chris Hills, Staffs, England

Reply to
Chris Hills

... snip ...

This may be just as well, since IMO part of the C problem is the extreme reuse and terseness of tokens and the perverse precedence rules.

No language can be proof against errors, it can only supply extra redundancy and increase the likelihood of detection at compile time or detection at run time via appropriate checks. Unfortunately the latter will always require a means of disablement, and the psychology of programmers is such that that will be used unnecessarily.

... snip ...

Again, I consider Splint-annotated C to be another language, with an even more unnatural syntax. Also, the best people DO make those kinds of mistakes, and I present myself as evidence. Unfortunately many of the best programmers are somewhat set in their ways, and will reach for the C system as opposed to learning an Ada or Pascal system, just as not too long ago they reached for the assembler rather than learn Fortran, PL/I, Pascal, or C. Availability also has a hand in here.

I consider the ideal mixture for embedded systems to be a combination of assembly, C, and Pascal. The result can be trimmed down to the abilities of highly incapable hardware, and yet can be made mutually compatible. I think (and I am willing to be corrected) that Ada cannot be trimmed substantially and still be Ada, and even if it can the extra verbiage will deter users.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

I think the subset of Ada used by SPARK is a good counter-example. It's not the only one in use, since Ada provides pragmas that restrict functionality to achieve a higher level of determinism, or to reduce/remove the need for runtime library support.
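As a sketch of the kind of pragmas meant here (these are standard Ada 95 restriction identifiers; the particular selection is just illustrative):

   pragma Restrictions (No_Allocators);       -- no dynamic allocation
   pragma Restrictions (No_Exceptions);       -- no exception handling at run time
   pragma Restrictions (No_Recursion);        -- helps bound stack usage
   pragma Restrictions (No_Task_Hierarchy);   -- simpler tasking model

Applied as configuration pragmas, these let a toolset reject constructs that would need fuller runtime support.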

- Ed

Reply to
Ed Falis
