shame on MISRA

No, negative zero cannot be produced by a literal constant such as '0'. Negative zero can only be the result of certain expressions in which at least one operand is already a negative zero, or of directly manipulating the bits.

Reply to
Arlet

This is wrong. It depends on how the adders are configured. The obvious method of subtracting by complementing and adding can generate negative 0. The presence of a negative 0 can force such an output. In fact, the means of avoiding negative zero is to use a subtractor, rather than an adder, and prevent the injection of any negative zero.

--
 
 
 
 
                        cbfalconer at maineline dot net
Reply to
CBFalconer

That's mostly what I thought, as supported by the language standard's 6.2.6.2p2-3. Since this one's complement topic has drifted from the original MISRA topic, I have taken it over to comp.lang.c to check my understanding.

I'm sure it'll all come out in the wash, as they say.

--
Dan Henry
Reply to
Dan Henry

The C99 standard says in section 6.2.6.2, paragraph 3:

"If the implementation supports negative zeros, they shall be generated only by:

- the &, |, ^, ~, <<, and >> operators with operands that produce such a value;

- the +, -, *, /, and % operators where one operand is a negative zero and the result is zero;

- compound assignment operators based on the above cases.

It is unspecified whether these cases actually generate a negative zero or a normal zero, and whether a negative zero becomes a normal zero when stored in an object."

In other words, if a standards-compliant compiler generates code for hardware where, for instance, the adder could produce negative-zero results from ordinary inputs, then the compiler must not translate the '+' operator to a plain add instruction. Instead, it must generate an alternative code sequence that is equivalent to an add but avoids the negative zero.

(Older standards may have different wording, though, and compilers may not be compliant)

Reply to
Arlet

Seymour Cray's Control Data machines employed subtractors (there were of course ISAs designed by others at CDC); I found 2's complement arithmetic alien after having been immersed in the CDC 1's complement universe for so long. On the 160/8090 series (or the 6000/7000 PPU), and on the 6000 series and 7000 series, '-0' is never generated by arithmetic but only by manipulation.

Regards,

Michael

Reply to
msg

Univac's large mainframes were ones-complement also. For at least some of those machines, multiplying any negative value by zero produced -0 as a product. When I inquired as to why this seemingly erroneous result was produced, I was told that it was a design error in an early machine and was not corrected in later machines so as to maintain compatibility!

Since -0 and +0 were two different values at the hardware level, one occasionally had to add +0 to a value before testing it for zero so as to eliminate any -0 that might be present.

Reply to
Everett M. Greene

... snip ...

My 1965 DAC512 machine used 12 digit excess 3 BCD 9's complement arithmetic. I didn't discover that I could eliminate the -0s with a subtractor until much too late in the design, so zero detection depended on two arrays of diodes and an or gate. You can see the remains of the beast at:

--
 
 
 
 
                        cbfalconer at maineline dot net
Reply to
CBFalconer

Hello,

Chris Hills mentioned in April 2007 in news:$ snipped-for-privacy@phaedsys.demon.co.uk and in news:Xs$ snipped-for-privacy@phaedsys.demon.co.uk "C can be as safe as Ada" and that he would try to track down some documentation to that effect which he believed he had seen. Has such documentation been found yet?

Regards, Colin Paul Gloster

Reply to
MISRA discussion reader

In article , MISRA discussion reader writes

Only 10 boxes left to sort... I am sure we have brought more back from storage than we put in :-)

I found Hatton's Safer C, which does show that, given a good process (sub-set and static analysis), C is as robust as Ada in the same conditions.

I still can't find an IEE magazine with an item on 61508 which showed that coding errors are a very small part of the overall list of errors, and that, given a good process, the errors attributable to different languages fall into insignificance.

I am still trying to find the references (in a quieter moment of rebuilding the office).

Do you have any evidence to counter the claim that C can be as safe as Ada?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

... snip ...

C doesn't range check. That should suffice.

--
 
 
 
 
                        cbfalconer at maineline dot net
Reply to
CBFalconer

You shouldn't have left them alone long enough to breed.

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

It does when I use it.

--
Al Balmer
Sun City, AZ
Reply to
Al Balmer

Hardly

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

Only if there is an idiot programming with no static analysis or coding guide.

You make the common mistake with C.

With Ada much is built into the compiler. With C you use external tools. You should always use Lint (or similar) as part of the C build process.

If you like, Ada is in "user mode" with restrictions on what you can do, whereas most people use C in "administrator mode". When C is used properly, with the correct environment, it can be as safe as Ada.

This is because you do use a style guide, a coding standard (subset) and static analysis.

I am still hunting around for the IEE item (from about 4 years ago) about errors in 61508 projects which said that something like 35% of errors came from the requirements phase and only 10% from the coding phase. This was virtually constant across several languages.

The result of the study was that in a good engineering process the language choice was almost irrelevant. I think the languages were mainly Ada, Pascal and C.

I await the outcome of the JSF with interest. There they have 30,000 C++ programmers! Now C++ is a damned sight more complex than C.......

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

Isn't this obvious ?

Of course, we can argue about the distinction between coding errors and programming errors, but I would consider a person claiming that he/she is writing secure code because of using language xxx (claimed to be immune to coding/programming errors) to be a greater security risk than a person using a more demanding tool and being constantly on alert.

Paul

Reply to
Paul Keinanen

In article , Paul Keinanen writes

I agree..

One of the universities I know taught programming using Modula-2 for all sorts of (spurious) reasons:

1. It was a "safe" language (unlike C).
2. It had an ISO standard (C did not until about a year later).
3. Modula-2 was the industrial version of a "teaching" language, Pascal.

As the language was "safe", if the program compiled it was OK. We were not taught SW engineering, process, or testing.

At the same time most of the students were busy teaching themselves C so they could get jobs after the course... unfortunately they learnt C the same way: if it compiled, it was OK.

Most had no idea what lint was. Those that did said the compiler did the checking so they did not need lint... I still hear this today.

Many learn C in a "programming" environment. Ada is usually taught in a SW engineering and/or high-integrity environment, AFAIK, so the whole attitude is different in the learning of it. That is what makes the difference, I think, not the language per se.

If C were taught in the same way Ada is with the same emphasis on high integrity I don't think we would be having this argument.

Linux seems reliable, remind me which language is Linux written in? :-) There are lots of very robust and reliable C programs in use. There are some Ada programs that fall over. It is down to the SW Engineer doing the work not the language.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

"Isn't this obvious?"

No. I concede that a good process can reduce errors made with a bad language.

"Of course, we can argue about the distinction between coding errors and programming errors, but I would consider a person claiming that he/she is writing secure code because of using language xxx (claimed to be immune to coding/programming errors) to be a greater security risk than a person using a more demanding tool and being constantly on alert."

True.

Regards, Colin Paul Gloster

Reply to
Colin Paul Gloster

Those are errors that are not caught. What more is needed? Read your original query again.

--
 
 
 
 
                        cbfalconer at maineline dot net
Reply to
CBFalconer

In news:8oLVW$ snipped-for-privacy@phaedsys.demon.co.uk timestamped Tue, 22 May 2007 15:23:32 +0100, Chris Hills posted: "[..]

I found Hatton's Safer C which does show that given a good process (sub-set and static analysis) C is as robust as Ada in the same conditions.

[..]

Do you have any evidence to counter the claim that C can be as safe as Ada?"

If we restrict C and Ada so much that C is left only with ; and Ada is left only with null; then we can have a subset of C which is as safe as a subset of Ada. Similarly, below I provide an example of a bad program in C which could be just as poorly written in Ada by someone who deliberately adheres to the rules of Ada but not its spirit. Similarly, one can write terribly unsafe MISRA C code by complying with all of the MISRA rules while not complying with the spirit of MISRA. Was Ada being used properly in Hatton's example?

Even with static analysis tools, a useful subset of C is not as good as a useful and safe subset of Ada. As you use MISRA tools you might have better C tools which would prevent the following (which will trivially generate a compilation error in Ada), so please report your tools' warnings/errors for invalid.c:

cat invalid.c
int main()
{
    typedef unsigned char quantity_of_apples_type;
    quantity_of_apples_type apples_in_handbag;
    quantity_of_apples_type apples_in_rucksack;
    quantity_of_apples_type total_number_of_apples;

    typedef unsigned char quantity_of_oranges_type;
    quantity_of_oranges_type oranges_in_handbag;

    apples_in_handbag = 2;
    apples_in_rucksack = 5;
    oranges_in_handbag = 3;
    total_number_of_apples = apples_in_rucksack + oranges_in_handbag /*Oops.*/;

    return -1;
}

Jochen Pohl's lint for OpenBSD 3.8 and for FreeBSD 4.11 reported:

invalid.c:11: warning: apples_in_handbag set but not used in function main
invalid.c:15: warning: total_number_of_apples set but not used in function main
lint: cannot find llib-lc.ln
Lint pass2:

(and if I commented out + oranges_in_handbag:

invalid.c:15: warning: `/*' within comment
invalid.c:11: warning: apples_in_handbag set but not used in function main
invalid.c:15: warning: total_number_of_apples set but not used in function main
invalid.c:13: warning: oranges_in_handbag set but not used in function main
lint: cannot find llib-lc.ln
Lint pass2:).

gcc -Wall -Wextra -Wno-div-by-zero -Wsystem-headers -Wfloat-equal -Wtraditional
-Wdeclaration-after-statement -Wundef -Wno-endif-labels -Wshadow -Wpointer-arith
-Wbad-function-cast -Wcast-qual -Wcast-align -Wwrite-strings -Wconversion
-Wsign-compare -Waggregate-return -Wstrict-prototypes -Wold-style-definition
-Wmissing-prototypes -Wmissing-declarations -Wmissing-field-initializers
-Wmissing-noreturn -Wmissing-format-attribute -Wno-multichar
-Wno-deprecated-declarations -Wpacked -Wpadded -Wredundant-decls
-Wnested-externs -Wunreachable-code -Winline -Winvalid-pch -Wlong-long
-Wvariadic-macros -Wdisabled-optimization -Wno-pointer-sign invalid.c

produced only the following warnings/errors:

invalid.c:2: warning: function declaration isn't a prototype
invalid.c: In function 'main':
invalid.c:2: warning: old-style function definition

with gcc version 4.0.2 20051125 (Red Hat 4.0.2-8) despite the claim on

formatting link
: "[..] GCC provides many levels of source code error checking traditionally provided by other tools (such as lint), [..]

[..]"

Les Hatton claimed on

formatting link
:"[..]

We are exhorted to develop using Java Beans or OO this or UML that and that this will fulfil our wildest dreams. It is arrant nonsense. Such experiments as we have managed to carry out suggest that by far the biggest quality factor in software is the ability of the person developing it. It appears to have very little to do with any technique or even language they might choose to use. In my experience as an employer, it doesn't even appear to have much to do with their educational background either.

[..]"

Though I agree with much of this, I would suspect that someone good at a job would deliberately choose a tool (e.g. a language) which assists with that job, even if literally the tool is not necessary, but merely extremely practical. For example, I can correctly draw a straight line segment with just a pencil and paper, but I would rather try to draw a straight line segment with a ruler (or a set square or a T-square or something similar with a flat edge) and a pencil and paper.

Les Hatton claimed on

formatting link
:"[..]

Even in the world of pure mathematics, we are straying towards an era where computer programs are used as part or indeed all of a proof. An early example of this was the four-colour theorem. However, computer programs are fundamentally unquantifiable at the present stage of knowledge and any proof based on them must be considered flawed until such time as the same level of verification can be applied to a program as is applied to a theorem."

I understand this concern, but an automated theorem prover produces all the steps of a proof which can be manually checked by someone who does not trust the proof. This is not to say that someone would always bother to do so, e.g. Leslie Lamport (someone who thinks many things which I agree with and many other things which I disagree with) wrote on

formatting link
:"[..]

Author's Abstract: A method of writing proofs is proposed that makes it much harder to prove things that are not true. The method, based on hierarchical structuring, is simple and practical.

[..] [..] Anecdotal evidence suggests that as many as a third of all papers published in mathematical journals contain mistakes: not just minor errors, but incorrect theorems and proofs. [..] A method for structuring proofs was presented by Leron [5]. However, his goal was to communicate proofs better, not to make them more rigorous. Despite their hierarchical structuring, the proofs Leron advocated are quite different from the ones presented here. They do not seem to be any better than conventional proofs for avoiding errors. [..]

4 How Good Are Structured Proofs? 4.1 My Experience

Some twenty years ago, I decided to write a proof of the Schroeder-Bernstein theorem for an introductory mathematics class. The simplest proof I could find was in Kelley's classic general topology text [4, page 28]. Since Kelley was writing for a more sophisticated audience, I had to add a great deal of explanation to his half-page proof. I had written five pages when I realized that Kelley's proof was wrong. Recently, I wanted to illustrate a lecture on my proof style with a convincing incorrect proof, so I turned to Kelley. I could find nothing wrong with his proof; it seemed obviously correct! Reading and rereading the proof convinced me that either my memory had failed, or else I was very stupid twenty years ago. Still, Kelley's proof was short and would serve as a nice example, so I started rewriting it as a structured proof. Within minutes, I rediscovered the error. [..]

[..] The style was first applied to proofs of ordinary theorems in a paper I wrote with Martín Abadi [1]. He had already written conventional proofs: proofs that were good enough to convince us and, presumably, the referees. Rewriting the proofs in a structured style, we discovered that almost every one had serious mistakes, though the theorems were correct. Any hope that incorrect proofs might not lead to incorrect theorems was destroyed in our next collaboration [3]. Time and again, we would make a conjecture and write a proof sketch on the blackboard, a sketch that could easily have been turned into a convincing conventional proof, only to discover, by trying to write a structured proof, that the conjecture was false. Since then, I have never believed a result without a careful, structured proof. My skepticism has helped avoid numerous errors. [..]

Computer scientists are more willing to explore unconventional proof styles. Unfortunately, I have found that few of them care whether they have published incorrect results. They often seem glad that an error was not caught by the referees, since that would have meant one fewer publication. I fear that few computer scientists will be motivated to use a proof style that is likely to reveal their mistakes. Structured proofs are unlikely to be widely used in computer science until publishing incorrect results is considered embarrassing rather than normal. [..]"

and on
formatting link
:"[..] [..] Lots of people jumped on me for trying to take the fun out of mathematics. The strength of their reaction indicates that I hit a nerve. Perhaps they really do think it's fun having to recreate the proofs themselves if they want to know whether a theorem in a published paper is actually correct, and to have to struggle to figure out why a particular step in the proof is supposed to hold. [..] [..]"

Les Hatton claimed on

formatting link
:"Scientific papers are peer reviewed with a long-standing and highly successful system. [..] [..]"

This is fantasy, at least in the domains of astrophysics; space science; computer science; and electronic engineering. I restricted that list to domains in which I have read refereed published papers which were flawed: I have heard of similar flaws in other domains. In the domain of electronic engineering, I have also been subjected to people who "trust" what others boast instead of scientifically evaluating the boasts.

Les Hatton claimed on

formatting link
:"[..] The automobile industry is beginning to suffer very high levels of recall based on software failures affecting all electronically-controlled parts of the car including but not limited to the brakes, engine management system and airbags and even made the New York Times last year.

[..]"

I had been unaware of this. It seems to be true, e.g. from one of many stories: WWW-ODI.NHTSA.DOT.gov/cars/problems/defect/results.cfm?start=16&SearchType=DateSearch&date=02/15/2005&type=date&summary=true :"[..]

[..] IN FEBRUARY 2005, DAIMLERCHRYSLER REVISED THE POWERTRAIN CONTROL MODULE SOFTWARE IN PRODUCTION VEHICLES AND ISSUED A TECHNICAL SERVICE BULLETIN (TSB 18-013-05) RELEASING THE NEW SOFTWARE AS A SERVICE REMEDY FOR THE IDLE UNDERSHOOT CONDITION. [..] [..]"

However, Les Hatton does not seem to be aware that Dr. Holger Pfeifer claimed in March 2007 that, in practice, the level of verification applied in the automotive industry is still a generation behind that used for aircraft. Could someone actually explain why electronic components are used in the automotive industry? Cars had been running successfully for decades without electronics, so one would imagine that applying Ada or MISRA C to a critical part of a car is irresponsible.

Les Hatton claimed on

formatting link
:"Not all is bleak. The Linux kernel is now arguably the most reliable complex software application the human race has ever produced with a mean time between failures reported in tens and in some cases, hundreds of years. [..]

[..]"

I think that Chris Hills will rather sensibly not try to embed Linux into a car.

Regards, Colin Paul Gloster

Reply to
Colin Paul Gloster

In news: snipped-for-privacy@phaedsys.demon.co.uk timestamped Wed, 23 May 2007 12:08:59 +0100, Chris Hills posted: "[..]

As the language was "safe" if the program compiled it was OK. We were not taught SW engineering of process and testing."

Even I do not have that attitude; e.g. Ada cannot prevent:

function product_of_the_operands_in_Ada
   (some_operand, another_operand : natural) return natural is
begin
   return some_operand * some_operand;
end product_of_the_operands_in_Ada;

"[..]

Most had no idea what lint was. Those that did said the compiler did the checking so they did not need lint... I still hear this today."

See the quotation from

formatting link
which I posted in news:f31fo4$slp$ snipped-for-privacy@newsserver.cilea.it

"[..] the whole attitude is different in the learning of it. That is what makes the difference I think not the language per say."

That is a good point, but the language does help.

"If C were taught in the same way Ada is with the same emphasis on high integrity I don't think we would be having this argument."

People who are Ada advocates, and who are not their own bosses and are forced to use some C, still advocate Ada. When I do things, I make mistakes. When I do things in Ada, I make mistakes. When I do things in C, I make mistakes. Tools for Ada help me to eliminate mistakes sooner rather than too late. I am not the only Ada advocate who argues this way.

"Linux seems reliable, remind me which language is Linux written in? :-)"

Not that the distinction matters here, but Linux is not written in C: it is written in GNU C.

"There are lots of very robust and reliable C programs in use."

Probably.

" There are some Ada programs that fall over."

Definitely.

" It is down to the SW Engineer doing the work not the language."

Not exclusively.

Reply to
Colin Paul Gloster
