Source code static analysis tool recommendations

Yes, I agree with that logic. Then it will be "an integer constant expression corresponding to the type uint_leastN_t", in that you will get the same behaviour for the expression "u + x" for any x, when u is either:

uint8_t u = 1;

or

#define u UINT8_C(1)

But you can also say that by adding a U, you turn the constant into type "unsigned int" rather than "int", and thus it will not be promoted to a signed int.

I don't know if the standards are clear enough here to require one interpretation or the other. The gcc implementations I have looked at (some with glibc for Linux, some with newlib for embedded ARM) all follow the first version, with no U for sizes less than an "int". An older CodeWarrior with its EWL library seems to take the other version and add a U. Based purely on "which of these do I trust more to understand the standards", my vote is with gcc/glibc/newlib - the CodeWarrior C99 support and library has never impressed me with its quality.
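A minimal sketch of the practical difference (my illustration, assuming a library where UINT8_C adds no suffix, as glibc and newlib do):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t u = 1;
    int x = -2;

    /* u is promoted to int, so u + x is a signed int: prints -1 */
    printf("%d\n", u + x);

    /* If UINT8_C(1) expanded to 1U instead (the CodeWarrior reading),
       the addition would be done in unsigned int: prints UINT_MAX */
    printf("%u\n", 1U + x);

    return 0;
}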

Reply to
David Brown

And then they renamed it to "essential type" in MISRA 2012 - perhaps because when doing MISRA C++, they discovered that C++ already uses the term "underlying type".

And yes, I fully agree about it messing up the C code and how people understand it.

But it will not affect the meaning of the code to a C compiler, which only deals with C types. As you say, it can affect C checkers - but not a C compiler, at least not when running in standard modes.

I have seen bugs in a certain massively popular, expensive but nameless (to protect the guilty) 8051 compiler in the way it handled integer promotions. I don't remember the details (it was a colleague that used the tool), but it is conceivable that the compiler was running in some sort of "MISRA mode" with different semantics for types than standard C.

Reply to
David Brown

Exactly. Now tell that to the people who want cool downward-pointing charts in their PowerPoints to justify the money spent on the checker...

Actually, the Windows API isn't too bad. The combination of rules that built 16-bit Windows is borderline genius (e.g. how do you do virtual memory on an 8086 that has no hardware support for it?). The Win32 API has async I/O, wait for multiple semaphores, etc., all of which was bolted onto Linux years later.

But having a C API (i.e. no inline functions), and evolving it over the years, leads to monstrosities like the above. I would have written

inline LONG makeLong(WORD a, WORD b) { return (LONG)a | ((LONG)b << 16); }

> In MISRA C, the literal 1 is a char not an int?

Not as long as its own type system defines a subset. A coding standard cannot define a '+' operator that adds two strings. But it can surely say "although the C standard allows you to add an 8-bit unsigned and a 16-bit signed to get a 32-bit signed, I won't allow that".
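For instance (my illustration, not MISRA's own wording), a checker can flag this even though it is perfectly valid C:

#include <stdint.h>

uint8_t a = 200;
int16_t b = 30000;
int32_t r;

void f(void)
{
    /* Valid C: both operands promote to int and the addition happens
       in int, converted to int32_t on assignment. An essential-type
       checker can still forbid mixing unsigned and signed operands
       like this, even though the compiler must accept it. */
    r = a + b;
}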

Exactly.

(Just today I had another discussion because Klocwork spat out new warnings. One "remove this error handling because I think this error cannot occur", one "remove this variable initialisation because I think this is a dead store", two "this variable isn't used [but I can see it being used on the next line]". This surely justifies dropping everything else and pampering the tool.)

Stefan

Reply to
Stefan Reuther

That is not what I am talking about. There have been plenty of useful (and some imaginative) features in the Windows API, no doubt there. (The async I/O functions needed to be in Windows because it was originally pretty much single-tasking and single-threaded. Linux didn't have nearly as much need of them because you could quickly and cheaply create new processes, and because of the "select" call. Async I/O was essential to Windows, and merely "nice to have" on other systems.)

The mess of the Windows API is the 16 different calling conventions, the 23.5 different non-standard types duplicating existing functionality (while still impressively managing to mix up "long" and pointers, leading to it being the only 64-bit system with a 32-bit "long"), function calls with the same name doing significantly different things in different versions of the OS (compare the Win32s, Win9x and WinNT APIs), and beauties such as the "CreateFile" call that could handle every kind of resource except files.

C has had inline functions for two decades. The lack of inline functions in the Windows C world is purely a matter of laziness on MS's part in not providing C99 support in their tools. It is an absurd and unjustifiable gap.

However it is written, there should be no pointer types in there!

But yes, there are entirely understandable "historical reasons" for at least some of the baggage in the WinAPI. The design started out as a complete mess from an overgrown toy system rushed to market without thought other than to cause trouble for competitors. The Windows and API design has been a lot better, and a lot more professional, since the days of NT - but it takes a long time to deal with the leftovers. There has never been a possibility of clearing out the bad parts and starting again.

Yes, that's true. But what it /cannot/ do is say that if you add an 8-bit signed and a 16-bit signed, you will get a 16-bit signed (on a 32-bit system). It can't change the rules of C, or the way the types work - all it can do is add extra restrictions.
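A small illustration of why (my example): the result type below is fixed by the C promotion rules, and no coding standard can redefine it:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t  a = 100;
    int16_t b = 32700;

    /* Both operands promote to int, so a + b is an int with value
       32800 - a value an int16_t could not even hold. A coding
       standard may forbid writing this expression, but it cannot
       make the result 16 bits wide. */
    printf("%d\n", a + b);

    return 0;
}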

I have had more luck using gcc and its increasingly sophisticated static warnings. At least then I know the static error checking and the compiler agree with each other!

Reply to
David Brown

On 07.02.2018 at 09:21, David Brown wrote:

I'm quite sure I'm talking about that very same compiler when I say: no, those weren't bugs. They were the amply documented effect of that compiler being run in an explicitly _not_ standard compliant setting.

No. It was running in a "let's not even pretend this micro is actually big enough to efficiently implement C without seriously breaking a large fraction of the rules" mode.

Many, maybe all the C compilers for small 8-bitters have such a mode, and often that's their default mode because, frankly, the standard-conforming mode would be nearly useless for most serious work. OTOH, those micros, and the projects they're used in, are generally small enough that you don't really have to go all MISRA on them.

Reply to
Hans-Bernhard Bröker

That is entirely possible. As I say, it was a colleague that asked for help, wondering if his C code was wrong. The C was correct, the compiler was generating object code that did not match the C. But it could well have been as you say, and the compiler was running in a significantly non-standard mode.

Certainly there are some things that standard C requires that would be very painful to handle on these brain-dead devices. A prime example is re-entrant or recursive functions. Without a decent data stack (and, in particular, SP+x addressing modes), it is very inefficient to have local variables on a stack on things like an 8051. So most compilers for these kinds of chips will put the local variables at fixed addresses in RAM. Functions cannot then be used recursively.

However, dealing with this efficiently does not need a change to the language supported - the compiler can analyse the source code and call paths, see that all or most functions are /not/ used recursively, and generate code taking advantage of that fact. The lazy way to handle it is to say that any recursive functions need to be specially marked (with a pragma, attribute, or whatever). This will be used so rarely in such code that it is not a problem.
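As a sketch of that lazy approach (non-standard, in the style of Keil C51's "reentrant" keyword - this will not compile with a conforming compiler):

/* Only the explicitly marked function pays for a simulated stack;
   all other functions keep their locals at fixed addresses. */
unsigned fact(unsigned n) reentrant
{
    return (n <= 1) ? 1u : n * fact(n - 1);
}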

In the case I had seen here, I think (IIRC) the problem was that the compiler was doing arithmetic on 8-bit types as 8-bit arithmetic - it was not promoting them to 16-bit int. IMHO there is no justification for this non-standard behaviour. It is simple enough for the compiler to give the correct C logical behaviour while optimising to 8-bit generated code in cases where the high byte is not needed.
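A concrete case where that shortcut changes the answer (my example):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  a = 200, b = 100;
    uint16_t r = a + b;

    /* Standard C: a and b promote to int, so r == 300. A compiler
       doing raw 8-bit arithmetic instead computes (200 + 100) mod
       256 == 44. Truncating to 8 bits is only a legal optimisation
       where the high byte provably does not matter. */
    printf("%u\n", (unsigned)r);

    return 0;
}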

I have no problem with compilers requiring the use of extensions or extra features to get good code from these devices. Having a "flash" keyword, or distinguishing between short and long pointers - that's fine. Making "double" the same as "float" - less fine, but understandable. But changing the rules for integer promotions and the usual arithmetic conversions? No, that is not appropriate.

Other "helpful" ideas I have seen on compilers that I consider broken are to skip the zeroing of uninitialised file-scope data (and hide a tiny note about it deep within the manual), and to make "const" work as a kind of "flash" keyword on a Harvard architecture cpu so that "const char *" and "char *" become completely incompatible.

Reply to
David Brown

Being able to create processes does not help if you actually need threads that can communicate efficiently, e.g. between a GUI thread and a background thread. 'select' does not allow you to wait on semaphores, meaning you have to emulate this with a self-pipe. Now that we have 'eventfd' and native threads - bolted on years later - it starts to make sense somehow.
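For reference, the modern Linux replacement for the self-pipe trick looks roughly like this (my sketch):

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* An eventfd is a descriptor that select()/poll() can wait on
   alongside sockets, so a worker thread can wake the I/O thread
   without the old self-pipe emulation. */
int make_wakeup_fd(void)
{
    return eventfd(0, 0);
}

void signal_wakeup(int efd)
{
    uint64_t one = 1;
    (void)write(efd, &one, sizeof one);   /* fd becomes readable */
}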

This sounds vastly exaggerated. I haven't tried Win32s, but so far the only differences I have found between the Win9x and WinNT APIs were the occasional error return in the wrong format - INVALID_HANDLE_VALUE instead of NULL, or something like that.

POSIX APIs give you beauties such as 'int' vs 'socklen_t', I/O functions that take a 'size_t' size but return a 'ssize_t' result, error codes such as EINTR, and the whole mess of 'lseek'/'lseek64' and 'off_t'/'off64_t'. Everyone has their dark corners.
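The EINTR wart alone forces boilerplate like this around every blocking call (a standard idiom, not from the original post):

#include <errno.h>
#include <unistd.h>

/* read() takes a size_t length yet returns a ssize_t result, and can
   fail spuriously with EINTR, so portable code wraps every call: */
ssize_t read_retry(int fd, void *buf, size_t len)
{
    ssize_t n;
    do {
        n = read(fd, buf, len);
    } while (n < 0 && errno == EINTR);
    return n;
}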

Windows has existed for two-and-a-half...

In general, I agree, but unfortunately sometimes even gcc disappoints.

Last two weeks' disappointments: '1

Reply to
Stefan Reuther

Yes, indeed - gcc and its static error checking is excellent, but very far from perfect.

That can be true. But I think using any tool for QA requires a careful understanding of the tool and its options. I would not say that gcc is a good static analysis tool - I would say that gcc used appropriately with a careful choice of flags and warnings is a good static analysis tool. And I would say that about other tools too - /no/ static analysis tool I have looked at works as I would like it without careful choices of flags.
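One plausible starting point (my choice of flags, to be tuned per project - note that some of gcc's flow-based warnings only fire at -O1 or above):

gcc -std=c99 -Wall -Wextra -Wconversion -Wshadow -Wstrict-prototypes -Wdouble-promotion -O2 -c main.c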

Reply to
David Brown

That part is a direct copy of the early Apple APIs.

Reply to
Clifford Heath

Or any compiler, in fact. There were three C compilers for the 8086 at the time they bought one. The only two that were any good told Microsoft to bugger off, so MS bought the worst one available and proceeded to abuse it mercilessly, thereby holding back the industry for decades and producing mountains of unreliable rubbish that they foisted on their unwilling victims.

Clifford Heath

Reply to
Clifford Heath

MSC 5.x was fairly horrible, but 6.00ax was pretty good, I thought. I used both the DOS and OS/2 versions pretty extensively BITD.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs
