Normal numbers in, weird numbers out

Neither -Wconversion nor -Wsign-conversion is in -Wall or -Wextra, as both would produce many false positives in a lot of code. There are a good many warnings in gcc that are not in -Wall or -Wextra but can still be useful if your coding style fits them. It is worth spending the few minutes it takes to read the documentation page - those few minutes will save you hours of bug fixing.
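As a rough sketch of what those two flags add (the function names are just for illustration): both functions below compile silently with -Wall -Wextra, but "g++ -Wall -Wextra -Wconversion -Wsign-conversion" warns on each return statement.

    // silent with -Wall -Wextra; warns with -Wconversion -Wsign-conversion
    unsigned int to_unsigned(int x)
    {
        return x;    // -Wsign-conversion: may change the sign of the result
    }

    short narrow(long x)
    {
        return x;    // -Wconversion: may change the value
    }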

You probably want to look at the documentation page for the version of gcc that you are using. New warnings get added with every version of gcc.

Reply to
David Brown

First, he knows he is not using such a system, and the code is specific for the target he has and will never be ported to such a system.

Second, he knows the range involved. It will fit happily inside any type.

Third, if he were to increase the range to being beyond the limit of int on his system, then the compiler would warn about it.

If you are doing stuff that has to support massive arrays beyond the size of int, or writing code that might interact with such massive data structures, then size_t is usually the appropriate type. But you know when you are writing such code - and size_t is a poor choice for a type in code like this. (Just as "char" would be a poor choice, even though the code would work correctly.)

Reply to
David Brown

On a 64 bit machine you could have a vector of 10 billion ints. So if x[k] can be any int from the vector then k can't be 32 bits.
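A minimal sketch of the point (assuming a plain std::vector):

    #include <cstddef>
    #include <vector>

    // On a 64-bit system the vector can hold more than 2^31 elements,
    // so the index must be as wide as size_t; a 32-bit k would wrap.
    long long sum(const std::vector<int>& x)
    {
        long long total = 0;
        for (std::size_t k = 0; k < x.size(); ++k)
            total += x[k];
        return total;
    }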

Reply to
Paul Rubin

What is poor about it?

Reply to
Paul Rubin

IMHO, it's not a good habit - because "size_t" is for measuring sizes of memory areas or arrays, and thus gives the wrong impression when you use it like this. I am fairly confident that you don't use size_t consistently for /everything/ that is used as an array index (for one thing, if you have any calculated indexes, then you have been using "int" or "unsigned int"), so it loses any advantage there.

But as I said, it's not a bug - I just saw it as a strange choice of type.

If you are using C++98 or C++03, forget half of what you have learned and move to C++11. It is a /much/ better language in a number of ways, with many improvements for embedded programming.

Key amongst the changes, in my opinion, are:

"constexpr" that lets you write functions that are guaranteed to be evaluated at compile time (I'm sure you can see the use of those). This has been improved in C++14.

"auto", which infers the type of something from its initialiser. This is particularly useful when the real type is messy template stuff, such as for iterators, but it works perfectly well for local data too.

Range-based for loops are interesting, especially with things like std::array.
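Something like this (array size and element type made up for the example):

    #include <array>
    #include <cstdint>

    std::uint32_t sum_readings(const std::array<std::uint16_t, 8>& readings)
    {
        std::uint32_t sum = 0;
        for (auto r : readings)    // no index variable, no off-by-one
            sum += r;
        return sum;
    }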

Lambda functions may or may not be to your liking. I haven't used them in C++ yet, but I use them a lot in Python.
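A quick sketch of the C++ form, for anyone who hasn't met it:

    #include <algorithm>
    #include <array>

    void sort_descending(std::array<int, 5>& v)
    {
        // an unnamed comparison function, written at the point of use
        std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });
    }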

Some improvements in function syntax for specifying return types are nice, especially for complicated templates.
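That is, the trailing return type syntax, where the return type can be written in terms of the parameters - a sketch:

    template <typename A, typename B>
    auto add(A a, B b) -> decltype(a + b)
    {
        return a + b;
    }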

Various improvements to classes and special functions make classes safer and simpler, such as "override", "final", and delegated constructors. Explicit conversion operators are also very nice, especially for making bool conversions. You can explicit mark special methods as "deleted" or "default".
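A sketch pulling several of those together (the classes here are made up):

    class Driver {
    public:
        Driver() : Driver(9600) {}                 // delegating constructor
        explicit Driver(long baud) : baud_(baud) {}

        Driver(const Driver&) = delete;            // explicitly deleted
        Driver& operator=(const Driver&) = delete;
        virtual ~Driver() = default;               // explicitly defaulted

        explicit operator bool() const { return baud_ > 0; }  // safe bool conversion

        virtual void start() {}

    private:
        long baud_;
    };

    class UartDriver final : public Driver {       // no further derivation allowed
    public:
        void start() override {}                   // compiler checks the signature
    };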

A "nullptr" constant makes overrides safer since you can distinguish between integer 0 and a null pointer.

Strongly typed enums ("enum class") are safer and have better scoping - the enumerators no longer leak into the surrounding namespace or convert implicitly to integers, and you can specify the underlying type explicitly.
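A sketch:

    #include <cstdint>

    enum class Mode : std::uint8_t { Idle, Run, Fault };  // explicit underlying type

    void set_mode(Mode m);

    void demo()
    {
        set_mode(Mode::Run);   // scoped name, no clash with other enums
        // set_mode(1);        // error: no implicit conversion from int
    }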

User-defined literals can be fun, especially combined with constexpr and the new class features.
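For instance (a made-up frequency literal):

    constexpr unsigned long long operator"" _kHz(unsigned long long v)
    {
        return v * 1000u;
    }

    static_assert(8_kHz == 8000u, "evaluated at compile time");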

There is some atomic access and threading stuff in the standard library now, which might be of interest.
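For example, a sketch of a single shared counter:

    #include <atomic>

    std::atomic<int> samples_ready{0};

    void producer() { samples_ready.fetch_add(1, std::memory_order_release); }
    int  consumer() { return samples_ready.load(std::memory_order_acquire); }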

Static assertions are official, so there is no longer a need for an ugly static_assert macro.

In the library, there are tuples (which can give an easy way to get multiple return values from a function), hash tables, and regular expressions.
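E.g. returning two values at once (a sketch):

    #include <tuple>

    std::tuple<int, int> minmax(int a, int b)
    {
        return (a < b) ? std::make_tuple(a, b) : std::make_tuple(b, a);
    }

    void demo()
    {
        int lo, hi;
        std::tie(lo, hi) = minmax(3, 7);   // lo == 3, hi == 7
    }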

There are new, safer, and more efficient smart pointers.
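For example (Device is hypothetical):

    #include <memory>

    struct Device { };

    void demo()
    {
        std::unique_ptr<Device> dev(new Device);   // sole owner, freed automatically
        auto shared = std::make_shared<Device>();  // reference-counted ownership
        // (std::make_unique only arrives in C++14)
    }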

Your homework, now that your debugging is finished, is to look up C++11 (and C++14, which is a minor addition) and learn about it.

Reply to
David Brown

I had the impression from (formatting link) that it was recommended for such indexes. Obviously, if someplace else you use an int as an index, that implicitly asserts that the int is in range - which it had to have been to work in the first place. OTOH, using size_t when convenient means fewer things that could possibly go wrong. The compiler may even notice when the loop size is statically bounded and use a narrower index variable (if it's not eliminated completely).

Reply to
Paul Rubin

It is a poor choice - IMHO - because "size_t" is a type associated with large memory blocks, and it could well be bigger and less efficient than "unsigned int". In C, the type for "just a number" is "int" - it is the type of the "0" and the "3" in the for-loop.

In this particular system, "size_t" will be either "unsigned int" or "unsigned long", both of which are 32-bit (but they are still subtly different - their pointers are incompatible). The code will work fine, with the same object code as if "int" or "unsigned int" were used. But to my mind, "size_t" gives the wrong impression or implication.

But I am happy to admit that this is a matter of opinion, not an objective fact.

Reply to
David Brown

Yea. Right. I've lost track of how many times I've pulled my hair out over code where the original author "knew" that, but it _did_ end up being ported to "such a system".

--
Grant Edwards               grant.b.edwards        Yow! WHO sees a BEACH BUNNY 
                                  at               sobbing on a SHAG RUG?! 
                              gmail.com
Reply to
Grant Edwards

On 24.11.2015 at 19:08, Tim Wescott wrote:

Just for the fun of it, here's what PC-Lint has to say about that code:

--- Module: wescott.cpp (C++)

readBuf[n] = (_scale * ((_readings[n] + _count / 2) / _count) + 32768) / 65536;
    ^
wescott.cpp 13 Info 737: Loss of sign in promotion from int to unsigned int

readBuf[n] = (_scale * ((_readings[n] + _count / 2) / _count) + 32768) / 65536;
    ^
wescott.cpp 13 Info 737: Loss of sign in promotion from int to unsigned int

readBuf[n] = (_scale * ((_readings[n] + _count / 2) / _count) + 32768) / 65536;
    ^
wescott.cpp 13 Info 713: Loss of precision (assignment) (unsigned int to int)

And that's without even trying to fine-tune any options except message formatting. All I did was extend the snippet to make it compilable, and feed it to the tool.

Lint nailed that bug with its first shot, without even aiming properly. So maybe the forefathers had it right when they coined the mantra:

Lint early, lint often: lint is your friend!

Reply to
Hans-Bernhard Bröker

Y'know, now I'm torqued at gnu.

All means all. Not "most", or "practically all" -- it means _all_, dammit.

So is there a -Wallreallyeverything flag?

--
www.wescottdesign.com
Reply to
Tim Wescott

You make a compelling argument. I may take you up on it.

--
www.wescottdesign.com
Reply to
Tim Wescott

That doesn't seem so important here, unless three suddenly becomes much larger.

It might also be that size_t is 16 bits on machines with a smaller address space, while int is still 32 bits.

-- glen

Reply to
glen herrmannsfeldt

(snip on size_t)

String theory people might have it up to 10, but that will still fit within int on all C implementations.

Are there some with sizeof(size_t) < sizeof(int)?

-- glen

Reply to
glen herrmannsfeldt

Unfortunately, different people have different ideas about what should be a warning and what is perfectly good code. So "-Wall" means "warnings that almost all people will be happy with", and "-Wextra" means "some extra warnings that some people will like". And then there are even more warnings that only a small proportion of people want.

Go through the manual page once, and collect a list in your makefile. Then you can re-use it later - it only really needs to be updated for new compiler versions, or if you have to work with code that doesn't compile warning-free (a common problem with third-party code that doesn't share your ideas of what warnings to use).

Reply to
David Brown

If you are using gyros, then by definition you are in a 6 dimensional universe: x, y, z, pitch, roll, and yaw. 8-)

George

Reply to
George Neuner

PCLint can be a useful tool, but be aware of a few problems and limitations here.

PCLint warnings have the same advantages and disadvantages as gcc (or clang) warnings - if many are enabled, you risk a lot of false positives in valid code, and if few are enabled, you don't get as good checking as you would like. In this particular case, the defaults in PCLint find the problem while gcc's -Wall -Wextra do not - but in general use PCLint will need tuning just like gcc in order to get the checks you want and avoid ones you don't want.

Whenever you use an external checker separate from the compiler, you have issues of feature support. Does your checker support the language features you are interested in, perhaps from the newest standards? Does it support compiler extensions you want to use? You may find your checker is restricting the code you can write - and you may find your code needs conditional compilation to get clean compiles and clean lints.

In general, I would say your first priority is to use language features to enforce code correctness. Avoid memory leaks by using smart pointers. Avoid mixups between integers and enums by using strongly-typed enums. Avoid accidents by using "const" wherever possible. Avoid buffer overruns by passing arrays (and especially arrays encapsulated within a struct) rather than pointers, and using sizeof to get the sizes right. Make liberal use of static assertions (using a macro version pre-C11 or pre-C++11).
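One common shape of such a pre-C11/C++11 static assertion macro, for what it's worth (the names are mine):

    /* the array size is -1, and compilation fails at this line, when cond is false */
    #define STATIC_ASSERT(cond, name) \
        typedef char static_assert_##name[(cond) ? 1 : -1]

    STATIC_ASSERT(sizeof(long) >= 4, long_is_at_least_32_bits);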

The second priority is to use the compiler's warnings to the best of its abilities. Read the manual covering the warning flags. Enable everything that makes sense. Use #pragmas to temporarily disable warnings around code that you know is correct, but unavoidably triggers warnings that you otherwise want enabled.
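With gcc that looks something like this (the function is just an illustration):

    unsigned int known_non_negative(int x)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wsign-conversion"
        return x;   // checked elsewhere; the warning is silenced only for this region
    #pragma GCC diagnostic pop
    }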

External checkers like PCLint are a third line of defence. They can be useful, especially if they provide warnings that cannot be checked by your compiler (such as MISRA checking if you are using gcc). But they are a complement, not a replacement.

Reply to
David Brown

That's a meaning of "all" that I'm not familiar with. It certainly violates the "principle of least surprise".

More seriously, this kind of shenanigans is an indication that castles are being built on sand and that things are being pushed beyond a reasonable limit.

While it is reasonable to build bungalows on sand, it becomes ridiculous when two or more stories are added on top.

Reply to
Tom Gardner

Feel free to take this up with the gcc developers if you like - they have several mailing lists that anyone can join or post to, and where the developers are in direct contact with users (unlike almost every commercial development tool). If you don't like mailing lists, the news.gmane.org news server turns the mailing lists into newsgroups.

I don't know of any tool that has a switch to turn on /all/ warnings. I don't believe I have ever used a compiler where I did not want to make an explicit choice of warnings in order to get the warnings I want and avoid false positives - at least, not a compiler with a decent range of warnings and fine-grained control over them.

But an even better idea is that if you are going to use a development tool, it pays to look a little into the documentation rather than merely guessing your way through the process. gcc's documentation here is pretty good (not perfect, of course, but not bad).

I suppose the "-Wall" option could have been called "-Wmany", which might have been less of a surprise to you. But I would be very surprised if anyone really wanted /all/ the warning options enabled - so what would be the point of having a "-Wabsolutely-all" option?

Reply to
David Brown

Naming /is/ important. So is understanding your tool. So is having confidence that there's nothing lurking where you didn't think it was necessary to look.

Given an extremely complex tool (such as gcc), a prime use of naming is to give a big hint as to what can be /ignored/ in any context. Without that you are left in the invidious position of having to know everything before you can start to ignore bits.

My preference would be for -Wall to mean what it implies, rather than -WwhateverWeFeelLikeThisRelease. If individual warnings are unhelpful, then disable them explicitly either by command line arguments or annotations in the code.

Reply to
Tom Gardner

In my experience, unless you are using a very limited compiler, there will always be a variety of options that you want, and a variety of options that you don't want. In the same way that you don't want to use "enable all optimisations", you don't want "enable all warnings". Using "-Wall" is a bit like "-O1", and "-Wall -Wextra" is a bit like "-O2". In each case, you get a reasonable selection of warnings or optimisations, but not everything.

I think it might make sense to change "-Wall" and "-Wextra" to "-Wlevel1" and "-Wlevel2", for example.

(I am not against the idea of changing the name "-Wall". I am merely against the assumption that "-Wall" would really mean /all/ warnings. I also think it would be good to put -Wconversion and -Wsign-conversion into -Wextra.)

There are a lot of individual warnings that you would have to disable in that case - usually it is better to start from -Wall -Wextra and work from there. And gcc - rightly, I believe - does change the selection for -Wall and -Wextra from release to release. In most cases it is a matter of adding a new warning to gcc, and then putting it in -Wall or -Wextra if the developers think it is appropriate there. Very occasionally, a warning will be moved out if there is good reason (such as too many false positives).

But of course, you don't change the compiler release you use for a project without a significant amount of thought and testing - a look at the release notes and the compiler manual, plus updating your Makefile, should not make the process much harder.

Reply to
David Brown
