A C++-Language Question (Maybe C, too)

We tried that, several messages up. Apparently it's legal in C but not C++.
--
My liberal friends think I'm a conservative kook. 
My conservative friends think I'm a liberal kook. 
Reply to
Tim Wescott

static_assert is part of C++, not C, so it's no surprise you can't use it here!

Chris

Reply to
Christopher Head

It's part of C as of the C11 standard: "7.2.3 The macro static_assert expands to _Static_assert." _Static_assert() is defined in 6.7.10.
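For readers following along, here is a minimal sketch of both spellings under a C11 compiler (assuming e.g. gcc with -std=c11); the checked conditions are ones the standard itself guarantees:

```c
/* C11 compile-time assertions: build with e.g. `gcc -std=c11 -c`.
 * These are declarations that generate no code; a false condition
 * is a compile-time error carrying the given message. */
#include <assert.h>   /* provides the static_assert macro (C11 7.2.3) */

/* The underlying keyword needs no header at all: */
_Static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");

/* The <assert.h> macro expands to _Static_assert (C11 6.7.10): */
static_assert(sizeof(long) >= sizeof(int), "long must be able to hold an int");
```

Both forms work at file scope and inside functions, which makes them handy for checking target-machine properties right next to the code that depends on them.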

Reply to
Rich Webb

And that $998 is for a single-user, node-locked workstation version. If you want multiple users, the price is $1998 for one user, $5K for five users and $10K for 10 users.

That's right: 10 separate node-locked licenses are actually _cheaper_ than a single 10-user license.

Why they think Linux/Unix users have more money to spend than Windows users rather baffles me. [I suppose since we didn't have spend as much money on the hardware, and the software was free, one might think we'd have more money left over for a lint program.]

But, I think they believe they're still in a world where a Unix software development workstation costs $20K instead of $300. I wonder what color the sky is in that world...

--
Grant
Reply to
Grant Edwards

Another theory says that under Linux you cannot enforce a node-locked license with dirty USB dongle tricks, kernel-mode drivers, etc. as effectively as you can under Windows, so the vendor has to expect some unlicensed copying and compensate for it in the price tag. At least, I heard that from a software vendor a while ago.

How much the price tag encourages unlicensed copying is another story.

Stefan

Reply to
Stefan Reuther

PC-Lint is node-locked by the terms of the license, not via a dongle or PC-unique magic number plus vendor-supplied magic key. Unlike pretty much all (strike that, make it plain "all") of my embedded compilers.

It seems to be a common misconception, as I thought the same thing, myself, until I actually bought and installed it. I think that it does ask for a number from the CD case but it has been a while since I upgraded machines so I won't swear to even that. At any rate, much less painful than moving the compilers that *are* node-locked.

Reply to
Rich Webb

I'm not entirely sure if/how they try -- the license is only good for a particular machine, but I didn't notice any mention of license servers or the like. About 15 years ago I worked with one Unix app that required a license server. Never again. Ever.

Could be.

For me, it just encourages "doing without".

--
Grant Edwards               grant.b.edwards        Yow! PUNK ROCK!!  DISCO 
                                  at               DUCK!!  BIRTH CONTROL!! 
Reply to
Grant Edwards

Verily, thou art as an heretic, for thus spake Henry Spencer in the ancient of days:

The Ten Commandments for C Programmers

  1. Thou shalt run lint frequently and study its pronouncements with care, for verily its perception and judgement oft exceed thine.
--
I don't have a citation for when the Commandments first appeared but 
I'm guessing the 80's sometime; lint itself seems to date from the 
Reply to
Rich Webb

I'd happily run lint if I had one. I just don't have $1K to spend on one.

There are no open-source "lint" programs (that I was ever able to find) other than splint, which is useless. I spent several days working with splint, even to the point of hacking on the source code. I couldn't get it to do anything worthwhile, and was told on the splint mailing list that it wasn't _meant_ to do anything useful (in the context of finding bugs in source code).

Not that I'm aware of. Can you point to one?

I'm pretty sure that the GNU philosophy is that warning a user about C/C++ code is something that belongs in gcc. It would be nice if gcc had a "whole program" C "lint" mode where it would compare function definitions with function calls. If you're careful about using header files, and don't make mistakes, then gcc warnings are pretty good. OTOH, if you're careful and never make mistakes, you don't need warnings of any sort from any program.

--
Grant
Reply to
Grant Edwards

For C, there's an option -Wmissing-prototypes that warns if a non-static function does not have a prototype defined. This catches most cases where something is not defined in a header file.

There's also -Wmissing-declarations
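A tiny illustration of what those flags catch, using hypothetical function names; compiling with gcc -c -Wmissing-prototypes should warn about the first function but not the second:

```c
/* frob.c (hypothetical module)
 *   gcc -c -Wmissing-prototypes frob.c
 * warns: "no previous prototype for 'frob'", nudging you to declare
 * it in the module's header, or to make it static if it's internal. */
#include <assert.h>

int frob(int x)       /* non-static, but no prototype seen first */
{
    return 2 * x;
}

/* With a prototype visible before the definition (normally via
 * #include "frob.h"), the warning goes away: */
int frob2(int x);

int frob2(int x)
{
    return x + 1;
}
```

The point of the exercise: once every external function is forced through a header, the compiler checks every call against the one true declaration.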

Reply to
Arlet Ottens

To be brutally honest about it, it has the air of "eat your broccoli". If you're starting from a clean slate, it's very valuable, but if you're dealing with old crufty codebases, it can be a budget-buster. Inserting less than useful casts everywhere might be good, might be bad, and for a legacy codebase that's already widely installed, it might be a negative value.

It's just the sort of thing you have to evaluate carefully in context of an organization. It is not an unalloyed good.

Here's a general list of static code analyzers:

formatting link

I think of the "mopping up the warnings" phase of coding as a negotiation between the programmer and the compiler.

--
Les Cargill
Reply to
Les Cargill

The problem is when there _is_ a prototype, but it's wrong. This obviously only happens when you get sloppy and something's declared in two different .c files rather than in a single .h file. But, that's exactly the sort of sloppiness that "lint" is supposed to catch.

--
Grant Edwards               grant.b.edwards        Yow! Mr and Mrs PED, can I 
                                  at               borrow 26.7% of the RAYON 
Reply to
Grant Edwards

Here's the manpage:

And this probably is its source code: (It's been a while since I've toyed around with it, hence I'm not 100% sure.)

Yep. C++ especially (and, to a lesser extent, C) cannot easily be parsed partially. The parser has to know the machine the code is going to run on; at the least, you'll spend weeks building mockups for system headers if you cannot run it against the real system headers. And then there are things like the static assert mentioned in this thread: it works by testing properties of the target machine and making the code legal or illegal. Thus, having the warnings in the compiler that ultimately generates the target code sounds logical.

One (partial) solution to this problem is called C++.

Another (partial) one could be to utilize the preprocessor. Do things like this: #define myfunction(x, y, z) myfunction_(x, y, z) together with void myfunction_(int x, int y, char* p); and it will be an error to call 'myfunction' without the declaration in scope (because the call won't be renamed to the real symbol in the object file), or with the wrong number of arguments (because function-like macros are strict about arity). Disadvantage: does not work easily with function pointers.
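Here is the renaming trick written out compilably (identifiers hypothetical, and with an int third parameter instead of the char* so the sketch is self-contained):

```c
#include <assert.h>

/* Call sites spell 'myfunction'; the macro rewrites them into the
 * real symbol 'myfunction_'.  A translation unit that calls
 * myfunction() without this macro in scope therefore references an
 * undefined symbol and fails to link; a call with the wrong number
 * of arguments fails already at preprocessing time. */
#define myfunction(x, y, z) myfunction_((x), (y), (z))

int myfunction_(int x, int y, int z);   /* the real declaration */

int myfunction_(int x, int y, int z)
{
    return x + y + z;
}

int call_it(void)
{
    return myfunction(1, 2, 3);   /* expands to myfunction_((1), (2), (3)) */
}
```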

Stefan

Reply to
Stefan Reuther

To Tim's original question, here's the way I've usually dealt with this issue. Advantages include no possibility of missing/wrong #defines for error indices, easily accessible count, consistent name definition, makes it easy to segregate multiple parts of a logical data structure that have to be allocated in different kinds of memory (RAM vs flash), etc. Uses bog-standard syntax and works in C and C++ across all compilers I know of.

Hope you find this helpful, Best Regards, Dave

// ========================= errorCodes_def.h =========================
// Define error codes and their names: DEFINE_ERRORCODE(_name, _text)
DEFINE_ERRORCODE(no_status,         "no status message")
DEFINE_ERRORCODE(user_fault,        "user fault test")
DEFINE_ERRORCODE(motor_over_limit,  "motor supply over limit")
DEFINE_ERRORCODE(motor_under_limit, "motor supply under limit")

// ========================= errorCodes.h =========================
typedef enum {
#define DEFINE_ERRORCODE(_name, _text) ERRORCODE_ ## _name ,
#include "errorCodes_def.h"
#undef DEFINE_ERRORCODE
    Count_of_errorCodes
} ErrorCode_enumeration_T;

// ========================= errorCodes.c =========================
const char *ErrorCodes[] = {
#define DEFINE_ERRORCODE(_name, _text) _text ,
#include "errorCodes_def.h"
#undef DEFINE_ERRORCODE
};
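For anyone who wants to try the idiom without splitting files, here is the same mechanism collapsed into one translation unit (a self-contained sketch with a shortened code list; real projects keep the DEFINE_ERRORCODE list in its own _def.h, as Dave shows, and #include it twice):

```c
#include <assert.h>
#include <string.h>

/* The master list: one line per error code.  This stands in for
 * errorCodes_def.h. */
#define ERRORCODE_LIST                                  \
    DEFINE_ERRORCODE(no_status,  "no status message")   \
    DEFINE_ERRORCODE(user_fault, "user fault test")

/* Pass 1: expand the list into enum constants. */
typedef enum {
#define DEFINE_ERRORCODE(_name, _text) ERRORCODE_ ## _name,
    ERRORCODE_LIST
#undef DEFINE_ERRORCODE
    Count_of_errorCodes
} ErrorCode_enumeration_T;

/* Pass 2: expand the same list into the matching text table, so the
 * names and the texts can never diverge. */
static const char *ErrorCodes[] = {
#define DEFINE_ERRORCODE(_name, _text) _text,
    ERRORCODE_LIST
#undef DEFINE_ERRORCODE
};

const char *error_text(ErrorCode_enumeration_T e)
{
    return (e < Count_of_errorCodes) ? ErrorCodes[e] : "unknown";
}
```

Because both the enum and the table are generated from one list, adding a code is a one-line change and Count_of_errorCodes tracks automatically.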

Reply to
Dave Nadler

On Sunday, April 7, 2013 3:02:17 PM UTC-4, Dave Nadler wrote: Whoops — the next to last line of errorCodes.c as originally posted (a stray Count_of_errorCodes inside the array initializer) should be deleted; it doesn't belong in the string table.

Reply to
Dave Nadler

That sort of style might be acceptable to some people, but it breaks a number of rules that many people are very strict about regarding macros and include files. First, macros should never be re-defined - their definition should be consistent every time they are used across a program. Secondly, include files can be included at any time, in any order, and any number of times. As I say, different people have different rules and opinions - but I personally would never allow such code near any program I write.

Best regards,

David

Reply to
David Brown

The flags "-Wmissing-prototypes -Wredundant-decls" give you that. I always insist that functions and data are either "static" to a module, or non-static but have a matching "extern" in the module's header file (which is always included by the implementation file). These warning flags pretty much enforce that (although you could put the "extern" declarations in the C file if you wanted to be perverse).
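A sketch of that discipline with hypothetical names, collapsed into one file for brevity (the commented section stands for the module's header, which the .c file always #includes first):

```c
#include <assert.h>

/* ---- counter.h (hypothetical) ---------------------------------
 * extern int counter_value;
 * int counter_step(int delta);
 * --------------------------------------------------------------- */

/* counter.c: #include "counter.h" comes first, so that
 * -Wmissing-prototypes and -Wredundant-decls can verify that every
 * non-static definition matches exactly one extern declaration. */

int counter_value;                   /* declared extern in the header */

static int clamp_nonnegative(int v)  /* internal helper: kept static */
{
    return (v < 0) ? 0 : v;
}

int counter_step(int delta)          /* prototype lives in the header */
{
    counter_value = clamp_nonnegative(counter_value + delta);
    return counter_value;
}
```

Everything a module exports is in its header; everything else is static, so the warning flags (plus the linker) catch any drift between declaration and definition.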

Reply to
David Brown

While those are good rules for general purpose include files, I find this idiom useful enough that I allow it. I do give these sort of include files a different extension to indicate that they are "special".

What alternative do you propose that guarantees that a set of co-dependent items will have consistent values?

The differing values of the macro have VERY limited scope (due to the #undef) and are generally well documented in the include file. Except for this case, my general rule is that a macro will be defined in only one location, so it will be automatically consistent.

Also, include files can NEVER be included "at any time": in most cases a "normal" include file will cause serious breakage if included at any point other than file scope. These special include files are normally never included at file scope (although I have on rare occasions had one pass generate objects and a second pass generate external declarations in a header file, or an array of pointers to those objects, in which case the generation of objects and externs would be at file scope).

Reply to
Richard Damon

We did that for some projects, but it causes other annoyances (your editor/IDE doesn't apply C syntax handling, your grep patterns miss the file, other silly stuff), so now we just use ".h".

Right.

Anyway, we started using this idiom after the umpteenth time the constants and data diverged, or differing sub-structures of a single logical structure diverged - either of which will cause nasty bugs and be hard to catch. Most important is to keep tightly related declaration info in one place, even if the use of macros offends some. Fewer bugs ! Easier maintenance !

We have many projects that build for multiple platforms (which can be multiple hardware targets plus simulation/ regression test on a workstation). That typically requires different definitions for many macros... To minimize chaos, we try to EITHER:

- keep all the definitions in one file with clear groupings for the different platforms, or

- have multiple header files with the same name in different directory trees (ie different copies of "FreeRTOSconfig.h" for each platform) But you have to choose one of these approaches and stick to it.

Hope that's helpful ! Thanks for the thoughts, Best Regards, Dave

Reply to
Dave Nadler

GCC with all warnings enabled gives the same kinds of diagnostics that the classic lint programs did, so there's no need for "GNU Lint". These days there are much fancier static analyzers that run separately from compilers, that can be thought of as "lint on steroids" but are usually not described that way. You could view C++ as C with a bunch of improved static analysis based on a more serious type system, along with a bunch of other good and not-so-good things.

When it comes down to it, C is just an unsafe language, and its practitioners have to accept that fact and make allowances for it in their development processes and coding style. It works ok in the embedded realm where the programs are small enough that the benefits of the low level machine access still outweigh the added development hassle of dealing with the pitfalls. Relatively few big new programs are done in C these days, it seems to me.

Reply to
Paul Rubin
