C++ syntax without C++

Hi,

What, specifically, can I use of C++'s syntax without incurring the associated "costs" that C++ brings with it? I'm ideally looking for criteria that I could *mechanically* enforce (a preprocessor to scan for "features" that are "beyond C" in cost/complexity and flag them for excision).

Of course, this raises the issue of "implementation defined" behaviour in some cases -- I'll worry about specific toolchains later.

E.g., using "//" comments should have no impact on the code generated.

I *suspect* I should be able to use namespaces and similarly not change the code perceptibly -- beyond the effect of simply prepending the namespace's identifier to all identifiers *IN* that namespace?

The exception syntax is particularly appealing. But, I suspect that implementations can vary a lot and bring other cruft into the equation once I start down that path.

What else is "free"? Safe?

I.e., disallowing the "class" keyword probably goes a long way to keeping cruft out of the binary... are there other insidious things that can drag cruft back *in*?? (e.g., startup code)

Thx,

--don

Reply to
Don Y

There are probably articles and books on this.

C++ can be fairly slim and trim, if you're careful.

  • Using classes and inheritance doesn't drag in much cruft.
  • Using virtual functions drags in a teeny bit of cruft (each new type -- type, not object -- has a virtual function table)
  • Using exceptions drags in cruft, the kitchen sink, the world, and various planets
  • Using "new" drags in malloc, free, heap management, exception handling, etc.

If you get into the habit of looking at your map file, you can watch it grow as you add things, and get an idea of what's getting sucked in.

--
Tim Wescott 
Control system and signal processing consulting 
www.wescottdesign.com
Reply to
Tim Wescott

Sorry, I'm not thinking in terms of some big library, RTTI, etc. that gets dragged in when I use "feature X". Rather, all the stuff that happens "in the whitespace" between statements, operators, etc.

E.g., anonymous objects, constructors/destructors, etc.

In C, I can look at the code and tell you with reasonable confidence how much is happening (cost) in a given statement. With C++, you have to *know* the cost of each constructor/destructor, the dependent classes that get dragged in, the overloaded operators, etc. The syntax hides a lot of "mechanism".

E.g., in Limbo, I can write

    greeting := "Hello";
    greeting += ", World!";

and have no *real* clue as to what the actual cost is (in time or space). Does the initial declaration allocate extra space above and beyond len("Hello")? If so, does it allocate enough to append the ", World!"? Or, must another allocation and copy take place? Will the GC be invoked *now* -- or *later*?

By contrast, C's syntax maps semi-"transparently" onto the underlying hardware. (perhaps the biggest issue being the presence of any "helper routines" to implement types not directly supported in the native hardware).

Returning to my question...

Use of // comments should cost nothing!

Reply to
Don Y

(If C++ is the answer, what was the question?)

What about templates?

And the combination of templates+exceptions.

Even the designers of templates didn't realise the monster they had created. They were gobsmacked when someone wrote a short C++ program in which the /compiler/ very slowly emitted the sequence of prime numbers /during compilation/. They hadn't realised that they had created a Turing complete language.

And let's not consider how long it takes for compilers to become available. IIRC around a decade ago it took 6 /years/ for the first complete compiler to appear after the standard had been published.

And then there's Nick Maclaren's infamous "objects diatribe". Search "comp.std.c++" for "maclaren diatribe" and you will find these excerpts from his 2006/02/02 posting and the subsequent discussion.

So we have a situation where two pointers refer to the same object, but accesses via those pointers are not equivalent.

We deduce that access via two union members at once is illegal only if it is obvious, which implies that the eleventh commandment applies to the C++ and C99 standards. But most people deny this is the case.

I assert that the C++ and C99 standards are not self-consistent in this area.

Reply to
Tom Gardner

I think you should start by buying a book on C++ and learning /real/ C++. Obviously you should concentrate on the sort of features you are going to use in the end. But your posts here show a serious lack of knowledge and understanding of C++, as well as very limited ideas of what compilers do and how they work.

Having said that, I will give the best answers I can.

The biggest step towards getting efficient (compact and fast) C++ code is exactly the same as for C code. Get a decent compiler, learn how to use it, and in particular learn to use its optimisations appropriately. Compilers will generate big and ugly object code if the optimiser is not enabled - this applies even more so to C++. For C++, you also have to learn how and when the compiler generates code from templates - done right, you should not see any code duplication or unused code (you might see it in assembly listings, but it should be removed or combined by the linker).

Aim for C++11. It has a number of improvements that make it easier to write correct code and avoid accidents ("explicit" conversion operators and constructors, "enum class", better control of automatically generated member functions, static assertions, etc.), as well as convenient features like "auto", "constexpr" and lambdas.
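As a minimal sketch of a few of those C++11 features (all the names here are made up for illustration):

#include <cstdint>

enum class Mode : uint8_t { Idle, Run, Fault };  // scoped, strongly-typed enum

static_assert(sizeof(Mode) == 1, "Mode must fit in one byte");

// Can be evaluated at compile time when the arguments are constants.
constexpr uint32_t baudDivisor(uint32_t clock, uint32_t baud)
{
    return clock / (16 * baud);
}

class Timer {
public:
    explicit Timer(uint32_t ticks) : ticks_(ticks) {}  // no implicit int -> Timer
private:
    uint32_t ticks_;
};

void demo(void)
{
    auto divisor = baudDivisor(48000000, 115200);  // "auto" deduces uint32_t
    Timer t(divisor);
    Mode m = Mode::Idle;
    (void)t; (void)m;
}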

Always be wary of library code. (This applies to C as well as C++.) Test things first to make sure you are not bringing in more than you need. But don't assume library code /always/ means big and inefficient.

The key "cost" in C++ is exceptions, followed by RTTI. Any decent C++ compiler - especially for embedded systems - will let you disable these. Exceptions are very hard to use /well/ - you really have to think about them throughout your program, and be aware that they can do unexpected things at unexpected times if you are not careful. They are basically undocumented "gotos". They can also lead to a lot of extra code space (holding all the rules for handling exceptions at different places) and sometimes also stack and data space, and they can severely limit optimisations because the compiler no longer has a full picture of the control flow. So my advice is to disable them in the compiler - then they cost nothing.

Almost every feature of C++ is "free" - exceptions are the only feature I know of that "costs". For everything else, the cost depends on use and misuse. Sometimes these costs are hidden, at least at the time they are used - but they are not /extra/ costs.

(It is worth emphasising again that you need to enable optimisation - it is easy to write code that makes extra copies of temporary objects, which will be eliminated by the optimiser but will give code bloat without optimisation.)

For example, when you declare a new object of a class, its constructor is called - you don't see it when you use the class, but you /do/ see it when you define the class. And the cost is pretty much the same as if you used a plain C "struct" and called an "init" function on it.
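A minimal sketch of that equivalence (hypothetical class, for illustration):

// C style: struct plus init function.
//   typedef struct { int count; int limit; } counter_t;
//   void counter_init(counter_t *c, int limit) { c->count = 0; c->limit = limit; }

// C++ style: the constructor performs exactly the same two stores.
struct Counter {
    Counter(int limit) : count(0), limit(limit) {}
    int count;
    int limit;
};

void demo(void)
{
    Counter c(10);  // the constructor runs here - two stores, nothing more
    (void)c;
}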

(As noted above, C++11 has features to help avoid accidentally doing more work than you intended.)

One point to consider here is inlining (either explicitly, or implicitly by defining functions inside the class definition). Inlining can lead to significantly smaller code as it eliminates the call overhead and gives more scope for optimisation (such as constant propagation, dead code elimination and strength reduction). But done badly, it can lead to overall code increases due to duplication.
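For instance, a member function defined inside the class definition is implicitly inline; with optimisation enabled, a call to it typically collapses to a single load and test (a hypothetical sketch):

#include <cstdint>

class Port {
public:
    bool ready(void) const { return (status & 0x02) != 0; }  // implicitly inline
private:
    volatile uint32_t status;
};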

It doesn't add /any/ cruft at all. If you haven't used "virtual", then a class (derived or not) takes the same data space as a C "struct", and the member functions are identical to C functions taking a pointer to the struct. You only get "cruft" if you are accidentally calling extra conversion operators, constructors, etc. You avoid that by carefully defining your classes, using "explicit", "= default" and "= delete".
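A short sketch of those controls (the class and its members are hypothetical):

class Buffer {
public:
    Buffer() = default;                          // keep the trivial default
    explicit Buffer(int size) : size_(size) {}   // no implicit int -> Buffer
    Buffer(const Buffer &) = delete;             // forbid accidental copies
    Buffer &operator=(const Buffer &) = delete;
private:
    int size_ = 0;
};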

Calling a virtual method is also slower than calling a non-virtual method (though if the compiler can figure out the exact type at compile-time, it can skip the virtual table lookup and call the method directly). But using virtual functions is typically more efficient than alternative mechanisms such as function pointers or selector functions. So when you actually /need/ a virtual method, it is an efficient system.
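To illustrate the comparison (hypothetical types): the usual C alternative is a function pointer carried per object, while C++ keeps one table per type:

// C style: per-object function pointer.
//   struct sensor { int (*read)(struct sensor *self); };

// C++ style: one vtable shared by every AdcSensor object.
class Sensor {
public:
    virtual int read() = 0;
    virtual ~Sensor() {}
};

class AdcSensor : public Sensor {
public:
    int read() override { return 42; }  // placeholder value
};

int poll(Sensor &s)
{
    return s.read();  // one indirect call through the shared vtable
}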

(Many C++ programmers use "virtual" all over the place, "just in case". But we are talking about efficient embedded C++ programming here, not the bad habits of desktop programmers!)

They are not necessarily /that/ bad - I'd stop at the kitchen sink.

"new" drags in heap management, just like "malloc" - but not exception handling if it is disabled. And like "malloc", you can write your own memory handling if you want. (For example, you could have a memory handler that does not support free/delete, which is fine when you only ever allocate memory at program startup and never need to free it because your program never stops. Such memory handlers are very small and simple.)

Absolutely! (This applies to C programming as well.)

Anonymous objects are usually eliminated, or at least minimised, by the optimiser. Before complaining about them, write some code where you can see they are actually causing code bloat. Then try and redo the code in plain C - it is unlikely to be much smaller, and will certainly be more complex. Then ask yourself this - is it better with simple source code that generates more object code than you expect, or is it better with longer and more complex source code that generates about the same object code? If you prefer the more explicit and longer source code, then that's fine - write that in C++.

Remember, apart from exceptions and RTTI, /nothing/ in C++ costs /anything/ unless you use that feature.

If you don't like code from constructors, don't use them. A compiler-generated default constructor does nothing at all for plain members (they are left uninitialised, just as in C), and where a constructor does initialise members, the compiler should eliminate redundant stores (if the constructor is inline). Default destructors won't do anything at all.

There is very little (if any) cost to this, compared to the equivalent C code of declaring a struct, manually initialising its members and/or calling an "init" function. What changes is /where/ you describe this to the compiler, your fellow developers, and yourself. With C++, you state the initialisation requirements in the class definition, and the compiler enforces them automatically when the class is used. With C, you state it in unenforceable documentation and comments in the headers, and must write it explicitly each time you use the struct.

That's true - as long as you never use any functions, or any features not supported directly by the target (such as floating point on a small micro). In other words, that's a common misconception.

It is certainly true that C++ has more such cases - every definition of a class object can be thought of as a "function call".

Overloaded operators are no more and no less than function calls with a different syntax.

So when you are writing code where efficiency is important, it is vital that you know about the class types you are using. But how is that different from C programming, where you need to know about the functions you are using?

And how is that different from writing the roughly equivalent C code?

char *greeting = (char *) malloc(6);
strcpy(greeting, "Hello");
greeting = (char *) realloc(greeting, 6 + 8);
strcat(greeting, ", World!");

Do you know the /actual/ cost of malloc and realloc? Either you know exactly how these functions are defined in your particular toolchain, or you guess. Do you know the /actual/ cost of strcpy() and strcat()? These can be defined in many different ways too.

As shown above, that's a mixture of myth and misunderstanding - at best, it only applies if you never call a function for which you haven't studied the source code. And exactly the same applies to C++ - the only difference is that some of the "function calls" are less obvious when they are used (though equally obvious where they are defined).

Namespaces are not handled by the preprocessor, but you are correct that they cost nothing.
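For example (hypothetical names), a namespaced function generates exactly the same code as a C function with a prefixed name; only the mangled symbol differs:

namespace uart {
    void sendChar(char c);   // same generated code as a C function
}                            // named, say, uart_sendChar

void demo(void)
{
    uart::sendChar('x');     // plain direct call - no runtime cost
}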

No, you don't.

Neither are local variables in C. The compiler does all sorts of stuff with them, such as putting them in registers, re-using slots, eliminating them entirely, changing their types or codings, allocating and deallocating stack space at different points in the function, etc.

The compiler will do the same thing with C++ class objects. Small class objects will go in registers and be optimised just like small C variables - and big ones will get manipulated in much the same way as C structs.

And where they are trivial enough, they will get eliminated by the compiler if they are not needed. If the constructor effects /are/ needed (after all, you do want to initialise your variables at /some/ point), then the constructor is just as efficient as C assignment for initialisation.

Only if the destructors have an effect - in which case you really do want to run them!
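The classic embedded example of a destructor you really do want to run is an interrupt lock (the irq functions here are hypothetical platform calls):

extern "C" void disable_irq(void);  // hypothetical platform functions
extern "C" void enable_irq(void);

class IrqLock {
public:
    IrqLock()  { disable_irq(); }
    ~IrqLock() { enable_irq(); }  // runs on every exit path from the scope
};

void updateSharedCounter(void)
{
    IrqLock lock;    // interrupts off from here...
    // ... touch shared data ...
}                    // ...and back on here, automatically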

I am surprised that you are regurgitating the decades old anti-C++ propaganda, while missing the "big" points.

C++ is harder for syntax checking. C++ compilers are generally very good at spotting a compilation error, but very bad at telling you where the error /really/ lies. But they have got a lot better in recent years.

C++ is harder to debug, especially at the assembly level. Name mangling, inlined code, etc., makes it a lot harder to follow.

C++ toolchains are often expensive (sometimes ridiculously so compared to C toolchains), and are often out of date.

C++ code is often (but not necessarily) pointer-heavy as member functions effectively take a pointer to the object as their first argument. This is no different from a C function that has a pointer parameter, but the style is more prevalent in C++ code. This is costly for small cpus that have poor pointer support (8051, AVR, etc.).

Then there are the two big "code bloat" features that you haven't mentioned - inlining, and templates.

"inlining", as noted earlier, can be a win or a loss - sometimes dramatically so. Use it carefully, thinking about how the code will /actually/ be used and what the resulting generated object code will look like. This applies equally to C99, of course.

Templates can be done badly and lead to a lot of code duplication. But they can also be done well, and lead to simpler source code and smaller generated code.

For example, you might have a class that wraps a microcontroller's UARTs to give nice functions like sendChar(), bufferReady(), etc., that map directly onto accesses of the UART's registers.

You've got something like this from the normal C headers:

typedef struct {
    volatile uint16_t baud;
    volatile uint16_t control;
    volatile uint8_t status;
    volatile uint8_t data;
} UART_T;

#define STATUS_BUFFER_READY 0x02
...
#define UART0 (*(UART_T*)(0x10000))
#define UART1 (*(UART_T*)(0x10100))

In C, you define your general functions like this:

bool bufferReady(UART_T *pUart)
{
    return (pUart->status) & STATUS_BUFFER_READY;
}

and you call it like this:

#define uartDebug UART0
#define uartPC UART1

if (bufferReady(&uartDebug)) ...

This works nicely, and you've got a flexible API that supports multiple uarts.

In C++ with a class, you have this:

class Uart {
private:
    UART_T *pUart;
public:
    Uart(UART_T *p) { pUart = p; }
    bool bufferReady(void) { return (pUart->status) & STATUS_BUFFER_READY; }
};

and you use it like this:

Uart uartDebug(&UART0);
Uart uartPC(&UART1);

if (uartDebug.bufferReady()) ...

That's also fine, and the code size and speed will be virtually identical to that of the C code.

With C++ templates, you can do this:

template <uintptr_t P>
class Uart {
public:
    static bool bufferReady(void) {
        return (((UART_T *)P)->status) & STATUS_BUFFER_READY;
    }
};

and use it as

Uart<0x10000> uartDebug;
Uart<0x10100> uartPC;

if (uartDebug.bufferReady()) ...

In use, the templated code is as clear and easy as the C++ and C code. But there is a huge difference in the generated code - with the templates, the compiler knows exactly which addresses you are talking about. There is no longer any function call, and the compiler can test the single bit directly without any pointers.

Reply to
David Brown

I think that is the problem with most "new languages". They (designers) seem to think "more is better" and end up with these languages that need *tomes* to completely describe -- and whole careers to "master" (i.e., beyond "proficient").

Part of the beauty of C was its relative simplicity. If you had a basic understanding of the iron beneath, you could fathom what the code was doing. Likewise, you could see that there were well defined parallels *up* from that hardware.

Now, languages treat the "programmer" as an idiot -- who might forget some little detail (so, they'll add gobs of syntax and runtime to prevent this?).

At times, it is educational to take small problems and implement them in a variety of languages -- just to remind yourself what sorts of cruft you have to *remember* (i.e., don't look at the "compiler/linker" output until you are *sure* the code is done... then see what you forgot! :> )

Dreg that it is, it is still hard to beat Dartmouth's

10 PRINT "Hello, World!"
20 END

decades later! :-/

Reply to
Don Y

David, your posts are always "you don't know what you're doing/saying/asking". Yet, you know absolutely NOTHING about me, my credentials, my accomplishments, goals, etc. And, from *my* experiences with you, apparently nothing about the things you so quickly mouth off about.

Go f*ck yourself.

Reply to
Don Y

And what, exactly, is the disadvantage of this? Granted, it is not often you want to generate prime numbers slowly at compile time - but you might consider it convenient to generate other values at compile time using templates. It is usually better to spend more effort at compile time to avoid doing the same calculation at run-time on the embedded system.

And of course, if you don't have any need for that kind of template, don't use them - it costs nothing.

Exactly - let's not consider it. Who cares how long things took in the past, when there are good C++ compilers available /now/ ? It is perhaps a problem that there are many compilers that don't yet support C++11 (or at least the useful features of C++11), but there are also those that do.

Reply to
David Brown

The disadvantages are the implicit consequences...

If the "expert designers" of a language didn't realise what they had created 1) what chance to mere mortal users have of understanding it 2) what's the chance that /all/ language implementers will agree the /same/ interpretation of the specification

Both of those consequences have materialised, and they are important deficiencies.

You destroy your own point when you note that it isn't just the past! It is clearly a continuing problem.

So, when you choose a C++11 compiler, which features can you rely on as being implemented correctly?

If you (try to) port a program last compiled with a different compiler, how do you determine which of the implicit assumptions made by the program's author are not valid with the different compiler?

Reply to
Tom Gardner

I presume by that you mean a subset of C++. Probably a different subset to many other people, which brings severe problems when you want to expand your team or enter maintenance mode.

C++ optimisation brings different problems: programs start behaving subtly incorrectly because the programmer didn't understand the particular interpretation of C++ semantics that had been chosen by the compiler implementer.

Don't take my word for it. Do listen to those that have far more experience of these problems at the sharp end. For mere starters, I refer you to comp.arch and comp.std.c++

And there has been at least one C++ compiler for 2/3 of that time :( (As opposed to a myriad of compilers which support different subsets of C++)

Is there a *complete* C++11 compiler yet? One that the vast majority of people regard as complete?

I think you are missing different "big points".

Reply to
Tom Gardner

I don't yet use c++ in embedded, but

1) 2011 is not that long ago
2) nobody is forcing you to use C++11 vs older standards
3) gcc in fact supports it AIUI.

This is a problem in general for most languages (all of them for all I know).

You can of course control the standard being compiled for, e.g. gcc -std=c++98 and so forth.

--

John Devereux
Reply to
John Devereux

Well, that was a surprise!

You are, of course, completely wrong about what I know about you - you post a great deal, and thus everyone who reads c.a.e. knows at least something of your experiences and knowledge. You can't tell us so much over the years, and both give advice and ask for advice, without leaving some impressions.

I don't know if you mean that /all/ my posts in c.a.e. are "you don't know what you are doing/saying/asking", or all my replies to /your/ posts, or just this post and another recent one on capabilities. But as a general rule, if I feel that someone is mixed up or unclear in what they are saying, then I think it is reasonable to ask them if they have thought things through properly. When someone is having difficulty figuring out the details and consequences of a complex solution, the first thing to do is go back a step and try to understand the guts of the problem, and then ask if there is not a simpler solution. (And I know I don't need to tell you this - I am telling you that this is what I was doing regarding the capabilities posts. And you'll note that I dropped out of that thread when it was clear that you /had/ determined that the complex solution was necessary.)

I stand by my impression that your C++ knowledge and experience is limited - why else would you be asking questions about the supposed overheads of basic C++ features if you were familiar with them already?

But on re-reading my post, I can see I was unnecessarily sarcastic, impolite or patronising at points, and for that I apologise. My intention was to help you and correct misconceptions - my post would have been better with some parts omitted or completely re-worded. It is a shame that you have got hung up on those sections.

If you want to discuss the rest of the post, snipping those bits, then that would be fine - we can see if we agree or disagree on the technical parts.

Reply to
David Brown

I thought it an excellent post, in fact it inspires me - for the first time - to try c++ myself for embedded.

Most times I see c++ mentioned here it is with disdain; it is nice to see another point of view.

He has always been spot-on AFAICT. Certainly in the areas I do know something about.

Totally uncalled for Don.

--

John Devereux
Reply to
John Devereux

What I mean is that you have to try writing, compiling, and analysing C++ code of the sort you want to use, using the sorts of tools you want to use. If you want to know exactly what sort of code is generated for different types of source code, you have to try it out - you don't guess based on ideas about the "inefficiencies of C++" that you have heard somewhere.

There are good reasons for not going overboard with enabling the most enthusiastic optimisations, as they can make debugging far more difficult. But both C and C++ are very inefficient if the compiler is not allowed to do its job - though the effect will normally be worse for C++.

You have to understand the language you are using - there is no way around that. You also have to understand your tools, and let them help you - full use of your compiler's warnings and checks (and possibly help from additional tools like pc-lint) will catch a number of errors early.

But if you think you can disable optimisations and write "looser" code with missing "volatile", aliasing issues, signed overflows, etc., just because it "works fine" without optimisations, then you will be in trouble sooner or later. Optimisations are not an on/off switch - they are just an indication to the compiler of how hard it should work. Optimisations that are at "-O2" level on one compiler might be always enabled on another toolchain.

Exactly the same thing applies to C - if there is any difference with C++ it is only a matter of degree.

Of course it is not a good idea to push the limits of the language, the tools, the optimisers, or the human programmers. Understand that code that behaves incorrectly when optimised is incorrect code - but you have to strike a balance in order to find problems in a sensible timeframe.

I have done so (not those particular newsgroups, but other sources).

I can understand that some programmers have a lot of difficulty with the less obvious parts of C and C++. And there are very few programmers that don't have trouble at some point with the most subtle cases. But if a programmer is going to have trouble regularly when enabling optimisation, then I would question the choice of C++ at all unless they have more experienced co-developers to help find their mistakes.

gcc (from around gcc 4.5 for a lot of the useful features - before C++11 was finished - with more complete support in later versions).

Of course, there is always a question of how complete is "complete" - there are not many compilers that /fully/ implement all of C99. But I think gcc C++11 was "complete" in your sense by 4.7.

Perhaps that's true, and if you know of others that have not been mentioned then bring them up. I think some of /my/ "big points" can be show-stoppers for the choice of C++ in embedded development projects, while I did not see anything serious in the OP's original worries, and I don't see /your/ worries about optimisations interacting with subtle bugs as a big point.

Reply to
David Brown

You don't have to understand all the details in order to /use/ the features. Also note that different people can work with different sorts of code - more experienced developers can make template libraries, less experienced ones (or "differently experienced" ones) can /use/ those libraries.

And if you don't want to use these features, then don't use them.

I actually find it hard to believe that the designers of C++ templates were "gobsmacked to find it was Turing complete". It is very easy to make systems that are Turing complete - it would be easy to make the C preprocessor Turing complete by adding recursion on macros. It might have surprised them to find that you can do useful compile-time calculations using templates (I'm sure there are more useful ones than computing factorials...).

Implementers go to quite a lot of effort to get this sort of thing right. They also use third-party test suites (like Plum Hall) to help (and yes, gcc gets tested using these tools too).

My point is that a lack of good C++ compilers is a thing of the past - obviously it will take time before a large number of compilers have full (or reasonably full) C++11 support. But I think - and hope - that 6 year delays should be a thing of the past too.

One reason for that is the prevalence of gcc in such a wide range of systems. It is usually PC/desktop compilers that get support for the latest standards first, and embedded toolchains come later. As gcc is used in both types of systems, and has had a long tradition of implementing new features early (a number of features of newer C and C++ standards are simply formalisations of pre-existing gcc extensions), there is already at least one toolchain providing almost-complete C++11 support on a wide range of targets. Other toolchain vendors will need to catch up or risk losing market share to "more modern" competitors.

I try to avoid implicit assumptions as much as possible. In most cases, if you use a C++11 feature in a program and compile it with a toolchain that does not support that feature, you will get compilation errors. I think you would be hard pushed to write code that was sensible, useful, compiles cleanly on C++11 and C++03 or C++98, and has different effects when compiled with different standards.

Reply to
David Brown

I think that is the main issue for me. C is a very simple language, yet it took me a long time to really know it. And there are still some corners that are mysterious even now.

C++ seems pretty much impossible to learn fully (and I have been using it for 10 years, on and off, for PC programming).

[...]
--

John Devereux
Reply to
John Devereux

C is like the Japanese game Go - in theory, it is very simple, but in practice it can be very subtle and complicated.

C++ is certainly much bigger, especially if you include the libraries. But like C, C++ is designed so that features you don't need and don't use will not cost anything (the exceptions being exceptions and RTTI).

As long as you are programming by yourself, or within a team that agrees on what they know and don't know, then you can simply pick the parts you want to use. If you don't need multiple inheritance and virtual inheritance, then don't use them - just pretend they don't exist. You can do a lot by changing your .c modules to .cpp and then just gradually adding bits of C++ that you think make your code better. Remember your aim - to write better, more correct code (or more /obviously/ correct code) that is easier to understand, debug and maintain, and is hopefully as small and fast or even better. You are not being judged in a C++ contest - if all you use is strongly-typed enums then that's fine too.

For me, there have been a few things keeping me back from C++ for anything more than experimental work in embedded systems.

One is that we used smaller micros, like the AVR - while the same code will be as efficient in C++ as in C, non-inlined class member functions are going to be inefficient due to the cpu's poor pointer support, making it hard to take real advantage of the language without high costs. With more ARMs (and some PowerPC) on our boards, that's not a problem any more.

Another is that C++ has had some issues where it is hard to get the choice of features /just/ right. For example, it is hard to make a class that allows some conversions between different types while disallowing others (the classic case being a conversion to bool to allow if (x) tests without allowing conversion to other integer types). C++11 gives you more control over that sort of thing. In general, C++11 has a number of small features that I think makes it a significantly better language.
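That particular case is solved in C++11 with an explicit conversion operator - a hypothetical sketch:

class Handle {
public:
    explicit operator bool() const { return ptr != nullptr; }
private:
    void *ptr = nullptr;
};

void demo(Handle h)
{
    if (h) { /* fine: contextual conversion to bool */ }
    // int n = h;   // error: no implicit conversion to other integer types
}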

Toolchain support is awkward for many devices, especially for debugging. C++ support is often expensive (for CodeWarrior, it makes the difference between free versions and $5000 versions - you need to get a lot of benefits from C++ to justify that). While gcc has excellent C++ support, it can sometimes be a bit of work to get particular gdb builds, debugger hardware, and target devices all working correctly.

Finally, compilers (including my favourite, gcc) have been getting better - error and warning messages are clearer, and code generation has improved. A key point is that LTO is getting solid enough that proper modularisation should be possible (i.e., inline function definitions can go in the cpp file rather than the header) - though I don't know yet whether the result will be debuggable!

Different developers have different needs and preferences, of course, but I think the language and the tools have reached the stage that C++ is a good choice for the sort of work I do.

Reply to
David Brown

I am programming by myself so that is fine for me.

Yep, I don't see me using anything smaller than a M0 ever for a new project, probably always M3 or bigger. An STM32F030 is $0.32 and I don't design Christmas cards or Happy Meal toys.

Yes I am going to have to experiment. The "gcc arm embedded" project claims to support c++ out of the box but we will have to see.


One thing I disliked about the c++ "style" is having all variables used by the class defined in the header.

--

John Devereux
Reply to
John Devereux

From a theoretical point of view, I don't think this matters. We use computers all the time despite the Halting Problem. We use arithmetic all the time despite Goedel's Theorem.

From a practical point of view, I agree. Herb Richter's column in the C/C++ Users' Journal actually turned me off using C++ by demonstrating, monthly, that you were screwed if you didn't have a Guru in the shop, and I was no guru. OTOH, aspects of C++ work beautifully well in Arduini, so I'm always tempted to get in.

Mel.

Reply to
Mel Wilson

It is worse than that. You have to understand which /implementation/ you are using and, as you yourself indicate below, even which version of each implementation.

And just how do you know exactly how far it is safe to push? The bugs caused by the "next push" of optimisation will of course be subtle, infrequent and very hard to spot.

And I can understand that some programmers don't realise just how ambiguous parts of the standard are!

The trouble is that most programmers are in that position :( Even highly experienced and intelligent ones :(

So that was a "no" then. Thanks for making my point.

And still is a "no". Thanks for making my point a second time.

Complete is a binary attribute, not a grayscale adjective! Unless you are prepared to argue that grey is white :)

And more interestingly, it appears there *never can be*. Why? Because, according to some who have sat on the standardisation committee, even the committee can't agree among themselves what some parts mean and how different parts interact! Sometimes they thought they did -- until a third party got person X to explain feature F to person Y, and vice versa. At which point X and Y realised they had different valid understandings!

??? I really don't see how you can believe that!

And you have neatly failed to address some of my points.

Reply to
Tom Gardner
