C++ syntax without C++

There's no GC in standard C++. It's true that some of the storage is allocated on the stack and the rest on the heap. I've never seen an implementation of sprintf that had a large internal buffer. The one-time instantiation costs would be approximately the same with either method - you have to load the strings into C char buffers or C++ strings somehow. A C++ string usually slightly over-allocates space, but will reclaim or expand it as needed. There's an interface for controlling that more directly. But C++ heap allocators are usually pretty good, especially for small objects.

And almost everything else is at least to a degree a problem in straight C as well.

But my point was that while you're more-or-less taking a position that C++ has too much overhead, the example you offered indicates the opposite. And finding real bottlenecks requires measuring, no matter the language.

Nor, at the end of the day, do I have much use for factor-of-two changes in overhead, and you'd be hard pressed to convince me that C++ introduces anywhere near that much (carefully selected microbenchmarks aside). C++ very often allows you to produce more function, and more reliable function, in less time. If that comes with a bit of overhead, so be it - on the vast majority* of projects a few percent reduction in development costs will overwhelm the extra cost for a slightly bigger CPU.

*On the off chance that you're working on one of the tiny handfuls of projects that sells more than a few tens of thousands of units, that's subject to some revision.
Reply to
Robert Wessel

With difficulty, and the World's Most Dangerous Cast Expression I'm pretty sure you can. And, because it's the WMDCE, it'll show up in a code review like a city on fire.

I'd have to try it to be sure, but I'm pretty sure that

const int bob = 10;

*reinterpret_cast<int *>(&bob) = 42;

would compile and execute. Of course, if your tool chain puts 'bob' into flash or some protected memory space then when the code does execute, Bad Things may happen.

Some C++ expert can pipe up and tell me if I'm wrong, or you can try it out with gnu C++.

You do know that for class members C++ has 'mutable', which basically states "a field that can be changed even if the object containing it is declared to be constant", yes?

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

No, you're conflating issues in different posts. Here is the post that presented the example:

[discussion of languages used in school vs industry elided]

----8<----
> Nor, at the end of the day, do I have much use for factor-of-two
> changes in overhead [...]
----8<----

What do you do when the bigger CPU costs considerably more (DM+DL) because the resources that are needed *with* it force it out of a SoC and into a multicomponent design? Or, if it suddenly doesn't fit in the volume set aside for it? Or, operate within the power budget? etc.

I've worked on projects that sold *7* digit quantities (and one designed for 8 digit quantities)! 10K is a walk in the park for damn near anything consumerish.

If some business entity opts to commercialize the automation system (the design will be fully "open"), I suspect they won't be keen on having to redesign everything for "consumer quantities" -- because I wasn't watching costs while undertaking the design. :-/

Reply to
Don Y

The /compiler/ is no problem - gcc is an excellent C++ compiler with good code generation (on most targets) and top-of-the-range support for modern C++ features. There can be some advantages to alternative toolchains (IAR, Green Hills, Keil/ARM), depending on your needs, but these are far from cheap. And if you like some variety and a bit of a challenge, LLVM is also an option.

The bit that can be difficult is getting the debugging hardware to work along with your choice of chip and your choice of IDE. In many cases, you get your toolchain in a package with a debugger (usually gdb) and an IDE (usually Eclipse), set up with support for certain hardware debugger interfaces. If the package supports the hardware you have got, or you can get the hardware the package supports, it's easy. If you happen to have different parts, then you might have to do some work to get everything to talk together.

This is a fundamental limitation of the way the language works - to be able to allocate objects, the compiler needs to know the details at compile time, so all the members of a class need to be declared in the headers. The same applies to inlined functions - LTO technically doesn't let you use inline functions from another module, it lets you use them as normal C or C++ non-inline functions during compilation, and then inlines them at link time.

There are ways around this. You can put private class variables in a separate header file and #include it in the main class definition, which hides things a bit. The PIMPL pattern uses a "pointer to implementation" as the only private variable member of a publicly visible class, and puts the real implementation class in a module - but that means heap allocations and extra indirections for everything (causing a big drop in code efficiency). You can also do weird things like have your class definition in the header have dummy data of the right size, and use casts to cast to convert to the real data inside a module - but that is even messier and uglier than PIMPL, although it saves the heap usage.

Put more simply, a lot of people dislike C++'s limitations on modularisation, but we are stuck with it!

Reply to
David Brown

Considering that the ARG (the Ada language maintainers) is operating on a volunteer basis, that reason doesn't seem likely.

Greetings,

Jacob

--
»You have to blow things up to get anything useful.« 
                                  -- Archchancellor Ridcully
Reply to
Jacob Sparre Andersen

It is actually quite common in embedded systems for a simple function invocation to have massive unintended overheads in just the same way. For example, the first use of malloc can pull in error handling, which pulls in stdio, which pulls in just about everything. The code can go from 4k to 64k with one innocent line.

I don't see a real qualitative difference here; you have to be careful if you care about code size and speed. C++ just provides some extra ways of shooting yourself in the foot. Each "feature" of a language can have consequences, and C++ has more "features".

--

John Devereux
Reply to
John Devereux

Sorry, I should have been more specific: I meant particularly that the linker and startup code explicitly support C++. Some of the other/older cross-gcc projects did not.

I can't see the debug hardware being a problem (openocd driving the dongle of the day for me).

I suppose I am going to have to try eclipse again, each time I try it seems slick and powerful yet when I need to actually get work done I go back to emacs! :)

For a long time I had been using the Insight debugger (a graphical version of gdb). But recently I have switched to plain gdb; it seems fine for my purposes once you get used to it, and it comes with all the toolchains. There is also the gdb mode of emacs (my editor) for a more graphical interface.

But if all the c++ symbols come out incomprehensible that is a dealbreaker.

But can the private header "add" members to the main class? Wouldn't you have to #include the private .h from within the body of the class definition in the public .h?

You see that sometimes in C, especially with networking code when a frame is binary data at a lower level but interpreted as a structure at a higher level.

[...]
--

John Devereux
Reply to
John Devereux

The other significant benefit of *checked* exceptions (think Java) is that they force you to realise that a library method can fail for certain reasons that you didn't even know existed.

The downside is, of course, more text. In a modern IDE for Java that amounts to two keystrokes per function!

Unfortunately in the Java world too many people don't like to think about failure (let alone deal with it), so there's a strong movement to code everything as unchecked exceptions. Justification: all you can do is go "oh bugger" and stop/fail anyway, so why have the hassle... Framework manufacturers love to encourage that attitude :(

Reply to
Tom Gardner

In a well-designed coherent language, each feature enhances the others...

... but we are talking about c++ so of course you are right.

Reply to
Tom Gardner

True enough - but if you can use some features of C++ to give you better code along the way to that goal, then that's a good thing. If you try to take on too much C++ too quickly, or over-use it, you can quickly end up with lasagne code. I have seen such code in practice, with so many delegates, proxies, factories, handlers and so on that it was very difficult to find any code that actually did any /useful/ work!

Developers and development teams need to pick a balance that suits them and their way of working - we are all different in that way.

Yes, that's true (I probably wasn't very clear on that). I have not used C++ for any serious embedded system as yet - I've used it for small test systems, I've used it on desktops, I've done "compile and examine the assembly" experiments on a few different targets, and I have worked with customers' programs in C++ (debugging, enhancing, writing low-level parts, etc.) on several targets. But I have not yet used it for serious development work on my own embedded projects. However, I hope to do so in the near future - as I said, the language, the tools, and the target micros that we use are getting to the point where C++ will be a better choice than plain C.

Another issue that limits me, personally, from using C++ much in real projects is that I /do/ like to know all about a language and the tools before using them. The idea of learning and using the language piece by piece can work very well for some people (that's why I suggested it as an option), but I know that I prefer to understand a lot more, and experiment a lot more, before using it in real code. I want to get things like that Uart template class in place and study the exact generated assembly code from it on different targets. That means a lot of homework, because I can't justify doing much of that sort of thing on paid time (paid time work should be "good enough", not wasted on aiming for "perfect").

All this means that my experience of writing C++ is limited, and I haven't designed my own big C++ systems. If someone needs advice on that, then I am not the best person to help (though I can tell you what /not/ to do, having seen mistakes made by others) - and comp.lang.c++ may be better than c.a.e. But here we have been discussing the low-level parts and the generated code - and I have looked at that sort of thing more than a lot of people. Programmers who learned C++ first will have far more experience than me at C++ programming - but are seldom very keen on reading the generated ARM (or whatever) assembly with a view to getting near-optimal object code.

There will always be problems of some sort - the job would be boring without them!

I understand your point, however.

I don't know about any general "current position" here, but I would say it is best avoided when possible - just like with C. I like to use const when I can. Sometimes in C++ that can lead to duplicated code when you need to write the same member function in a "const" and "not const" version. I don't know any good way around that.

Reply to
David Brown

It is moving in that direction, see

formatting link

Of course it will be a heroic failure, except in a few rare circumstances that aren't encountered in the real world.

Reply to
Tom Gardner

Try interviewing some recent graduates from a non-engineering background - i.e. most computer courses.

If they've even heard of FSMs, it will be something along the lines of "aren't they something in compiler parsers?"

As for asm, forget it. Scratch that - they don't have any knowledge to forget.

And all distributed computing problems are completely solved by the use of distributed transactions hidden inside the framework du jour.

Scream.

Reply to
Tom Gardner

IIRC the irreconcilable positions were along the lines of:

- we, the library creators, need it because we have to be able to do X,Y,Z (where X,Y,Z were perfectly reasonable practical requirements)

- we, the code generators, cannot allow that, because we couldn't do necessary optimisations, since it would re-introduce the possibility of destructive aliasing

Both are very reasonable positions.

In C/C++ you are screwed, and that's one reason why so many people are subtly bitten when they turn the compiler optimisers on. With luck they notice they've been bitten, but often they don't.

Reply to
Tom Gardner

Some people get their ego-boosts and reputation-boosts via such means. Some people have their corporate performance reviews enhanced by saying "I am on committee X".

I have no knowledge of the Ada committees whatsoever.

Reply to
Tom Gardner

These are important too, of course. But I think you'd have to go back to a /very/ old version of gcc to see such problems.

There are some gcc targets with missing support for some parts of C++. For example, the avr port does not support exceptions (AFAIK), because no one has yet implemented the library functions to handle all the details.

Changing religions is always difficult...

It's good to have choice.

Insight is really just another graphical front-end for gdb - it just happens to be linked directly with it, rather than running as a separate process (like emacs gdb mode, the Eclipse debugger, ddd, gvd, etc.). Sometimes I like to use Eclipse's debugger front-end to gdb; sometimes I like to use "pure" command-line gdb. It depends on what I am doing at the time. Again, choice is good - you are not limited to one or the other.

gdb should handle the name mangling/demangling fine. Overloading can make it difficult to identify functions easily (i.e., "b foo" to put a breakpoint on foo() - which foo() are you talking about?), but you get that in C too if you have multiple static functions with the same name. In such cases, a gui for point-and-click breakpoint settings is probably easier.

No, and yes.

Reply to
David Brown

Have you never found that the users of your products understand them better than you, the creator, do? I know I certainly have. And for something as complex as a programming language, with so many users (and readers of his book), it would be very surprising if other people didn't find mistakes or better solutions.

Reply to
David Brown

Yes I expect it was avr I was thinking of.

[...]

That's all right then.

[...]

OK, thanks.

[...]
--

John Devereux
Reply to
John Devereux

If "they" take "it" and are able to use it /successfully/ to solve problems you didn't know about -- that's wonderful

If "they" take "it" and are /accidentally/ able to do things that /subtly/ fail to solve their problems -- that's awful.

Reply to
Tom Gardner

I was an Insight user for many years as well, until Insight development stopped, at which point the tightly integrated front end became a liability that bound me to an increasingly older version of gdb. I also wanted more flexibility with cross-compiled/remote target debugging.

After looking around I switched to ddd, which, while clunky, works well for my needs. You can use the same ddd front-end binary with both native gdb and cross-compiled gdb binaries, because you can specify the back-end gdb binary to run on the command line.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I may be missing something here (or maybe this was some time ago). :-)

In gcc/binutils, const variables are placed in .rodata and you can then use a linker script to determine where you want the .rodata section placed in memory.

In other words, it's a standard feature of gcc/binutils (assuming I understood correctly what you are saying). Is this actually _not_ a standard feature of some other compilers ?
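A minimal GNU ld fragment of the sort being described (memory names, origins, and sizes are illustrative; a real script needs the remaining output sections too):

```
MEMORY
{
    FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
    RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}

SECTIONS
{
    /* code and const data (.rodata) both stay in flash */
    .text   : { *(.text*) }   > FLASH
    .rodata : { *(.rodata*) } > FLASH
}
```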

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley
