reasons for preferring C over C++

Almost all of my embedded programming is in C (with older stuff in assembly) - I've only done a small amount of embedded C++ for specific customers. But I think now is a good time to re-evaluate that and look towards C++ for more embedded applications. There are three reasons for that: C++11 is a significantly better language than C++98, C++ compilers are significantly better than they used to be, and modern small micros (e.g., Cortex M3/M4) are more powerful and deal better with pointers than old small micros (such as the AVR).

Yes, C++ is evolving - and I think the recent step to C++11 has brought it forward significantly. For comparison, how many people noticed the new features of C11?

Reply to
David Brown

Am 16.10.2014 um 18:19 schrieb Stefan Reuther:

... and consider yourself a lucky bastard if the problem was as brutally evident as a SIGSEGV. Now try debugging a tiny, yet unacceptable numerical inaccuracy in such an environment. Or a random, about-once-every-three-hours, timing infraction that's been known never to happen when the debugger is attached to the program.

Among computing tools, the programming language C has been said to be the analog of a surgical scalpel: it's deceptively small, while ultimately very powerful --- and it makes a world of difference whether the person wielding it knows how to handle it or not. One user may save a patient's life with just a few deft cuts, while the other will kill someone, most likely himself, even faster, with a single clumsy one.

If I try to extend that analogy in the direction of C++, my imagination tries to bring up some nightmarish contraption like a handheld, large-diameter buzz saw with scalpel blades for teeth, a 20 kW engine and nitro-glycerine for fuel: I can accept that it might be a very useful tool to someone who could manage to operate it safely, but I can't make myself believe that anyone ever could. Not even with all kinds of safeguard mechanisms added to the design.

Reply to
Hans-Bernhard Bröker

It was Chuck Moore who said about his FORTH language that it is an amplifier for the abilities of the programmer, the good as well as the bad. C++ is just the bigger hammer. Given to someone who knows what to do, it's the stronger tool; given to the wrong people, it creates the bigger disaster. C++ has more features to misuse, and people who barely understand the basic concepts immediately jump on using the newest feature. For good reason, MISRA or DO-178 do not allow many of the fancy features.

I am using C, C++ and assembler for safety-critical code. Used wisely, the features of C++ help to write clear, correct programs, and the overhead is negligible.

The main reason not to use C++ is the unavailability of a decent compiler for smaller micros. And not everything is an ARM with megabytes of RAM and ROM; sometimes there is just 1k of RAM.

--
Reinhardt
Reply to
Reinhardt Behm

Last time I had that was in C, and was fixed by -ffloat-store :-)

C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off. -- Bjarne Stroustrup

Stefan

Reply to
Stefan Reuther

me too.

A few weeks ago I was put forward for a job requiring:

extensive low level embedded experience in writing code in resource-limited environments, specifically ARM Cortex chipsets (tick, tick, tick)

The project turned out to be a "bionic" watch - no, not one that turns into a human being, one that monitors your biometrics.

I thought "great, that sounds really like me", and then they revealed "we are a hardcore C++ shop (because some of their other products benefit from that) and you're going to have to do an online test".

My C++ experience is limited to the embedded environment and I'm up to speed on creating OO class structures, but not much else. I don't do standard library stuff (just like I don't carry the C library manual around in my head), streams or containers. These things don't seem to have much place in low level embedded code, but if you believe differently, feel free to explain why.

And guess what, the test was 80% on these types of things and, predictably, I failed.

Looks like they are going to get the employee (well, freelancer) that they deserve.

tim

Reply to
tim.....

If you mean STL containers, yes they are useful in all but the smallest environments and (to a limited extent) even in those. They will do stuff like automatic resizing and range checking, and they allow you to use containers of multiple types without having to paste code all over the place. OO on the other hand has become somewhat unfashionable in C++.
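For example (just an untested sketch, with made-up names): the same container template handles any element type, grows on demand, and can range-check accesses:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // sum an arbitrary number of 16-bit samples; at() throws std::out_of_range
  // instead of silently reading past the end
  int sum_samples(const std::vector<std::int16_t> &samples)
  {
      int total = 0;
      for (std::size_t i = 0; i < samples.size(); ++i)
          total += samples.at(i);
      return total;
  }

  std::vector<float> readings;   // a second element type, no code pasted by hand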

I think it's sufficient to just get a recent C++ book as STL is not really hard to use. cppreference.com is also pretty good. And if it was an online test you took, unless it said otherwise I'd think it was ok to use reference materials while taking the test.

Reply to
Paul Rubin

My knowledge is very out of date and probably plain wrong, but although /you/ don't have to make duplications in the source code, doesn't the /compiler/ have to expand them in the object code?

Probably because they've seen "objects done right" in other languages! :)

Reply to
Tom Gardner

C++ templates do work like that, so you indeed get bloat in the object code (just as if you'd manually duplicated code like you'd have to in C), but the source code becomes more uniform and maintainable. The bloat isn't a law of nature but rather reflects C++'s design goal of zero-overhead abstraction.
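A tiny sketch of what I mean (hypothetical names):

  // one template in the source...
  template <typename T>
  T clamp_to(T value, T lo, T hi)
  {
      return value < lo ? lo : (value > hi ? hi : value);
  }

  // ...but each distinct instantiation gets its own copy of the code in the
  // object file, much as if you had written clamp_int() and clamp_float() in C
  int   scale_adc(int x)    { return clamp_to(x, 0, 4095); }     // clamp_to<int>
  float scale_gain(float y) { return clamp_to(y, 0.0f, 1.0f); }  // clamp_to<float>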

Other languages like Haskell can avoid the bloat by supporting polymorphic functions implemented by passing type info at runtime, taking a slight penalty in speed. I'm not sure how Ada handles this. Ada has generics but I don't know how they work. There are also some generics in C11 or C++14 that looked nice, though I don't remember any details by now.

No I mean at least among some PL geeks, OO has become unfashionable in general, not just in C++. They see it as a 1990's thing that didn't fulfill its promises. Smalltalk has faded to obscurity and Java is a post-Cobol Cobol, etc.

Reply to
Paul Rubin

tim..... wrote: (snip)

I believe that when doing a programming test you should either have access to library documentation, or the problems should not need it. Others might disagree, though.

There are way too many stories about interviewing in general having unreasonable expectations. Not that the people aren't good enough, but that the problems don't test the right thing.

-- glen

Reply to
glen herrmannsfeldt

The rules for Ada generics are specified so that generic units can either share code between instances, or can use the "macro" approach and generate specific code for each instance. Some compilers use shared code, others use the macro approach. I believe there have been some compilers that let the programmer choose which method to use, but I don't know if that can be done in any current compiler.

The GNAT compiler uses the macro approach (specific code for each instance). This tends to produce faster code, at the cost of more code, of course.

It is sometimes possible to divide generic Ada units into two parts: a non-generic part that has most of the complex code, and a small generic wrapper that contains the code that is duplicated (and separately optimised) for each instance. One way to do that is to use tagged-type polymorphism, similar to the Haskell way.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

define "useful"

the syntax is awful - completely unintelligible to someone who has never seen it before (and sometimes even if you have)

and for what:

to save me writing a discrete function to perform the task

Do you mean between types? In a small embedded system, who cares? I'm going to be using bools, 8 bit bytes and 16 or 32 bit words. Occasionally I might need floating point arithmetic for precision, but I'm (almost) never going to need to "output" those; I don't have the means to do so.

big deal

but I won't have enough types for this to be a problem

No, it's not hard to use, but it is like 1000 discrete facts long. And the tests always ask you about the most obscure ones.

I have sufficient knowledge in my head to use the C library without looking at the book in my day job. But whenever I've done C tests, they always pick on a function that you have used once (or never).

at 2 minutes per question you don't have enough time for more than a cursory glance

tim

Reply to
tim.....

Oh I know, but I just thought that in this case the tested skill set was so unnecessarily far away from the original job spec that it deserved an airing.

AIH the job that I am currently on hit a few roadblocks, and I didn't become available when I thought I would, so I never would have been able to take the job. And, despite the interesting application and otherwise excellent match to my skill-set, I was lukewarm in the first place, as I wasn't sure that I wanted the 1.5 hour commute on the train into the city each day (driving to that location would take even longer!), so I wasn't overly disappointed to have failed!

tim

Reply to
tim.....

The problem is that they do that automatic resizing out of your control. If you've really got plenty of memory and can heavily over-allocate, this will not be a problem. But in a high-availability system I don't want to run into a wall after 48 hours of uptime due to memory fragmentation. Being able to statically allocate my memory is a good thing in such an environment.
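(If you do have to use std::vector in that kind of system, one workaround - assuming you can bound the size up front, which is an assumption on my part about the application - is to allocate its full capacity once at startup:

  #include <vector>

  static std::vector<int> event_log;

  void init_event_log()
  {
      event_log.reserve(256);   // one allocation; no reallocation, hence no
                                // fragmentation, as long as size() stays <= 256
  }

but static allocation avoids the question entirely.)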

To me, that's the main difference between desktop and embedded software: if desktop software crashes, you always have an operator who can restart it or maybe add another memory module.

OO is still a very convenient way to group things together.

Just because you can do everything with abstract interfaces and factories doesn't mean you have to. Java doesn't have much extra cost for doing that (because calling a class method is expensive anyway), so people do it all the time. In C++, a class wrapping two ints can be as efficient as having just these two ints, without a pile of virtual methods in the way.
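Something like this (minimal sketch) is all it takes:

  // no virtual functions, so the class is just two ints in memory and the
  // member function compiles down to the same code as hand-written C
  struct Point {
      int x;
      int y;
      long sum() const { return static_cast<long>(x) + y; }
  };

  static_assert(sizeof(Point) == 2 * sizeof(int), "no hidden overhead");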

Stefan

Reply to
Stefan Reuther

I'm not sure if that's true now or not.

gnu C has a way of defining a function as "link this only if you need it" (with __attribute__((__weak__)), IIRC). It basically makes a function act like a function in a library, even though it's been compiled into a .o file. If they use that method with templatized methods, then only one will get included in the link, even if multiple copies are scattered around in the object files.
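For illustration (this is just the attribute itself, not the compiler's internal template machinery):

  // a weak definition: the linker keeps at most one copy of the symbol, never
  // complains about duplicates, and a non-weak definition elsewhere overrides it
  __attribute__((__weak__)) void board_init(void)
  {
      /* default, overridable initialisation */
  }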

I don't KNOW that they use that, but I'd be surprised if they didn't.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

I guess that can help a bit if only some of a template's methods are used at any given type. But, basically, there are two conceptual ways to implement something like template methods: as polymorphic functions with an amount of runtime dispatch on types (guaranteed safe, but takes a slowdown from the dispatch), or as "macros" (no slowdown, but code bloat, like from function inlining). C++ uses the "macro" approach, while with other languages (like Haskell and maybe Ada) it's up to the compiler and can sometimes be controlled by the user. So even with ((__weak__)) the bloat happens in the cases where the method is actually used.

Reply to
Paul Rubin

It seemed pretty natural to me. std::vector<float> is a vector of floats, std::vector<int> is a vector of ints, and so on.

Well yeah, you get tested, standardized implementations of a bunch of datatypes that occur over and over in real world programs. Who wants to write a new function, debug it, maintain it, make others maintain it, figure out and maintain the implementations of such functions written by others, stop groaning after finding there are dozens of implementations of essentially the same function in any large codebase, etc. Imagine the C library didn't standardize printf, so everyone wrote their own function for printing numbers in decimal. Yeah you could, but it gets in your way at more levels than you might expect.

No, I mean the STL vector class will let you grow the vector, like: std::vector v; // v is now an empty vector v.push_back(3); // v now contains the element 3 v.push_back(4); // v now has two elements (3 and 4)

In very small systems (less than 4k of RAM, say), yes, you probably do want static allocation. These days embedded programming encompasses those very small systems (8-bitters, Cortex M0) at one end, where C is workable. Then there's a midrange (ARM Cortex M3, say) where C++ and the STL bring benefits. And finally there are systems with embedded Linux or equivalent, which are not much different from desktop programming and where C++ itself is too low level for lots of apps (I've programmed these systems in Python and have been interested in using Erlang on them). Tiny systems are a niche even in embedded software now.

Given the number of bugs in real code caused by subscript errors, range checking is a good thing too.

Look up "typeful programming" and start using more types. It will decrease your debugging time :).

I've found cppreference.com to have enough info to use STL effectively.

In that case they're trying to check your experience level.

Reply to
Paul Rubin

Yes, sure, this is fine. Of course there's an STL container class (std::array) for statically allocated arrays too.
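e.g. (untested sketch):

  #include <array>
  #include <cstddef>

  // fixed size known at compile time, storage is just a plain array - no heap
  std::array<int, 8> filter_taps = { 1, 2, 4, 8, 8, 4, 2, 1 };

  int tap(std::size_t i) { return filter_taps.at(i); }   // still range-checked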

Yes of course, if you're in the niche of programming critical systems that have to run nonstop with no backup, then that has to affect your coding technique. More typical embedded products have reliability requirements comparable to conventional desktop or server software, so you can use much less expensive development processes and incorporate something like a watchdog timer onto the board, to do an automatic hard reset in the event that something jams. Of course you can only do that if the worst consequence of an unexpected reset is annoyance rather than disaster. But that encompasses an awful lot of products.

OO in this context means designing programs around inheritance trees rather than values and composition. Of course classes are essential in C++, but the current preference is to handle polymorphism with template generics rather than subclasses and inheritance.
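A rough sketch of the difference in style (made-up names):

  // compile-time polymorphism: any type with a put(char) member works,
  // resolved at compile time - no virtual base class, no vtable
  template <typename Uart>
  void send_line(Uart &uart, const char *s)
  {
      while (*s)
          uart.put(*s++);
      uart.put('\n');
  }

  struct ConsoleUart {
      void put(char c) { (void)c; /* write c to the UART data register here */ }
  };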

Reply to
Paul Rubin

The usual behaviour for g++ is to instantiate the template in each object file, and leave it to the linker to remove duplicates.

If the linker doesn't support that, there are switches to either track template instantiations so that they can be compiled and linked separately (-frepo), or to suppress automatic instantiation and require the code to explicitly instantiate any templates which it uses (-fno-implicit-templates).
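For example (sketch): with -fno-implicit-templates you would add an explicit instantiation in exactly one translation unit for each combination you actually use:

  template <typename T>
  T square(T v) { return v * v; }

  template int   square<int>(int);      // explicit instantiation definitions
  template float square<float>(float);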


However, that isn't what Paul was talking about. While exact duplicates can be removed, if you instantiate a template several times with different parameters, each distinct set of parameters will get its own customised version of the template code.

This allows for maximum optimisation, as each individual instantiation has all of its parameters fixed at compile time. But it does result in code bloat relative to languages which generate one set of code which handles polymorphism by accepting parameters at run time.

The designer of the template may be able to minimise bloat by identifying large sections of code which are only weakly dependent upon the parameters. E.g. there could be portions of code which only care about the type's size, and generating different versions for different sizes would have negligible benefit (particularly on a system with a fast multiply). So the template code could delegate such tasks to a non-template helper function which takes the size as a (run-time) parameter.
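As a rough sketch (hypothetical names): for trivially-copyable element types, the bulky work can live in one ordinary function that takes the element size as a run-time parameter, with the template reduced to a thin per-type shim:

  #include <cstddef>
  #include <cstring>

  // one shared copy, compiled once
  void fill_bytes(void *base, std::size_t count, std::size_t elem_size,
                  const void *pattern)
  {
      unsigned char *p = static_cast<unsigned char *>(base);
      for (std::size_t i = 0; i < count; ++i)
          std::memcpy(p + i * elem_size, pattern, elem_size);
  }

  // only this shim is duplicated per element type
  template <typename T>
  void fill(T *base, std::size_t count, const T &value)
  {
      fill_bytes(base, count, sizeof(T), &value);
  }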

Unfortunately, this isn't particularly straightforward for container templates, as core operations such as copy, move, assign, swap, etc may have to use the relevant type-specific methods or overloads, and using indirect calls for such primitive operations would hurt performance badly.

So if you want to go down that route, you have to manually "dissect" the parameters using type traits such as is_trivial or is_trivially_copyable etc in order to delegate to less-specialised templates where possible while retaining full specialisation where necessary (or where the alternatives are impractical).
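Continuing the sketch above, the dissection might look roughly like this (C++11 traits; a real container has many more cases to handle):

  #include <type_traits>

  // trivially copyable types take the shared byte-wise path...
  template <typename T>
  typename std::enable_if<std::is_trivially_copyable<T>::value>::type
  fill_dispatch(T *base, std::size_t count, const T &value)
  {
      fill_bytes(base, count, sizeof(T), &value);
  }

  // ...everything else keeps a fully specialised loop using T's own assignment
  template <typename T>
  typename std::enable_if<!std::is_trivially_copyable<T>::value>::type
  fill_dispatch(T *base, std::size_t count, const T &value)
  {
      for (std::size_t i = 0; i < count; ++i)
          base[i] = value;
  }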

Reply to
Nobody

I'm out of date. Does C now have a standard "string" that everybody uses?

Reply to
Tom Gardner

Including integration time, plus acceptance/handover time!

Reply to
Tom Gardner
