C++ in embedded systems

Hi.

I have followed some discussions about C++ deployment in embedded systems, and I learned there are several criticisms of C++ itself and of its deployment in such systems. Someone mentioned he designs using OOD but implements in C and assembly, which would be my approach as well (maybe because I started as a hardware designer and tend to consider hardware limitations when designing software :-) ). The people I work with have software engineering backgrounds, though, and they tend to use every feature of C++ without concern for performance in the embedded realm, leaving eventual optimizations to the end of the design, which we know is much more difficult and neither practical nor economically wise. I don't think these folks would go back to C even using OOD techniques, so I'd like to get the best possible from C++'s advantages and avoid its drawbacks. I would appreciate it if you could comment on this and give hints on what to avoid when programming for embedded systems in C++. Suggestions of books and links will be very much appreciated as well.

Thank you in advance for your help.

Elder.

Reply to
ih8sp4m

My personal (and possibly controversial) view is that OOD is a complete waste of time in general and absolutely useless for embedded systems. There are many reasons why I hold this view, but some of them are:

  1. Objects bear no relationship to actual objects
  2. Information hiding and the other so-called OO 'attributes' are counterproductive to embedded development, where the intimate connection between the hardware and the software must never be diluted.

Ian

Reply to
Ian Bell

Hi,

I don't want to polemicate here, but saying that OOD is a complete waste of time is false.

Bad or misunderstood OO drives a project into the wall, OK. But good, well-implemented OO design saves time and makes your project very much more maintainable.

But you have to master it...

Bye.

StepH.

Reply to
StepH

That was probably me. I avoid C++, but I embrace OO and code in C/assembler.

In response to other posts in this thread - it's not OO that's the problem, it's *bad* OO that's the problem. To be specific, OO is really nothing more than good decomposition. I tend to think of it as "structured data" - I don't see it being any more/less difficult/controversial than structured programming (and I remember when my colleagues thought that was a passing fad too, and swore by their GOTOs). It's just a question of breaking data and code down into "objects" in the sense of good modularity. Exactly how doesn't matter, but bad decomposition is always bad - OO or not.

Re C++: my own (again controversial) view is that C++ is a *terrible* implementation of a good idea and tends to lead to poor OO and poor code. My criticisms of C++ include:

- use of the heap (heaps have no place in embedded systems, since malloc can fail at runtime)
- operator overloading (renders code hard to read and frequently misleads)
- poor hiding of private elements (since they're in the header)
- the difficulty in reading code without having every single header file open
- too much emphasis on runtime (late binding, name mangling etc etc)
- exception-handling (no substitute for good error handling)
- bloat in general

Regardless of the buzzwords, *good* code means that every module has a defined public interface that tells you everything you need to know about that module. If it's well done, you shouldn't need to look under the hood - just use the interface. My view (to repeat myself) is that software engineers often fail in this - and OO makes this all the more painful, but it's essentially poor design whichever way you look at it. (If you need an example of bad OO, consider MFC.)

The bit that matters:

- OO saves me time. It renders my projects clearer and more maintainable.
- C++ does not seem to save programmers time. I've seen teams struggle with comprehension before the first release, let alone later.
- My own stated objectives are clarity and hence maintainability. I don't really care too much how this is achieved, so long as it *is* achieved.

Steve


Reply to
steve at fivetrees

In embedded systems, pretty nearly any view is controversial, because the field covers such a wide variety of needs and capabilities.

I'll give you two instances in a current 32-bit project of mine where OOD is useful:

  1. Network interfaces. When I was working on the first version of this project, our hardware had one wired Ethernet interface. I chose to implement the network functions and variables as a set of "methods" and variables encapsulated in a network I/F "class". Soon after release, we started to work on units with dual Ethernet, and units with a mix of 1,2 or 4 Ethernet ports and 0, 1 or 2 wireless Ethernet ports.

There are two places where this has saved me effort:

a) Startup code. Instead of having to call disparate functions to bring up the interfaces and load configurations, I simply call the load-config and bring-up methods of each object.

b) UI code. I wrote a single piece of UI code to edit network interface configuration. It's easier for me to call the same function with different config pointers than it is to craft slightly different functions for the same net result.

  2. GUI. The GUI is encapsulated in a "GDI" object, which contains methods to identify whether the environment is compatible with the GDI's code, and various display parameters, as well as primitives for line draw, flood-fill, various simple shapes, image scaling and rotation, text rendering, and other functionality. So far, we've written two substantially different GDIs optimized for different types of framebuffers. I'm looking at a third GDI that will use OpenGL.

In both cases, I believe that the thought and planning that went into encapsulating the relevant data and functions were an enormous help in keeping platform-specific code isolated from platform-independent code. We're looking at porting down to an ARM or XScale design (from x86) in the near future, and in this world every day of development time counts. I don't believe there is any significant overhead; one extra 32-bit pointer on the stack is about it.

At its simplest, OO implementation can just be deciding to put everything in a struct and passing a pointer to that struct to every function that needs it (equivalent of C++'s "this") rather than declaring a bevy of global variables.
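A minimal sketch of that idea (the network-interface names here are invented for illustration): all state lives in one struct, and every function takes a pointer to that struct -- the hand-rolled equivalent of C++'s implicit `this`.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

/* Hypothetical network-interface "object": state in a struct,
   behavior in plain functions that take a pointer to it. */
struct netif {
    std::uint32_t ip;
    int is_up;
};

static void netif_init(struct netif *self, std::uint32_t ip) {
    std::memset(self, 0, sizeof *self);
    self->ip = ip;
}

static void netif_bring_up(struct netif *self) { self->is_up = 1; }
static int  netif_is_up(const struct netif *self) { return self->is_up; }
```

Startup code then just loops over an array of these structs, calling the same functions with different pointers.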

Reply to
Lewin A.R.W. Edwards

: - use of the heap (heaps have no place in embedded systems, since malloc : can fail at runtime)

You can do all allocation statically in C++ as you would do in C, too. Nothing is forcing you to use the heap in C++.
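As a hedged illustration (the names are invented): everything below has static or automatic storage duration, so nothing ever touches the heap.

```cpp
#include <cassert>

// A fixed-capacity byte queue: storage is a member array, so an
// instance with static storage duration is resolved at link time,
// exactly like a static buffer in C. No new, no malloc.
class ByteQueue {
    enum { CAP = 32 };
    unsigned char buf[CAP];
    unsigned head, tail, count;
public:
    ByteQueue() : head(0), tail(0), count(0) {}
    bool push(unsigned char b) {
        if (count == CAP) return false;
        buf[tail] = b; tail = (tail + 1) % CAP; ++count;
        return true;
    }
    bool pop(unsigned char &b) {
        if (count == 0) return false;
        b = buf[head]; head = (head + 1) % CAP; --count;
        return true;
    }
};

static ByteQueue uart_rx_queue;  // static storage duration: no heap involved
```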

Think of C++ as 'OO C, forgetting STL and templates' - except you have inheritance and the possibility of creating interfaces ('pure virtual' functions), and no need to pass the 'this' pointer around explicitly.

Both decrease the possibility of error and add safety by eliminating the need for a switch-case statement (the compiler does it at compile time).

Inheritance is a compile-time mechanism in C++ and doesn't cost anything at runtime; a virtual function call is one memory access more (a lookup in the vtable).
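A small sketch of both points (the class names are invented): the interface replaces a switch on a type code, and each call through it costs one indirect call via the vtable.

```cpp
#include <cassert>

// A "pure virtual" interface: the compiler emits one vtable per
// concrete class, and a call through Sensor& is an indirect call
// through that table -- no hand-written switch-case needed.
struct Sensor {
    virtual int read() = 0;
    virtual ~Sensor() {}
};

struct TempSensor : Sensor {
    int read() { return 25; }   // placeholder reading
};

struct StubSensor : Sensor {
    int read() { return -1; }
};

int poll(Sensor &s) { return s.read(); }  // dispatch resolved at runtime
```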

: - operator overloading (renders code hard to read and frequently misleads)

This is true. Operator overloading is very often used in the wrong way, but in some places it fits.

: - exception-handling (no substitute for good error handling)

Isn't exception handling error handling also?

: - bloat in general

Care to elaborate? Using class or function templates can cause this, but if you implemented the same functionality without templates and hand-wrote a version of the template function for every data type you need, you would get the same amount of code, except that the C++ compiler does it for you.
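For instance (a sketch): this template produces exactly one function per type actually used at call sites; hand-writing a max_int() and a max_double() would yield the same object code.

```cpp
#include <cassert>

// Instantiated once per distinct T used at a call site; the "bloat"
// is precisely the set of versions you would otherwise write by hand.
template <typename T>
T max_of(T a, T b) { return a < b ? b : a; }
```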

Reply to
Jyrki O Saarinen

One thing I forgot to mention: I am using the traditional C/asm combination. I don't believe C++ brings anything to the party except additional headaches. I haven't read any conclusive proof that C++ is more maintainable than C, and my personal feeling is that C++ is slower to develop in and harder to maintain.

Reply to
Lewin A.R.W. Edwards

IMO, the language itself is just way too big and complex. Almost nobody understands it well enough to use it effectively and safely. Compare the C++ language spec with the Modula-3 spec, or the Scheme spec (throw in CLOS, even), or the Smalltalk spec.

--
Grant Edwards                   grante             Yow!  Darling, my ELBOW
                                  at               is FLYING over FRANKFURT,
                               visi.com            Germany...
Reply to
Grant Edwards

I disagree. I think that in embedded work, it's important for the programmer to have a thorough understanding of the language and what the compiler is going to do. From what I've seen, that's possible with C, Modula-[23], Ada, and various other languages. It doesn't seem to be possible with C++. The _language_ itself is too complex and baroque to understand completely enough for it to be used effectively in some embedded environments.

--
Grant Edwards                   grante             Yow!  I know how to get the
                                  at               hostesses released! Give
                               visi.com            them their own television
                                                   series!
Reply to
Grant Edwards

Agreed. I've spent some time getting familiar with the code generated by C++ compilers for various constructs. Mostly, this has been with compilers designed for the x86 used in PCs, but it's not limited to that. There are also a few articles and a book or two around which discuss the implementation details.

Templates instantiated for a data type often cause functions I don't actually use to be generated __and__ linked, for example. I know that if everything is done which should be done, and the linker and compiler work together flawlessly, it's possible to correctly eliminate almost all of this. But what chance is there that an embedded tool chain (other than GNU?) will get the time put in here? Even in the case of x86 compilers, where a great deal of effort and time is spent in development, they fail to get this done well enough for many smaller embedded systems.

For these smaller systems, too, partial template specialization is almost a requirement. Who provides that for embedded work?

So, don't use templates, you say?? Okay. Then what about exception handling?

The compiler still generates defensive code even when I don't use any exception syntax in my own code, due in part to the separate-compilation requirement, if nothing else.

So, don't use exceptions, right?? But a properly functioning C++ compiler must generate correct code for source code unit A when it has absolutely no idea what kind of exception handling may be required in source code unit B, compiled separately and perhaps at a very different time. And linkers I'm aware of don't cover this issue, either.

Take this sequence of code, found as part of some function in some arbitrary source code file:

    .
    .
    foo ();
    String s;
    foo ();
    .
    .

For purposes here, let's assume this fragment occurs as part of a function sitting in a module that nowhere uses or handles exceptions in any way. Let's say, in fact, that this module *could* be compiled by a vanilla C compiler, except for the use of class objects illustrated above. That's the only aspect of C++ that's in use here. In fact, let's say that this use of a string is the *only* non-C aspect in the explicit code. Okay?

Now assume that foo() is an external procedure that the compiler has a declaration for, but whose definition it does not know at this point in time.

The C++ compiler sees the first call to foo() and can just allow a normal stack unwind to occur if foo() throws an exception. In other words, no extra code is needed at this point to support the unwind process. So this call to foo() requires nothing special, no C++ "fluff," so to speak. The compiler generates nothing it wouldn't generate if it were a C compiler.

But once String s has been created, the C++ compiler knows that it must be properly destroyed before an unwind can be allowed if an exception occurs later on. It doesn't matter whether the function in this module uses exceptions. Or even if the module itself ever does. It only matters that the exception *might* happen, even outside the module, because foo() is an external function and the C++ compiler cannot verify that foo() doesn't call some other function which *can* throw an exception.

So the second call to foo() is semantically different from the first. If the 2nd call to foo() throws an exception (which it may or may not) in foo(), the compiler must have placed code designed to handle the destruction of String s before letting the usual unwind occur. This means that the first foo() gets coded up differently than the second foo() call.

Now, the above is a case where a common, garden variety embedded programmer would imagine that the two calls to foo() would take the same time, require the same resources, and otherwise be the same.

How about this simple bit of C++ code:

    struct T {
        ~T() { /* assume some non-trivial code here */ }
    };

    extern void foreign ();
    void example ();

    void example () {
        T s;
        foreign ();
    }

This specifies a destructor for T which must be called in case of exceptions. example() calls a foreign function which may generate exceptions. Since the compiler of this code has no way to be certain that foreign() cannot throw an exception, it must assume that it might. Accordingly, the compiler needs to generate the necessary code to handle the destruction call to obliterate the object T if such an exception does occur in foreign().

In other words, exception handling quite often comes at a price, even in functions that the programmer knows cannot generate exceptions. And this is especially true because of the semantics of classes, and their associated destructors and constructors.

Unlike C's malloc, C++'s new is supposed to use exceptions to signal when it cannot perform raw memory allocation. In addition, so will . (See Stroustrup's 3rd ed., The C++ Programming Language, pages 384 and 385 for the standard exceptions in C++.) Many compilers allow this "proper behavior" to be disabled, of course. But if you stay true to C++ form, you will incur some overhead due to properly formed exception handling prologues and epilogues in the generated code, even when the exceptions actually do not take place and even when the function being compiled doesn't actually have any exception handling blocks in it.
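Many embedded shops therefore use the nothrow form of new, which restores malloc-style failure reporting; a sketch (the function name is invented):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Plain `new` signals failure with std::bad_alloc; `new (std::nothrow)`
// returns a null pointer instead -- closer to malloc's contract, and it
// keeps exception handling out of the call site.
int *make_buffer(std::size_t n) {
    return new (std::nothrow) int[n];
}
```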

Stroustrup has publicly lamented this practical reality.

So, don't use classes with constructors and destructors?

Right.

Some more thoughts,

When a C++ function returns an object, an unnamed compiler temporary is created and destroyed. Some C++ compilers can provide efficient code if an object constructor is used in the return statement, instead of a named local object, reducing the construction and destruction needs by one object. But not every compiler does this, and many C++ programmers aren't even aware of this "return value optimization." I know that they should be, and that when you use C++ features you need to know what they cost you. But it remains one of those risks which C exposes (because it doesn't support these semantics directly) and which C++ can too easily hide.
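A sketch of the pattern in question (the Packet type is invented):

```cpp
#include <cassert>
#include <string>

struct Packet {
    std::string payload;
};

// Constructing the object directly in the return statement lets the
// compiler build it straight into the caller's storage (the "return
// value optimization"), avoiding the extra temporary that a named
// local may require on older compilers.
Packet make_packet() {
    return Packet{"ping"};   // constructed in place when RVO applies
}
```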

Providing an object constructor with a single parameter type may permit the C++ compiler to find a conversion path between two types in ways completely unexpected by the programmer. This kind of "smart" behavior isn't part of C. C++ compilers can also automatically generate constructors, destructors, copy constructors, and assignment operators for you, with unintended results.
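A sketch of the surprise, and the standard fix (the names are invented; `explicit` is the language's escape hatch):

```cpp
#include <cassert>

// Without `explicit`, Timeout's single-argument constructor would let
// wait_for(5) compile, silently converting the int -- a conversion
// path the programmer never asked for. `explicit` closes it.
struct Timeout {
    int ms;
    explicit Timeout(int m) : ms(m) {}
};

int wait_for(const Timeout &t) { return t.ms; }

// wait_for(5);            // compile error with explicit: no implicit conversion
// wait_for(Timeout(5));   // intent stated explicitly -- this compiles
```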

A catch clause specifying a base type will "slice" a thrown derived object, because the thrown object is copied using the catch clause's "static type" and not the object's "dynamic type." A not uncommon source of exception misery (assuming you can afford exceptions in your embedded code in the first place).
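A small sketch of the slicing (the error types are invented): catch-by-value copies the thrown object into a base-typed parameter, cutting away the derived part, while catch-by-reference preserves the dynamic type.

```cpp
#include <cassert>
#include <string>

struct BaseErr {
    virtual ~BaseErr() {}
    virtual std::string who() const { return "base"; }
};
struct DerivedErr : BaseErr {
    std::string who() const { return "derived"; }
};

std::string catch_by_value() {
    try { throw DerivedErr(); }
    catch (BaseErr e) { return e.who(); }        // sliced: reports "base"
    return "";
}

std::string catch_by_ref() {
    try { throw DerivedErr(); }
    catch (const BaseErr &e) { return e.who(); } // reports "derived"
    return "";
}
```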

Passing an array of derived objects to a function accepting an array of base objects rarely generates a compiler warning but almost always yields incorrect behavior.
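A sketch of why (types invented): Derived* converts to Base*, so an array of Derived can be handed to code expecting Base[], but indexing then strides by sizeof(Base) and lands mid-object from the second element onward.

```cpp
#include <cassert>

struct Base    { int a; };
struct Derived : Base { int b; };

// Indexing assumes Base-sized elements -- correct only for true Base arrays.
int second_a(Base *arr) { return arr[1].a; }

// Derived d[2];
// second_a(d);   // compiles silently, but reads into d[0]'s tail, not d[1].a
```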

Since C++ doesn't invoke the destructor of a partially constructed object when an exception occurs in the object's constructor, handling exceptions in constructors usually mandates "smart pointers" in order to guarantee that fragments constructed in the constructor are properly destroyed if an exception does occur there. (See Stroustrup, pages 367 and 368.) This is a common issue in writing good classes in C++, but of course avoided in C, since C doesn't have the semantics of construction and destruction built in. Writing proper code to handle the construction of subobjects within an object means writing code that must cope with this unique semantic issue in C++; in other words, "writing around" C++ semantic behaviors.

C++ copies objects passed to object parameters. For example, in the following fragment, the call "rA(x);" may cause the C++ compiler to invoke a constructor for the parameter p, in order to then call the copy constructor to transfer object x to parameter p, then another constructor for the return object (an unnamed temporary) of function rA, which of course is copied from parameter p. Worse, if class A has its own objects which need construction, this can telescope disastrously. (A C programmer would avoid most of this garbage, hand-optimizing, since C programmers don't have such handy syntax and have to express all the details one at a time.)

    class A { /* ... */ };

    A rA (A p) { return p; }

    // .....
    {
        A x;
        rA(x);
    }

longjmp doesn't have portable behavior in C++. (Some C programmers use it as a kind of "exception" mechanism.) Some C++ compilers will actually attempt to clean up constructed objects when the longjmp is taken, but that behavior isn't portable in C++. If the compiler does clean up constructed objects, it's non-portable. If the compiler doesn't clean them up, then the objects aren't destructed when the code leaves their scope as a result of the longjmp, and the behavior is invalid. (If the use of longjmp doesn't leave a scope, then the behavior may be fine.) This isn't often used by embedded C programmers, but they should make themselves aware of these issues before using it. So don't use longjmp()?

How many people using C++ know exactly how the vtable mechanism works? In the face of multiple inheritance? With virtual base objects? How many people know how a C++ compiler supports dynamic casts, or what causes the C++ compiler to generate (or not generate) support for them? What mechanism does a C++ compiler use for exception handling? What does it cost? Where? If one stays away from a mechanism, how much of it is still present?

And all the above is just from the cuff -- there is much more.

Jon

Reply to
Jonathan Kirwan

You should check out the Rationale for Embedded C++. It gives a good overview of which standard C++ features were left out and why.


I think they drew a pretty good line about which features are and aren't useful for embedded systems, although I don't understand why they felt it was necessary to eliminate wchar_t.

Reply to
Dingo

Excellent post, Jon. Enlightening.

Steve


Reply to
steve at fivetrees

... snip ...

Kindly do not toppost.

All absolute statements are false. What is polemicate? Maybe something to do with polecats? Something Dubya said?

There may be a language barrier, and if so I apologize for any unintended offense. The phrase just struck me as amusing. All versions of polemic, (polemics, polemist) known to me are nouns, not verbs.

After which I shall waffle and emphatically state the ambiguous polemic that OO in embedded systems is _usually_ a waste of time. In PICs, certainty appears. Otherwise cavil.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

Strange statement, which of course I totally disagree with ;). I do OO in C and assembler all the time. With real projects. I have no desire to use C++. OO is a design issue, not a coding issue. (C++ is not just a coding language - it could be considered a very much higher-level language than C, and indeed that's part of my problem with it.)

However I guess a) you would imply that I'm not using OO "well" and b) it depends on exactly how you define OO. I use OO in the sense of encapsulation, data-hiding, polymorphism, inheritance etc. I don't use all the other gubbins that tends to get associated with it e.g. operator overloading, multiple inheritance, etc. I use the features that benefit clarity and maintainability, and avoid those that adversely affect them.

Here's briefly/roughly what I do:

- one class per module (usually defined as a private struct of member variables and function pointers)
- no project-global variables (only the interface member functions are public)
- polymorphism involves passing one more variable (which effectively is the index for a state machine)

Not hard.
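A rough sketch of that pattern (the driver names are invented): a struct of state plus function pointers as the public interface. Re-pointing the function pointer swaps the behavior -- polymorphism without C++.

```cpp
#include <cassert>

// One module-level "class": private state plus a function-pointer
// interface. The explicit self pointer plays the role of `this`.
struct led_driver {
    int brightness;
    void (*set)(led_driver *self, int level);
};

static void pwm_set(led_driver *self, int level) {
    self->brightness = level;            // pretend: program a PWM duty cycle
}

static void onoff_set(led_driver *self, int level) {
    self->brightness = level ? 100 : 0;  // pretend: GPIO can only do full on/off
}
```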

The need for discipline is true of all design, surely? Once mastered, why can't it be sustained?

Steve


Reply to
steve at fivetrees

C++ is often criticized because it is bloated and too complex. The K&R book, complete with tutorial, is about 200 pages. A good C++ book is well over 1,000. How can anyone actually master a language like that?

C++ started out with a good idea and then tried to become all things to all programmers. Now, does that mean that we should dump C++? Think about the logical progression of languages. In the beginning we had data items, then arrays of like-kind data items, then structures of data items of differing kinds and sizes. The next step is simply a structure that not only contains the data items, but also the functions that operate on that data. We call that a class.

The class should be used for the same reasons that you would use a structure; it clumps things together. Is it a necessity? No; you could clump things together using separate files. An example of this in C is FILE. It is an I/O structure with all of the functions, such as fopen, needed to operate on the data. Although using classes is not a necessity, neither is the use of a structure. Try writing a C program without using a structure.

Now, about your issue of the difficulty of optimizing after the fact: Tony Hoare stated, and Donald Knuth restated, "Premature optimization is the root of all evil." The original quote was, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." Why optimize until you know where the bottleneck is?

I came from the era when all embedded code was written in assembly; processors just weren't fast enough. Now the processors are faster and the compilers are better. I now write in C, C++ and assembly.

The most important thing to remember is that we write programs to make money. A programmer who doesn't use the tools of the time will take longer to get to market and lose revenue. Just write good code; know when to use C, C++, or assembly, and know which tools to use when. You don't want to become one of today's programmers who need a PC with the speed and horsepower of yesterday's Cray supercomputer just to run a simple word processor.

Marty Pautz

Reply to
m pautz

In the sense that it is saying something controversial, it causes a stink... So yes, it has to do with polecats.

--
#include 
 _
Kevin D Quitt  USA 91387-4454         96.37% of all statistics are made up
  Per the FCA, this address may not be added to any commercial mail list
Reply to
Kevin D. Quitt

Of course I seriously mis-spoke in the above. The problem is not OO, but the implementations, such as C++ or Java. The OO metaphor itself can be gainfully applied to any language, including assembly and PICs.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

... snip ...

Replacing a few words in your statement, we have: "doing hi-level programming in a non-hi-level language demands ..." This applies to using assembly, and equally to using C.

The better programmers in both languages impose exactly that sort of discipline on themselves, and get very irritated at languages that do not allow them to ignore those disciplines when needed. Unfortunately many such programmers have a poor perception of 'when needed' and thus leave many incipient bugs and insecurities.

The C programmer's attitude to Pascal, and the inverse, are examples. The languages are highly similar, with very similar end capabilities. Escape hatches in Pascal need to be carefully thought out to preserve reliability. External routines, file drivers, etc. come to mind. A system which allows controlled intermixing is Ada.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

Modula-3 is another good example.

--
Grant Edwards                   grante             Yow!  My uncle Murray
                                  at               conquered Egypt in 53
                               visi.com            B.C. And I can prove
                                                   it too!!
Reply to
Grant Edwards

Yes it is controversial. Whether you accept it or not, the OO approach is the current vogue. Graduating Software Engineers are conversant with OOA & OOD. A common language, UML, has been developed so that analysis & design can be better expressed. There are a plethora of OO tools available now.

Also, why is software developed for embedded systems any different in quality requirements from that of other fields? It isn't. The same process that is adopted in the OO methodology is applicable to embedded development. The name of the game is to produce "quality" software. OO in itself doesn't guarantee this, but it's the associated process & approach (using modelling) that empowers the Software Engineer with the ability to achieve this goal.

Don't quite understand this comment.

I disagree. Sure, for embedded systems there is an association between the hardware & software, but as in any software system, the hardware can be mapped to drivers. I can imagine that one could write software for an embedded system where hardware access is spaghetti-ed throughout the code. If you do, then you are not writing your software in a fashion that would make it easily testable. Sure, one has In-Circuit Emulators & the like, but I regard them as belated options in testing.

Where I work, we write the software such that it can be readily ported to other platforms which may employ different micros and/or different hardware and/or different operating systems. To remotely achieve this, one has to delineate the hardware from the software application. Also I'm not talking about monster applications but embedded systems based on micros like Hitachi's SH-1, Tiny H8 & Motorola's 6805.

In a current project with the Tiny H8, we are achieving about 98% structural (branches & conditionals) unit test coverage of the software application, with a test harness that runs in a console window on a PC, and about 95% structural unit test coverage of the drivers that run on the target system. That's the current status, but our goal is to achieve 100% coverage -- which we expect to do. The integrated system does fit & run on a Tiny H8, but unfortunately there isn't enough ROM space to accommodate a full test harness on the target system & so it has to be broken up.

Ken.

+====================================+ I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee
