C++ in embedded systems

... snip ...

Possibly because I am MUCH less facile in C++ than in C. It seems easy to generate ungodly amounts of convoluted code with very few lines in C++.

The hashlib tests exposed the failings of free in several systems, which used O(n) time where n is the number of memory blocks in play. Some of the tests exercise this by eliminating the final hshkill call. This induced me to write a malloc package with O(1) free performance, which is DJGPP specific, but will probably port to many Linux systems under GCC. Also available on my site. How, if at all, this will bite on C++ systems I do not know. I suspect it will. The problems will appear when something upward of 50k items have been stored, and will be obvious sooner on slow machines. The end effect is that hshkill is O(N*N) where N is the count of items stored, and becomes O(N) with my nmalloc package.
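For readers unfamiliar with why free can cost O(n): a classic K&R-style allocator keeps its free list sorted by address so neighbouring blocks can be coalesced, and freeing means walking that list to find the insertion point. A minimal sketch of that pattern (the names and structure are illustrative, not taken from hashlib or nmalloc):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical free-list node: a classic K&R-style allocator keeps free
// blocks in a list sorted by address so adjacent blocks can be coalesced.
struct Block {
    Block*      next;  // next free block, in ascending address order
    std::size_t size;  // payload size in bytes (unused in this sketch)
};

// O(n) free: walk the sorted list to find the insertion point
// (coalescing with neighbours is elided).  That linear walk is what
// makes freeing N blocks O(N*N) overall.
Block* slow_free(Block* freelist, Block* blk) {
    if (!freelist || blk < freelist) {
        blk->next = freelist;
        return blk;
    }
    Block* p = freelist;
    while (p->next && p->next < blk)   // the O(n) search
        p = p->next;
    blk->next = p->next;
    p->next = blk;
    return freelist;
}
```

An O(1) free instead tags every block with a header and footer (boundary tags), so both neighbours are reachable in constant time and no search is needed.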

I would be interested to know if runtests (on Linux/Unix) or runtests.bat (on Windoze/DJGPP) function on your system.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

I seem to recall seeing it in some exception-handling code. I might be wrong. Anyway, the fact is that exception handling might need to construct objects off the stack for proper behavior. Some compilers might use a part of the data segment for that, others might use the heap. Here's an example code:
=============================
#include <stdio.h>
#include <stdlib.h>

void *operator new(size_t aSize) { printf("new\n"); return malloc(aSize); }

class C { public: C() { printf("C::C\t\t0x%08x\n",this); } C(const C &cc) { printf("C::C(C)\t\t0x%08x\n",this); } };

Reply to
Andras Tantos

You seem to be the exception to the rule. As many others have said, C++ is a large language, and thus hard to learn all of it. You don't seem to have this problem. Clearly, different people have different capacities to effectively use C++. But after reading 80 or so responses, I've concluded that there are quite a few misconceptions about C and C++. This is not surprising, considering this is comp.arch.embedded; I assume most respondents have a hardware background and moved into software. You, on the other hand, probably have a software background and moved into hardware. This is certainly the case for me.

Because of my background, I would like to clear up some of the misconceptions. The biggest misconception I've seen is the concept of object oriented programming. I get the feeling that not many people realize that C++ is a multi-paradigm programming language. There are many programming paradigms.

formatting link
contains a list of them. One important one not covered in this site is the generic programming paradigm.

C++ supports three different programming paradigms: procedural, object oriented and generic. Procedural programming is supported through backward compatibility with C. Object oriented programming is supported through classes and all of the features associated with them. Generic programming is supported through templates. With three supported paradigms, is it any wonder that the language is large? But this doesn't necessarily mean the language is hard to learn, which is a subjective evaluation in any case. I.e., learning music is hard for me, but that doesn't mean sheet music should be thrown out.
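A compact illustration of the three paradigms side by side; this is a hypothetical sketch, not from any particular codebase:

```cpp
#include <cassert>

// Procedural: a free function, exactly as it would look in C.
int sum(const int* a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

// Object oriented: state and behaviour bound together behind an interface.
class Counter {
public:
    Counter() : count_(0) {}
    void tick() { ++count_; }
    int  value() const { return count_; }
private:
    int count_;
};

// Generic: one template works for any type that has an operator<.
template <class T>
const T& smaller(const T& a, const T& b) { return b < a ? b : a; }
```

Nothing forces a program to use more than one of these styles; the paradigms coexist in one translation unit.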

Once you understand that C++ supports multiple programming paradigms, the other misconceptions become obvious.

  1. C++ is hard to learn. This is only the case if you are trying to use all three paradigms at once. There's no requirement for this. It also explains why the C++ standard is multiple times larger than other languages. Name me another language that supports multiple paradigms and then we can compare.

I should also note that much of the C++ language standard text covers the standard library, which is an order of magnitude larger than most other languages'. The C++ language proper is mostly equivalent in size to other languages if you only compare the paradigms individually.

  2. C can simulate features of C++ (encapsulation, inheritance, etc). Any language can simulate most other languages. But this doesn't mean you are shifting paradigms. People assume that just because they use encapsulation (abstract data types), they are doing object oriented programming. This is a major misconception. Object oriented programming is not just about encapsulation and inheritance. It is a methodology, a paradigm shift. Object oriented programming is a way of looking at the problem, and a way of looking at the solution to this problem. This is explained in the URL link above.

For those who have ported your C software to C++ and declare you're doing object oriented programming, ask yourself this question: did I perform a paradigm shift? If the answer is no, then you aren't doing object oriented programming.

  3. Rudimentary inheritance can be achieved with nested structures in C. This is a mistake people often make with object oriented programming. First of all, nested structures in C are equivalent to C++ data composition. This is not inheritance. Second, inheritance involves more than just inheriting data variables; it also involves inheriting interfaces. You can't do this in C. When people talk about C++ inheritance, they are often talking about the latter.
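The distinction above, composition via nesting versus inheriting an interface, can be sketched like this (all class names are illustrative):

```cpp
#include <cassert>

// C-style nesting: NestedData physically contains a BaseData.  This is
// composition; there is no substitutability.
struct BaseData   { int x; };
struct NestedData { BaseData base; int y; };

// C++ interface inheritance: Square *is a* Shape and can stand in for
// one anywhere a Shape is expected.  Inheriting the interface (the
// virtual function) is the part C cannot express directly.
struct Shape {
    virtual ~Shape() {}
    virtual int area() const = 0;
};
struct Square : Shape {
    explicit Square(int s) : side(s) {}
    int area() const { return side * side; }
    int side;
};

int area_of(const Shape& s) { return s.area(); }  // works for any Shape
```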
  4. C++ is bloated (binary wise). Whether your resulting binary is bloated or not is an implementation issue, not a language issue. You can easily bloat any binary using any language. For example, if I did a copy and paste of a block of code in C, then I've just bloated my final executable. The difference between this and inline functions and templates in C++ is that it is a lot easier to make this mistake in C++. But this IS a mistake by the programmer, and not an inherent problem in the C++ language. You can't fault a language because there are a lot of poor C++ programmers.

Of course, one could say that it is these faults in C++ that are producing so many poor programmers. True, a good language should make it possible to use it effectively. But ultimately, this is a catch 22. Are the faults creating poor programmers, or are poor programmers faulting sound features of C++? I guess it depends on who you ask.

  5. C++ is slow (compile and execution). For compilation, this is a problem with the tools, not the language. Though the language makes it harder for the tools. As for execution, this is the same as #4 above. Don't blame the language for the programmer's limited abilities.

But note: even though most of the slowdown is due to poor programming, there are inherent performance problems in C++, simply due to the object oriented paradigm. The usual justification is that any performance lost is regained in ease of maintenance, and you can always use a more powerful processor.

I personally disagree with this line of thought. As any embedded system engineer knows, you don't always have the extra 100 watt power supply for the Pentium4, or the extra 5 cm square of real estate for the fan. Simply asking for a bigger processor is not a solution to a performance problem in an embedded system.

Of course, the only thing this means, to us embedded software engineers, is that we can't use C++ in all applications. I.e. I wouldn't use C++ in a Microchip PIC solution, even if I could find a compiler. I believe someone else already said the same thing. I don't see how anyone could find this as any surprise. But given a 32bit processor, is there any doubt that C++ can provide some benefit? Even if you're only using the procedural programming paradigm?

There is also the issue of whether to use object oriented programming vs. procedural programming. But this is a separate debate, because there are other languages besides C and C++ that support these paradigms. So I'll leave that debate for another day. ;-)

--jc

--
Jimen Ching (WH6BRR)      jching@flex.com     wh6brr@uhm.ampr.org
Reply to
Jimen Ching

Python, Modula-3, Ada, Scheme, etc. All are vastly smaller than C++ and support multiple paradigms (most at least three of the following: functional, procedural, objects, generic).

--
Grant Edwards                   grante             Yow!  ... A housewife
                                  at               is wearing a polypyrene
                               visi.com            jumpsuit!!
Reply to
Grant Edwards

In article , Greg Comeau writes

I am getting confused.. I see C99 and C98. What is the difference?

which one (if either ) is ISO C?

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\ /\/\/ snipped-for-privacy@phaedsys.org

formatting link
\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Reply to
Chris Hills


Well said. That's totally my viewpoint too.

Also I just found this on Jack Ganssle's (excellent) site at

formatting link

inheritance"; of those three, encapsulation is the easiest and most powerful tool for building well-written, easy to understand code. It's equally effective in assembly, C, or C++. Bind "methods" (code that accesses a device or data structure) with the data itself. This suits hardware (i.e. HW oriented) programming very well.

Again, fully agreed.

Steve

formatting link
formatting link

Reply to
steve at fivetrees

I'll duck from which part is fair, but yes, when all is said and done, it's up to vendors, and it is good to see that it is not always the so-called mainstream companies.

--
Greg Comeau/4.3.3:Full C++03 core language + more Windows backends
Comeau C/C++ ONLINE ==>     http://www.comeaucomputing.com/tryitout
World Class Compilers:  Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Reply to
Greg Comeau

I'm not sure what you're asking, since I'm not sure if you have a typo above. There is no C98, but there is a C++98. That's the first C++ standard. The second one is C++03. C has worked similarly, and in fact is more involved. In short, the first C standard was C89, which was "ANSI C". It (the same document with some editorial changes) quickly became the first "ISO C" in 1990, hence it being called C90. Progress to C occurred a few times, including 2 Technical Corrigenda to C (I think C94 and C96), an Amendment (C95), etc., with the last being a "major" revision of ISO C in 1999, hence C99, which is supposed to replace all previous versions from a formal viewpoint. This doesn't include variants such as POSIX aspects, but that's the general nutshell from an ISO perspective. Please note that these CXX names are slang references, and not formal names, though they do reflect the distinct changes discussed above.

--
Greg Comeau/4.3.3:Full C++03 core language + more Windows backends
Comeau C/C++ ONLINE ==>     http://www.comeaucomputing.com/tryitout
World Class Compilers:  Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Reply to
Greg Comeau

No you don't have to. But his point is (or should be) that there are many non-OO aspects to C++. Also, it's great that OOD et al can apply to say C programming, but a significant point is that C++ has features which map more closely to the above mentioned paradigms, in that the "modeling" notions are more directly supported. Ramifications therefore include different techniques, different idioms, different ways of thinking, etc.

Indeed.

Not in and of itself. Of course, if choosy == narrow-minded, then one (not necessarily you) could be limiting yourself. Of course too, on this same note, neither C nor C++ is the end all to programming either.

We are, unfortunately, creatures of habit, and crawling out of something comfortable, even for our betterment, is often not easy. :) Enjoy it :)

--
Greg Comeau/4.3.3:Full C++03 core language + more Windows backends
Comeau C/C++ ONLINE ==>     http://www.comeaucomputing.com/tryitout
World Class Compilers:  Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Reply to
Greg Comeau

I believe handling all the pathological cases is hard. But, remember, C++'s software exceptions are only part of the picture - the compiler also has to deal with platform specific hardware and OS exceptions.

As I said in my previous post, most of the difficulty comes from mixing objects with exceptions.

Yes and no. If the pointer type is fully specified there is no problem. If not, then optimization depends on type inferencing by the compiler. The compiler statically knows the type of the function when the pointer is created because it knows both the class and function referenced.

If the overloads differ by object parameters and the function is called with a dereferenced base class pointer, there is no way for the compiler to decide statically what type is being passed.

The C++ spec. does not *disallow* this kind of runtime overloading, but I don't know of any compilers that implement it. Every one I have seen simply goes with the static type of the pointer.
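That behaviour is easy to demonstrate: overload resolution happens at compile time against the static type, even when the dynamic type differs. A minimal sketch (class names are illustrative):

```cpp
#include <cassert>

struct Base    { virtual ~Base() {} };
struct Derived : Base {};

// Two overloads differing only by parameter type.
int which(const Base&)    { return 0; }
int which(const Derived&) { return 1; }

// Overload resolution is done at compile time from the static type:
// *p has static type Base, so the Base& overload is chosen even though
// the object is dynamically a Derived.
int via_base_ptr(Base* p) { return which(*p); }
```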

Absolutely. I was just responding to Jonathan's question about the cost of dynamic casting. The cost depends on the class hierarchy ... it is quite low for single inheritance models, just a few instructions, but multiple inheritance models may be much more expensive.
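A sketch of the two cases being discussed: a downcast in a single-inheritance hierarchy, and a cross-cast in a multiple-inheritance one, which is where the runtime hierarchy walk gets expensive. Class and function names are illustrative:

```cpp
#include <cassert>

struct A { virtual ~A() {} };
struct B : A {};                 // single inheritance: downcast is a cheap check

struct X { virtual ~X() {} };
struct Y { virtual ~Y() {} };
struct Z : X, Y {};              // multiple inheritance

// Downcast within a single-inheritance chain.
bool downcast_ok() {
    A* a = new B;
    bool ok = dynamic_cast<B*>(a) != 0;
    delete a;
    return ok;
}

// Cross-cast from one base to an unrelated base of the same object;
// only dynamic_cast can do this, and it requires the more expensive
// runtime walk of the class hierarchy.
bool crosscast_ok() {
    X* x = new Z;
    bool ok = dynamic_cast<Y*>(x) != 0;
    delete x;                    // safe: X has a virtual destructor
    return ok;
}
```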

George

Reply to
George Neuner

Four ... you forgot functional. However, C++ is primarily a procedural language with support for OOP. IMO, its support for both functional and generic programming is poor and its support for modularity is effectively nonexistent.

Many other languages support multiple paradigms: Modula 2 & 3, Oberon, Ada, Lisp and Scheme, etc. Some of them do it much better than C++.

Your point about the standard covering more than the language is a good one. The coverage of the core language is only about 30 pages and is comparable to other languages.

The C++ standard is enormous because it tried to address the compatibility concerns of legacy C developers for every conceivable platform whilst simultaneously trying to create a far more expressive language with [at least partially] incompatible semantics. If C++ had taken a different direction, we might not be having these debates.

I don't think people misunderstand this at all. All of programming is about defining the data to be manipulated and the functions which manipulate it. Object abstraction is not any different from type abstraction. Where the two methods differ is on the relationships of types to one another.

Barring inheritance, there is a 1:1 relationship between the abstract type model and the object model. Including inheritance, there is still a 1:1 relationship, but the abstract type model suffers due to implementation complexity. However, whether inheritance is even a necessary component of OO is still an open debate.

You can implement full C++ inheritance in C ... it is just very messy. The first C++ compilers compiled into C as an intermediate step. There are a number of OO and functional languages that were originally compiled into C before native compilers were written for them.
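Roughly what "messy" looks like: the hand-built function-pointer table that early C++-to-C translators emitted, written here in C-style code. All names are illustrative:

```cpp
#include <cassert>

// Hand-built "virtual dispatch" in C style, roughly the shape early
// C++-to-C translators produced.
struct CShape;
struct ShapeVtbl { int (*area)(const CShape*); };
struct CShape    { const ShapeVtbl* vtbl; };  // every object carries a vtable pointer

struct CSquare {
    CShape base;   // "inherits" by placing the base sub-object first
    int side;
};

int square_area(const CShape* s) {
    const CSquare* sq = (const CSquare*)s;    // downcast, valid because base is first
    return sq->side * sq->side;
}
const ShapeVtbl square_vtbl = { square_area };

// The "virtual call": one indirection through the table.
int shape_area(const CShape* s) { return s->vtbl->area(s); }
```

The compiler generates all of this bookkeeping for you in C++; in C, every new "class" means writing and wiring another table by hand.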

Bloat is a valid (though manageable) criticism. Most C++ compilers link in all of a class's methods whether or not they are actually referenced in the application. Heavy use of templates creates many nearly identical copies of code and interacts badly with the method linkage problem because most templates are used to create new classes. Not everybody knows that intrinsics are inlined.

Is it a programmer mistake? I don't think so. Obviously I believe developers have a responsibility to read the compiler documentation, but most of these issues are not discussed in documentation and have been discovered empirically by people who, from necessity or curiosity, disassembled their code to find out what was going on inside.

George

Reply to
George Neuner

snip of some very well written stuff about C++ misconceptions

Forgive me, but IIRC the original poster posed that very question, though not in those words exactly; I think he asked whether OOD is the best approach for embedded systems.

Ian

Reply to
Ian Bell

Agreed.

It's definitely possible to predict what code a C++ compiler might generate from various constructs. I've been doing that for more than 12 years, approximately half of that time developing embedded systems. The embedded systems were mostly with performance comparable to a 16 MHz 80386, 2 MByte ROM, 2 MByte RAM.

Of course there are "some embedded environments" where C++ is inappropriate. The same is true for all languages. It can be because the target machine has very limited resources and _very_ tight control is required. It can also be because of some company policy which specifies the use of another language.

The C++ language can be understood and used effectively in embedded environments. It takes skill and care to write high performance code in any language. Education and understanding are prerequisites for using C++ effectively, but there are a lot of sources to learn from.

Kind regards

Mogens Hansen

Reply to
Mogens Hansen

Hi!


What we're talking about is this:

class A { public: virtual int func() { return 0; } };

class B: public A { public: virtual int func() { return 1; } };

int call1(class A *a) { return a->func(); }

int call2(class A a) { return a.func(); }

int main() { class A a; class B b; call1(&b); call2(a); return 0; }

Let's suppose this is the whole program and the compiler knows it (using link-time code-generation or whatever). Now, in call1 the compiler has no idea what class it gets called with, so it has to do an indirect call (at first sight). In call2 however 'a' is always class A and not class B, so it should be possible to use a direct call. However, by tracking down pointer types the compiler can deduce that call1 is always called with a pointer to 'B', so a direct call should suffice there too.

So far, so good. now, let's see what kind of information the compiler needs to do that!

First of all, it needs to know that the parameter contains a field - 'vtable' - that is constant amongst all 'class A' type parameters but different for 'class B' type parameters. Note that this element is filled in in A::A; it's not a fixed value as far as the back-end (BE) is concerned. It just happens to be filled in with the same value over and over again. The way the BE can deduce it is to build a call-graph and propagate all constant values over the call edges. Note that to do this, you need to track calls between compilation units, since the constructor, the call-site for the constructor, the member function implementation and the call-sites for the member functions are almost always in different compilation units.

It also has to know that this is a pointer to a function table in the data segment, that's hidden from the user so that there's no chance that a conformant C++ code can modify it. It also has to know that the pointer in that table happens to point to A::func for class A and B::func for class B.

With this much at hand the BE can figure out that call2 is always called with a parameter which has a const field (vtable), that that field points to a const array of pointers, and that the value of the array element that the body of the function later on refers to happens to be the address of A::func.

If all this information is handed over from the front-end (FE) to the BE then it's possible to solve the case of call2. Note that most of the object-oriented stuff is resolved in the FE and the transfer language between the two is usually a symbolic assembly-like language. GCC calls it RTL, others call it otherwise. This language contains references to a symbol table that might contain type information, but it might not be detailed enough to convey all this information. Also, note all the inherent information needed about the values in the structures. You not only need type information but the value of two (const) fields to be able to do this optimization.

Now, let's move on to the case of call1. If we can deal with call2, it's relatively easy. All you need to do is to build the call-graph of your whole program (again), and propagate the type information over the call edges. Once you've done that, you'll see that call1's parameter's actual type is always the same. At this point you can process the function in the same way you did with call2 and finally eliminate the indirection.

All in all, the following is needed:

- detailed type information about structs (classes). Especially type info of all members.

- const-ness information about data-segment variables and struct (class) members

- value of const data-segment variables (that's usually available)

- const-propagation among function calls (and compilation units)

- type-propagation among function calls (and compilation units)

I'm not saying it can't be done. I'm saying it's not common at the moment. Also: it's only possible if you somehow compile the whole program at once and not using the traditional 'compile all modules individually and link them later' strategy. I don't know how many compilers support that. As far as I know GCC doesn't and VC++ does. However I don't know if even VC++ can do this optimization.

The technique is called 'devirtualization'. There are other methods, not just the one I've described. If you're interested, do a google on it.

I don't have the C++ spec at hand but I would be surprised. This would involve pre-generating overloaded functions for all possible type-combinations in case of templates which would lead to an exponential explosion of code-size. Also, it requires RTTI, and lengthy lookups before each function call, for each parameter type. Also, if there's more than one parameter types to match the lookup can be fairly complex. I would be really surprised if C++ would allow such an implementation.

If C++ allows that, and one compiler vendor implements that, the same code compiled with their compiler would behave radically differently than the same code compiled with other compilers. I don't think the C++ standard committee would have given that much freedom to the implementors. It's either required or forbidden. But again, I don't have the C++ std. to look it up.

regards, Andras Tantos

Reply to
Andras Tantos

The book "Inside the C++ Object Model" by Stanley B. Lippman (ISBN 0-201-83454-5) describes in detail what (pseudo C) code a C++ compiler might generate for various constructs. Although the book is prestandard (1996) it contains a lot of useful information.

Also have a look at Technical Report on C++ Performance

formatting link
which is written by the C++ Standard committee

  • to give the reader a model of time and space overheads implied by use of various C++ language and library features,
  • to debunk widespread myths about performance problems,
  • to present techniques for use of C++ in applications where performance matters, and
  • to present techniques for implementing C++ Standard language and Standard library facilities to yield efficient code

Could you give a specific example ?

I think that we need to be careful about what we are talking about here. We can be talking about:
1. Function templates
2. Non-virtual member functions of class templates
3. Virtual member functions of class templates

  1. Function templates like template <class T> void foo(const T* t); are only implicitly instantiated if they are used. Thus for Standard compliant C++ compilers, it's a non issue. It's even hard to imagine how it could be an issue for non compliant compilers.
  2. Non-virtual member functions of class templates like template <class T> class bar { void foo(const T* t); }; are only implicitly instantiated if they are used. In fact the compiler is not even _allowed_ to instantiate the non-virtual member functions if they are not used. It is not required that the member functions be compilable at all, if they are not used, as shown below

#include <iostream>

template <class T> class bar { public: void foo(const T* t); };

template <class T> void bar<T>::foo(const T* t) { const char* c = t; while (c && *c) std::cout << *c++; }

... correctly eliminate almost all of this. But what chance is

Pretty good I would expect. Many embedded C++ compilers (like Green Hills, Microtec, DIAB) are based on the EDG front-end

formatting link
which, for at least the last decade, has only implicitly instantiated member functions of class templates which are required to exist.

If you are using a C++ compiler which does not have the behaviour required by the C++ Standard and commonly implemented by a lot of compilers, you should either complain to the vendor or change compiler vendor.

Which x86 compilers are you referring to? On the MS-Windows platform the 2 most used compilers are probably Microsoft C++ and Borland C++. Since approximately the middle of the 90's those compilers have not instantiated unused, non-virtual member functions. And note that these compilers were behind the state of the art in this respect at that time.

A lot, I would expect. We have been using partial template specialization for embedded projects for more than 5 years, using a compiler based on a rather old EDG front-end.

No. The combination of templates and inline functions is a very efficient way of writing high level, high performance code - also for embedded systems. I've been doing that for many years.

There are many libraries which use compile-time polymorphism (as opposed to run-time polymorphism like virtual functions) to provide both a higher abstraction level and performance at the same level as C or better. The C++ Standard library function std::sort is a classic, simple example, but there are many others.
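The std::sort point can be made concrete: qsort dispatches its comparison through a function pointer on every call, while std::sort receives the comparison as a template parameter the compiler can inline. A minimal sketch, with illustrative wrapper names:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>

// qsort: the comparison goes through a function pointer on every call,
// which the compiler generally cannot inline.
int cmp_int(const void* pa, const void* pb) {
    int a = *static_cast<const int*>(pa);
    int b = *static_cast<const int*>(pb);
    return (a > b) - (a < b);
}

// std::sort: the comparison is part of the instantiated template, so it
// can be inlined; the abstraction need not cost anything at run time.
struct Less {
    bool operator()(int a, int b) const { return a < b; }
};

void sort_c(int* a, int n)   { std::qsort(a, n, sizeof(int), cmp_int); }
void sort_cpp(int* a, int n) { std::sort(a, a + n, Less()); }
```

Both produce the same result; the difference is that the std::sort instantiation gives the optimizer full visibility into the comparison.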

No - understand the consequences of using exceptions, before deciding. Both in terms of performance and in terms of programming style. You should know how to write exception safe code before using exceptions in C++. It is well understood and well described and has been so for years. See The C++ Programming Language, Special Edition Bjarne Stroustrup ISBN 0-201-70073-5 appendix E (that appendix can be downloaded from Bjarne Stroustrup's homepage

formatting link
And Exceptional C++ Herb Sutter ISBN 0-201-61562-2 and More Exceptional C++ Herb Sutter ISBN 0-201-70434-X

Exceptions come at a price - even if no exceptions are ever thrown. But so does every error handling strategy. Even ignoring error handling comes at a price :-)

It is possible to compare error handling strategies with respect to various parameters like
* code size
* execution performance when no error happens
* execution performance when an error happens
* amount of effort required by the programmer
* how error prone the error handling strategies are
However it is not reasonable to compare a program which handles error conditions correctly with a program which does not handle error conditions.

Let's be a bit more concrete and compare an example which relies on constructors/destructors and exceptions with an example which relies on error codes and explicit initialization and clean up. Let's take an example which opens 3 files and processes the files - both examples are C++ but the style is different:

void foo_throw(); int foo_code();

void bar_1(const char* f1, const char* f2, const char* f3)
{
    ifstream is1(f1);
    if(!is1) throw runtime_error("unable to open first file");
    foo_throw();   // might throw

    ifstream is2(f2);
    if(!is2) throw runtime_error("unable to open second file");
    foo_throw();   // might throw

    ifstream is3(f3);
    if(!is3) throw runtime_error("unable to open third file");
    foo_throw();
}

int bar_2(const char* f1, const char* f2, const char* f3)
{
    FILE* is1 = fopen(f1, "rt");
    if(!is1) return 1;
    int result = foo_code();
    if(result) { fclose(is1); return result; }

    FILE* is2 = fopen(f2, "rt");
    if(!is2) { fclose(is1); return 1; }
    result = foo_code();
    if(result) { fclose(is2); fclose(is1); return result; }

    FILE* is3 = fopen(f3, "rt");
    if(!is3) { fclose(is2); fclose(is1); return 1; }
    result = foo_code();
    fclose(is3); fclose(is2); fclose(is1);
    return result;
}

In my opinion "bar_1" is far simpler than "bar_2". The use of constructors/destructors _guarantees_ that there can't be any resource leaks. The use of exceptions means that error handling doesn't clutter the normal flow and there is no way that we can forget to test for an error.

... an exception occurs later on.

It depends ... What's the declaration of "foo" ? 1. void foo(); 2. void foo() throw (); 3. extern "C" void foo();

The first declaration says that "foo" might throw anything, thus the compiler has to deal with that in some way. For the second and third declaration the compiler knows that there is no way an exception can propagate out of the function, thus it doesn't have to generate anything extra.

... first.

I don't quite understand what you mean.

Of course the program flow is different if an error occurs, if the program is supposed to behave correctly. It doesn't matter whether you use constructor/destructor or not. It doesn't matter whether you use exceptions or not.

No. "foo" gets coded exactly the same way - it's the very same function that is called.

With modern high quality compilers the call to "foo" is not coded differently for the first and the second call. Neither call is augmented with error handling code - there is only the "call" assembler instruction. The exception handling code is only executed if an exception is actually thrown - there isn't even a test to check if an exception is thrown. I have verified that with Intel C++ V7.1 and Microsoft Visual C++ .NET 2003 on MS-Windows and g++ V3.2 on Linux. The compilers for MS-Windows are of course not specifically for the embedded market, but there is no reason why embedded compilers shouldn't be able to do the same thing. The Intel compiler is based on the EDG front-end - just like several C++ compilers for the embedded market. It is likely that the exception handling code is handled by the front-end.

The program behaves differently depending on whether an error condition occurred. But that's not a matter of exceptions/not exception - it's a matter of program correctness.

Who would expect that the two calls take the same time and require the same resources if the first call succeeds and the second call fails?

If both calls to "foo" succeed it is fully possible and not unlikely that the two calls would take the same time and require the same resources. In fact they do with the compilers mentioned above.

See "Technical Report on C++ Performance" chapter 2.4 for a detailed description of the "code" approach and the "table" approach to exception handling.

If there is no way that "foo" can fail, you can tell the C++ compiler that in the function declaration with an empty throw specification.

Again, if there is no way "foreign" can fail, we can specify that by: void foreign() throw();

Sure. But T::~T is going to be executed in any case, simply because "s" has been constructed.

Why not tell the compiler what he/she knows with an empty throw specification ?

Constructors/destructors are a huge help. It makes it far easier to write correct programs.

Yes. That means that
* You don't have to clutter your normal program flow with error handling code
* There is no way you can forget to handle an out-of-memory situation
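A small sketch of that second point: a failed new throws std::bad_alloc, so an allocation failure cannot be silently ignored the way an untested malloc return can. The function name is illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Hypothetical helper: tries to allocate n bytes with new[].
// A failed allocation throws std::bad_alloc (or a type derived from
// it), so the caller cannot overlook the failure the way an untested
// null return from malloc can be overlooked.
bool allocate_huge(std::size_t n) {
    try {
        char* p = new char[n];
        delete[] p;
        return true;
    } catch (const std::bad_alloc&) {
        return false;   // the failure is impossible to miss
    }
}
```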

For references - not for pointers.

It doesn't have to be that way. See "Technical Report on C++ Performance".

If no error occurs you will have error handling code, which is never executed. The same is the case if functions return error code: if(!foo()) { // error handling code goes here return; }

... can afford exceptions in your embedded code, in the first case.)

You can most definitely write code in C++ which behaves unexpectedly. But why wouldn't you always catch exception objects by const reference? try { // ... } catch(const std::exception& x) { // ... }

... and destruction built in. Writing proper code to handle

Often you don't have to write anything - as shown above. It's a simple matter of using an efficient coding style.

When writing C++ you must deal with C++ semantics. Does that surprise you?

[8 compiler to invoke a constructor for the parameter p, in order

Of course you can write sub-optimal programs in C++ - the language doesn't prevent that. But why should we? You can simply write:

  A rA(const A& p)
  {
    return p;
  }

to prevent the compiler from creating a copy when passing the argument p to the function. A lot of compilers will do return value optimization.

[8 objects? How many people how a C++ compiler supports dynamic

It sounds like FUD to me. A lot of people that I know have at least an understanding of the principles of what code the compiler might generate. Of course there are a lot more people who don't. My mother doesn't know - but she doesn't program embedded devices in C++, so it doesn't matter. It doesn't take a genius to understand the principles of what code the compiler might generate. It does however take education - but the sources are available.

Kind regards

Mogens Hansen

Reply to
Mogens Hansen

Do you mean support or enforcement? C++ supports these things, but does not enforce them; enforcement is left to the programmer and the project's coding standards. C++ still retains C's flexibility, so, additionally, there are some good middleware choices out there (CORBA, ACE, etc.) should you need to extend these paradigms across a distributed system.

I'll go with the notion that ADA offers better support than C++ for object-oriented programming, but the thread started as a discussion of C versus C++.

I'll also go with the notion that C++ is only as good as the compiler. So far, in my experience, only GCC and M$-Studio really meet the criteria of an adequate compiler, with Studio in second place. If you start on a C++ program with a marginal compiler, you will probably wind up finishing with a C project and hating C++.

Reply to
Ian McBride

[8 > is almost a requirement. Who provides that for embedded work?

Sorry. I was thinking of full member specialization.

But still I would expect several C++ compilers for the embedded market to support partial template specialization.

Kind regards

Mogens Hansen

Reply to
Mogens Hansen

[snip]

Not my intention. I was attempting to point out that the OO methodology was more likely to have supporting tools and software engineers trained to use it.

Yes, you are correct - drivers are an implementation of a design goal. My point was a response to the previous poster's comment that, for embedded systems, diluting the association between the hardware and the software is undesirable (in my paraphrasing).

Strictly speaking it isn't OO, but I see a close association with the total software development process. Chances are, if you're into OO then you have probably read Fowler & Scott's "UML Distilled". From there, if you're into producing "quality" software then you may have perused the Rational Unified Process methodology.

Now, none of these "quality" methodologies need be used exclusively with OO; it's just that this is where it all started for me.

Ken.

+====================================+ I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee

Well, I'll admit that there was mention of OOD. But the last few sentences of that post asked for comments on using C++ in embedded systems.

In any case, even if the original author wanted comments about procedural vs. OO, that wasn't my intent in my response. I only wanted to clear up the misconceptions. I'm not really interested in the procedural vs. OO debate. Both are good, both have their place. It's not whether we should or shouldn't, but when. And that's all I'll say about that...

--jc

--
Jimen Ching (WH6BRR)      jching@flex.com     wh6brr@uhm.ampr.org
Reply to
Jimen Ching

I said and meant "support" in the sense that C++ allows it to be done with varying levels of difficulty.

I was responding to the poster's missive to:

"Name me another language that supports multiple paradigms and then we can compare."

I agree completely about compiler quality being essential to the experience. I have used a few compilers that were so bad I would have given up on C++ if they were my first experience with it.

VC vs GCC - I would qualify which versions of the compilers. IMO VC6 is the equal of GCC 3.2.1 ... I haven't used any GCC later than that. VC6 also offers, I think, slightly easier access to MMX and SSE for Intel machines (which are what I care about).

George

Reply to
George Neuner
