C vs C++ in Embedded Systems? - Page 2

Re: C vs C++ in Embedded Systems?

[...]
[quoted text elided]

As well as the wrong answer.  Or at least, not the answer that was
probably intended...

Regards,

                               -=Dave
--
Change is inevitable, progress is not.

Re: C vs C++ in Embedded Systems?
[quoted text elided]

Any halfway decent C compiler will generate only load/store/move sort
of instructions for that.


Re: C vs C++ in Embedded Systems?
[quoted text elided]

As written in those three lines, of course, but it was only meant to
illustrate that all of the numbers were integers.  If it helps, consider
this instead:

int divide(int numerator, int denominator)
{
    return numerator / denominator;
}

Ed


Re: C vs C++ in Embedded Systems?

[quoted text elided]

I prefer C++ when I have the resources because I can use the STL and have
all the other benefits of object oriented programming. You can always drop
back into C if you want to. However, on Linux I find that using C++ libraries
makes programs 10 times bigger and 10 times slower to compile. On small
processors you don't have the option of C++.
Peter



Re: C vs C++ in Embedded Systems?
On Fri, 28 Jan 2005 19:17:55 -0000, "Peter"

[quoted text elided]

My understanding is that STL relies heavily on using dynamic memory.

In small embedded/realtime systems (not using virtual memory), using
dynamic memory can cause severe memory pool fragmentation, which would
cause a memory allocation failure sooner or later. In systems designed
to run continuously for decades (or until the hardware is replaced),
the use of dynamic memory can be a quite critical question.
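One standard way around the fragmentation problem is to avoid a general-purpose heap entirely. A minimal sketch (all names illustrative) of a fixed-block pool: because every allocation is the same size, the pool can never fragment, which suits systems that must run continuously for decades.

```cpp
#include <cassert>
#include <cstddef>

// Fixed-block pool: same-size blocks threaded on a free list.
// No fragmentation is possible, and acquire/release are O(1).
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedPool {
public:
    FixedPool() : free_list_(nullptr) {
        // Thread every block onto the free list at start-up.
        for (std::size_t i = 0; i < BlockCount; ++i) {
            release(storage_ + i * BlockSize);
        }
    }
    void* acquire() {
        if (!free_list_) return nullptr;   // pool exhausted: caller decides what to do
        void* block = free_list_;
        free_list_ = *static_cast<void**>(free_list_);
        return block;
    }
    void release(void* block) {
        // Reuse the block's own storage as the free-list link.
        *static_cast<void**>(block) = free_list_;
        free_list_ = block;
    }
private:
    alignas(void*) unsigned char storage_[BlockSize * BlockCount];
    void* free_list_;
};
```

Worst-case timing and worst-case memory use are both trivially bounded, which is exactly the property a decades-uptime system needs.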

Paul
  

Re: C vs C++ in Embedded Systems?

[quoted text elided]

My understanding is that there are modern copying garbage collection
systems (for languages that don't mind having things move around at run
time[*]) that are supposed to not break as long as there is at least twice
as much physical memory as the worst-case active memory load.  So if
you're prepared to pay that price (allocate no more than half of available
memory at once), you should be able to use dynamic memory and garbage
collection "forever".  I know that Eiffel has been used, with such a
garbage collector, in some ample-memory embedded systems, like laser
printers.

[*] That's pretty much the "safe" languages that don't let you muck about
with pointers as though they were integers, or make up pointers with
integer maths: the lisps & schemes, the Pascal family (including Ada?
dunno) and Eiffel, the ml family, Java, C#, etc.  Almost the set {not C, C++}...

--
Andrew


Re: C vs C++ in Embedded Systems?
On Mon, 31 Jan 2005 09:58:12 +1100, Andrew Reilly

[quoted text elided]

In order to be able to move the dynamic objects, the object can not be
accessed directly through pointers. Each object must be accessed with
a handle, that is used to index a pointer table to get the final
address.

The required indirect indexed addressing mode can be quite costly
depending on available addressing modes and available registers and
will cause an increase in code size. Also each active object needs a
pointer in the pointer table, which must be in RAM, since the dynamic
memory manager must be able to update it, when a data block is moved
(reshuffled).
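The double indirection described above can be sketched in a few lines (names illustrative): clients hold a handle, an index into a pointer table in RAM, rather than a raw address, so the memory manager can move a block and fix up exactly one table slot.

```cpp
#include <cassert>
#include <cstring>

// Handle-based access: a handle indexes a pointer table, and every
// object access pays one extra indirection through that table.
enum { kMaxObjects = 8 };
static void* g_table[kMaxObjects];   // must live in RAM: the manager rewrites it

typedef int Handle;                  // the client-visible token

// The indirect access the text describes: handle -> table -> object.
void* deref(Handle h) { return g_table[h]; }

// Called by the memory manager after copying an object elsewhere
// during a reshuffle; clients holding the handle are unaffected.
void relocate(Handle h, void* new_addr) { g_table[h] = new_addr; }
```

The cost is visible here: every access is an extra load, and the table itself consumes RAM, which is the trade-off the paragraph above is pointing at.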

However, in embedded and/or realtime systems, there are often
interrupts and task switching, which are quite problematic, if dynamic
memory objects can be moved. You either have to disable the interrupts
during dynamic memory data moves, which often is unacceptable due to
the long latencies or the interrupt service routines _must_not_ be
allowed to access any memory allocated dynamically.

Even if the dynamic memory reshuffling is done in the lowest priority
(NULL) task, in a pre-emptive OS, an interrupt can cause a high
priority task to become runnable, which should suspend any lower
priority tasks. However, if the dynamic memory reshuffle is in
progress, this task switching must be disabled, until the reshuffle is
completed, which may delay the execution of the runnable high priority
task longer than acceptable, even if a single block is reshuffled at a
time.  

[quoted text elided]

Suggesting such huge extra allocations would not be very realistic in
many embedded systems with low resources.

Paul


Re: C vs C++ in Embedded Systems?
[quoted text elided]

No, that's much too strong a requirement.  The real requirement is that
the garbage collection system be able to find every pointer that it needs
to relocate, so that it can be changed to follow the allocation.  Double
indirection through handles is one obvious way to do that, but so is simply
ensuring that all pointer variables actually point to allocated memory.
That's a natural consequence of the "safe" languages that I mentioned, and
so those languages get to use normal, one-way pointers, even to memory
objects that can be moved when needed. It's a requirement that can't be
ensured with C or C++.

[quoted text elided]

I'm pretty sure that the last approach is the one usually used, although I
believe that finding ways to relax this constraint is an avenue of active
research.

[quoted text elided]

True, that's why those systems make do with static allocations, and
languages that support them: C, C++ (if you're careful), Forth, assembly,
etc.

Cheers,

--
Andrew


Re: C vs C++ in Embedded Systems?
On Mon, 31 Jan 2005 21:04:57 +1100, Andrew Reilly


[quoted text elided]

Unfortunately, you seem to have missed my point. I was not discussing
the problem of garbage collection (if a programmer does not know when
to delete an object, that programmer should not be hired), but rather
the problem of dynamic memory squeezing to allow a sufficient amount
of contiguous memory.
 
Using some ad-hoc method in which the code is disassembled to find out
all memory references would not be acceptable due to the memory size
and CPU power limitation, not to mention some rare cases, in which
there might be some innocent looking bit patterns that could be
misinterpreted. In an ad-hoc system, you might be able to achieve
99.999 % reliability, but in many systems, this is simply not enough.
I have written several assemblers, linkers and disassemblers and
relying on heuristics and other ad-hoc methods will sooner or later
cause a catastrophe. You must have a formally sound method to describe
the environment.

Paul


Re: C vs C++ in Embedded Systems?

[quoted text elided]

Indeed that appears to be the case, sorry.  My point above is strictly
related to the paragraph of yours that I quoted.

[quoted text elided]

[*] [I'd like to come back to this one, below]

[quoted text elided]

I do not believe that I was talking about an ad-hoc system.  In all of the
languages that I mentioned, the language runtime knows exactly where all
of the pointers (references) are, through type information and other
language-based constraints.  Random bit patterns can't be mistaken for
memory references, by definition, in these languages.

[quoted text elided]

Exactly.

Now, whether any of the "safe" languages are suitable for any particular
embedded application or environment is obviously a decision for each
situation.  Certainly many embedded platforms are just too small to
support the overhead.  Some aren't, though.  Certainly some of the issues
relating to real-time behaviour may not be solved in all situations, and
that may be an issue, but there are certainly embedded applications that
involve non-realtime behaviour where some of the benefits of a dynamic OO
language could be of use.  Anything with an ethernet port and a
sophisticated TCP/IP control protocol would be a reasonable ball-park, IMO.

[*] [Digression:] It has occurred to me, as I've increased my use of
matlab, scheme and python for various projects, (off-line, usually), that
the apparent increase in "power" of these languages comes from the
structure of the libraries and the usual idioms.  These typically revolve
around operating on and returning whole collections (or other "large"
objects) from function calls, rather than scalar values. This allows the
programmer to operate more at the pseudo-code or high-level algorithm
level, and is, IMHO, a consequence of pervasive garbage collection.  It is
much harder (if not actually impossible) to use this style in C or C++
because it breaks the usual rules of level-balanced allocation and
de-allocation.  In a non-garbage-collected environment, a caller can't
know, in general, whether a particular reference that it has just been
given is unique or not (in which case it should not free it when it's
done), and if unique, whether it points to allocated storage (should be
freed) or static/const storage.  So: to answer the statement that prompted
this particular rave: if a hire is worrying about when to delete an
object then (a) they're obviously not operating on real-time code (and
probably not operating on embedded code at all) and (b) they're wasting
effort that could be put into better product or shorter release schedules.
Yes, I might be over-stating the case in the cause of a good argument; I
don't mean to be offensive.

[This is a digression, since the garbage/manual distinction applies
equally to C and C++, and so doesn't bear on the topic itself, other than
as a tangential put-down of C++. Sorry.]

--
Andrew


Re: C vs C++ in Embedded Systems?

[quoted text elided]

If this seems to be a problem, you might want to investigate reference
counted objects and auto_ptr.  Both are useful concepts for programming
effectively in C++, and both are typically used to address just this
kind of thing.  Basically, if the caller can't know whether a particular
reference is unique or not, then it is the responsibility of the
programmer(s) to assure that it does not matter whether or not it is unique.
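A minimal sketch of the reference-counting idea: with a counted smart pointer, the caller genuinely doesn't need to know whether its reference is unique, because the last owner to go away frees the object. (std::auto_ptr, mentioned above, has long since been deprecated and removed; std::shared_ptr is today's reference-counted equivalent, used here for illustration.)

```cpp
#include <cassert>
#include <memory>

// Returns shared ownership of a freshly allocated value. The caller
// never calls delete: the object is freed when the last shared_ptr
// referring to it is destroyed, wherever that happens.
std::shared_ptr<int> make_value(int v) {
    return std::make_shared<int>(v);
}
```

Copying the pointer bumps the count; destroying a copy drops it; the object dies exactly once, when the count reaches zero.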

Ed


Re: C vs C++ in Embedded Systems?

[quoted text elided]

And there you are, having re-invented a garbage collection system,
clumsily.  Better, IMO, to just start off with a neat, clean language that
does the right thing implicitly, or avoid dynamic memory allocation
altogether.

Just to zip back to the previous discussion about memory space
requirements and non-determinism of garbage collection cycles:

(a) How do you know how much memory you need, to guarantee that there can
be no fragmentation-based memory "exhaustion" in a long-running, dynamic C
or C++ program?  How does that change on different platforms, with their
different implementations of malloc/free?  Is it necessarily better than
the space needed with a garbage collection system?

(b) How do you know what the worst-case execution time behaviour of
malloc() and/or free() is, on your platform of choice?  Is it bounded?  Is
it always better (or at least never worse) than any given garbage
collection system?

--
Andrew


Re: C vs C++ in Embedded Systems?
[quoted text elided]

Garbage collection is not necessarily "the right thing."  That's why C++
doesn't include it, but includes sufficient tools to create it without
much effort.  Java, for example, insists on garbage collection, with all
the inherent problems in a GC scheme.  If you want to have a function
call return a collection of newly allocated items, you can do that
without garbage collection.  In fact, it's a pretty common idiom.  If
you destroy an STL vector, deque, or other collection, by definition the
destructor is called for each element within the collection, too.
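The claim above is easy to demonstrate (the `Tracked` type and `make_batch` name are illustrative): a function returns a freshly built collection by value, and when the caller's copy goes out of scope, the container's destructor destroys every element too, no garbage collector involved.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Counts live objects so we can observe construction/destruction.
static int g_live = 0;
struct Tracked {
    Tracked() { ++g_live; }
    Tracked(const Tracked&) { ++g_live; }
    ~Tracked() { --g_live; }
};

// The common idiom: build and return a whole collection by value.
std::vector<Tracked> make_batch(std::size_t n) {
    return std::vector<Tracked>(n);
}
```

When the returned vector is destroyed, each element's destructor runs; the live count returns to zero deterministically, at a point the programmer can see in the source.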

[quoted text elided]

These are good questions to ask if you are working on an embedded system
that allocates and frees memory.  They go with all of the other
questions a competent engineer must answer, like "How do you know how
much stack space will be enough?" and "Are any of the time guarantees of
any of the maskable interrupts ever violated by holding off the
interrupt for too long?"

The larger and more complex the system, the more difficult these answers
become.  When you're working with a more complex system, it makes sense
to use tools and techniques that simplify the job and the use of C++ can
be one of those tools.

C++ may not be perfect (and critics love to call it "poorly designed")
but it works well for me, because I've invested considerable time
figuring out how to use it.  It could be that some of those who rule out
C++ entirely for embedded work simply don't know the language very well.

Ed


Re: C vs C++ in Embedded Systems?
I have used C++ in a number of embedded projects.  This was back in the
mid-90s, and we had some rules to stop programmers going overboard with
language features to the extent that others could not determine what a
line of code did by simply reading it.

1. Don't use exceptions or templates - these are "heavy weight"
features which take a lot of code and/or runtime.
2. Don't use operator overloading:  a = b - c should not invoke 100,000
lines of code.
3. Don't use polymorphic functions.  With implicit type conversion it's
hard to figure out which out of the 3 versions of a function will be
called, given a specific line of code.
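The concern in rule 3 is easy to show with a small (illustrative) example: with integer promotions and implicit conversions in play, the overload actually selected may not be the one a reader guesses from the call site.

```cpp
#include <cassert>
#include <string>

// Three overloads; each reports which one was chosen.
std::string f(int)    { return "int"; }
std::string f(double) { return "double"; }
std::string f(char)   { return "char"; }

// f('a' + 1) looks char-like at the call site, but 'a' + 1 undergoes
// integer promotion and has type int, so f(int) is selected -- exactly
// the "which of the 3 versions runs?" puzzle described above.
```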

We mainly used C++ for its class hierarchies, virtual functions, etc.
for all the normal object-oriented advantages, but we restricted our
use to make sure we never lost track of what was a function call vs.
what was an intrinsic machine operation.

We were very successful.  Bringing C programmers up to speed with this
"reduced" C++ was very fast, and we made sure that C++ experts did not
write code that was too clever for the masses.


I guess the bottom line is that, like all language choices, "it
depends".  It depends on tool support, personal preference, even
political will.

C++ can do everything C can do... only with uglier (mangled) function
names :-)


Re: C vs C++ in Embedded Systems?
snipped-for-privacy@yahoo.com writes:

[quoted text elided]

This wasn't the main reason most people limited these features.  A
major problem was often that these "advanced" features were poorly
implemented, or not reliably portable across compilers.  Most of
these "heavy weight" features can be used in a light weight manner
with a bit of thought.

[quoted text elided]

Exceptions can be efficient in comparison to the alternative of
putting explicit error checks everywhere.  A problem with using
them though is that if you've got a significant body of existing
code then you've essentially got to restructure it all.  If you
try to add exceptions a little bit at a time you end up with an
extra exception overhead in addition to the pre-existing error
checking overhead.

Templates can be used just as smart type-checked macros.  Just because
the STL uses templates doesn't mean that all template use has to be so
inefficient.  In particular, one could create a straight-forward
list-of-void* container, then use generic templates to typecast
everything safely to have list-of-pointer-to-anything containers.
No extra run-time lines of code.
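A sketch of that "smart type-checked macro" use of templates (all names illustrative): one shared, untyped implementation compiled once, plus a thin typed facade that only inserts casts, so instantiating it for many pointer types adds no real code.

```cpp
#include <cassert>
#include <cstddef>

// The single compiled implementation: an untyped list of void*.
class VoidList {
public:
    VoidList() : count_(0) {}
    void push(void* p) { items_[count_++] = p; }
    void* at(std::size_t i) const { return items_[i]; }
    std::size_t size() const { return count_; }
private:
    void* items_[32];          // fixed capacity keeps the sketch simple
    std::size_t count_;
};

// The typed wrapper: nothing but safe casts over the shared core,
// giving compile-time type checking with no duplicated machinery.
template <typename T>
class PtrList {
public:
    void push(T* p) { impl_.push(p); }
    T* at(std::size_t i) const { return static_cast<T*>(impl_.at(i)); }
    std::size_t size() const { return impl_.size(); }
private:
    VoidList impl_;
};
```

Every `PtrList<T>` instantiation is just inline casts around the one `VoidList`, so the usual "template bloat" objection doesn't apply to this style.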



--
Darin Johnson
    "You used to be big."
Re: C vs C++ in Embedded Systems?
[quoted text elided]

I see this justification a lot, and in conventional sequential code it makes
sense. However, anytime I run into these issues I wrap the throw scope in a
state machine, and put the error check in *one* obvious place. The overhead
of "explicit error checks everywhere" doesn't apply. I don' need no
steenking exceptions ;).
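A minimal sketch of the approach described (state and step names are illustrative): the sequence becomes a loop over states, each step returns a status, and the failure check lives in exactly one place instead of after every call.

```cpp
#include <cassert>

enum Status { OK, FAIL };
enum State  { INIT, WORK, DONE, ERROR };

// Two steps standing in for the real work; each reports success.
Status do_init(bool ok) { return ok ? OK : FAIL; }
Status do_work(bool ok) { return ok ? OK : FAIL; }

// The "sequence turned into a loop": one dispatch, one error check.
State run(bool init_ok, bool work_ok) {
    State s = INIT;
    while (s != DONE && s != ERROR) {
        Status r = (s == INIT) ? do_init(init_ok) : do_work(work_ok);
        if (r != OK) { s = ERROR; continue; }   // the *one* obvious error check
        s = (s == INIT) ? WORK : DONE;          // normal forward transition
    }
    return s;
}
```

Any step failing routes through the same single check, so there is no "explicit error checks everywhere" and no exception machinery either.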

Going back to the (most) recent discussion about RTOSs: I'm wary of the
design approach that figures there's another layer watching out for my ass.
Diapers don't make for good housetraining.

Steve
http://www.fivetrees.com



Re: C vs C++ in Embedded Systems?
[quoted text elided]

Do you mean that it makes no sense to throw exceptions in state
machines? If so, why?

[quoted text elided]

Why are state machines different? An action could very well make a
deeply nested call, or could it not? So, it makes sense to use
exceptions there.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
Re: C vs C++ in Embedded Systems?
[quoted text elided]

I mean that if the error source and the error handler can be tightly bound
(for instance by turning a sequence into a loop decoding a state machine),
then there simply is no need to use exceptions.

[quoted text elided]

Same answer, really. If the deeply-nested call can return a failure code to
the state machine, again there's no need for exceptions. (I generally find I
have to do this anyway, since the state machine needs to know what to do
next...)

Steve
http://www.fivetrees.com



Re: C vs C++ in Embedded Systems?
[snip]
[quoted text elided]

I don't see what this has to do with the argument whether exceptions
make sense in state machines or not. I guess I don't understand what you
say here.

The need for exceptions arises from the fact that most non-trivial
functions do their "work" by calling other functions, which typically
results in deeply nested call trees. Moreover, the function that wants
to report a problem is very often separated by several levels from the
function that actually has a chance to remedy the problem.

[quoted text elided]

*If* it can, but why should it? If a call in an action results in, let's
say, 10 other calls, as follows:

A transition action calls a(), a() calls b(), b() calls c(), c() calls
d(), ... j() calls k()

and k() wants to report a problem that can only be remedied by a()'s
caller then why should one not use exceptions?
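A sketch of that scenario (only three of the levels shown, names illustrative): the deep function reports a problem, and a single try/catch at the top replaces a return-code check at every intermediate level.

```cpp
#include <cassert>
#include <stdexcept>

// k(), deep in the chain, detects a problem only the top can remedy.
void k() { throw std::runtime_error("problem only the top can handle"); }
void b() { k(); }    // no error-handling code needed at this level...
void a() { b(); }    // ...or at this one: the exception just passes through

// a()'s caller -- e.g. the code invoking a transition action -- is the
// one place that catches and remedies the problem.
bool run_action() {
    try {
        a();
        return true;
    } catch (const std::runtime_error&) {
        return false;
    }
}
```

The intermediate levels b() through j() carry no error-propagation code at all, which is the efficiency argument being made above.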

[quoted text elided]

As an aside, if the error cannot even be handled inside the transition
action then the transition action would want to report the problem to
the state machine. With almost all the FSM frameworks I know one would
need to do that by posting an error event and then returning normally
from the action. I find this rather cumbersome because the state machine
could easily automate this. But that's really beside the point, what
matters is that exceptions help you to propagate errors from k() to the
transition action.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
Re: C vs C++ in Embedded Systems?
[quoted text elided]

Yes, I understand.

[quoted text elided]

Again, I hear this justification a lot. My use of state machines tends to
"unwind" the problem, and tends to flatten the error reporting - I tend not
to run into such deeply nested calls - or at least, error propagation tends
not to be a problem. I sometimes think that the real skill I've acquired
over the years is in decomposition, and that's really what's saving my
bacon. But one of my standard tools is the state machine (or hierarchy of
nested state machines).

I suspect I'm making a dog's dinner of explaining myself; apologies -
tired....

[quoted text elided]

Again, all of this makes good sense. I have no huge problem with posting
error events, *so long as* they don't just become a thinly-disguised
global...

Steve
http://www.fivetrees.com


