C vs C++ in Embedded Systems?

No, that's much too strong a requirement. The real requirement is that the garbage collection system be able to find every pointer that it needs to relocate, so that each one can be updated to follow the moved allocation. Double indirection through handles is one obvious way to do that, but so is simply ensuring that all pointer variables actually point to allocated memory. That's a natural consequence of the "safe" languages that I mentioned, and so those languages get to use normal, one-way pointers, even to memory objects that can be moved when needed. It's a requirement that can't be guaranteed with C or C++.
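For illustration only, here is a minimal sketch of the double-indirection (handle) idea; the names (HandleTable, alloc, relocate) are made up and not taken from any particular collector:

    #include <cstddef>
    #include <cstring>
    #include <new>
    #include <vector>

    // Clients hold an index into the table (a handle), never a raw
    // pointer, so a compactor only has to patch the single table slot
    // when it moves an object.
    struct HandleTable {
        std::vector<void*> slots;

        std::size_t alloc(std::size_t bytes) {
            slots.push_back(::operator new(bytes));
            return slots.size() - 1;              // the handle
        }

        void* deref(std::size_t handle) { return slots[handle]; }

        // new_location would come from the compacted region of the heap;
        // reclaiming the old block is the compactor's job and is outside
        // the scope of this sketch.
        void relocate(std::size_t handle, void* new_location, std::size_t bytes) {
            std::memcpy(new_location, slots[handle], bytes);
            slots[handle] = new_location;
        }
    };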

I'm pretty sure that the last approach is the one usually used, although I believe that finding ways to relax this constraint is an avenue of active research.

True, that's why those systems make do with static allocations, and languages that support them: C, C++ (if you're careful), Forth, assembly, etc.

Cheers,

--
Andrew
Reply to
Andrew Reilly

I mean that if the error source and the error handler can be tightly bound (for instance by turning a sequence into a loop decoding a state machine), then there simply is no need to use exceptions.

Same answer, really. If the deeply-nested call can return a failure code to the state machine, again there's no need for exceptions. (I generally find I have to do this anyway, since the state machine needs to know what to do next...)
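A minimal sketch of that arrangement (all of the names here are hypothetical, just to show the shape of it):

    // The worker returns a status code and the state machine decides
    // what to do next -- no exceptions involved.
    enum class Status { Ok, Timeout, BadFrame };
    enum class State  { Idle, Receiving, Error };

    // Stub standing in for the deeply nested work.
    Status receive_frame() { return Status::Ok; }

    State step(State s) {
        switch (s) {
        case State::Idle:
            return State::Receiving;
        case State::Receiving:
            switch (receive_frame()) {
            case Status::Ok:       return State::Idle;
            case Status::Timeout:  return State::Receiving;  // just retry
            case Status::BadFrame: return State::Error;      // in-band error state
            }
            return State::Error;
        case State::Error:
        default:
            return State::Idle;                              // recover and carry on
        }
    }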

Steve


Reply to
Steve at fivetrees

It was just a question. I guess it *is* just me...

Many others in the ng seem willing to take you at face value:

1) You were an undergrad (in August '04) asking about what courses to take to prepare for an MS in embedded.
2) You flirted with an 89C51 in December.
3) You are now porting Linux to a custom (not 3rd-party or eval) PCB with a very powerful processor (an AT91RM9200, a 200 MIPS ARM with "16K-byte instruction and 16K-byte data cache memories, 16K bytes of SRAM, 128K bytes of ROM, External Bus Interface featuring SDRAM, Burst Flash and Static Memory Controllers, USB Device and Host Interfaces, Ethernet 10/100 Base T MAC, Power Management Controller...") that only comes in 256-ball BGA or 208-pin PQFP packages.

I find this sequence a bit difficult to believe, that's all. I'm impressed, if it's true. How much support are you getting from Delhi College of Engineering? Is the school making the prototype PCB for you (including the reflow)?

Bob

Reply to
Bob
[...]

As well as the wrong answer. Or at least, not the answer that was probably intended...

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

I apologize. I shouldn't have implied that you were trolling.

Bob

Reply to
Bob
[snip]

I don't see what this has to do with the argument about whether exceptions make sense in state machines or not. I guess I don't understand what you're saying here.

The need for exceptions arises from the fact that most non-trivial functions do their "work" by calling other functions, which typically results in deeply nested call trees. Moreover, the function that wants to report a problem is very often separated by several levels from the function that actually has a chance to remedy the problem.

*If* it can, but why should it? If a call in an action results in, let's say, 10 other calls, as follows:

A transition action calls a(), a() calls b(), b() calls c(), c() calls d(), ... j() calls k()

and k() wants to report a problem that can only be remedied by a()'s caller then why should one not use exceptions?

As an aside, if the error cannot even be handled inside the transition action then the transition action would want to report the problem to the state machine. With almost all the FSM frameworks I know, one would need to do that by posting an error event and then returning normally from the action. I find this rather cumbersome because the state machine could easily automate this. But that's really beside the point; what matters is that exceptions help you to propagate errors from k() to the transition action.
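For illustration, a stripped-down sketch of that chain (the error type and message are arbitrary): only the two ends contain any error-handling code; the intermediate levels are untouched.

    #include <cstdio>
    #include <stdexcept>

    void k() { throw std::runtime_error("k(): resource unavailable"); }
    void j() { k(); }
    // ... c() through i() omitted; each one would simply call the next ...
    void b() { j(); }
    void a() { b(); }

    // a()'s caller -- here, the transition action -- is the one place
    // that can remedy the problem, so it is the one place that catches.
    void transition_action() {
        try {
            a();
        } catch (const std::runtime_error& e) {
            std::printf("handled by a()'s caller: %s\n", e.what());
        }
    }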

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.
Reply to
Andreas Huber

Unfortunately, you seem to have missed my point. I was not discussing the problem of garbage collection (if a programmer does not know when to delete an object, that programmer should not be hired), but rather the problem of dynamic memory squeezing to allow a sufficient amount of contiguous memory. Using some ad-hoc method in which the code is disassembled to find out all memory references would not be acceptable due to the memory size and CPU power limitations, not to mention some rare cases in which there might be some innocent-looking bit patterns that could be misinterpreted. In an ad-hoc system, you might be able to achieve 99.999 % reliability, but in many systems, this is simply not enough. I have written several assemblers, linkers and disassemblers, and relying on heuristics and other ad-hoc methods will sooner or later cause a catastrophe. You must have a formally sound method to describe the environment.

Paul

Reply to
Paul Keinanen

Yes, I understand.

Again, I hear this justification a lot. My use of state machines tends to "unwind" the problem, and tends to flatten the error reporting - I tend not to run into such deeply nested calls - or at least, error propagation tends not to be a problem. I sometimes think that the real skill I've acquired over the years is in decomposition, and that's really what's saving my bacon. But one of my standard tools is the state machine (or hierarchy of nested state machines).

I suspect I'm making a dog's dinner of explaining myself; apologies - tired....

Again, all of this makes good sense. I have no huge problem with posting error events, *so long as* they don't just become a thinly-disguised global...

Steve


Reply to
Steve at fivetrees

Any halfway decent C compiler will generate only load/store/move sort of instructions for that.

Reply to
Eric Smith

On Tue, 01 Feb 2005 00:51:06 +0200, Paul Keinanen wrote:

Indeed that appears to be the case, sorry. My point above is strictly related to the paragraph of yours that I quoted.

[*] [I'd like to come back to this one, below]

I do not believe that I was talking about an ad-hoc system. In all of the languages that I mentioned, the language runtime knows exactly where all of the pointers (references) are, through type information and other language-based constraints. Random bit patterns can't be mistaken for memory references, by definition, in these languages.

Exactly.

Now, whether any of the "safe" languages are suitable for any particular embedded application or environment is obviously a decision for each situation. Certainly many embedded platforms are just too small to support the overhead. Some aren't, though. Certainly some of the issues relating to real-time behaviour may not be solved in all situations, and that may be a concern, but there are certainly embedded applications that involve non-realtime behaviour where some of the benefits of a dynamic OO language could be of use. Anything with an ethernet port and a sophisticated TCP/IP control protocol would be a reasonable ball-park, IMO.

[*] [Digression:] It has occurred to me, as I've increased my use of matlab, scheme and python for various projects (off-line, usually), that the apparent increase in "power" of these languages comes from the structure of the libraries and the usual idioms. These typically revolve around operating on and returning whole collections (or other "large" objects) from function calls, rather than scalar values. This allows the programmer to operate more at the pseudo-code or high-level algorithm level, and is, IMHO, a consequence of pervasive garbage collection.

It is much harder (if not actually impossible) to use this style in C or C++ because it breaks the usual rules of level-balanced allocation and de-allocation. In a non-garbage-collected environment, a caller can't know, in general, whether a particular reference that it has just been given is unique (if it isn't, the caller should not free it when it's done), and, if it is unique, whether it points to allocated storage (which should be freed) or to static/const storage.

So: to answer the statement that prompted this particular rave: if a hire is worrying about when to delete an object then (a) they're obviously not working on real-time code (and probably not working on embedded code at all) and (b) they're wasting effort that could be put into a better product or shorter release schedules. Yes, I might be over-stating the case in the cause of a good argument; I don't mean to be offensive. [This is a digression, since the garbage/manual distinction applies equally to C and C++, and so doesn't bear on the topic itself, other than as a tangential put-down of C++. Sorry.]
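A small sketch of that ambiguity (the lookup_name function is hypothetical, purely for illustration):

    #include <cstdio>
    #include <cstdlib>

    // The signature alone does not say who owns the result.
    const char* lookup_name(int id) {
        if (id == 0)
            return "anonymous";                    // static storage: must NOT be freed
        char* s = static_cast<char*>(std::malloc(32));
        if (s != nullptr)
            std::snprintf(s, 32, "user-%d", id);   // heap storage: caller must free
        return s;
    }
    // A caller holding only a const char* cannot tell which case it got;
    // a garbage collector (or a documented ownership convention) removes
    // the ambiguity.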
--
Andrew
Reply to
Andrew Reilly

I agree, and would go further to suggest that it's part of a design methodology that doesn't admit to "error" conditions: everything that can happen is something that can happen, and should be designed-for. If you keep everything in-band, then there is no scope for "exceptions" as such, although you may very well need to define specific "error states" as part of the system decomposition. I suspect that this strategy may be brittle, and may not scale up to things as complicated as graphical workstations, but it does work nicely on single-chip coded-to-the-metal embedded projects, in my experience.

--
Andrew
Reply to
Andrew Reilly

As written in those three lines, of course, but it was only meant to illustrate that all of the numbers were integers. If it helps, consider this instead:

int divide(int numerator, int denominator)
{
    return numerator / denominator;
}

Ed

Reply to
Ed Beroset

If this seems to be a problem, you might want to investigate reference-counted objects and auto_ptr. Both are useful concepts for programming effectively in C++, and both are typically used to address just this kind of thing. Basically, if the caller can't know whether a particular reference is unique or not, then it is the responsibility of the programmer(s) to ensure that it does not matter whether or not it is unique.
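A brief sketch of the reference-counted idea (using std::shared_ptr, which postdates this discussion but expresses the same idiom as auto_ptr or a hand-rolled reference count; the Buffer type is made up):

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Buffer { std::vector<unsigned char> data; };

    // The reference count travels with the pointer, so no caller has to
    // know whether its copy of the reference is the unique one.
    std::shared_ptr<Buffer> make_buffer(std::size_t n) {
        auto b = std::make_shared<Buffer>();
        b->data.resize(n);
        return b;
    }

    void consume(std::shared_ptr<Buffer> b) {
        // ... use b->data ...
    }   // if this was the last reference, the Buffer is destroyed here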

Ed

Reply to
Ed Beroset

And there you are, having re-invented a garbage collection system, clumsily. Better, IMO, to just start off with a neat, clean language that does the right thing implicitly, or avoid dynamic memory allocation altogether.

Just to zip back to the previous discussion about memory space requirements and non-determinism of garbage collection cycles:

(a) How do you know how much memory you need, to guarantee that there can be no fragmentation-based memory "exhaustion" in a long-running, dynamic C or C++ program? How does that change on different platforms, with their different implementations of malloc/free? Is it necessarily better than the space needed with a garbage collection system?

(b) How do you know what the worst-case execution time behaviour of malloc() and/or free() is, on your platform of choice? Is it bounded? Is it always better (or at least never worse) than any given garbage collection system?

--
Andrew
Reply to
Andrew Reilly


Nicely put. That's also been my experience.

Steve


Reply to
Steve at fivetrees

Garbage collection is not necessarily "the right thing." That's why C++ doesn't include it, but includes sufficient tools to create it without much effort. Java, for example, insists on garbage collection, with all the inherent problems of a GC scheme. If you want to have a function call return a collection of newly allocated items, you can do that without garbage collection. In fact, it's a pretty common idiom. If you destroy an STL vector, deque or other collection, by definition the destructor is called for each element within the collection, too.
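For instance, a minimal sketch of that idiom (the names are illustrative):

    #include <string>
    #include <vector>

    // A function can hand back a freshly built container; destroying the
    // container later destroys every element in it, with no collector.
    std::vector<std::string> read_labels() {
        std::vector<std::string> labels;
        labels.push_back("motor");
        labels.push_back("sensor");
        return labels;                // ownership passes to the caller
    }

    void use_labels() {
        std::vector<std::string> labels = read_labels();
        // ... use labels ...
    }   // the vector's destructor runs here and destroys each string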

These are good questions to ask if you are working on an embedded system that allocates and frees memory. They go with all of the other questions a competent engineer must answer, like "How do you know how much stack space will be enough?" and "Are any of the timing guarantees of any of the maskable interrupts ever violated by holding off the interrupt for too long?"

The larger and more complex the system, the more difficult these answers become. When you're working with a more complex system, it makes sense to use tools and techniques that simplify the job and the use of C++ can be one of those tools.

C++ may not be perfect (and critics love to call it "poorly designed") but it works well for me, because I've invested considerable time figuring out how to use it. It could be that some of those who rule out C++ entirely for embedded work simply don't know the language very well.

Ed

Reply to
Ed Beroset

Decomposition - at least the kind I know - makes call trees deeper, not shallower, so you'd have even more need for exceptions. The only ways I see to flatten call trees in state machines are:

1) You make your functions contain more statements (e.g. instead of calling d() from c() you simply copy-paste d()'s function body into c()).
2) Because 1 leaves you with inordinately long function bodies, you break them up into handy pieces and have your fsm call them in the right sequence.

The problem I see with this approach is that it leads to code duplication and obfuscation, just to avoid exceptions.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.
Reply to
Andreas Huber

Exceptions don't make any difference here. No matter whether you use error codes or exceptions you'll always have to deal with them one way or another. Exceptions only provide a mechanism to report problems. Dealing with them remains the job of the programmer.

I don't doubt that it works well for small applications, but - as you say - it does not scale very well.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.
Reply to
Andreas Huber

No, that's not what I mean. Not sure I can describe what I *do* mean, though ;).

Not qualities I would tolerate. No; if anything I believe my code is clearer - simply because there is nothing going on that isn't visible, and nothing that I have to remember to clean up later...

Probably, exceptions in general are like everything else we've been discussing - a tool that's appropriate in the right cases, and in the right hands.

Steve


Reply to
Steve at fivetrees

This isn't what I meant at all, but it's hard to give a non-specific, general example. Here's a (probably flawed) one:

Say that you have an application that wants a reciprocal. You might be able to use domain knowledge of the application to define a "reciprocal" function that returns a saturation value if the input is zero (or negative, perhaps) rather than throwing an exception that must be explicitly caught by the caller. By encapsulating what might be an exceptional condition in a fully general function into the *spec* of the specific function, you can completely remove the exception and the need to deal with it at a high level, by completely dealing with the situation at the lower level.
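A possible sketch of such a function (the saturation value chosen here is arbitrary):

    #include <limits>

    // Domain knowledge folds the exceptional case into the function's
    // spec: a zero input saturates instead of throwing or dividing by
    // zero.
    double reciprocal(double x) {
        if (x == 0.0)
            return std::numeric_limits<double>::max();
        return 1.0 / x;
    }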

The brittleness arises because this doesn't look as though it would encourage the development of fully (or maximally) general subroutine libraries. Perhaps there's a happy medium where specialized (wrapped) versions of library functions that may throw exceptions are always used, where the exception is caught at the lowest level (the wrapper) and dealt with appropriately (for the application). That doesn't sound a whole lot different to just checking for the returned error condition, though...

--
Andrew
Reply to
Andrew Reilly
