The C standard allows it. Principally used for obfuscation.
We are talking about embedded systems here and ROM-mounted code. All the 'boot' has to do is set the stack pointer, and possibly pass a pair of values defining the limits of memory. You can even avoid setting the stack pointer by doing a jump to main. The call to 'init' can be done with another jump, passing the return address in a register. No need for argc and argv, because everything starts at power-on. This is all standards-compatible for freestanding (non-hosted) systems.
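A minimal sketch of that kind of reset sequence in C. Everything here is illustrative: the RAM limits, the handler name, and `app_main` (standing in for the application's real `main()` so the sketch can run hosted) are my inventions, not from any particular part or vendor SDK.

```c
#include <assert.h>  /* only for the hosted demonstration; absent on target */

static unsigned long mem_lo, mem_hi;

/* 'init' receives the limits of memory in place of argc/argv. */
void init(unsigned long lo, unsigned long hi)
{
    mem_lo = lo;
    mem_hi = hi;
}

/* Stand-in for the application's main(); renamed so this compiles hosted. */
int app_main(void)
{
    return (mem_hi > mem_lo) ? 0 : 1;
}

void reset_handler(void)
{
    /* On many parts the hardware loads SP from the vector table at reset,
       so no explicit stack setup is needed here; otherwise one instruction
       of startup assembly sets it before this point. */
    init(0x20000000ul, 0x20004000ul);   /* illustrative RAM limits */
    app_main();                          /* effectively a jump to main */
    /* on real hardware: for (;;) {} -- main must never return */
}
```

The point is how little there is: no CRT, no argument marshalling, just the memory limits handed over and a transfer of control.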
--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
Error-checking is good - but what would you do if new/malloc et al *do* fail due to e.g. memory fragmentation? Reboot? (I'm assuming a modal dialogue is inappropriate ;).)
I actively avoid all such sources of errors at runtime. (I also usually don't link in malloc and friends.) Nothing to do with C++ vs C or others, more part of the "Thou Shalt Not Crash" philosophy. YMMV.
Heh! I still hear that criticism of C from time to time. I read it as a) the code writer could have used an intermediate variable to make code flow simpler with (with care) no impact on compiled code, and b) the code reader is a n00b ;).
Re OO design: as I've said, it's a valid and useful tool. There *is* a case to be made, I think, for the idea that it's overused or used inappropriately - the hook/line/sinker effect I've mentioned elsewhere.
Steve, it's time to weigh in on this discussion as someone with both a lot of embedded experience and a significant understanding of compiler issues.
Small embedded processors of the type used in consumer products and automotive applications have very little RAM. Developers want to avoid malloc and indirect accesses. Code for these applications is almost always fixed (it doesn't change after reset and initialization).
Compilers for small processors try to do as much calculation as possible at compile time, as a tradeoff for code size and cycles. This helps OO, because many of the stated reasons for disliking it can, where possible, be resolved at compile time.
The real question is what parts of OO are essential to support for a useful development language, and what parts belong on a wish list.
I think there's a lot that can be done to create very tight OO implementations. Given a clean slate, what C changes would be needed to support OO?
You're right. I guess I was thinking that simple inheritance could be done statically, even in C++. I can see more of a justification for vtables in more complex cases.
If you're asking how I do inheritance: basically I cheat and consider it to be a somewhat more extreme form of polymorphism ;). I.e. it's not hard to extend a parent class via e.g. callbacks, even to the extent of adding a further subclass handler. And it all stays static. Nary a vtable, or a late binding, in sight.
If you mean how do I protect the definition of the superclass: first, it's entirely local in scope. Apart from the benefits in terms of encapsulation, it also avoids the usual C++ complaint about having to recompile everything if a superclass changes. I.e. there's somewhat less need to cast the superclass in stone.
If you mean something else: forgive me, I'm tired, and probably being dense.
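For what it's worth, here's a sketch of the callback-style static inheritance described above, as I'd construct it (the button/press names are my illustration, not the poster's actual code). The "subclass" supplies its handler at initialisation time, so dispatch is one direct call through a const-initialised pointer: no per-object vtable, no late binding.

```c
#include <assert.h>

/* "Parent": a button that reports presses via a subclass-supplied hook. */
typedef void (*press_fn)(int count);

typedef struct {
    int      presses;
    press_fn on_press;   /* subclass hook, fixed at initialisation */
} button;

static void button_init(button *b, press_fn hook)
{
    b->presses = 0;
    b->on_press = hook;
}

static void button_pressed(button *b)
{
    b->presses++;
    if (b->on_press)
        b->on_press(b->presses);   /* static extension point */
}

/* "Subclass": extends behaviour without touching the parent's code. */
static int last_count = 0;
static void counting_hook(int count) { last_count = count; }
```

Because the hook is known at build time, a decent compiler can often inline the whole chain, and nothing about the parent needs recompiling when a new "subclass" is added.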
I agree, and I'm not going to post a summary of Meyer's terminology. OO means too many different things to too many people, and I don't care for a fight over terminology. Get Meyer's book if you want a clear set of terminology that succinctly spans most of the ideas.
One other comment about C++; many of the features that make it better than C aren't intrinsic to OO at all, like mandatory prototypes, inlining, templates, overloading, etc... and some other non-OO features are a bad idea or badly implemented, like operator overloading, friends, the whole STL, the whole iostream library, the list is long :-). And then there are the crappy bits of C that it didn't escape from, like the incomplete and unsafe type system...
But I don't see why Steve's so against it; use the extra features where they help, and avoid the ones that don't. IOW, a better C. Sure, it takes experience to know how to use it well, but so does any powerful tool.
Off the top of my head, my wish list would be similar to Chris'. I'd like to be able to define classes entirely locally - no exposed definition - and be able to use a this-> pointer and have it compiled entirely statically. I'd like global *data* to be flagged with warnings (hell; make 'em errors ;).)
Constructors/destructors would be nice to have for general-purposeness, although for embeddedery I'd avoid new/free, so wouldn't need 'em - and for init time, I'd want to be very sure of the order in which they were called. I'd like strong(er) typing.
Inheritance... meh. Not sure. I'm gonna have to ponder this ;).
Yep, I agree with all of that. (And I hadn't appreciated that overloading wasn't OO, but now you mention it, it makes sense. I did see it mentioned when I researched the definition of OO a while back [during a similar conversation here - just wanted to check I wasn't talking utter bollocks ;)], but I'm beginning to distrust some of those definitions. And yeah, overloading really, really sucks. I can *always* find a cleaner, clearer, alternative.)
Heh. Probably because a) we're talking about embedded applications, and b) I've rarely seen it done well. Add those two together, and you have (or I've seen) projects suffering from terminal bloat, both in terms of runtime resources and in terms of development time. Not always, but often enough to ring warning bells. (Not *my* projects, I hasten to add ;).)
To be fair, I have seen C++ used properly and well in a desktop environment. (Statistically, I had to sooner or later ;).) I am aware that embedded C++ compilers have improved immensely, and I've been aware for a long time that a subset of C++ is fine for embedded use, given a good compiler, and a good understanding of the cost of each feature. But I've also seen one casual instance of a library class call in constructor after constructor involving great swathes of code (and puzzling execution delays), and almost double the code size. Data hiding is good; I'm not convinced that execution hiding is.
I'm also a strong believer in good design, implemented with a restricted toolset of familiar idioms - hell, code can be hard enough to read as it is. I don't want to be reaching for the C++ grammar police manual, or opening every header in the project and its libraries to be able to trace back some convoluted inheritance or constructor issue. I try to minimise side-effects in my code; C++ seems to fight to put them back in. Yes, it can be done well, *if* one checks the linker output carefully after every build.
Bottom line: nah. I can do better by myself by designing in OO, and coding in something clear and clean. KISS is my middle name ;).
Should have added: for the record, we've had this, or a similar, conversation many times here before. But I'm enjoying and learning from this thread more than before: it's clear that there have been some very thoughtful and insightful responses. I do sometimes question my disdain for embedded C++, while being a fan of (appropriate) OO - anything we can collectively do to make software-based products more reliable and robust, and reduce time to market, is something that I'm inherently HUGELY interested in.
[Actually, I've discovered that I'm more anti the hook/line/sinker problem than I am anti C++. This seems to be something I come across often, in all the work I do, ranging from project management to embedded development and all points in between.... Hmmm.]
So anyway: thanks to all for a most enjoyable and thought-provoking thread.
Steve (PS: ok, so I've had a couple of glasses of wine over a delightful dinner. But not quite into "you're my besht mate" territory. Yet.)
This is silly. If you don't check the return value from malloc then it's your own faulty code that is the problem. Otherwise, you decide how to handle the failure. If you can't stand any failure at that point, you look for another mechanism. If you can't make up your mind you pass the error upwards until it reaches a routine that can.
Meanwhile you are getting more mileage out of the available memory by controlling its allocation lifetime. If you have a memory leak that's again your own fault.
One more possibility - you have a faulty malloc/realloc/free module. That requires loud complaints to your compiler or runtime supplier.
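To sketch the "pass the error upwards" approach in C (names here are illustrative, not from any real codebase): each layer checks its allocation and returns a status code, so the failure climbs the call chain until it reaches a routine that can actually decide what to do.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef enum { OK = 0, ERR_NOMEM = -1 } status;

typedef struct { char *data; size_t len; } buffer;

static status buffer_create(buffer *b, size_t len)
{
    b->data = malloc(len);
    if (b->data == NULL)
        return ERR_NOMEM;        /* don't guess here; let a caller decide */
    b->len = len;
    memset(b->data, 0, len);
    return OK;
}

static status record_store(buffer *b, const char *msg)
{
    status s = buffer_create(b, strlen(msg) + 1);
    if (s != OK)
        return s;                /* propagate until someone can handle it */
    memcpy(b->data, msg, strlen(msg) + 1);
    return OK;
}
```

Whether the top-level handler retries, degrades gracefully, or reboots is then a design decision made in exactly one place, rather than scattered through every allocation site.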
--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
One of the things that I really liked about Eiffel (well, maybe it's changed since I looked at it, years ago) is that it separates allocation from initialization. Allocated objects are zero-filled by the allocator, and if that's not good enough for the class then a separate initialization method must be called before anything else happens (and that can be checked with appropriate sanity check assertions). This has the (IMO huge) benefit that it's easy and obvious to follow the chain of initializations, and they (necessarily) don't happen until *after* the main program has "started". Absence of magic is a good goal of programming languages, IMO.
I find that when I code object-oriented designs in C I follow most of the styles and idioms that I learned from Eiffel, and that practice has held me in good stead.
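One of those Eiffel-learned idioms might look like this in C (my illustration of the style, not code from the post): allocation zero-fills, a separate init method must run before anything else, and assertions make use-before-init fail loudly.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int initialised;     /* 0 straight out of the allocator */
    int period_ms;
    int ticks;
} timer;

static timer *timer_alloc(void)
{
    return calloc(1, sizeof(timer));   /* zero-filled, as in Eiffel */
}

static void timer_init(timer *t, int period_ms)
{
    assert(t != NULL && !t->initialised);  /* init exactly once */
    t->period_ms = period_ms;
    t->initialised = 1;
}

static void timer_tick(timer *t)
{
    assert(t->initialised);     /* sanity check: no use before init */
    t->ticks++;
}
```

The chain of initializations stays explicit and easy to follow, and none of it happens before the main program has started: no magic.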
Reboot is not a bad thing _if_ you have double/triple-redundant hardware; with many non-redundant systems, however, reboot is a _very_ bad thing.
If double-redundant hardware is not otherwise justifiable, would you add extra hardware just to handle the problems caused by dynamic memory fragmentation?
And what do you do when you assign fixed static storage blocks, and need more space at some point? The point is that you can get more mileage out of a malloc/free system than with fixed storage. Sooner or later you have to decide what to do when space is exhausted.
--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
a) I don't use no stinkin' free/delete. Fragmentation is the enemy.
b) This is only allocation (new/delete) at initialization. After startup is complete, you know that there are sufficient resources, otherwise you fail the application. That's why I don't like this technique.
Me too. All run-time allocation is done with dedicated pre-allocated pools and "placement new".
Amen.
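The dedicated pre-allocated pool idea can be sketched in plain C as well (this is my construction of the general technique, not Mr. Moran's code; placement new is the C++ way of constructing objects into such a pool). Fixed-size blocks mean no fragmentation, and exhaustion is detectable at a single point.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_BLOCKS 8
#define BLOCK_SIZE  32

static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static unsigned char in_use[POOL_BLOCKS];

static void *pool_alloc(void)
{
    for (int i = 0; i < POOL_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;   /* pool exhausted: fail the application at startup */
}

static void pool_free(void *p)
{
    for (int i = 0; i < POOL_BLOCKS; i++) {
        if (p == (void *)pool[i]) {
            in_use[i] = 0;
            return;
        }
    }
}
```

Because the pool is static, its worst-case footprint is visible in the link map, which fits the "all allocation at initialization" discipline above.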
--
Michael N. Moran (h) 770 516 7918
5009 Old Field Ct. (c) 678 521 5460
Kennesaw, GA, USA 30144 http://mnmoran.org
"So often times it happens, that we live our lives in chains
and we never even know we have the key."
The Eagles, "Already Gone"
The Beatles were wrong: 1 & 1 & 1 is 1