C++ syntax without C++

But we all agree "what arithmetic is" :)

Precisely :) Ditto Scott Meyers.

For me it was watching the experts argue over whether it should be possible/desirable/necessary/legitimate to "cast away constness"

I have no doubt that small subsets of the language will be very useful in constrained circumstances. "C-with-classes" (i.e. C++85) seems like a good idea.

Reply to
Tom Gardner

Are there /any/ C++11 compilers available now?

So, if history is anything to go by, we've only got another 5 years to wait for the first C++11 compiler :)

It is an order of magnitude worse with C/C++ than some other important languages.

Er no. You've merely controlled what you have asked for, not what you are given :) That's the problem :(

Reply to
Tom Gardner

If you only pick small problems with limited lifetimes, then I agree the problem is much reduced.

But C++'s raison d'etre was to enable reliable construction of large programs with a long lifetime.

That's a revealing statement, in that it implies what you haven't tried to do with C++.

Which will enable you to find different problems, and ones which are far less tractable :(

What's the current position on "casting away constness"?

(In the early 90s the apparent impossibility of getting all the language experts to agree the "right answer" was one of the reasons I gave up on C++.)
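For the record, the language did eventually settle the question: const_cast exists precisely for this, and the rule is that casting constness away is always legal, while *modifying* the object afterwards is undefined behaviour only if the object was originally defined const. A minimal sketch (legacy_api is a made-up name for illustration):

#include <cstdio>

/* An old C-style interface that takes char* but never writes through it. */
void legacy_api(char *s) { std::puts(s); }

void wrapper(const char *s) {
    /* Casting constness away is always legal; behaviour is undefined
       only if the callee modifies an object that was defined const. */
    legacy_api(const_cast<char *>(s));
}

int main() {
    const char greeting[] = "hello";
    wrapper(greeting);   /* fine: legacy_api only reads */
}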

Reply to
Tom Gardner

There's an old engineering adage: "you can't test quality into a product".

False. But it is probably true that there are good almost-but-not-quite-C++ compilers.

Maybe, maybe not. Time will tell.

Either way it indicates that the standards are /far/ *far* too complex - and arguably brittle.

If only! Many C++ compilers accept the syntax and emit code that does the wrong thing. The "wrongness" may be due to any of:
- programmer misunderstanding C++
- implementer choosing a different valid interpretation of the C++ standard
- incorrect optimisation
- libraries built with different presumptions
- and others

Those can be minimised by working on small programs, but C++ was "designed" (and I use that term loosely) for large programs.

Reply to
Tom Gardner

This is exactly my point! Even if you ARE a guru, are all of the folks who *inherit* your pristine work similarly capable?

Consider (C):

double a, b, c, d;
...
d = a + b + c;

I suspect *most* programmers would have a good feel -- in terms of instruction cycles (whatever those are) and memory -- as to the approximate cost of this statement. *Experienced* programmers could probably give you a rough multiplier for the difference between that and, for example:

long long a, b, c, d;
...
d = a + b + c;

Folks with an understanding of the hardware could probably write pseudo-ASM for the (imaginary) machine lying thereunder.

Now, consider:

Complex a, b, c, d;
...
d = a + b + c;

In your mind, a Complex is just a pair of doubles. So, perhaps *twice* as complex as the first example?

Gee, did you count *all* the constructors that were invoked in that *one* assignment? And the destructors that almost immediately followed? Did you remember all the *details* involved in instantiating each of those? Those "good" classes tried hard to *hide* these details so you wouldn't be bothered with them...
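One way to make that hidden traffic visible -- a throwaway sketch, not anyone's production class -- is to give Complex chatty special members and count what actually fires:

#include <cstdio>

// Deliberately noisy Complex: every construction/destruction prints,
// so the cost of "d = a + b + c" can be observed rather than guessed.
struct Complex {
    double re, im;
    Complex(double r = 0, double i = 0) : re(r), im(i) { std::puts("ctor"); }
    Complex(const Complex& o) : re(o.re), im(o.im)     { std::puts("copy"); }
    ~Complex()                                         { std::puts("dtor"); }
    Complex& operator=(const Complex& o) {
        re = o.re; im = o.im; std::puts("assign"); return *this;
    }
};

Complex operator+(const Complex& x, const Complex& y) {
    return Complex(x.re + y.re, x.im + y.im);   // one temporary per '+'
}

int main() {
    Complex a(1, 2), b(3, 4), c(5, 6), d;
    d = a + b + c;   // two temporaries, one assignment -- plus the dtors
}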

Error function(...);
...
function();

Did you remember that an object was constructed for the return value -- and immediately destroyed because it wasn't used (referenced)? Do you remember that Error contains a dynamically created String object to contain the textual representation of the error message (the message that you never even looked at)?

E.g.,

String foo()
{
    String message = "The result is ";
    ...
    if (something)
        message += "positive!";
    else
        message += "negative!";
    return(message);
}

String error = foo();

How much "work" is being done, here? Will the guy in the next cubicle come to the same answer?
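These days there is at least one mitigating factor worth knowing: compilers usually apply named return value optimisation, so the return in foo() typically constructs message directly in the caller. A tracing wrapper makes the answer empirical rather than a cubicle debate (this String is a stand-in I wrote for illustration, not the class from the post):

#include <cstdio>
#include <string>
#include <utility>

// Thin tracer around std::string: prints on copy/move so you can *see*
// how much work foo() really does under your compiler.
struct String {
    std::string s;
    String(const char *p) : s(p)                    { std::puts("construct"); }
    String(const String& o) : s(o.s)                { std::puts("copy"); }
    String(String&& o) noexcept : s(std::move(o.s)) { std::puts("move"); }
    String& operator+=(const char *p) { s += p; return *this; }
};

String foo(bool something) {
    String message = "The result is ";
    if (something) message += "positive!";
    else           message += "negative!";
    return message;   // NRVO usually elides any copy/move here
}

int main() {
    String error = foo(true);   // typically prints just "construct"
}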

I always liked the operator overloading aspect of C++ and the fact that I could design a self-consistent type of my own choosing.

E.g., I rely heavily on cubic bezier curves in my gesture recognizer. It would be *really* nice to be able to "operate" on them using a nice, clean "infix" syntax:

Bezier gesture, first_segment, last_segment, et al.;

gesture = first_segment + last_segment;
if (gesture.direction != template.direction)
    gesture = -gesture;
while (gesture.length > template.length)
    gesture /= 2;
if (template.curviness == gesture.curviness)
    ...

But, in my case, these operations need to be *fast* AND lean (as they reside in a little battery-powered "peripheral"). The "niceness" of the syntax has to give way to more practical implementations.

How expensive would the above code snippet be? How many times could it be applied in a 100ms window? How deep would the stack/heap need to be to ensure the unknown nature of "gesture" doesn't crash the device? How much work is involved in that innocuous "gesture/2" statement? Did you remember that all the member data for the new curve has to be updated?
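For what it's worth, the cost can be made boring by construction: a fixed-size Bezier with in-place operators does a known handful of arithmetic per step and never touches the heap. A hypothetical sketch (the splitting semantics for "/= 2" are my assumption, not the poster's):

struct Point { float x, y; };

// Fixed-size cubic Bezier: four control points, no dynamic allocation.
struct Bezier {
    Point p[4];
    float length;   // cached metric; must be refreshed after edits

    // In-place "halve": de Casteljau split, keeping the first half.
    // Cost: six point averages, every time, stack only.
    // (Only division by two is modeled in this sketch.)
    Bezier& operator/=(float) {
        for (int k = 1; k <= 3; ++k)
            for (int j = 3; j >= k; --j) {
                p[j].x = 0.5f * (p[j - 1].x + p[j].x);
                p[j].y = 0.5f * (p[j - 1].y + p[j].y);
            }
        length *= 0.5f;   // crude estimate; a real class would re-measure
        return *this;
    }
};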

Will The Next Guy understand these same issues? Or, do I have to leave implementation specific notes: "Given the 11/05/2013 implementation of the Bezier class, the above executes on hardware X using..." and *hope* someone maintains that metric?

OTOH, if the above was purely C (different meaning, entirely), you wouldn't be nearly as intimidated by these questions. (Or, if you examined a pure C implementation of the above)

You could buy an i8051 (actually, i8052) with BASIC built in. Consider how much *easier* that would have made developing products! How many have you actually *encountered* in products? :-(

Reply to
Don Y

You've missed two worse possibilities: 1) they don't read the comments 2) the comments have been deliberately removed

Now (1) is common.

I've seen (2) done to my code because "agile code is self-sufficient because it has unit tests, and nobody can be sure the comments are kept up to date with the code". At which point I went ballistic.

Don't get me wrong, agile has a lot of good aspects, but when it is applied as religious magic it is as bad as everything else.

(Ignoring all your other good points.)

Reply to
Tom Gardner

Versus:

struct complex a, b, c, d, temp;
init_complex(&a);
...
add_complex(a, b, &temp);
add_complex(temp, c, &d);

If your Complex class has heavy initializer routines, or if the addition function is bog-slow, then that's a problem with that code. If it's library code, then it's a library issue like any other. If it's your own code, then it's a "you wrote bad code" issue like any other.

The C++ syntax hides much of this stuff. But that doesn't mean that someone meeting the basic standard of "I do this for a living" shouldn't know that it's there. And once those things are in fact buried off in other places, they're just as invisible in terms of how heavy or light they are. Maybe init_complex is just a null #define. Maybe Complex has no constructor. If you actually need to know, then you need to go look, and that's regardless of how much syntactic sugar the language throws down for you.
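To make that concrete -- a plausible sketch of the C side, since the thread never shows it. The work is exactly whatever these bodies contain, and the caller can't tell without reading them, same as with the class:

struct complex { double re, im; };

/* Could just as well be an empty macro -- the call site looks identical. */
static void init_complex(struct complex *c) { c->re = 0.0; c->im = 0.0; }

static void add_complex(struct complex x, struct complex y,
                        struct complex *out) {
    out->re = x.re + y.re;
    out->im = x.im + y.im;
}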

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

Belatedly realizing my Limbo example may be vague. Points of clarification:

- greeting := "Hello" defines greeting's type (that of the rvalue) as well as setting its "value". So, this is roughly equivalent to: string greeting = "Hello" (you can fill in the omitted details)

- string is a base type in Limbo. So, unlike a "CString" class, the implementation details are far more opaque

- the "heap" is hidden from the developer (along with *its* behavior)

- Limbo employs its own built-in GC adding even more uncertainty to the "calculation" I proposed, above.

- Limbo expects to operate under Inferno (OS) thereby co-operating with other "threads" in the same address space (details about when you might be preempted are as scarce as the rest of this stuff!)

So, while I can *do* a lot in Limbo very easily that would end up requiring lots of custom "mechanism" in another language (e.g., built-in support for lists, tuples, communications, multitasking...), I do so at the expense of having no clue as to what this is costing me at any particular stage! I can look at an executable that compiles down to a few KB -- and wonder if several megabytes will be enough RAM! :<

Reply to
Don Y

The flipside is equally common: they read ONLY the comments and ignore the code! How do they even *know* that they are actually using the 11/05/2013 implementation, presently? Will they even bother to look? (I believe most folks are lazy)

If it is NOT the 11/05 version, then this comment means what? Things have gotten better? Worse? Banana?

I've seen large commercial projects without any commentary. Even "sources for sale"! With descriptive identifiers like "a", "b" and "c". (Um, folks, cutting down on the number of keystrokes doesn't make the sources more affordable!)

I now approach software documentation from a different perspective. I've gone through the "comment everything", "comment nothing" and "comment intelligently" (whatever that means) phases over the course of my career. I even tried the LP approach (and found it lacking).

I now "comment sensibly" -- but base all my comments on the assumption that the developer/maintainer has read all the ancillary "supplemental" documentation that I have provided.

E.g., my use of Beziers and properties thereof -- along with the algorithms, "special casing" and tricks that I apply in the code -- are largely uncommented. I *expect* you to have read the ~50 pages of explanatory text in the "accompanying documentation" so you understand the issues, techniques and algorithms that I am *applying* in the code. You don't (shouldn't!) expect me to give you a lecture on how the density of a volume is calculated *in* the code that performs that calculation. Why should I have to explain the reasons that this code fragment is checking for a particular pathological case of curve?

This is actually proving to be very effective. There are so many more things that you can do in a "real document" that just don't make sense in a "programming" environment. And, when writing the code, if you feel you are doing something that *needs* explanation, you can make a note to add further clarification to your "verbose" description -- leaving just a terse note here, "in the comments" ("Degenerate case of collinear control points").

Separating the "knowledge" (documentation) from the "description" (code commentary) lets each have enough flexibility that the risk of comments becoming stale/misleading is minimized.

I use a similar approach to bind the code to the user documentation. So, errors/exceptions *in* the code can be related to specific explanations in the documentation -- along with remedies! (Trying to go BACK through sources after-the-fact to extract this sort of information is expensive and largely ineffective. Esp. as the next revision of the code will probably leave the programmer unaware of the requisite changes that the user info needs as a result of *his* modifications!)
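A hypothetical sketch of that binding (all names invented here): give every user-visible error a stable identifier that both the code and the user documentation key on, so neither side has to parse the other:

#include <cstdio>

// Stable IDs shared between the firmware and the user guide.
enum class ErrorId {
    SensorTimeout   = 1001,  // user guide 4.2: check cable, retry...
    CurveDegenerate = 1002,  // user guide 7.1: collinear control points
};

void report(ErrorId id) {
    // Logs and UI print only the ID; the manual indexes remedies by it.
    std::fprintf(stderr, "error E%d -- see user guide\n",
                 static_cast<int>(id));
}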

Reply to
Don Y

Exactly. Would you think along the same lines if faced with:

float a, b, c, d;
...
d = a + b + c;

(assuming you had native float support)? Or, would you think in terms of a handful of equivalent ASM instructions?

[And, if you couldn't support the type natively, you'd still think in terms of helper routines THAT YOU HAVE LOTS OF EXPERIENCE WITH]

In *my* application, replace "float" or "Complex" with "Bezier". A few "simple" keystrokes in that statement can have HUGE cost consequences. The simplicity of the syntax hides too much mechanism -- requiring the developer to be more vigilant and more "qualified".

I contend that folks *don't* "go look". That someone writes: d = a + b + c + 5; without thinking "will a Complex(int) constructor be invoked? Or, will an operator+ that accepts a rhs of int be invoked? How much added cost has been introduced? Or, is this even *legal* given the existing class implementation?? I wonder if Fred has any plans for lunch..."

And, these aren't even *murky* areas! Or, aspects "known" to be "expensive" (as, presumably, those can be *avoided*!)
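For the curious, the answer in standard C++ is the converting constructor -- and the class author can make the whole question moot with one keyword. A sketch:

struct Complex {
    double re, im;
    // Converting constructor: reachable implicitly from int via
    // int -> double. Declared 'explicit', the last line below would
    // not compile, and the reviewer's question disappears.
    Complex(double r = 0, double i = 0) : re(r), im(i) {}
};

Complex operator+(const Complex& x, const Complex& y) {
    return Complex(x.re + y.re, x.im + y.im);
}

int main() {
    Complex a(1, 2), b(3, 4), c(5, 6);
    Complex d = a + b + c + 5;   // 5 silently becomes Complex(5.0):
                                 // one more temporary, chosen by the compiler
}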

I recall a (commercial!) tool that, once profiled, spent MOST of its time executing printf("%d", ...). The developer refused to believe this! Show him the profiler's output and the argument is over. He just didn't *think* about how expensive it was to do this. It was easy for *him* -- bad for the product!

Reply to
Don Y

Templates can generate lots of code when used the wrong way. When used correctly, they can save a lot of work and add quite a bit of type safety without adding code size.

Bad example:

template <typename T>
void foo(T& t) {
    // huge implementation
}

Good example:

void genericFoo(void*, size_t) {
    // huge implementation
}

template <typename T>
inline void foo(T& t) {
    genericFoo(&t, sizeof(t));
}
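The pattern in use -- a self-contained toy, where a printf body stands in for the "huge implementation":

#include <cstdio>
#include <cstddef>

// One out-of-line body, shared by every instantiation of foo<T>.
void genericFoo(void *p, std::size_t n) {
    std::printf("object at %p, %zu bytes\n", p, n);
}

// Thin inline shim: each new T costs a call, not a copy of the body.
template <typename T>
inline void foo(T& t) { genericFoo(&t, sizeof t); }

int main() {
    int i = 42;
    double d = 3.14;
    foo(i);   // both lines reuse the single genericFoo body
    foo(d);
}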

I even used templates and classes in bootloaders that were measured in bytes, not kilo or megabytes.

Just because you can do that doesn't mean you have to.

Stefan

Reply to
Stefan Reuther

You have to know if you *have* hardware floating point support. If so, you need to think before using it; for example, the floating point registers may not be preserved in an ISR or between task switches. You may need special linker flags, or special versions of printf and library support. So it is not as clear-cut as you seem to imply. And C++ does not take anything away: you can, after all, write C code with it.

You could make exactly the same argument against functions. They hide too much detail, who knows what they could be doing.

--

John Devereux
Reply to
John Devereux

Good gawd, I've involved myself in a flame war!

Let me just state here that I generally use C++ for embedded programming, and it serves me well. I avoid RTTI and exceptions, and I stay aware of what "cruft" I'm pulling in (even if David and I have different definitions of "cruft"). As long as I stay alert, everything works great.

I haven't bothered to keep up on all the nuances, and I don't pore over standards and whatnot, so most of what I know is intuitive (hence my comment about there being articles or books; I'd look for one that's specifically for C++ in the embedded world, and look for authors that understand that "embedded" comes in DIFFERENT SIZES).

Having said that, I should qualify: C++ works well when programs get big enough that the level of detail you need to make C work gets unwieldy. On a teeny processor with a teeny application, by the time you trim your C++ usage down to fit the application you're pretty much programming in C anyway, so you may as well start there. If it fits in 4kB, or even 16kB, of code space, you may as well be going with C.

(Similarly, if it fits in 256 bytes of code space you're probably better off in assembly language).

I think the biggest single benefit to me right now of using C++ is that I have a pretty big library of code that I've already written, and because it's in C++ it's been easier for me to make it reusable. Thus, when I need yet another text-based menuing system, I don't have to write it from scratch. Moreover, because it's in C++, the interface is well defined and the compiler is standing there with a ruler, ready to smack my knuckles if I made some stupid mistake. I could do the same thing in C, it'd just be way more awkward.

When I adopted C++ I was working in small groups of 2 to 5 software engineers, all working on one processor or similar processors. There, C++ was a great advantage because I could hand the foundation stuff to the really good coders and let them ensure quality; then the new guys and less ambitious guys could take that stuff and just use it. C++ has been much better for me than C when the group grows beyond one person.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

Years ago, I was reading one of Stroustrup's C++ books. I had to keep RE-reading a section (a few pages) as it seemed entirely WRONG to me. Yet, here's the guy who created the language... surely the problem must be one of comprehension (lack thereof) on my part!

No, it's not a "typo" -- he is taking a consistent position and repeating it. "What am I missing?"

I finally took the opportunity to write to him -- humbly citing chapter and verse and explaining my dismay at failing to understand (or agree with) his points: "What am I missing?"

His reply: everything he had written was WRONG. (WTF? Surely *you* with the most experience in YOUR creation...)

What does that say about those of us who are less experienced, less disciplined, less invested or less intelligent? People who imperfectly know something inevitably make mistakes. People who aren't "motivated" don't CARE about their mistakes. Presumably, Bjarne was neither -- yet still made a glaring mistake -- and printed tens of thousands? of copies thereof! (apparently coming as a surprise to him as he didn't indicate it was a "previously reported error")

I am perpetually drawn back to complex: adj., too large to fit in a single human brain (perhaps conditioned by "while leaving room for other 'important' things" as a reference to Kelly Bundy)

Reply to
Don Y

True.

You are correct, but missing the point.

I get worried if language (or other) designers don't understand the consequences of what they've designed.

There's the old aphorism attributed to C.A.R. Hoare (who made seminal advances with multiprocess/multiprocessor systems): "I conclude that there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies."

It is obvious which Bjarne Stroustrup et al chose :(

Reply to
Tom Gardner

C++ is a monster: a bastard between an object-oriented language and a non-object language. It shows the symptoms of PL/I, which was a victim of its own versatility.

--

Tauno Voipio
Reply to
Tauno Voipio

That was what I was trying to avoid by EXPLICITLY framing the question as "C++ syntax" and not "C++ vs C". I.e., what can I do *in* C++ that is NO DIFFERENT than the code that C would generate -- without forcing the compiler to *only* treat my code as C! The examples I cited should drive home that point (e.g., I don't see how namespaces should *require* a treatment effectively different from augmenting the symbol processing in a preprocessor).
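To illustrate that point (my example, not from the thread): a namespace really is little more than structured name decoration, the compiled analogue of the prefixes C programmers maintain by hand, with no runtime cost either way:

// C++ view: the namespace qualifies the symbol; "dsp::filter" is encoded
// into the mangled name. No code is generated for the namespace itself.
namespace dsp {
    void filter(float *buf, int n) {
        for (int i = 0; i < n; ++i) buf[i] *= 0.5f;
    }
}

// The moral equivalent in C: a prefix maintained by convention.
void dsp_filter(float *buf, int n) {
    for (int i = 0; i < n; ++i) buf[i] *= 0.5f;
}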

Exactly. IME, most C++ programmers come from desktop environments. These *tend* not to be real-time nor resource-constrained. You can always buy a bigger disk, more memory, faster CPU, etc. Cost is little object...

I think "big" depends on how you approach a problem. I.e., you can decompose a HUGE problem into lots of very manageable smaller problems. Or, treat it as an unwieldy monolithic entity!

Stronger typing *tends* to constrain errors more. But, it also constrains abilities, as well. And, if there are mechanisms to side-step those constraints, then they aren't effective (folks learn how to sidestep them instead of doing things "The Right Way")

I saw the opposite. Run of the mill C++ "coders" throwing things together, wrapping a box around it and complaining any time someone grumbled about its performance, correctness, etc. No "forethought".

To be fair, these same folks would probably have created equally abysmal code in ANY language. But, I think the "low price of admission" that C++ (and now Java) presented drew many of The Wrong Type of people into the field ("I think I can make a lot of money writing software..." as opposed to "I have the sort of mindset that fits well with writing software"). Witness how far new languages go to "dumb down" the API...

Also, I suspect a good bit of the reason I chose to avoid C++ in my designs is associated with an "early adopter" mentality. I look at many technologies early on -- when they often aren't sufficiently mature (Not Ready for Prime Time). And, have a fixed level of patience for them to "get their shit together" -- I'm paid to *design*, not wait for compiler releases! When you have proven successes using other technologies, there's not much incentive to hang around and *hope*!

[amusingly, this late in my career, I am now making HUGE gambles on technologies that may never become effective! But, it's *my* dime so I can spend it as I want -- if it's a client's, I've got to spend it *wisely*! :> ]
Reply to
Don Y

Have a look at Ada if you really want to be intimidated! :<

If you make a tool and no one can use it effectively/efficiently, I contend that you've made the wrong tool! (Imagine if a HAMMER required you to wear a special suit and use a special *holder* in order to employ it. You'd see lots of shiny new hammers sitting on the bench collecting dust -- and lots of people carrying large ROCKS in their toolkits! :> )

I was particularly dismayed to see the approach Ritchie et al. took with Limbo/Inferno -- *knowing* how successful C had been and all the "issues" with C++. It would be interesting to have a *candid* conversation with any of them on that topic.

Perhaps their attitude was "it's just a job"?

Reply to
Don Y

My trajectory enabled me to choose to be on the bleeding edge, if and when I could see advantages:
- Algol - didn't know anything else
- C, early 80s - good for embedded
- Smalltalk, mid 80s - good for more complex and dynamically unpredictable systems than C
- C++, late 80s - too easy to make wrong choices that can't be undone, too complex with inadequate benefits
- Objective-C, late 80s - nice, Smalltalk in C, best of both
- Java - great, learned from history and merged all the good concepts into a coherent whole. Shame it can't be used for embedded

Reply to
Tom Gardner

The mechanisms in C++ for sidestepping the type constraints pretty much involve putting a big red arrow in your code with a label that says "PAY ATTENTION!!!"
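A small illustration of that "big red arrow" (mine, for concreteness): the named casts spell out intent and are trivially greppable, where a C-style cast hides in the noise:

int main() {
    double d = 3.9;
    int a = (int)d;                 // C-style cast: quiet, easy to overlook
    int b = static_cast<int>(d);    // named cast: same result, but greppable

    const int limit = 10;
    const int *cp = &limit;
    int *p = const_cast<int *>(cp); // the red arrow itself; writing *p is UB
    (void)a; (void)b; (void)p;      // silence unused-variable warnings
    return 0;
}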

If you're working by yourself it makes you do that, and if you're working in a group that's sufficiently disciplined then someone else will make you pay attention. When I was working in groups, we'd have one guy in every code review who was assigned the task of looking for questionable things like that and bringing them up, and basically grilling the author on why the decision was made to cast to a different type, why it couldn't be avoided, etc.

If you're going to write good software you have to approach things in a disciplined manner. The problems you describe are due to a lack of discipline, and as you say they could be associated with any language.

C++ makes it easier to do large projects quickly; it's still up to the team to make sure that those large projects are done _right_.

I've been doing embedded programming in C++ since around 1995. Since the day that I started, it has saved me time and -- in my opinion at least -- it has not led to a degradation in the quality of my work. In the time that I've been using it I've had some big victories (mostly involving successful reuse of large swaths of code). I've had a few setbacks, but those have mostly been involving people who either didn't care what they were doing, or who got so enthusiastic about the language that they felt they had to explore every possible feature thereof (this is a big mistake in C++ -- you want to adopt feature by feature, and at least for embedded, you want to stop well before you're exercising the whole feature set).

But then, I learned my C++ by always using the '-S' option on the compiler, and looking at the code actually generated.
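With a GCC-style toolchain (an assumption; the exact flag varies by compiler, and the file name here is just a placeholder), the equivalent habit today is:

g++ -S -O2 widget.cpp

which leaves the generated assembly in widget.s, ready to compare against what you *thought* that innocent-looking overloaded operator would compile down to.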

That keeps you from falling into a rut and either being a has-been, a manager, or a has-been manager.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott
