C++ syntax without C++

"School" exposed me to a fair number of languages -- it seems every course had its own "language du jour" as well as "custom OS" on which it ran! Algol, Pascal, PL1, LISP, FORTRAN (variants) etc. But, all of those ran on big iron (nothing smaller than an '11).

In industry, mostly ASM (i4004, 8080/5, Z80/180, 2650, 8051-ish, 680x, 680x0, 32K, etc.) and C. Some Pascal. And a spate of application-specific languages developed as needed. I've only used C++ (and other "big" languages -- perl, tcl, etc.) in desktop settings, most often in building tools -- things that are never "deployed" in a product.

Invariably, I am facing scarce resource environments so *any* additional resources are questioned (as to their cost and *extended* costs!). "Can a recursive solution actually cost *less* than an iterative one?" etc.

This persists even in the automation project: *hundreds* of processors, yet I'll invest time figuring out how best to use temporarily surplus capacity (at some increase in complexity) rather than increase the "base" resources available. (Sooner or later you ALWAYS run out of resources!)

So, it's really hard for me to look at:

string greeting;
...
greeting = "Hello";
greeting += " " + gender;
greeting += " " + lastname;

and NOT be aware of the differences between that and, e.g.,

sprintf(greeting, "Hello %s %s", gender, lastname);

Reply to
Don Y

But you still have to PAY ATTENTION! And, have some "mechanism" (i.e. boss/authority) that can enforce this.

Too often, bosses/employers/clients would opt for the quick/easy out when faced with fixing something that "wasn't right" -- "We'll fix it later. Right now, the assembly line is waiting for ROM images!"

Of course, "later" never comes. If *you* want it fixed, then *you* bear the cost of fixing it (and the blame if you break it in the process!)

This is one of the big reasons I started working for myself. I want things done "right" (whatever that means, in my interpretation). I don't want to have to accommodate someone else's folly just because I can code faster/better than they can.

"Yeah, I know Bob screwed up and the interface to his module isn't what it was supposed to be. But, can you just tweek *your* code to make *his* work? It will be a lot easier for ALL of us if you do..."

(Yeah, you might be willing to *pay* me to do this; but that doesn't mean I will enjoy the experience or the "abuse" it entails)

Exactly! I can only impose discipline on myself. If other developers are NOT disciplined, I can't do anything about that -- except try to make choices that minimize the downside risk of their practices.

E.g., build protected environments for tasks -- so you can only hurt yourself, not others! Install capabilities so you can't use services that you aren't supposed to use (even by accident). Build opaque objects so you can't see what you aren't supposed to see. etc.
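
(To make the "opaque objects" point concrete -- the classic incomplete-type idiom, with invented names; clients can call the functions but can never poke at the representation:)

// counter.h -- clients see only an opaque handle, never the innards
struct Counter;                         // incomplete type: contents hidden
Counter *counter_create();
void counter_bump(Counter *c);
int counter_value(const Counter *c);

// counter.cpp -- only this file knows the representation
struct Counter { int value; };
Counter *counter_create() { return new Counter{0}; }
void counter_bump(Counter *c) { c->value++; }
int counter_value(const Counter *c) { return c->value; }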

If your employer/client is focused on getting product out the door, the consequences of "lack of discipline" don't impact those who should most bear that cost (the folks who failed to exhibit any!)

[what, they get a smaller raise at year end?? They are denied the opportunity to manage (aka "be responsible for!") projects and other staff? Perhaps they should be penalized by *making* them manage products so they see and bear the costs of other folks' lack of discipline??]

I've got almost (exactly) 20 years on you :-/ So, I have had to make more/bigger changes over the course of my career -- in terms of technologies, tools and methodologies.

I tend not to reuse "code" but, rather, "designs" (except for trivial "utility" things). But that's probably because I work in a wide variety of application domains, each with special characteristics.

E.g., the code to read a barcode from a TTL-level signal has very little reuse value if the next project has a "smart" barcode reader in it (or no barcodes at all!). Likewise, the code to implement a "(blood) assay result" database and comm protocol in a test instrument probably won't port well to a stepper motor driver.

OTOH, how you structure the application and partition it into co-operating tasks -- as well as the software/hardware partitioning -- is a highly reusable skill!

Goal is to come up with an effective solution. In my case, to also try to learn something along the way (though typically this comes from the application domain and not the implementation technologies -- ever wonder how they get the candy shell on M&M's? Or, how osmotic pumps apply to tabletting?? :> )

But, to too many (?) folks, "it's just a job". And, can you *blame* them? Why should they care (too much) about The Product? Will they still be at this firm after the next round of layoffs? Or, when the spouse gets transferred somewhere else?

So, how do you put things in place that don't *require* the other guy to care a lot? So, all he has to be is "competent"? How do I prevent him from typing "class Foo" and starting to drag in features/costs that are "inappropriate" to the rest of the established design? (see my comments elsewhere re: video UI)

Reply to
Don Y

My school (not university/college) exposed me to Algol running on a single-user big iron machine with 39-bit (yup!) words, a fully-loaded complement of 8K words of ferrite core store, and an instruction time of 576us. I also "reinvented" FSMs when writing in asm, simply because they were the shortest way I could predict the program would work.

As for other asms and languages: 8080/Z80, 8086, 6800, 6809, Prolog, Pascal, MLP for writing transcendental functions for 6800s, Matlab, HiLo, PALASM, etc. etc.

But not perl, not cobol (in school, seeing "compute a=b+c*d" touted as "advanced" inoculated me), and unfortunately not FORTRAN.

If it is a one-off during exploration, I don't care. For logging in production systems, I do care!

Reply to
Tom Gardner

Can you, or do you, "cast away constness"? I gave up on C++ when there were endless discussions as to whether that should be possible.

It is frightening the way most programmers haven't got a clue about what is "on the other side" of a compiler. Frequently they can't even /vaguely/ indicate what happens in a simple function call!

And enables you to select the right tool for the job in hand. Very important to try to avoid hammering in screws!

Reply to
Tom Gardner

I don't think the two are mutually exclusive. I think you can have something VERY complicated that *hides* its complexity from those intended to use it!

E.g., cars are reasonably sophisticated machines. Yet look at the folks who drive, sell and maintain them! The complexity has been buried in a manner that insulates these people from it.

If, OTOH, there were five different ways of "taking a left turn", I suspect far fewer folks would be able to do so!

(Of course, even hidden complexity is exposed to *someone* at some *time*. But, hopefully, only a "select" cast and in specific circumstances -- car dealer can't arbitrarily decide *7* cylinders would be better than 4, 6 or 8!)

Reply to
Don Y

Ada's got a lot of functionality in it, and can seem unwieldy and bloated at times as a result (especially to newcomers). The language maintainers seem intent on adding more and more in recent versions, but there is still a core language there which is usable and reasonably easy to understand.

I really like what is at the core of Ada (and I like Wirth style languages in general) but at times it seems like the Ada language maintainers are trying to justify their own jobs by adding more and more functionality into the Ada standards.

However, none of that changes the fact there is a good core language within Ada which is still as usable as always.

In some ways, I like the approach Wirth has taken with the Oberon variants in which he (and his students) have created various language variants with only the core features remaining.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I have no idea of the performance of the first machine I used. It sat behind a 110 baud modem. Could have been a mouse running around in a squirrel cage driving *gears* given the "speed" I saw at my end.

After that, punched cards in "batch" mode -- you spent more time waiting for the deck to be loaded than for the program to run!

I use FSMs heavily because they mirror the way I design hardware. And, beyond that, table-driven code (so I can condense the semantic content of the algorithm into the table, and not into the cruft that the table drives!)
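
(A minimal table-driven sketch of that idea -- the states, events and actions are invented for illustration, but note how the whole algorithm lives in one table:)

enum State { IDLE, RUNNING, DONE, NUM_STATES };
enum Event { START, TICK, STOP, NUM_EVENTS };

struct Transition {
    State next;                 // state to enter
    void (*action)();           // work to do on the way
};

static void ignore() {}
static void begin()  { /* start the motor, the timer, ... */ }
static void finish() { /* shut things down */ }

// The semantic content, condensed into the table:
static const Transition fsm[NUM_STATES][NUM_EVENTS] = {
    /* IDLE    */ { {RUNNING, begin},  {IDLE, ignore},    {IDLE, ignore} },
    /* RUNNING */ { {RUNNING, ignore}, {RUNNING, ignore}, {DONE, finish} },
    /* DONE    */ { {DONE, ignore},    {DONE, ignore},    {DONE, ignore} },
};

static State state = IDLE;

// The "cruft" that the table drives -- trivially small:
void dispatch(Event e) {
    const Transition *t = &fsm[state][e];
    t->action();
    state = t->next;
}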

I often have to use a lot of languages "imposed on me" by someone who crafted some FOSS tool that I want to use or modify. I'm not enough of a zealot to rewrite everything the way I'd have written it in the first place! :>

I do a lot of prototyping of algorithms and system designs. I want a pretty good feel for how something is likely to perform *before* I formalize it in a specification. Often, you don't see "issues" until you start trying to make something work (even in a rudimentary "prototype" form).

So, when I see "noticeable" differences in the performance of two "similar" approaches -- or, when a particular approach has greatly *varying* performance -- I want to know "why?".

This was how I stumbled onto most of the performance hits that I've disliked in C++. Sure, none of them are surprises -- once you *see* them! But, far too many for me to be able to reliably pick up on "intuitively". There's just too much going on behind the scenes!

The same is true of lots of things that folks take for granted (because the detail is hidden). E.g., cancellation wrt floating point operations (do you ACTIVELY think about this each time you write a floating point expression?). Overflow on integer types (again, do you think about how much headroom your integer calculations have *when* you create them?). Stack depth (the stack just "is"... what do you mean, it can be exhausted?)
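
(Both hazards made concrete -- illustrative values only:)

#include <cstdio>
#include <cstdint>

int main()
{
    // Cancellation: subtracting nearly-equal floats leaves mostly
    // rounding error. On IEEE single precision this prints ~1.19e-07,
    // not the 1e-07 the math says -- a ~19% error from one subtraction.
    float a = 1.0000001f, b = 1.0000000f;
    printf("a - b = %g\n", a - b);

    // Headroom: int16_t promotes to int, so on a 32-bit-int desktop this
    // "works" -- but on a 16-bit-int embedded target, adc * gain (90000)
    // overflows long before the reasonable-looking result is stored.
    int16_t adc = 30000, gain = 3;
    int16_t scaled = (int16_t)(adc * gain / 4);
    printf("scaled = %d\n", scaled);
    return 0;
}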

Always fun to watch a newbie embedded programmer "discover" how fat printf() is! "Yikes! My executable doubled in size just by adding one statement! What the hell did I *do*??" Or, some of the vagaries of floating point ("What the hell is '-0'?")

Reply to
Don Y

Yes! Because you *know* it can be expensive! You also have to decide if your application can AFFORD it -- regardless of whether it is native or provided by a helper library!

YOU ARE AWARE OF THE COSTS ASSOCIATED WITH FLOATS!

But you *see* function invocations! You don't *see* the constructors, overloaded operator invocations, destructors, etc. that "hide in the whitespace" (of C++). You have to consciously remember that there are (often nontrivial) costs associated with a statement that *seems* "simple".

Can you and all the folks working with you (as well as those that will follow years later) make that claim? Do you have mechanisms in place so others who aren't as "observant" or disciplined *will* notice these issues? And, policies that will "fix" them? Or, is it just the start of bitrot and feature creep?

Information/complexity hiding is a sword that cuts both ways.

Reply to
Don Y

True, but that's an entirely different point, as Toyota have found out to their cost this week.

formatting link

...and Honda also have

formatting link

Reply to
Tom Gardner

The ideas/concepts are (reasonably) self-consistent. It's just the sheer *volume* of material that confronts you that makes it intimidating. "How do I print 'Hello, World!'? Where do I *start* to look for this information?"

(sure, there are lots of texts that can lead you through the language. OTOH, if you pick up The (original) C Standard, you can figure this out for yourself in something less than a fortnight!)

Think of where it is used as an answer to that musing! :> If it was easy, EVERYBODY would be doing (using) it! :-/

I like small languages where you can hone your understanding of a few ideas -- instead of having to master lots of detail (much of which you may never use!).

E.g., when I design "logic", I tend to use a few parts/modules over and over again. Can I save a bit by optimizing a particular instantiation somewhere? Yes. But, is it worth remembering N different forms of a particular "component"/module? Will the next guy appreciate *why* I chose this, here -- and not

*that*? Do I want to distract him by making changes that don't significantly influence the result? (i.e., I would rather use a "change" to alert the next guy that "something DIFFERENT is happening over here! Pay attention!")
Reply to
Don Y

On Wed, 6 Nov 2013 22:04:43 +0000 (UTC), Simon Clubley wrote:

Me too. And Oberon is available for free and for everyone, not only for Wirth's students. But it seems future development has merged over to Zonnon, which I believe is not a real "Wirth language" anymore.

comp.lang.oberon

formatting link
formatting link
formatting link

Greets, Andreas Baumgartner

Reply to
Andreas Baumgartner

5-channel paper tape edited at 5cps or 10cps. And for the very advanced, you might be able to save it on magnetic film -- complete with 35mm sprocket holes :)

One thing I'd like to see in C is saturating arithmetic -- it makes all sorts of algorithms (e.g. a control loop) behave less obscenely in fault conditions.
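
(In the meantime it can be rolled by hand -- a minimal sketch for 16-bit signed values, clamping in a wider intermediate type:)

#include <cstdint>

int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + b;           // can't overflow in 32 bits
    if (sum > INT16_MAX) return INT16_MAX;  // clamp high instead of wrapping
    if (sum < INT16_MIN) return INT16_MIN;  // clamp low
    return (int16_t)sum;
}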

What's a stack? Sigh. Is it a LIFO array? Thwack.

e.g. why starting at 0.0 and adding 0.1 repeatedly doesn't end up at 1.0. Head hits desk.
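
(The head-on-desk demo itself, for the record -- 0.1 has no exact binary representation, so the error accumulates:)

#include <cstdio>

int main()
{
    float x = 0.0f;
    for (int i = 0; i < 10; i++)
        x += 0.1f;
    // On IEEE single precision this typically prints ~1.000000119, false:
    printf("x = %.9f, (x == 1.0f) is %s\n", x, (x == 1.0f) ? "true" : "false");
    return 0;
}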

Reply to
Tom Gardner

The point I was trying to make was there might be other ways of giving the user the "capabilities" that the language aspired to provide -- possibly by making the compiler implementation far more difficult! -- yet providing a simple/intuitive "interface" to the tool/language.

E.g., Limbo/Inferno (sorry, I am spending a lot of time with it, lately) addresses some "security"/integrity aspects (i.e., the sorts of things I address with capabilities) by the use of per-task "namespaces".

(different than the namespace issue I mentioned up-thread)

I.e., instead of making the entire filesystem visible to all tasks (the file system is effectively a namespace for *most* real-world systems!), you manually create a namespace for a task when you create said task.

So, I can take /etc/passwd, /home/dgy/calendar, /bin/application and stuff them into a single namespace as "passwords, appointments, and executable". The task that is given this namespace in which to operate can't access /sbin/init -- because there is no *name* for /sbin/init in its namespace (and no way for it to *get* one... even if it creates a file/folder called /sbin/init!)

This is an incredibly simple and intuitive mechanism that is easy to explain and understand. OTOH, implementing it takes a bit of work "behind the scenes". This, IMO, is a great way to use complexity -- if it has to be used!

Reply to
Don Y

That is a bit harsh, though I can imagine that exceptions look that way with a C mindset.

Though there are good reasons to stay away from C++ exceptions on resource-constrained embedded systems, they do have merits too. Unlike "goto", at the point where the exceptional condition is detected it does not have to know where it is going to be handled; it just has to indicate what went wrong. At the point where you know how to handle a certain (category of) exceptional situation(s), you just have to specify which types of exceptions you are prepared to handle at that point. I.e., detection and handling of exceptional conditions are nicely decoupled. Unlike "goto", exceptions also provide a means to relay information from the point where the exceptional condition is detected to the point where it is handled.

Unlike longjmp, exceptions take care of proper stack unwinding, making sure that all destructors are called that should be called. This enables the very useful and powerful RAII idiom (which is basically a must if you use exceptions, but is also very useful for writing correct code if you don't use exceptions).

One advantage (though some would consider it a disadvantage) of exceptions is that you cannot just be lazy and ignore them, like return codes or status variables, and continue as if nothing has happened.

The main advantage of exceptions is with large-scale software, where there can be a lot of calls between the point at which an error condition is detected and the point at which the error is handled. Exceptions eliminate a lot of code normally needed to propagate an error to the point where it can be handled.

Like with many language features, one should also consider the cost of the (manually coded) alternative. In many cases that alternative is not exactly free either, and might even be more costly. Yes, virtual functions have a (small) cost, but if the use of a virtual function is justified then there is also a cost to achieve a similar result in a language that doesn't support virtual functions; switch-case, if-else or function pointers in C aren't free either.

In the case of exceptions: if it is not acceptable to just ignore detected errors, the alternative to exceptions would be to check the return and/or status variables after just about every function call, and handle and/or propagate the result back to one's own caller. That means cluttering the code with loads of checks, alternate flows and error-propagating code (which aren't free either). In other words, also when you don't use exceptions it is "very hard to do exceptions handling /well/ and you do have to think about it throughout your program".
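
(A minimal, self-contained sketch of the RAII + exception combination described above -- file name and details invented for illustration:)

#include <cstdio>
#include <stdexcept>
#include <string>

// RAII wrapper: the destructor closes the file no matter how the
// scope is left -- normal return or stack unwinding after a throw.
struct File {
    FILE *fp;
    explicit File(const char *name) : fp(fopen(name, "r")) {
        if (!fp) throw std::runtime_error(std::string("can't open ") + name);
    }
    ~File() { if (fp) fclose(fp); }
};

int first_byte(const char *name) {
    File f(name);         // no error-checking clutter in this layer...
    return fgetc(f.fp);   // ...and no explicit fclose() needed
}

int main() {
    try {
        printf("%d\n", first_byte("config.txt"));
    } catch (const std::exception &e) {
        printf("failed: %s\n", e.what());  // the *reason* travels with it
    }
    return 0;
}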

Reasons why I wouldn't use C++ exceptions are:

  1. Exceptions, when enabled, have a cost (space & performance) which cannot be avoided; on resource-constrained systems this cost may very well be unacceptable;
  2. Unfamiliarity with exceptions. Though software engineers not familiar with exceptions are a dying breed (in most popular programming languages conceived in the last two decades, exceptions are an integral part of the language), exceptions do require a different programming style -- not necessarily harder, but different nonetheless;
  3. Compiler maturity. Not an issue with mainstream C++ compilers, but there have been C++ compilers where the exception mechanism was quite buggy.
[SNIP]

Still, in C there is a stronger correlation between source code and the resulting object code than in C++. Unlike C, C++ can implicitly call 'functions' such as destructors, conversion operators, etc. without a line in the source code to explicitly trigger that. Not that this is really an issue for an experienced C++ programmer, but I do understand Don Y's point.

[SNIP]

Also, inlining is just a hint; the compiler may choose to inline or not depending on the optimization settings. For small functions the inlined version may in fact require less code at the call site than the non-inlined version, because with inlining there is no need to set up the call stack, it is less likely to mess up the register allocation, and it potentially avoids having to save/restore registers. I.e., inlining can actually help reduce code size. Also, modern compilers/linkers can choose to inline regardless of whether there is an inline function definition in the header file.

Also, in this respect C++ compilers have gotten much better over the years. A decent C++ compiler will reuse object code of different template instantiations where possible, even with a naive implementation of the template class. E.g., on a decent C++ compiler, std::vector<int>, std::vector<unsigned> and std::vector<void*> all use the same object code if the size of an int is the same as the size of a pointer. In cases where the compiler would need to generate different code for different template instantiations, chances are that you would have needed to write different code (yourself!) without templates as well.

Reply to
Dombo

Sure there's a difference. In addition to being type safe, extensible, not subject to buffer overruns, and whatnot, the former is faster.

#include <cstdio>
#include <cstring>
#include <string>
#include <intrin.h>     // __rdtsc() (MSVC; gcc/clang: <x86intrin.h>)

#define LOOPS 1000000

std::string s_gender = "Mr.", s_lastname = "Smith";
char c_gender[] = "Mr.", c_lastname[] = "Smith";

void outputstring(const char *s) { ; }

void f1(std::string gender, std::string lastname)
{
    std::string greeting;
    greeting = "Hello";
    greeting += " " + gender;
    greeting += " " + lastname;
    outputstring(greeting.c_str());
}

void f2(char *gender, char *lastname)
{
    char greeting[256];
    sprintf(greeting, "Hello %s %s", gender, lastname);
    outputstring(greeting);
}

void f3(char *gender, char *lastname)
{
    char greeting[256];
    strcpy(greeting, "Hello ");
    strcat(greeting, gender);
    strcat(greeting, " ");
    strcat(greeting, lastname);
    outputstring(greeting);
}

int main()
{
    unsigned __int64 cc;
    int i;

    // time each variant over LOOPS iterations (the rest of main() was
    // truncated in the archive; a plausible reconstruction:)
    cc = __rdtsc();
    for (i = 0; i < LOOPS; i++) f1(s_gender, s_lastname);
    printf("f1: %llu cycles\n", __rdtsc() - cc);

    cc = __rdtsc();
    for (i = 0; i < LOOPS; i++) f2(c_gender, c_lastname);
    printf("f2: %llu cycles\n", __rdtsc() - cc);

    cc = __rdtsc();
    for (i = 0; i < LOOPS; i++) f3(c_gender, c_lastname);
    printf("f3: %llu cycles\n", __rdtsc() - cc);

    return 0;
}

Reply to
Robert Wessel

For this usage of Complex, I'd expect all of those to generate no code at all.

If foo() is not too big, I'd expect it to be inlined, and the extra object construction to be optimized away.

But as I said in my other post, if I cared, I'd measure it to make sure.

Reply to
Robert Wessel

But, you've made additional assumptions about the example! E.g., that gender and lastname have fixed, one-time instantiation costs. And, that there is no GC needed or running (I suspect each iteration of your loop reuses the memory for the previous instantiation of greeting). And, the particulars of your string class's implementation bias the result (does it overallocate space for the initial instantiation and then *use* this space because it's big enough to append the " " + gender, et al.?). Or, how does some other dynamic allocation occurring between string ops affect the heap? Or, does an allocation require another page to be faulted in? Does sprintf have a fixed (extra large) buffer to play in? Is it, instead, dynamically created? What is your sprintf implementation like? Etc.

[scaffolding elided]

Exactly. And you have to know *which* hoods to look under (i.e., that there are actually hoods that you may be unaware of). You need to be able to *see* (visualize) what is happening to understand what *might* be happening! And, what do you do when the implementation details are *hidden* or tightly entwined with the language? (e.g., exceptions)

As I mentioned with the printf("%d") fiasco: if all you are doing is printing ints, then why not itoa()? (Or, why print ints at all??!)

I suspect most folks haven't had to *write* one! I use my own libraries, so I am keenly aware of where they trade cost/performance. E.g., mine only include support for the formats that are actually used in the application (why include support for long doubles if they are never used? Similarly, the short forms, %x, etc.)
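
(The itoa() point in a nutshell -- a minimal decimal converter, a few dozen bytes of code instead of a full printf() engine; the name utoa10 is invented:)

#include <cstdio>

// Convert an unsigned 32-bit value to decimal text, building the
// digits backwards from the end of the caller's buffer.
char *utoa10(unsigned v, char *buf)
{
    char *p = buf + 11;         // 10 digits max for 32 bits, plus NUL
    *--p = '\0';
    do {
        *--p = '0' + (v % 10);
        v /= 10;
    } while (v);
    return p;                   // points at the first digit
}

int main()
{
    char buf[11];
    puts(utoa10(12345u, buf));  // prints 12345
    return 0;
}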

Reply to
Don Y

I've still got an ASR-33 collecting dust in the garage...

Saturating arithmetic is easy to "roll your own". What *I* would like to see is *decimal* arithmetic. Preferably BigDecimal. (Considerably harder to roll your own -- efficiently!)

And exceptions (without resorting to preprocessor kludges)

Or, why the poles and zeroes of that transfer function aren't where the math *says* they should be! ("Gee, no wonder the control system isn't behaving optimally!")

Reply to
Don Y

The biggest advantage I see to (this sort of) exceptions is that it makes it much easier to be *descriptive* about the root cause of the "error" -- instead of just return(FAIL), where the topmost level has no idea what the *reason* for the failure propagated up to it may have been!

"Yeah, I see that the read() failed. But *why*? Is it a media problem, a permissions problem, a fault in the VM system, ..."

Reply to
Don Y

I actually convinced a compiler vendor to produce a version of a compiler that allowed me to use "const" as a modifier to store things in ROM. So, I could tell the linkage editor that the "const" space was where my ROM was located.
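
(The modern spelling of the same idea -- const data the toolchain places in a read-only section; section names and linker details vary by toolchain, .rodata being typical for GNU tools:)

#include <cstdint>

// Lands in .rodata (or equivalent), which the linker script maps to ROM:
const uint8_t sine_table[] = { 0, 50, 98, 142, 180, 212, 236, 250 };

// A matching GNU ld linker-script fragment might read:
//   .rodata : { *(.rodata*) } > FLASH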

Hmmm... hard to believe that. OTOH, when I was in school, you studied the hardware as well. Much easier to understand all the indirection operations some (ASM) syntaxes supported! (CS was a subset of the EE curriculum)

Reply to
Don Y
