C++ syntax without C++

[Z180 "huge" model]

I have no recollection of how this was actually implemented. I just remember a conversation with the compiler author and his "startled" realization that I was advocating "*another* 64K??". The idea hadn't occurred to him that const could easily mean REALLY IMMUTABLE -- as was LIKELY in the sorts of environments where this code would be used.

After all, the TEXT was bigger than the logical address space. Why couldn't the data be, likewise?

Reply to
Don Y

That wouldn't surprise me, but some of them have more usefully solved the problem by not generating garbage that gets tenured out of eden space.

And by getting rid of the general purpose networking stack in general purpose computers.

Reply to
Tom Gardner

And in the C/Unix machines they did use for products, the C compilers had ignored the register keyword for at least two decades!

Speaking as a conspiracy theorist... :)

If you have to choose between attributing something to "stupidity" or "conspiracy", "stupidity" is usually the correct reason -- if only because the participants are too inept to create a successful conspiracy!

Reply to
Tom Gardner

In article , Don Y wrote:
}Years ago, I was reading one of Stroustrup's C++ books.
..
}I finally took the opportunity to write to him -- humbly citing
}chapter and verse and explaining my dismay at failing to
}understand (or agree with) his points: "What am I missing?"
}
}His reply: everything he had written was WRONG.

Oh no! You've left out the most interesting bit! What was the material that was wrong? (And was it later corrected?)

Reply to
Charles Bryant

That's what they want you to think.

Reply to
George Neuner

In article , Don Y wrote:
}The biggest advantage I see to (this sort of) exceptions is that it
}makes it much easier to be more *descriptive* of the root cause of
}the "error".  Instead of just return(FAIL) because the topmost
}level has no idea what the *reason* for the failure propagated up to it
}may have been!

Could you point me at any openly-accessible code which uses exceptions like this? I have been looking for a good example for quite a while now.

Reply to
Charles Bryant

In Java frameworks and libraries there are many examples of that; it is the rule rather than the exception (ho ho). It is one of the reasons it is possible for competent /mortal/ developers to construct complex systems relatively easily.

In C++ I couldn't comment.

Reply to
Tom Gardner

Thanks for reminding me about Zonnon.

I've just had a look at it and it appears to require a .Net/Mono environment which is unfortunate.

Still, I'll have a read through the language report to see what new good ideas might be in there...

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Hi Charles,

[SWMBO worked for a "Charlie Bryant" many years ago! But, left...]

> > }Years ago, I was reading one of Stroustrup's C++ books.

Sorry, I don't recall. It was a *long* time ago. But, it was something *so* basic that I was convinced it simply *had* to be a "typo" -- "Surely *HE* wouldn't get THIS wrong!". Yet, his discussion was consistent -- and, apparently, consistently *wrong*! :-/

No idea if it was ever fixed -- I would assume he, like any other author, would add it to the list of changes to make in the next revision! I rarely buy a second version of a book, though.

And, over the last decade, I have been working hard to rid myself of all these dead trees (damn things *must* still be alive cuz they seem to keep multiplying!). Otherwise, I'd go hunting through the books he'd authored and search for "post-its" within.

[IMO, the single best use for post-its is to annotate errors in books! Sure, you could mark up the pages (assuming you have validated the error). But, the post-it reminds you of the error much more visibly -- as well as affording you space to explain (to yourself) why this is an error]
Reply to
Don Y

It's relatively trivial -- you just keep propagating (raise/throw) the exception back up the call-tree (doing whatever you can to *try* to resolve it at each level -- if possible).

You can do this with a traditional, non-exception environment by returning "error codes".

But, in practical terms, this falls apart.

QUICKLY!

Recall, you are returning "something" (error/success) at each level. So, at the topmost level (ftn1), you start with: {SUCCESS, FAIL}.

Then, you look at the *first* function that you invoke (ftn2) and see what can go wrong *inside* it that would cause *it* to return FAIL. I.e., you modify it (ftn2) to return {SUCCESS, FAIL_MEMORY, FAIL_ARGS, FAIL_FOO}.

Now, if the first-level function receives !SUCCESS from that function, it can try to sort out how to deal with the "failure" based on the actual code returned. E.g., if FAIL_MEMORY indicates a dynamic memory allocation within that other function failed due to not enough memory (in the heap), the calling level might try to free up memory and reinvoke the failed function. Or, if the failure (_MEMORY) was caused by the caller invoking the function with an arg of '5', maybe '4' might work.

When the caller decides he can't do anything more about the error, *he* has to return FAIL -- but, wants to do so with information that conveys this "FAIL_MEMORY" condition AT THIS PARTICULAR PLACE IN HIMSELF (cuz FAIL_MEMORY might be signaled by some other function that he would have called later!).

So, he has to create his own version of FAIL_MEMORY to pass to *his* caller.

Remember, he already has two different indications: {SUCCESS, FAIL}. So, he has to modify this to be {SUCCESS, FAIL_MYMEMORY, FAIL_MYOTHERWISE}. Each new failure reason means he has to keep modifying his return code "name set" to accommodate more reasons.

AND, there will be other reasons that he hasn't *thought* of "handling" -- how do those get PROPAGATED?

ftn1(): result_t {
    result: result_t;
    ...
    result = ftn2();
    case result {
    SUCCESS =>
        break;
    FAIL_MEMORY =>
        # free up memory then retry
        ...
        if (SUCCESS == ftn2())
            break;
        return FAIL_MYMEMORY;
    FAIL_ARGS =>
        # cleanup
        ...
        return FAIL_MYOTHERWISE;
    FAIL_FOO =>
        # nothing I can do about this!
        return FAIL_MYOTHERWISE;
    * =>
        # other results that I haven't thought about
        return result;
    }

    # continue after ftn2 succeeds!
    ...
    if (something)
        return FAIL_MYMEMORY;
    ...
}

Stack this sort of thing on top of itself many levels.

Now, what if ftn2() doesn't return the same "type" as ftn1()? Then, you have to magically convert between these types in the catchall ("* =>") case -- you can't just "return result".

[It's early in the morning so if I've confused ftn1 and ftn2 in this description (and below), I apologize...]

*Now*, imagine ftn2() *effectively* has a different return type even though the underlying base type is the same. E.g., perhaps result_t and result2_t are both "ints". But, in reality, one is {SUCCESS=0, FAIL_MYMEMORY=1, FAIL_MYOTHERWISE=2} and the other is {SUCCESS=0, FAIL_MEMORY=9, FAIL_ARGS=8, FAIL_FOO=2, FAIL_DOMAIN=1...}.

What happens when ftn2() is modified to signal a FAIL_DOMAIN error? When ftn1 was crafted, this didn't exist. So, you were prudent and allowed for future codes to be transparently propagated (using that "return result").

[The alternative would have been to lump all "not yet knowns" into some generic "FAIL" result -- FAIL_NOHOPE]

So, FAIL_DOMAIN (=1) gets propagated back up the call tree via that "return result" -- which makes it *look* like a FAIL_MYMEMORY (=1) out of coincidence! Because it *isn't* according to the code! (else it would have been handled, presumably, in the MYMEMORY code!)

I.e., you really want a "result" type that allows all of these different "types" of results to coexist within a single "namespace" (typespace?) that allows them to retain their individual, INDEPENDENTLY created/defined designations (names).

Or, a way of defining codes such that their uniqueness is implicitly assured. So each carries an indication of its specific "type".

Without this, it is just FAR easier to write code like:

ftn1() {
    ...
    if (SUCCESS != ftn2())
        return FAIL;
    ...
    if (SUCCESS != ftn3())
        return FAIL;
    ...
}

and, all the details of *which* function failed along with *where* (in ftn1) get discarded out of convenience.

Which is what happens in *most* designs. :<

I'll see if I can find examples of each approach in my codebase...

Reply to
Don Y

Exceptions are just confined to languages that formally support them. It's a *concept* so you can choose to adopt/implement wherever you want (often in kludgey ways).

E.g., I use them in C with preprocessor support to give me a more palatable "syntax" (though the compiler can't tell me if I've screwed up!)

Like many "tricks" (bad choice of word), this requires a bit more of an understanding of your execution environment.

Just like dealing with errno in a multithreaded/multitasking environment!

Reply to
Don Y

Grrrrrrrrrrr... s/are/aren't/

Reply to
Don Y

One of the least appealing characteristics of C++ and the C++ community was the continual unwitting poor kludgy re-invention of features that worked tolerably well in preceding languages.

It is probably unfair to say "unwitting", but they never bothered to reference the concepts from other languages.

In contra-distinction, the Java whitepaper in 1996 made repeated statements of the form "feature X was first demonstrated in Y, and works well with Java features A,B,C because...".

Much more impressive.

The trouble with anything like that is integrating it with decent tool support and having other people (e.g. maintainers or co-workers) understand it.

Reply to
Tom Gardner

Look at whence C++ came -- the same folks who thought big, formally defined systems were too constraining (MULTICS -> UNIX). Do you think folks with a "Wild West" mindset would want to *force* a particular coding style/technique on others?

Look, e.g., at Limbo/Inferno -- the "most modern" creation by the same sorts of minds and ask how much is elegance vs. kludge.

Exactly -- there's the rub. You can build something that is self-consistent, works well, robust, etc. But, if the first (or second) person who touches it *after* you bodges the whole thing, then what's the point/value?

OTOH, it *does* get you thinking along different lines and, hopefully, inching towards better style and technique -- despite how your development environment shackles you!

Reply to
Don Y

True, but that's a different point. In effect the kludges are /preventing/ an elegant coding style.

Reply to
Tom Gardner

BLISS supported exception handling on VAX/VMS in the 1970's :-).

Installing exception handlers, resignal exceptions, unwind stack and continuing from an exception were all supported from the start (see VAX/VMS architecture handbook).

Reply to
upsidedown

But that was my point! MULTICS really tried to approach "computing" with the same level of robustness as a public utility (!!) I.e., INSANE uptimes.

UNIX took the opposite extreme -- "let's just get something up and running and if we have to reboot DAILY... "

There (appears to have been) a lot more forethought put into how MULTICS was "assembled" ("put together") than UNIX. Its structure as well as the methodologies applied. You get the *feel* that UNIX was a "garage shop" operation.

[I am not meaning to malign UNIX. Rather, comment on the different mindsets behind them, their makers, their tools, etc.]

As tends to be true in many cases in our society, "quick" often wins out over "good". Which would be fine -- if you could backfill "good" AFTER "quick" (this seldom seems to be the case).

And, folks looking at existing codebases are REALLY hesitant to even consider NOT using them: "imagine all the manhours that are represented in this stuff!!!". So, the kludges perpetuate with little incentive to ever go back and "fix" things (or reimplement, etc.)

E.g., the current "(death)spiral" approach of incremental design: you don't know what you really *want*... but, you will know what you DON'T want -- after you've coded it! (then, just tweak it and repeat)

[Of course, this also means you are essentially locked into the initial assumptions you have made -- yet "con" yourself into thinking they can be "revised" on the next loop around! With the waterfall approach, this was incredibly evident BEFORE YOU GOT STARTED!]
Reply to
Don Y

In article , Tom Gardner wrote:
}On 10/11/13 03:08, Charles Bryant wrote:
}> In article , Don Y wrote:
}> }The biggest advantage I see to (this sort of) exceptions is that it
}> }makes it much easier to be more *descriptive* of the root cause of
}> }the "error".  Instead of just return(FAIL) because the topmost
}> }level has no idea what the *reason* for the failure propagated up to it
}> }may have been!
}>
}> Could you point me at any openly-accessible code which uses exceptions like
}> this? I have been looking for a good example for quite a while now.
}
}In Java frameworks and libraries there are many examples of that;
}it is the rule rather than the exception (ho ho). It is one of the
}reasons it is possible for competent /mortal/ developers to construct
}complex systems relatively easily.
}
}In C++ I couldn't comment.

I should have been more explicit. I'm sure they are written with that in mind, but I expect libraries may only have lots of examples of throwing exceptions. I'm interested in seeing how they are caught and handled as well.

Reply to
Charles Bryant

In article , Robert Wessel wrote:
}On Wed, 06 Nov 2013 13:56:51 -0700, Don Y wrote:
}>So, its real hard for me to look at:
}>
}>    string: greeting;
}>    ...
}>    greeting = "Hello";
}>    greeting += " " + gender;
}>    greeting += " " + lastname;
}>
}>and NOT be aware of the differences between that and, e.g.,
}>
}>    sprintf(greeting, "Hello %s %s", gender, lastname);
}
}
}Sure there's a difference.  In addition to being type safe,
}extensible, not subject to buffer overruns, and whatnot, the former is
}faster.

.. code included below ...

}You do have to have some awareness of what's going on under the hood,
}but programmers are notoriously bad at estimating where the actual
}bottlenecks are - you have to measure.  Many people don't realize just
}how slow printf actually is.  The strcat version is fastest by a large
}margin, but of course is horribly unsafe (it also has the unfair
}advantage of not adding in the first space in a separate step (and
}doing that bumps the cycle count to ~400M).  Increasing the number of
}strings favors the printf route more, although the strcat version
}continues to run faster.

Such awareness also tells you that strcat() has an inherent inefficiency for most uses: it has to find the end of the string it wants to append to. Consequently, I rarely use it. Here's your test program with added f4() (and tweaked to run on g++/Linux):

#include <stdio.h>
#include <string.h>
#include <string>

#define LOOPS 1000000

std::string s_gender = "Mr.";
std::string s_lastname = "Smith";
char c_gender[] = "Mr.";
char c_lastname[] = "Smith";

static __inline__ unsigned long long rdtsc(void)
{
    unsigned hi, lo;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
}

Reply to
Charles Bryant

[much elided]

Note you've "preinstantiated" each.

Ah, but you're still missing big pieces of the puzzle!

Profile (memory+time) something like:

    sprintf(&buffer[0], "%-*.*s", BIG_NUMBER, BIGGER_NUMBER, string);
    sprintf(&buffer[0], "%*.*s", BIG_NUMBER, BIGGER_NUMBER, string);

And repeat for negative manifest constants. Does the memory required change as the constants are varied? (assume buffer is not a factor)

Keep in mind that printf(...) is effectively fprintf(stdout, ...). What does resource utilization look like as you vary the FILE*? Does the implementation end up doing something like:

    temp = malloc();
    sprintf(&temp[0], format, ...);
    fwrite(file, &temp[0]);
    free(temp);

Or, perhaps, pass one character at a time to the device associated with file?

Or:

    printf("%*.*d", wide, minim, value);
    printf("%0*.*d", wide, minim, value);

Lots of interesting combinations available to annoy your implementation!

Anyone using any of these operations in an embedded system should be able to explain how *their* *printf's behave, right? (for the record, mine have fixed overheads even as BIG_NUMBER gets to be ridiculous! Can you ensure your code will never see a *variable* for one of those parameters and that the variable might not become obscene in some cases? :>)

Returning to the post....

*My* tests: [apologies for sloppy code -- too early in the morning to be writing it!]

-----8<-----

init(ctxt: ref Draw->Context, argv: list of string)
{
    tests: list of test;
    results: list of int;

    sys = load Sys Sys->PATH;

    # instantiate &/ initialize outside of test framework
    ITERATIONS := 1000000;
    SAMPLES := 50;
    start, end: int;

    # build "tests"
    t6 := (test) (test6, "Preinitialized Global objects plus sprint");
    t5 := (test) (test5, "Global objects plus sprint");
    t4 := (test) (test4, "Create, init and destroy objects plus sprint");
    t3 := (test) (test3, "Create and destroy objects");
    t2 := (test) (test2, "Create, init and destroy objects");
    t1 := (test) (test1, "Create, init and destroy objects plus operations");
    t0 := (test) (test0, "Subroutine invocation overhead");

    # assemble ordered list of tests (prepend is constant time!)
    # N.B. test6 must follow test5 as it expects the globals to be preinitialized
    tests = t6 :: tests;
    tests = t5 :: tests;
    tests = t4 :: tests;
    tests = t1 :: tests;
    tests = t2 :: tests;
    tests = t3 :: tests;
    tests = t0 :: tests;

    while (tests != nil) {
        # next experiment
        (job, description) := hd tests;
        tests = tl tests;
        results = nil;

        # N.B. count DOWN! Makes it typesafe for free!
        for (s := SAMPLES; s > 0; s--) {
            # run this experiment again
            start = millisec();
            for (i := ITERATIONS; i > 0; i--) {
                job();
            }
            end = millisec();

            elapsed := end - start;
            results = elapsed :: results;
        }

        total := big 0;
        count := 0;
        elapsed := hd results;
        min := max := elapsed;    # ick! but known to be a valid candidate!

        while (results != nil) {
            elapsed = hd results;
            results = tl results;
            if (elapsed < min)
                min = elapsed;
            else if (elapsed > max)
                max = elapsed;
            total += big elapsed;
            count++;
        }

        duration := real total / real count;
        unitcost := (duration / (real ITERATIONS)) * 1000.0;

        sys->print("%s took %.3g us [%d,%d]\n", description, unitcost, min, max);
    }
}

# Unfortunately, I just created tests in whatever order they
# occurred to me -- instead of a LOGICAL order!  :<

# convention uses different local identifiers in each to
# make use of locals+globals similar

test0()
{
    # do nothing -- subroutine invocation overhead
}

test1()
{
    # instantiate and destroy objects locally, plus operate
    greeting1 := "Salutations,";
    gender1 := "great";
    lastname1 := "Buckaroo Banzai";
    greeting1 += " " + gender1;
    greeting1 += " " + lastname1;
}

test2()
{
    # just instantiate, initialize and destroy, no operations
    greeting2 := "Salutations,";
    gender2 := "great";
    lastname2 := "Buckaroo Banzai";
}

test3()
{
    # just instantiate and destroy, no operations
    greeting3 : string;
    gender3 : string;
    lastname3 : string;
}

test4()
{
    # instantiate and destroy objects locally, plus sprint
    gender4 := "great";
    lastname4 := "Buckaroo Banzai";

    greeting4 := sprint("Salutations, %s %s", gender4, lastname4);
}

test5()
{
    # global objects, plus sprint
    gender0 = "great";
    lastname0 = "Buckaroo Banzai";

    greeting0 = sprint("Salutations, %s %s", gender0, lastname0);
}

test6()
{
    # preinitialized global objects, plus sprint
    greeting0 = sprint("Salutations, %s %s", gender0, lastname0);
}

-----8<-----

Reply to
Don Y
