C++ syntax without C++

That's true, at least to some extent - but it is also true of C. At most, it is a matter of degrees and that such things are perhaps more pronounced with C++ than C.

Consider this pseudo code in C:

#include <stdbool.h>       // for "bool" in C

extern int part1, part2;
extern volatile bool GIE;   // cpu's "global interrupt enable" flag

void atomicSet(int a, int b)
{
    GIE = false;            // Block all interrupts
    // Set both parts as an uninterruptable operation
    part1 = a;
    part2 = b;
    GIE = true;             // Enable interrupts again
}

A great many people would think this is correct code. On a great many compilers, it will do as the programmer expects. But on some compilers, with some optimisation flags, it will /not/ work. This is because part1 and part2 are not volatile, so the compiler is free to move those stores with respect to the (volatile) GIE assignments - before the disable, or after the re-enable.

I can well agree that such issues can be surprising, and lead to hard-to-spot bugs. And I can well agree that you are more likely to see such bugs with more optimising compilers (or compiler flags). But I can't agree that C++ is so very much more prone to this than C, nor can I agree with avoiding optimisation in the hope of avoiding such bugs. (The compiler is free to make the same movements when optimisation is disabled.)

How do you handle this with C programming? (That's not entirely a rhetorical question - if you've got a good, non-obvious answer, it would be nice to hear it.) You do the same thing with C++.
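For what it's worth, one common answer - and it is the same answer in C and C++ - is an explicit compiler memory barrier around the protected stores, so the compiler may not move them across the interrupt flag writes. A minimal sketch, assuming gcc-style inline assembly (the COMPILER_BARRIER macro name is my own; real targets usually have intrinsics for the interrupt enable bit rather than a plain volatile bool):

#include <stdbool.h>

extern int part1, part2;
extern volatile bool GIE;   // cpu's "global interrupt enable" flag

// Compiler-only barrier: no instruction is emitted, but memory accesses
// may not be reordered across it (gcc/clang extension).
#define COMPILER_BARRIER()  __asm__ __volatile__("" ::: "memory")

void atomicSet(int a, int b)
{
    GIE = false;            // Block all interrupts
    COMPILER_BARRIER();     // stores below may not be hoisted above here
    part1 = a;
    part2 = b;
    COMPILER_BARRIER();     // ...and may not sink below here
    GIE = true;             // Enable interrupts again
}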

That's true - but it is also true of C. Take a look at comp.lang.c - there are people there that have studied the C standards for decades (and sometimes been involved in creating them), and they /still/ argue over details and interpretations.

So I agree with you - I just don't think it is a good enough reason to avoid C++.

Well, a lot of people seem to make a living out of programming in C++, and a lot of software written in C++ seems to run reasonably well.

But I think it takes more work and greater ability to be a /good/ C++ programmer than to be a good C programmer - a lot of these C++ programs will have subtle bugs, and a lot of the developers will not be good enough to understand them.

Just like with C programming, many of the potential bugs in C++ code can be avoided by using a limited subset of the language. If you think member resolution for class hierarchies with multiple inheritance is a likely source of subtle problems (and it is!), then ban multiple inheritance - you have very little to lose, and a lot to gain. It is just like avoiding malloc/free (or new/delete) makes your code immune to heap memory leaks.

(I am not trying to say that this is an easy thing to do, naturally. I am not even trying to say that C++ is a good choice for any particular usage - I am just saying that some of the arguments given against C++ are not valid, IMHO.)

For gcc 4.5, I am saying "partly yes" - it had a fair number of C++11 features that are useful. Some of these were considered "experimental" - meaning that the gcc developers think they work correctly, but they haven't yet had the wide testing you can only get by releasing the tool for general use.

For gcc 4.7, I am saying "yes", and for gcc 4.8 "YES".

Maybe I misunderstood you - when you said "that the vast majority of people regard as complete", I took it to mean that people can use C++11 features for normal usage, rather than requiring absolutely every little nuance of the standard to be fully implemented. If you look at the cxx0x status page I gave, you will see there is only one item that is /not/ implemented ("Minimal support for garbage collection and reachability-based leak detection"). If you want to be black-and-white, then gcc is not C++11 complete. If you want to be practical for "the vast majority of people", then yes, it is complete.

(I can't comment on whether the list on that page covers every aspect of C++11.)

The gcc C++ library is perhaps not as C++11 complete, according to its status page - though I think that page is a little out of date (the big missing feature is regular expressions, and these are at least supported in the development version of the library). But I haven't looked much at the C++ library.

It is a matter of what is meant by "complete". As noted above, it seems we have a different idea of "complete", and I misunderstood what you meant by it. In the black-and-white sense, gcc is not C++11 complete. (And by your reasoning above, probably no compiler can be "complete" to any C or C++ standard - not even ANSI C or C89.) But in the practical "can I use this tool to write C++ code using C++11 features" sense, gcc was roughly complete in 4.7, with a few final additions in 4.8.

Reply to
David Brown

Ouch. That should be a standard part of any computer science degree. Is this actually a general problem ?

Then what mechanisms are used to teach computer architecture ?

A CS course should be about _science_ and not act as a glorified trade school. If you learn the concepts, you can adapt to any new technology. If you only learn the tools, then that becomes shallow, and fragile, knowledge which is harder to apply to new situations.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Yes, that is true - and yet I think it is close to being inevitable with complex systems. The more complex the system, and the more people that use it, the more likely it is to happen.

There are several aspects to this. Are we talking about subtle failures that the designers did not know about? Did they know about them and document them (such as areas of "undefined behaviour" in C and C++)? In this second case, it is arguably the users' fault - but perhaps the documentation is too hard, teaching too poor, or the quantity of issues too great for a mere mortal programmer.

Reply to
David Brown

Exactly. Just because it's not a (directly) paid job, that doesn't mean it's not considered to be a job for a number of purposes.

I actually had a what-the-hell moment a few weeks ago over in comp.lang.ada. There was a reference to a tricky part (I forget which part) of the Ada standard which was apparently difficult to implement in an Ada compiler, and the people involved in the process seemed amused by that.

My internal reaction was basically: What the hell ??? You people are actually proud of that ??? That's the kind of thing that goes on in C++ land, not in something built around the principles Ada was built around. :-(

(I never actually posted that in response, although I was seriously tempted to.)

Make a standard easy to implement and you are more likely to get consistent and robust compilers. Go all macho while drafting your standards and you end up with code that's going to have different behaviour between compilers.

At the core of Ada is a good, well defined, language. I hope the people responsible for its development don't lose sight of that.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

That depends a bit on the target and how it is implemented. On targets with the same access mechanisms for reading data in flash/rom and ram, then it is common for "const" variables to be put in a section that is mapped to flash/rom. But Don may be talking about an old or limited compiler that did not do that.

On some targets, different code is generated for reading from flash/rom (AVR, COP8, PIC, etc.). To be compliant with C (or C++) standards that allow you to cast a pointer-to-non-const into a pointer-to-const, the compiler has to be able to read non-const data through a pointer-to-const, and thus it needs to use the same access mechanism for const data as for non-const data. That means .rodata has to go into ram. (The compiler can, of course, optimise this when it is sure it is safe - "static const" data is often used directly rather than being stored anywhere.)

I have used a compiler that treated "const" as "flash", which of course broke pointer-to-const accesses.
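To make that concrete, here is a small illustration (the function and variable names are my own, not from any real code). The callee cannot know whether the const-qualified pointer it receives points into flash or into ram, so it has to use an access mechanism that works for both:

int read_first(const int *p)
{
    // If the compiler emitted a flash-read instruction here just because
    // p is a pointer-to-const, the result would be garbage for data that
    // actually lives in ram.
    return *p;
}

int counter;                        // an ordinary, writable variable in ram

int poll(void)
{
    return read_first(&counter);    // perfectly legal: int* converts
                                    // implicitly to const int*
}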

Reply to
David Brown

The main issue is that it was by accident rather than design. So you have people writing "programs" in something which wasn't intended to be used for that purpose.

And being Turing-complete means that the halting problem applies, i.e. there is no way to predict the end result of template expansion other than by actually performing it (which can only be done for specific instances of the template parameters; you can't generalise).
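A small illustration of what that means in practice (my own example, not anything official): the compiler has to keep instantiating templates until - if ever - it reaches the terminating specialisation, so the only way to find out how deep the recursion goes for a given argument is to actually perform the expansion:

// Counts the steps of the Collatz sequence at compile time.
// Collatz<27>::value forces a chain of over a hundred instantiations;
// whether the recursion terminates for an arbitrary N is, mathematically,
// an open question.
template<unsigned N>
struct Collatz {
    static const unsigned value =
        1 + Collatz<(N % 2 == 0) ? N / 2 : 3 * N + 1>::value;
};

template<>
struct Collatz<1> {
    static const unsigned value = 0;    // recursion ends here
};

enum { steps_for_27 = Collatz<27>::value };     // evaluated by the compiler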

Reply to
Nobody

You don't have to put the data into flash to get the nasal daemons - the code here is undefined behaviour, exactly as it is in C (replacing "reinterpret_cast" or "const_cast" with "(int *)" ).

In particular, if you followed your code with "printf("%i\n", bob);" then the resultant code might print "42", or it might print "10", or it might format your hard disk. And it might do different things with different optimisations.

That's life with C, and it's life with C++.

What you /can/ do legally is:

int bob = 10;
const int *pBobConst = const_cast<const int *>(&bob);
int *pBob = const_cast<int *>(pBobConst);

*pBob = 42;

And again, you can do the same thing legally in C. In other words, if you start with a non-const and cast it to const, you can cast away the const - but if you start with a const and try to cast away the const, you get undefined behaviour (but it will compile).

That's not /quite/ correct. (This "const" stuff is fun, really!) If a C++ class has mutable members then you cannot declare a const instance of it. But what you /can/ do is cast a pointer to a non-const instance into a pointer-to-const (typically by making the member function "const" so that the implicit first parameter is "const Class * this"). This means that the member function "promises not to change anything pointed to by *this, except the mutable members".

Of course, the member function can cheat by using const_cast or other casting mechanisms and change the non-mutable members of the class too.

And because the function can cheat - legally - the compiler can't assume that a "const" member function /actually/ leaves the object untouched, which is a shame from the optimiser's viewpoint.

And again, note that C++ is not any different from C in this way. It is just that "mutable" gives you a clear way to document such non const actions, and lets you implement them with fewer warnings and messy casts.
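As a small example of what that buys you (the class and member names here are mine, purely for illustration) - the const member function keeps its promise for the externally visible state, while the mutable member carries bookkeeping that callers should not care about:

class Sensor {
public:
    Sensor() : raw(0), reads(0) {}

    int read() const {      // implicit parameter is "const Sensor * this"
        ++reads;            // fine: reads is mutable
        // raw = 0;         // error: would modify non-mutable state
        return raw;
    }

private:
    int raw;
    mutable int reads;      // documented, compiler-checked escape hatch
};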

Reply to
David Brown

AFAIK, not having been involved, the standards committees must have known about many of the ambiguities in the proposed standards, because the UK voted "no" and was overruled.

It appears that the language designers are "mere mortals", by that criterion :(

Have a look for Nick Maclaren's Objects Diatribe. He has for decades been on the sharp end of diagnosing intermittent and subtle problems.

Reply to
Tom Gardner
[...]

I try to keep debugging separate. iSYSTEM (offering a 50$ solution for gcc-ARM or working with "free" J-Link), Lauterbach...

But I never tried it with C++.

Oliver

--
Oliver Betz, Munich 
despammed.com is broken, use Reply-To:
Reply to
Oliver Betz

Yes me too.

I must look at DDD again some time, something stopped me last time I looked into it.

DDD = "data display debugger" IIRC. Recently I found a way to get some nice plots out of gdb, just a few-line script to dump an array, say, into gnuplot. You can get nice "scope plots" of the contents of an ADC result array, or a histogram of same etc. Really nice for a program that analyses signals (like mine tend to do). It even works while the program is running!

--

John Devereux
Reply to
John Devereux
[...]

...if the gdb server supports the asynchronous mode, I guess? Segger didn't last month.

BTW: Watching variables at runtime is IMO a basic feature; I've been using it since Motorola HC12 times.

Oliver

--
Oliver Betz, Munich 
despammed.com is broken, use Reply-To:
Reply to
Oliver Betz

It was openocd and an STM32F discovery board as the JTAG-SWD adapter. I only noticed the ability with the SWD stuff, after many years on ARM7 JTAG.

It's very nice, to the extent that I start to organise my code to take advantage. Like having all the variables for a module in a structure ("class?") so I can see them all at once. And it turns out this is more efficient anyway. The code is starting to look more and more like a basic C++. All the functions passing around a pointer to the shared data structure. I should call it "this". :)

--

John Devereux
Reply to
John Devereux

I was doing that kind of thing in 1983, plus function pointers in the object as well. I'm sure that if I'd had too many objects of one type then I would have replaced the function pointers with a pointer to the struct of function pointers.

Who needs C++ when C-with-classes gets you 90% of the benefit? :)

Reply to
Tom Gardner

I've seen C++ classified as "it tries to make using good libraries easy, but makes writing good libraries hard", which I consider very close to the truth. To make a good library, you have too many options to choose from (templates or virtual functions? put the user code into a behaviour class passed in as a parameter, or as a base class?), and you need very detailed knowledge of how your code is going to be used and what the compiler makes of it.

But then, I wouldn't know what feature to leave out (except for the one or another C compatibility hack) and still keep a language with the same expressiveness.

Stefan

Reply to
Stefan Reuther

I guess I don't understand that (in the sense that I can't wrap my head around it... "greek"). When I went to school, *everyone* took basic/core courses -- like Calculus, DifEqs, etc. (it was an "engineering school", not "liberal arts", so you figured every "major" needed these BASIC CAPABILITIES). Beyond those, your coursework was tailored to a specific "major" (which we called "course" -- for some inane reason!)

Beyond that, "computer science" existed as a subset of the EE curriculum (IIRC, there were 3 such subsets: "traditional EE, CS and "). So, I'm an EE.

*All* EE's similarly had a core EE curriculum that included classes in circuit theory, compiler/language design, etc. So, I can design an amplifier and a "traditional" EE could write code. Beyond that, we went in more specialized directions (though were not constrained to "avoid" particular courses). E.g., I have a strong hardware preference. I can (and have) designed CPU's whereas most of the folks who took the "CS subset" don't have this sort of education.

Reading ahead to your next P, we (all EE's) learned about FSM's as *hardware* devices. Implication tables to reduce FSMs, Karnaugh maps to codify the "(present-state, input) -> next-state" logic in minimal form, etc.

So, if "CS" (by whatever name it is called today) isn't treated as a "hard science" -- no doubt, NOW considered far more "main-stream" than 35+ years ago -- then what is it treated as? Liberal arts?

(note I realize there are entire degrees that didn't exist "back then" which are now commonplace. E.g., all the "system administration" type jobs. And, I'm sure there are some "web developer" type credentials -- though I have no idea how that can merit a whole "special degree"!)

I.e., what would *my* education be considered? Would I have to get multiple degrees to get exposed to the same sorts of things?

So, they aren't exposed to the hardware? There's no understanding of what operations *cost*? Is every abstraction taught as a pure abstraction? Or, AS IF there was real hardware that implemented each "operation" natively?

Surely they understand things like "bytes". Do they think there's an instruction that magically creates this set of bytes when I want to instantiate some object/data type? (silly, of course) Then, how do they relate to the steps that are used to *build* these complex entities? Magic?

Do they understand the duality of recursion v iteration? The costs/savings/consequences of each? Different computer architectures? S-machines, T-machines, etc.? The difference between call-by-value and call-by-reference/name? Petri nets, synchronization mechanisms?

Or, is it all the equivalent of "push this button to do this thing"? (i.e., how people learn apps nowadays) Perhaps "use this class in this way to do this thing"?

[I'm not trying to be cynical. And, would appreciate an *honest*, *informative* reply. This goes to the very heart of my rationale for making the design choices that I've been making. If future developers can't "perceive" the costs of their choices, then I'll have to take other measures to ensure their solutions "fit" in the constraints of my system]

I'm not averse to hiding complexity of implementation. But, folks need to be aware that there *is* stuff being hidden and what that stuff is doing for them.

If you *think* the distributed system is just one big machine, then you aren't likely to consider the reality of the actual implementation -- that parts of the machine could be "down" while other parts are not. (Your PC is on or off; unlikely that the FPU will be OFF while the CPU remains ON! So if you assume it's all or nothing, you won't know how to deal with the "can't happens" that WILL happen!)

(sigh) Crappy problem to start the day with...

Reply to
Don Y

I don't care one little bit about "language expressiveness". I do care about the ability to be able to produce a reliable product with the minimum of effort on my part.

One distinguishing characteristic of C++ is that (to a large extent) you have to choose your library and then work with whatever it gives you. Compared with some other languages, if you try to use two completely unrelated libraries in your application there is a good chance they will turn out to be mutually incompatible in one subtle way or another (often memory management).

Apart from that, I agree.

Reply to
Tom Gardner

Yes. At least 20 years? IIRC, it was for a Z180 (essentially a Z80 with extended addressing capabilities -- 1MB -- though in a really wonky mapping scheme that was 100% backward compatible with regular Z80 code. I.e., all pointers were 16b!). I essentially wanted a 16b address space for "TEXT + DATA" (an inherent limitation of the Z80 core) *and* the ability to treat "const" data sort of like TEXT (TEXT is generally not writeable -- nor is const data!)

I.e., with 1MB to work in, I had the potential for lots of DATA and const -- in addition to TEXT. I didn't want to restrict the DATA+const and just allow TEXT to avail itself of all that address space!

Yes. Now imagine .rodata being as big as your logical memory space. And, TEXT just as large. And... (i.e., more "used" memory than is theoretically possible in your "logical" address space -- yet, you want to be able to seamlessly *use* it!)

const char image[60000];
char array[60000];

unsigned index = 0;
while (index < sizeof(image)) {
    array[index] = image[index];
    index++;
}

(remember, address space is 65535 chars!)

[Actually, I'm not sure if I was able to do this or if const+DATA had to fit in 64K -- in which case, what space does the *code* occupy? How does a pointer to a function still "work"?]

You'd have to recall the state of *embedded* compilers for SMALL MPUs eons ago. There were a lot of things that you *couldn't* do or that were "traditionally" interpreted (e.g., 64K Z80 address space -- even if the physical address space was different!) that didn't *have* to be!

I.e., these were the days when 16b ints were common -- doubles were a pipe dream (CPU would grind to a halt trying to do 64b math), etc. Just *having* a decent HLL in such a constrained environment made you giddy! :>

I actually had a UNIX-ish application environment running on that sort of hardware. Each task had real stdin/out/err, a full set of C libraries, file operations on a "memory file system", hardware devices mapped into that file system (e.g., /dev/tty0), etc. Of course, no means of task isolation...

I had even instrumented the debugger so each task could emit log messages that would appear on the IDE's "console" (color coded so I could keep track of which task was saying what). The development system (ICE) was just another "device" that I could wire to each task's stderr. The device driver ensured each fwrite() to a device was atomic -- so you didn't get characters from task A's message intermixed with those of task B's message (though you couldn't control which messages got displayed first -- depended on who was the first process to take the lock *in* the device)

Unplug the ICE and all those writes go to /dev/null.

In some sense, things were more fun, back then. You *really* had very little so every accomplishment or feature felt *huge*!

Reply to
Don Y

Exactly! I really find this hard to believe. But, that's a personal limitation of my own -- I'm not calling Tom a liar or saying he's exaggerating... I've heard it from several people, now. Though, always with the same "bitter taste/disdain" apparent in their tone. (which could just be the sort of emotional reaction *I* would have coming to that realization)

I just can't *imagine* how you could teach this sort of stuff without a full grounding in "The Basics". It would be like teaching multiplication by forcing kids to memorize multiplication tables and nothing more. What do they do once challenged by something "bigger" than 20 * 20?

Do they expect all problems to be "trivial" -- where all the real *thinking* has been encapsulated in some store-bought black box?

Reply to
Don Y

I agree, of course. But nowadays it is too much about learning which buttons to press in an IDE's wizard. (Especially if you're in a Microsoft environment!)

Was it Dijkstra or Wirth who noted that CS isn't a science and isn't about computers?

I know I'm jaded. I also know there are very good people out there. Maybe it is just that there is the same number of good people even though the total number of people in the field has expanded enormously.

I've seen HR droids in a high-tech company have to be told "don't filter out CVs, we (engineers) know what we're looking for".

I've seen HR droids in less high tech companies employ anyone with a pulse and a poor line in bullshit - simply because they needed warm bodies /now/. Plus the engineering director probably couldn't distinguish good from bad anyway - he'd signed off on a coding standard document that required people to use the "register" keyword. In Java programs!

Much simpler if you work in a small company with competent compatriots.

Reply to
Tom Gardner

(sorry, I don't do Java)

Yes. It's more "documentation". OTOH, it *exposes* more which some folks find bad.

E.g., look at folks who get spooked seeing a "seconds" value of :60 in a timestamp: "WTF?! That can't happen!" *Sure* it can! :> Now, consider how your code may break when now+60 isn't really now+60!

I think this is true of many scenarios. I.e., why don't people check malloc()'s return value? And: because they haven't a clue as to how to deal with that possibility and don't want to think about it!

(Or, because the execution environment doesn't give them convenient hooks to "do something smart" in this case! E.g., *block* inside the allocator until the request *is* satisfied! Of course, this just *changes* the problem!)

Is this because so many "write code" that interacts with "people"? And, they rationalize that the person can just "try again"?

Reply to
Don Y
