Engineering degree for embedded systems

Interesting read, thanks. I started writing multithreaded programs in

1992, using IBM C/Set 2 on OS/2 2.0. My only GUI programs were on OS/2.

My biggest effort to date is a clusterized 3-D EM simulator, which is multithreaded, multicore, multi-box, on a heterogeneous bunch of Linux and Windows boxes. (I haven't tested the Windows version in a while, so it's probably broken, but it used to work.) It's written in C++ using the C-with-classes-and-a-few-templates OOP style, which is a really good match for simulation and instrument control code. The optimizer is a big Rexx script that functions a lot like a math program. A pal of mine wrote an EM simulator that plugs into Matlab, but his isn't clusterized so I couldn't use it.

My stuff is all pthreads, because std::thread didn't exist at the time, but it does now, so presumably Boehm's input has been taken into account.

Well, being Christians helps, as does being fond of each other. He'll probably want to start running the business after he finishes grad school--I want to be like Zelazny's character Dworkin, who spends his time casually altering the structure of reality in his dungeon. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

I used to program in RPN routinely, still use RPN calculators exclusively, and don't like Forth. Worrying about the state of the stack is something I much prefer to let the compiler deal with. It's like C functions with ten positional parameters.

Cheers

Phil "existence proof" Hobbs

Reply to
Phil Hobbs

The old Linux threads library used heavyweight processes to mimic lightweight threads. That's a mess. Pthreads is much nicer.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

References, containing the red-flag words "discovered" and "accident", plus some offending code: "...TMP is something of an accident; it was discovered during the process of standardizing the C++..."

formatting link
formatting link

It is, isn't it.

I'm told C/C++12 /finally/ has a memory model, so perhaps that will (a few decades too late) ameliorate the problem. We'll see, but I'm not holding my breath.

Depends on the Christian :( My maternal grandmother and my ex's grandmother were avowedly Christian and pretty horrible specimens to boot. My grandmother used to write poison-pen letters, my ex's used to viciously play favourites. So, the "fond of each other" didn't come into play :(

:)

I'll content myself with defining the structure of reality to be what I want it to be. (Just like many denizens of this group :)

And then I'll find a way of going out with my boots on.

Reply to
Tom Gardner

If you are writing Forth code and passing 10 items into a definition, you have missed a *lot* on how to write Forth code. I can see why you are frustrated.

--

Rick C
Reply to
rickman

I'm not frustrated, partly because I haven't written anything in Forth for over 30 years. ;)

And I didn't say I was passing 10 parameters to a Forth word, either. It's just that having to worry about the state of the stack is so 1975. I wrote my last HP calculator program in the early '80s, and have no burning desire to do that again either.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

You clearly mentioned 10 parameters, no?

I get that you don't fully understand Forth. When I said "The only people who think it is a bad idea are those who think RPN is a problem and object to other trivial issues" by other trivial issues I was referring to the use of the stack.

Reply to
rickman

Yes, I was making the point that having to keep the state of the stack in mind was error prone in the same way as passing that many parameters in C. It's also annoying to document. In C, I don't have to say what the values of the local variables are--it's clear from the code.

Well, the fact that you think of Forth's main wart as a trivial issue is probably why you like it. ;)

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Yes, it is error prone in the same way adding numbers is to a fourth grader. So use a calculator... but that's actually slower and can't be done if you don't have a calculator! That's the analogy I would use. Dealing with the stack is trivial if you make a small effort.

Once I was in a discussion about dealing with the problems of debugging stack errors, which are usually a mismatch between the number of parameters passed to/from a definition and the number the definition actually uses. This is exactly the sort of problem a compiler can check, but it typically is not done in Forth. Jeff Fox simply said something like: this proves the programmer can't count. I realized how simple the truth is. Considered in the context of how Forth programs are debugged, this is simply not a problem worth having the compiler deal with. If you learn more about Forth you will see that.

The stack is not the problem.

Yes, I expect you would call this a wart too...

formatting link

I think Forth's biggest problem is people who can't see the beauty for the mark.

Reply to
rickman

Could both of you learn to trim your posts? Then I might read enough of them to be interested.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com 
MicroProcessor Engineering Ltd - More Real, Less Time 
Reply to
Stephen Pelc

Hit "end" when you load the post. Works in Thunderbird at least.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

It may resemble Pascal, but it's still limited in what it can do. It's good enough for ... 90% of things that will need to be done, but I live outside that 90% myself.

--
Les Cargill
Reply to
Les Cargill

Yes UB is special. All those non-UB bugs you mention will have a defined behaviour that just isn't the behaviour that you wanted. UB, as the name implies, has no defined behaviour at all: anything can happen, including the proverbial nasal demons.

I can't speak for Les, but guaranteeing C programs to be free of UB is so difficult that one can debate whether writing complex critical programs in C is morally irresponsible. That type of debate tends to take on a political flavor like PC vs Mac, Emacs vs Vi, and other similar burning issues.

Reply to
Paul Rubin

Yes, in all respects.

And more people /think/ they can avoid UB than can actually achieve that nirvana. That's dangerous Dunning-Kruger territory.

Reply to
Tom Gardner

You mean C11/C++11 ? There are, I believe, very minor differences between the memory models of C11 and C++11, but they are basically the same. And they provide the required synchronisation and barrier mechanisms in a standard form. Whether people will use them appropriately or not, is another matter. In the embedded world there seems to be a fair proportion of people that still think C89 is a fine standard to use. Standard atomics and fences in embedded C basically means gcc 4.9 or newer, when C11 support was complete. For C++ it was a little earlier. I don't know what other C or C++ compilers for embedded use have C11/C++11 support, but gcc is the main one, especially for modern standards support. GNU ARM Embedded had 4.9 at the end of 2014, but it takes time for manufacturer-supplied toolchains to update.

So yes, C11/C++11 solves the problem in a standardised way - but it will certainly take time before updated tools are in common use, and before people make use of the new features. I suspect this will happen mainly in the C++ world, where C++11 is a very significant change from older C++ and it can make sense to move to C++11 almost as a new language. Even then, I expect most people will either rely on their OS primitives to handle barriers and fences, or use simple full barriers:

C11: atomic_thread_fence(memory_order_seq_cst);

C++11: std::atomic_thread_fence(std::memory_order_seq_cst);

replacing

gcc Cortex-M: asm volatile("dmb" : : : "memory");

Linux: mb()

The tools have all existed, even though C and C++ did not have memory models before C11/C++11. cpus, OS's, and compilers all had memory models, even though they might not have been explicitly documented.

And people got some things right, and some things wrong, at that time. I think the same thing will apply now that they /have/ memory models.

Reply to
David Brown

Oh.... picky picky picky :)

My experience is that they won't. That's for two reasons:

1) not really understanding threading/synchronisation issues, because they are only touched upon in schools. Obviously that problem is language agnostic.
2) any subtleties in the C/C++ specification and implementation "suboptimalities"; I expect those will exist :(

Plus, of course, as you note below...

...

ISTR that in the early-mid noughties there was a triumphant announcement of the first /complete/ C or C++ compiler - 5 or 6 years after the standard was published! Of course many compilers had implemented a usable subset before that.

No, didn't save a reference :(

Agreed.

I'm gobsmacked that it took C/C++ so long to get around to that /fundamental/ requirement. The absence and the delay reflects very badly on the C/C++ community.

Reply to
Tom Gardner

Well, if you decide to look this up on Google, it should save you a few false starts.

Agreed. This stuff is hard to understand if you want to get it correct /and/ optimally efficient.

I have read through the specs and implementation information - quite a bit of work has gone into making it possible to write safe code that is more efficient than was previously possible (or at least practical). It is not so relevant for small embedded systems, where you generally have a single core and little in the way of write buffers - there is not much, if anything, to be gained by replacing blunt full memory barriers with tuned load-acquire and store-release operations. But for bigger systems with multiple cpus, a full barrier can cost hundreds of cycles.

There is one "suboptimality" - the "consume" memory order. It's a bit weird, in that it is mainly relevant to the Alpha architecture, whose memory model is so weak that in "x = *p;" it can fetch the contents of *p before seeing the latest update of p. Because the C11 and C++11 specs are not clear enough on "consume", all implementations (AFAIK) bump this up to the stronger "acquire", which may be slightly slower on some architectures.

Things have changed a good deal since then. The major C++ compilers (gcc, clang, MSVC) have complete C++11 and C++14 support, with gcc and clang basically complete on the C++17 final drafts. gcc has "concepts", slated for C++20 pretty much "as is", and MSVC and clang have prototype "modules" which are also expected for C++20 (probably based on MSVC's slightly better version).

These days a feature does not make it into the C or C++ standards unless there is a working implementation in at least one major toolchain to test it out in practice.

As I said, people managed fine without it. Putting together a memory model that the C folks and C++ folks could agree on for all the platforms they support is not a trivial effort - and I am very glad they agreed here. Of course I agree that it would have been nice to have had it earlier. The thread support (as distinct from the atomic support, including memory models) is far too little, far too late and I doubt if it will have much use.

Reply to
David Brown

Unfortunately google doesn't prevent idiots from making tyupos :) (Or is that fortunately?)

Agreed, with the caveat that "small" ain't what it used to be. Consider Zynqs: dual-core ARMs with caches and, obviously, FPGA fabric.

There are many other examples, and that trend will continue.

One of C/C++'s problems is deciding to cater for, um, weird and obsolete architectures. I see /why/ they do that, but on Mondays, Wednesdays, and Fridays I'd prefer a concentration on doing common architectures simply and well.

Yes, but I presume that was also the case in the noughties. (I gave up following the detailed C/C++ shenanigans during the interminable "cast away constness" philosophical discussions)

The point was about the first compiler that (belatedly) correctly implemented /all/ the features.

While there is no doubt people /thought/ they managed, it is less clear cut that it was "fine".

I'm disappointed that thread support might not be as useful as desired, but memory model and atomic is more important.

Reply to
Tom Gardner

Bugs are problems, no matter whether they have defined behaviour or undefined behaviour. But it is sometimes possible to limit the damage caused by a bug, and it can certainly be possible to make it easier or harder to detect.

The real question is, would it help to give a definition to typical C "undefined behaviour" like signed integer overflows or access outside of array bounds?

Let's take the first case - signed integer overflows. If you want to give a defined behaviour, you pick one of several mechanisms. You could use two's complement wraparound. You could use saturated arithmetic. You could use "trap representations" - like NaN in floating point. You could have an exception mechanism like C++. You could have an error handler mechanism. You could have a software interrupt or trap.

Giving a defined "ordinary" behaviour like wrapping would be simple and appear efficient. However, it would mean that the compiler would be unable to spot problems at compile time (the best time to spot bugs!), and it would stop the compiler from a number of optimisations that let the programmer write simple, clear code while relying on the compiler to generate good results.

Any kind of trap or error handler would necessitate a good deal of extra run-time costs, and negate even more optimisations. The compiler could not even simplify "x + y - y" to "x" because "x + y" might overflow.

It is usually a simple matter for a programmer to avoid signed integer overflow. Common methods include switching to unsigned integers, or simply increasing the size of the integer types.

Debugging tools can help spot problems, such as the "sanitizers" in gcc and clang, but these are of limited use in embedded systems.

Array bound checking would also involve a good deal of run-time overhead, as well as re-writing of C code (since you would need to track bounds as well as pointers). And what do you do when you have found an error?

C is like a chainsaw. It is very powerful, and lets you do a lot of work quickly - but it is also dangerous if you don't know what you are doing. Remember, however, that no matter how safe and idiot-proof your tree-cutting equipment is, you are still at risk from the falling tree.

I would certainly agree that a good deal of code that is written in C, should have been written in other languages. It is not the right tool for every job. But it /is/ the right tool for many jobs - and UB is part of what makes it the right tool. However, you need to understand what UB is, how to avoid it, and how the concept can be an advantage.

Reply to
David Brown

True. I'd be happy to see people continue to use full memory barriers - they may not be speed optimal, but they will lead to correct code. Let those who understand the more advanced synchronisation stuff use acquire-release. And of course a key point is for people to use RTOS features when they can - again, using a mutex or semaphore might not be as efficient as a fancy lock-free algorithm, but it is better to be safe than fast.

The xCORE is a bit different, as is the language you use and the style of the code. Message passing is a very neat way to swap data between threads or cores, and is inherently safer than shared memory.

Yes.

In general, I agree. In this particular case, the Alpha is basically obsolete - but it is certainly possible that future cpu designs would have equally weak memory models. Such a weak model is easier to make faster in hardware - you need less synchronisation, cache snooping, and other such details.

No, not to the same extent. Things move faster now, especially in the C++ world. C++ is on a three year update cycle now. The first ISO standard was C++98, with C++03 being a minor update 5 years later. It took until C++11 to get a real new version (with massive changes) - and now we are getting real, significant improvements every 3 years.

The trouble with thread support in C11/C++11 is that it is limited to very simple features - mutexes, condition variables and simple threads. But real-world use needs priorities, semaphores, queues, timers, and many other features. Once you are using RTOS-specific API's for all of those, you would use the RTOS API's for threads and mutexes as well, rather than the standard C11/C++11 calls.

Reply to
David Brown
