Engineering degree for embedded systems

don't think so

the IoT hype is all about marketing benefits - selling consumers extra features (that they never knew they ever wanted and probably don't need)

using PLC's is an engineering benefit (or not)

tim

Reply to
tim...

Yes, that's all true. The speed of getting something going is important for a beginner. But if the foundation is "sandy" then it can be necessary and difficult to get beginners (and managers) to appreciate the need to progress to tools with sounder foundations.

The old time "sandy" tool was Basic. While Python is much better than Basic, it is still "sandy" when it comes to embedded real time applications.

I wish more people took that attitude!

Agreed. The key difference is that with simple-but-unreliable tools it is possible to conceive that mortals can /understand/ the tool's limitations, and know when/where the tool is failing.

That simply doesn't happen with modern tools; even the world experts don't understand their complexity! Seriously.

Consider C++. The *design committee* refused to believe C++ templates formed a Turing-complete language inside C++. They were forced to recant when shown a valid C++ program whose compilation never completed - because, during compilation, the compiler was (slowly) emitting the sequence of prime numbers! What chance have mere mortal developers got in the face of that complexity?

Another example is that C/C++ is routinely used to develop multi threaded code, e.g. using PThreads. That's despite C/C++ specifically being unable to guarantee correct operation on modern machines! Most developers are blissfully unaware of (my *emphasis*):

"Threads Cannot be Implemented as a Library", Hans-J. Boehm, HP Laboratories Palo Alto, November 12, 2004:

In many environments, multi-threaded code is written in a language that was originally designed without thread support (e.g. C), to which a library of threading primitives was subsequently added. There appears to be a general understanding that this is not the right approach. We provide specific arguments that a pure library approach, in which the compiler is designed independently of threading issues, cannot guarantee correctness of the resulting code. We first review why the approach *almost* works, and then examine some of the *surprising behavior* it may entail. We further illustrate that there are very simple cases in which a pure library-based approach seems *incapable of expressing* an efficient parallel algorithm. Our discussion takes place in the context of C with Pthreads, since it is commonly used, reasonably well specified, and does not attempt to ensure type-safety, which would entail even stronger constraints. The issues we raise are not specific to that context.
formatting link

There are always crap instantiations of tools, but they can be avoided. I'm more concerned about tools where the specification prevents good and safe tools.

Lucky you -- I think! I've never been convinced of the wisdom of mixing work and home life, and family businesses seem to be the source material for reality television :)

Reply to
Tom Gardner

Yes, this seems to be the main motivation.

The greatly reduced hardware cost (of both processing power and Ethernet/WLAN communication) has made it possible to handle a single signal (or a small set of related I/O signals) in dedicated hardware. Thus the controlling "IoT" device could read a measurement and control an actuator in a closed loop, and receive a setpoint from the network.

This means that the controlling device can be moved much closer to the actuator, simplifying interfacing (much less worrying about interference). Taking this even further, the controller can be integrated into the actual device itself, such as a hydraulic valve (mechatronics). Just provide power and an Ethernet connection and off you go. Of course, the environmental requirements for such integrated products can be quite harsh.

Anyway, moving most of the intelligence down to the actual device reduces the need for PLC systems, so PC-based control room programs can directly control those intelligent mechatronic units.

Anyway, if the "IoT" device is moved inside the actuator itself, similar skills are needed for interfacing to input sensor signals and controlling actuators as in the case of external IoT controllers. With everything integrated into the same case, some knowledge of thermal design will also help.

While some courses in computer science are useful, IMHO spending too much time on CS might not be that productive.

Reply to
upsidedown

I don't think that particular criticism is really fair - it seems the (rather simple) C preprocessor is also "Turing complete", or at least close to it, e.g.

formatting link

Or a C prime number generator that mostly uses the preprocessor

formatting link

At any rate, "compile-time processing" is a big thing now in modern C++, see e.g.

Compile Time Maze Generator (and Solver)

formatting link

Or more topically for embedded systems there are things like kvasir which do a lot of compile-time work to ~perfectly optimise register accesses and hardware initialisation

formatting link

[...]
--

John Devereux
Reply to
John Devereux

What is multithreaded code ?

I can think of two definitions:

  • The operating system is running independently scheduled tasks, which happen to use a shared address space (e.g. Windows NT and later)

  • A single task with software-implemented task switching between threads. This typically requires that the software library at least handles the timer (RTC) clock interrupts, as in time-sharing systems. Early examples are Ada on VAX/VMS, MS-DOS based extenders, and later on early Linux PThreads.

If I understand correctly, more modern Linux (2.6 onward) actually implements the PThread functionality in kernel mode.

Now that there are a lot of multicore processors, this is a really serious issue.

But again, whether multitasking/multithreading should be implemented in a multitasking OS or in the programming language is a very important question.

To the OP, what you are going to need in the next 3 to 10 years is hard to predict.

Reply to
upsidedown

Similar. But PLCs are pointed more at ladder logic for use in industrial settings. You generally cannot, for example, write a socket server that just does stuff on a PLC; you have to stay inside a dev framework that cushions it for you.

There is a great deal of vendor lock-in, and the tool suites are rather creaky. And it's all very costly.

Yep.

Very much so. While doing paper-engineering - as in PE work - for power distro has some learning curve, the basics of power distro aren't rocket surgery.

You might end up building a flaky hunk of garbage if you don't...

Absolutely.

--
Les Cargill
Reply to
Les Cargill

The IoT hype that relates to people trying to get funding for things like Internet enabled juicers might be more frothy than the potential for replacing PLCs with hardware and software that comes from the IoT/Maker space.

It's not difficult to get beyond the capability of many PLCs. The highly capable ones (like NI) tend to be "hangar queens" - they're not mechanically rugged.

--
Les Cargill
Reply to
Les Cargill

Not sure what you mean by "sandy". Like walking on sand where every step is extra work, like sand getting into everything, like sea shore sand washing away in a storm?

That is one of the things Hugh did right, he came up with a novice package that allowed a beginner to become more productive than if they had to write all that code themselves. He just has trouble understanding that his way isn't the only way.

Sounds like Forth where it is up to the programmer to make sure the code is written correctly.

--

Rick C
Reply to
rickman

Funny, compile time program execution is something Forth has done for decades. Why is this important in other languages now?

--

Rick C
Reply to
rickman

I have just received a questionnaire from the manufacturer of my PVR asking about what upgraded features I would like it to include.

Whilst they didn't ask it openly, reading between the lines they were asking:

"would you like to control your home heating (and several other things) via your Smart TV (box)"

To which I answered, of course I bloody well don't

Even if I did see a benefit in having an internet-connected heating controller, why would I want to control it from my sofa using anything other than the remote control that comes with it, in the box?

tim

Reply to
tim...

It isn't important.

What is important is that the (world-expert) design committee didn't understand (and then refused to believe) the implications of their proposal.

That indicates the tool is so complex and baroque as to be incomprehensible - and that is a very bad starting point.

Reply to
Tom Gardner

There have been multicore processors for *decades*, and problems have been surfacing - and being swept under the carpet for decades.

The only change is that now you can get 32 core embedded processors for $15.

13 years after Boehm's paper, there are signs that C/C++ might be getting a memory model sometime. The success of that endeavour is yet to be proven.

Memory models are /difficult/. Even Java, starting from a clean sheet, had to revise its memory model in the light of experience.

That question is moot, since the multitasking OS is implemented in a programming language, usually C/C++.

Reply to
Tom Gardner

That's the point. Forth is one of the simplest development tools you will ever find. It also has some of the least constraints. The only people who think it is a bad idea are those who think RPN is a problem and object to other trivial issues.

--

Rick C
Reply to
rickman

tim... wrote on 8/6/2017 1:06 PM:

None of this makes sense to me because I have no idea what a PVR is.

--

Rick C
Reply to
rickman

Stupid compiler games aside, macro programming with the full power of the programming language has been a tour de force in Lisp almost since the beginning - the macro facility that (essentially with only small modifications) is still in use today was introduced ~1965.

Any coding pattern that is used repeatedly potentially is fodder for a code generating macro. In the simple case, it can save you shitloads of typing. In the extreme case macros can create a whole DSL that lets you mix in code to solve problems that are best thought about using different syntax or semantics ... without needing yet another compiler or figuring out how to link things together.

These issues ARE relevant to programmers not working exclusively on small devices.

Lisp's macro language is Lisp. You need to understand a bit about the [parsed, pre compilation] AST format ... but Lisp's AST format is standardized, and once you know it you can write Lisp code to manipulate it.

Similarly Scheme's macro language is Scheme. Scheme doesn't expose compiler internals like Lisp - instead Scheme macros work in terms of pattern recognition and code to be generated in response.

The problem with C++ is that its template language is not C++, but rather a bastard hybrid of C++ and a denotational markup language. C++ is Turing Complete. The markup language is not TC itself, but it is recursive, and therefore Turing powerful ["powerful" is not quite the same as "complete"]. The combination "template language" is, again, Turing powerful [limited by the markup] ... and damn near incomprehensible.

YMMV, George

Reply to
George Neuner

All the pre-1980s multiprocessors that I have seen were _asymmetric_ multiprocessors, i.e. one CPU running the OS while the other CPUs run application programs. Thus, the OS handled locking of data.

Of course, there have been cache coherency issues even with a single processor, such as with DMA and interrupts. These issues have been under control for decades.

Those coherency issues should be addressed (sic) by the OS writer, not the compiler. Why mess with these issues in each and every language, when it could be done only once, at the OS level?

Usually very low level operations, such as invalidating cache and interrupt preambles are done in assembler anyway, especially with very specialized kernel mode instructions.

Reply to
upsidedown

In IEC-1131 (now IEC 61131-3) you can enter the program in the format you are most familiar with, such as ladder logic or structured text (ST), which is similar to Modula (and somewhat resembles Pascal) with normal control structures.

IEC-1131 has been available for two decades.

Reply to
upsidedown

A Personal Video Recorder (a disk-based video recorder)

Reply to
tim...

C++11 and C11 both have memory models, and explicit coverage of threading, synchronisation and atomicity.

Reply to
David Brown

That is one way to look at it. The point of the article above is that coherence cannot be implemented in C or C++ alone (at the time when it was written - before C11 and C++11). You need help from the compiler. You have several options:

  1. You can use C11/C++11 features such as fences and synchronisation atomics.
  2. You can use implementation-specific features, such as a memory barrier like asm volatile("dmb" ::: "m") that will depend on the compiler and possibly the target.
  3. You can use an OS or threading library that includes these implementation-specific features for you. This is often the easiest, but you might do more locking than you need to, or have other inefficiencies.
  4. You cheat, and assume that calling external functions defined in different units, or using volatiles, etc., can give you what you want. This usually works until you have more aggressive optimisation enabled. Note that sometimes OSes use these techniques.
  5. You write code that looks right, and works fine in testing, but is subtly wrong.
  6. You disable global interrupts around the awkward bits.

You are correct that this can be done with a compiler that assumes a single-threaded, single-CPU view of the world (as C and C++ did before 2011). You just need the appropriate compiler- and target-specific barriers and synchronisation instructions in the right places, and often putting them in the OS calls is the best place. But compiler support can make it more efficient and more portable.

Interrupt preambles and postambles are usually generated by the compiler, using implementation-specific features like #pragma or __attribute__ to mark the interrupt function. Cache control and similar specialised opcodes may often be done using inline assembly rather than full assembler code, or using compiler-specific intrinsic functions.

Reply to
David Brown
