C# for Embedded?

That's an entertaining document but it mostly describes:

1) Hazards in C++ that are also present in C, so you don't escape them by choosing C over C++.

2) Hazards of complicated C++ features that aren't used that frequently and that programmers can decide not to use in a given project.

3) Deficiencies in "old" (C++98) C++ that have been fixed in "modern" (C++11 and later) C++.

C++11 was really a big improvement over earlier versions and I didn't get much interested in C++ until it came out.

I've written tons of C and a (so far) smallish amount of "modern" C++ and would consider both of them to be scary, but overall my current impression is that C++ is safer if you take #2 above into account.

For example, in C++ it's idiomatic to say array.at(i) instead of array[i], and the .at() method on STL arrays checks that the subscript is in range (measurements I've done in my own programs have so far encountered almost no performance loss from this). In C, most people end up using unchecked subscripts. C++ RAII avoids a lot of dangling-resource bugs, etc. None of this has anything to do with C# though.

I've fooled with Ada a little bit and would consider it to be much safer than either C or C++ while being somewhere between the two of them in expressiveness, though much worse in terms of tooling. I can't possibly be saying "my language is better" in naming Ada, since my languages are C and C++, and I've only looked into Ada as a possible alternative to them.

That C# allows escaping to unsafe operations doesn't seem worse to me than Java having a JNI. Even Haskell supports unsafe operations. Of course C# has other issues that make it not sound good for real time.

Here are Bjarne Stroustrup's currently recommended C++ core guidelines which spell out a much safer set of practices than I usually see in C programs:

formatting link

Reply to
Paul Rubin

Op 04-Nov-16 om 7:22 PM schreef snipped-for-privacy@gmail.com:

That's easily solved by linking without heap support (so using the heap will cause a linker error).

Wouter "Objects? No Thanks!" van Ooijen

Reply to
Wouter van Ooijen

Not every hole *can* be tested. You run into the halting problem, first and mainly with the garbage collector. There is no way to prove it will always complete "in time" for any definition of "in time". The same is likely true of many of the application's own algorithms; but at least you're in control of those.

Clifford Heath.

Reply to
Clifford Heath

Your suggested actions w.r.t. C/C++ are ameliorations, not cures. But you know that.

Even if (and it is a big if) they were a cure, how could you guarantee that /everybody/ (in whatever company) who produced any code that ends up in your product has fully followed the recommended practices?

Using C/C++ for safety-critical systems is "building a castle on sand".

If you look back (to say 1993) in the archives you will find cases where competent compiler writers have asked expert users with a record of knowing where skeletons are likely to be buried, "what does the standard mean by X?" and "how do we simultaneously resolve the standard saying X and Y?". That doesn't warm the cockles of my heart.

Reply to
Tom Gardner

The guidelines basically say to use the current reasonably safe subset of C++ rather than the leftover legacy stuff. C++ itself can't get rid of the old stuff because old programs would break, but new code can avoid it straightforwardly.

1) Most (not all) of the guideline recommendations are statically machine checkable, and the guideline discusses that for each item.

2) Critical code even in Ada usually has multi-person code reviews that should be able to spot departures from coding rules.

3) Stuff like excluding the use of certain runtime libraries can be enforced by replacing those libraries with versions that signal errors.

4) The same "amelioration" issues apply to C (MISRA guidelines for acceptable practices), Ada (SPARK subset and profile), etc.

C and C++ these days are completely different languages and Stroustrup likes to say "I take most uses of the compound C/C++ as an indication of ignorance."

formatting link

The C++ FQA sometimes is used to support the view that C++ is more dangerous than C. Having used C++ for a while now, my current subjective impression is that if you follow the core guidelines and use traditional coding/debugging processes, C++ is safer than C, especially with modern (C++11 and later) dialects. Ada is probably safer than either. I definitely choked a little when I heard that the Tesla car and SpaceX rocket are both programmed in C++.

C has better static analysis tools, like Frama-C, that I'd like to try using; that could be considered an advantage of C over C++. I haven't used them yet, though.

Yep. It's still like that. You also can see that type of thing in comp.lang.ada, so it's not C specific.

Reply to
Paul Rubin

C# was designed to compete head-to-head with Java. They did this because Sun's lawsuit forced them to abandon J++.

However, if you take a step back and look at the history, another picture emerges.

Managed C++ (the ancestor of C++/CLI and C#) was released just a few months after Java - proof that it was in development at the same time. When the Java (née Oak) project began, Microsoft was already working on "COM+", a project to unite COM and OLE with the runtimes of their popular managed languages: VisualBasic and FoxPro. When Java appeared, COM+ already supported VisualBasic, FoxPro and Managed C++. After Java appeared, the project morphed into the CLR and the aim became to (eventually) support all the Microsoft languages.

So it's true that Java arrived before C#. It's not true that JVM arrived before (what became) CLR.

That isn't exactly true either. JVM does not directly support tail calls. CLR does directly support tail calls and so better supports languages that require them. F# is an example of this - there is no ML family language for JVM.

C# does not require you to use anything but C#.

The JVM machine is not bad, but I dislike Java the language and I avoid it whenever possible. There are better languages to use if you need to target the JVM. YMMV.

I don't much like C# either and I'm not particularly a fan of Microsoft, but I don't support bashing anyone with innuendo and half-truths.


Reply to
George Neuner

This is apparently a real issue, but I'm not sure why, since tail calls can be compiled into jumps. I wonder how Scala, Clojure, and Frege (all JVM functional languages) deal with it.

Reply to
Paul Rubin


Don't forget that MS already had a reasonably good JVM and could have continued to offer Java, but decided not to on business grounds and out of hubris.

My understanding is that COM+ is mainly a means of bolting together components and passing state (especially transactional state) between them. The components are usually much larger than a class, and can in theory be written in any language provided that the /external/ semantics are preserved.

That's very different in scope and objectives to the JVM.

Translation: something came before the JVM and turned into something else after the JVM arrived. Shrug :)

Most things are an evolution of what went before, and things are continually "repurposed" in the light of changing environment. If you want to go down that route, you'll need to consider Smalltalk, Oak, ANDF and many other initiatives.

Java arrived and was remarkably usable very quickly. Within six months of its arrival, I was able to buy a library that instantly gave me interactive 2D and 3D charts/graphs - something that C++ didn't manage in 10 years!

C# and the CLR came years later, not that that is particularly important.

There are many different languages that compile down to the JVM, with many different characteristics. Many are commercially important.

That was primarily a business/marketing decision, based on the need to support historic code (an MS strength) and to claim ubiquitous applicability (poor, very debatable).

Otherwise it is somewhat true, but the consequence is that you explicitly destroy all the valuable guarantees that the managed environment provides.

That's a bad tradeoff, IMNSHO.

Depends on the problem at hand. IMNSHO, in the absence of any requirements and constraints, Java is the best application language. But people should always identify their requirements and constraints and use a screw, pop-rivet, nail, glue as appropriate.

Reply to
Tom Gardner

ISTR it is to do with the lack of specific JVM instructions. There are plans to add new instructions like "invokedynamic" but they don't seem to get anywhere. (I'm out of touch on the specifics, and what has/hasn't been added)

I suspect their importance is reduced by new features being added to Java that are claimed to give many of the advantages of functional programs. I haven't formed an opinion as to whether such additions are worth it, but fear they might enable people to write even more incomprehensible code.

History shows that languages tend to start out simple, pure and comprehensible, but mutate into a less comprehensible mish-mash that attempts to be all things to all people. Java is not immune to those tendencies.

Reply to
Tom Gardner

Yes, but "legacy" => "sand", and very few programs are completely new code.

Such guidelines are certainly a significant help, but for safety critical work /guarantees/ are valuable. Other languages are a better starting point where guarantees are required.

Of course. You can create a bad/dangerous/stupid system in any language.

Dangerous in an embedded safety-critical application. Avoidance by design and construction is highly preferable. While not completely achievable, other languages get you considerably closer to nirvana.

He's said many things over the decades, of course :)

To me what's important is what the average programmer *doesn't* have to know/worry about. Most want to (and are paid to) drain a swamp, not fend off alligators.

C++ /is/ more dangerous in that it is more poorly specified and more misunderstood by typical programmers (and language implementers, if anecdotes from those involved in such things are to be believed).

In other respects they are equally safe/dangerous. The question then becomes how much damage a typical programmer could inflict knowingly or inadvertently.

Gag :(

And that's a problem: not many people do use them.

Tools like valgrind are used because they uncover very common classes of problem that are completely avoided by using "more advanced" languages and managed environments that give you extra guarantees.

Agreed, but the issue is more widespread and significant in C++ and (to a slightly lesser extent) in C.

Reply to
Tom Gardner

C# and Java (C# is essentially MS/Java) are not suitable for embedded systems, and in a safety-critical environment, very likely with some realtime constraints as well, they are the worst possible choice. In those contexts even C++ (which runs circles around C#/Java performance-wise) would be a suboptimal choice.

Reply to

Surprisingly to those that have a narrow knowledge of computing research and practice, that's wrong.

The core problems are that:

1) lack of GC leads many programmers to be more defensive than is necessary in a managed environment. Typically that manifests itself as "unnecessary copies", which is a real problem since memory is usually the bottleneck nowadays.

2) A C/C++ compiler can only optimise based on what it /guesses/ will happen at runtime; HotSpot optimises what /actually/ happens. If you doubt the efficacy of that, consider the "Dynamo" results. The TL;DR: take optimised C running on processor X, then instrument what actually happens at runtime by running it inside an emulator of X running on X. Naturally that is slow. Then optimise the binary and run it inside the emulator of X running on X. Sometimes the *emulated* C binary is *faster* than the *native* C binary. FYI, see

formatting link

N.B. those that are /very/ interested in low predictable latencies use Java. The "high performance trading" mob are perfectly happy to spend $600m on laying their own trans-Atlantic fibreoptic cable, or buying up and using the old microwave links between Chicago and New York. Why? To shave a few *milliseconds* off the round trip time.

They also like to cast the trading /algorithms/ and network stack in *hardware*, i.e. FPGAs.

If Java was slow, it would be out the door in an instant; instead it is becoming the normal platform.

Reply to
Tom Gardner

With truly safety critical systems (such as nuclear reactors), you throw in as much hardware as required, cost is absolutely not an issue.

In this thread, there has been an attitude of claiming that a specific programming language would "solve" any programming safety issues. In practice, the worst errors are made in the design phase, typically written in plain (English) text.

Note that for safety-critical projects, there is a _huge_ amount of paperwork done before a single line of code is actually written.

For truly safety-critical systems, the programming language used is not really an issue. Even assembly language is quite acceptable, given that huge amount of paperwork.

Reply to

C++ avoids that pretty well these days, with move semantics and smart pointers introduced in C++11. If you haven't used it, give it a try.

Serious compilers support profile-based optimization these days.

It seems to be going out of style in the past few years, though maybe more because of Oracle being annoying than because of technical issues.

Reply to
Paul Rubin

"Pretty well" is like "only a little bit pregnant".

I'm out of touch w.r.t. that. I wonder how effective is the support w.r.t Hotspot technology. For realtime embedded work, neither is particularly attractive, of course.

And for hard realtime work, even hardware caches present significant problems!

Hah! I'm not going to argue about Oracle; arguably they have worse behaviour than Microsoft ;}

What appears to you to be "the new black"?

Reply to
Tom Gardner

While it is /necessary/ to choose the "right" language, it is not /sufficient/ to choose the right language. Anyone that thinks otherwise hasn't thought about the topic!

Reply to
Tom Gardner

Without knowing more of the architecture, it is not uncommon to have a mix of embedded MISRA C on one processor and a higher-level language for a GUI or other network processes on another.

The key is to make sure the safety critical component is compact and well tested and can determine and react correctly to any fault, and possibly even restart the second CPU if it behaves incorrectly.

In short I would like to know more about the system before passing judgement. If as you say the whole system, including any safety critical component, is written in C# then alarm bells would indeed be ringing.

Mike Perkins 
Video Solutions Ltd 
Reply to
Mike Perkins

I just wanted to thank you all for the replies on this. It just confirms the view here and should provide additional evidence for the meeting. Can't say more now, but will try to report back to the group later next week...



Reply to

Thanks Chris, looking forward to hearing how this went... Best Regards, Dave

Reply to
Dave Nadler

Well, it works pretty well in the sense that it gets rid of a lot of cases where you'd have to do it manually. You can of course always still do it manually like before, or you can use shared_ptr which is sort of a poor man's GC. There's also an idea in development for gc'd regions:

formatting link

C++ itself has gc hooks in the language and (independently) the Boehm-Demers gc has been successful in various C and C++ applications. The idea of these systems languages though (C++, Rust, Ada) is to control resources precisely, which is in tension with the concept of gc.

PenguinOfDoom (from irc #haskell) once said something like "being sophisticated gents, we classify programming languages into sucks and doesn't-suck, and put all of them in the first category".

I've never been involved with realtime critical systems (only vicariously interested) but given enough budget I'd consider the commercial Adacore Spark/Ada stuff. I've fooled around a little with the free stuff but for a genuine critical system I'd want their support package.

I don't currently understand where complex HRT software is actually needed. E.g. I don't know squat about the subject, but I imagine something like a flight control system being a fast PID jiggling some actuators towards a desired flight vector (this is critical HRT but fairly simple), wrapped in a slower control loop that adjusts the desired vector at maybe a few Hz, i.e. critical and complex, but non-HRT since it's ok if an update is late by some milliseconds now and then. Or the whole airplane would contain a monstrous amount of code (complex) but a lot of it would be stuff like in-flight entertainment (non-critical).

There's a youtube video of Adacore founder Bob Dewar talking about the F-22 fighter software (written in Ada). He asked whether fighter plane software was safety-critical and the audience laughed. But then he explained that a passenger airliner is obviously safety-critical and so you have to run everything in a very conservative regime. But the purpose of a fighter is to go out and get shot at (unsafe by definition), and they emphasize performance over safety in other parts of the plane (unstable aerodynamics, driving engines to their limits, etc.) so maybe they should think of the software that way too. However, he says, they don't.

Reply to
Paul Rubin
