C++, Ada, ...

I have looked at it, and come back as sane as ever (apart from the pencils up my nose and underpants on my head). But I've worked up to it through many versions of the C standards.

More seriously, if I need to look up any details of C or C++, I find vastly more user-friendly.

Yes - and "better" is usually highly subjective.

In the case of compile-time calculations, modern C++ is certainly better than older C++ versions or C (or, AFAIK, Ada). It can't do everything that I might do with external Python scripts - it can't do code generation, or make tables that depend on multiple source files, or make CRC checks for the final binary. But it can do some things that previously required external scripts, and that's nice.

I don't believe that the surprise was "unpleasant" - it's just something they hadn't considered. (I'm not even sure of that either - my feeling is that this is an urban myth, or at least a story exaggerated in the regular retelling.)

Template-based calculations were certainly very convoluted - they needed a functional-programming-style structure but with much more awkward syntax (and, at the height of their "popularity", horrendous compiler error messages when you made a mistake - that too has improved greatly). And that is why constexpr (especially in the latest versions) is so much better.

Template-based calculations are a bit like trying to do calculations and data structures in LaTeX. It is all possible, but it doesn't roll off the tongue very easily.

(I wonder if anyone else understands the pencil and underpants reference. I am sure Tom does.)

Reply to
David Brown

You don't tell the new guy that he or she must maintain old code! You spring that as a surprise, once you've got them hooked on the quality of your coffee machine.

Indeed.

But code written in the latest fad language is not the worst. I've had to deal with code (for a PC) written in ancient versions of a proprietary "RAD tool" where the vendor will no longer sell the outdated version and the new tool version is not remotely compatible. I'd pick Ada over that any day of the week.

Reply to
David Brown

I was thinking of "unpleasant" /because/ it was a surprise. Bjarne dismissed the possibility before being stunned a couple of days later.

Here's a google (mis)translation of the Erwin Unruh's account at

formatting link

Template metaprogramming

Template metaprogramming is a way to carry out calculations in C++ already during compilation. This allows additional checks to be built in. It is particularly used to build efficient algorithms. In 2000, a dedicated workshop was held on the subject.

This started at the C++ standardization meeting in San Diego in 1994. Here is my personal recollection:

We discussed the possibilities of deducing template arguments from a template. The question came up whether the inverse of a function could be determined - that is, whether from "i + 1 == 5" one could conclude that "i == 4". This was denied, but the question inspired me with the idea of calculating prime numbers during compilation. I crafted the first version on Monday, but it was fundamentally wrong. Bjarne Stroustrup said something like that could not work in principle. That spurred my zeal, and so I finished the scaffolding of the program on Wednesday afternoon. On Wednesday evening another working session was scheduled where I had some free time. There I met Tom Pennello, and we got together. He had his notebook with him, and we quickly typed in my program. After some tinkering, the program ran. We did a run and printed the program and its error messages. Then Tom had the idea of trying a more complicated function. We chose the Ackermann function. After a few hours, this also ran and computed the value of the Ackermann function during compilation. On Thursday I showed the result to Bjarne. He was extremely stunned. I then made copies for all participants and officially distributed this curious program. I regarded the whole thing as a joke.

A few weeks later, I developed a proof that the template mechanism is Turing-complete. However, since this proof was quite dry, I simply filed it away. I still have the notes. When I get the chance, I will write it up and make it available here.

Later, Todd Veldhuizen picked up the idea and published an article in the C++ Report. It appeared in May 1995. He understood the possibilities behind the idea and turned them into concrete metaprograms that do something constructive. This article was the foundation on which template metaprogramming was built. I gave the kick-off, but did not recognize the reach of the idea.

Erwin Unruh, 1. 1. 2002

Being perverse can be fun, /provided/ it doesn't happen accidentally in everyday life.

Reply to
Tom Gardner

Yes, you must inherit from one of the types Ada.Finalization.Controlled or Ada.Finalization.Limited_Controlled when you create a type for which you can program an initializer and/or a finalizer.

However, you can aggregate a component of such a type into some other composite type, and then that component's initializer and finalizer will be called automatically when any object of the containing composite type is constructed and destroyed.

No, and yes. Subobjects (components) are automatically initialized before the composite is initialized (bottom-up), and are automatically finalized after the composite is finalized (top-down). But there is no automatic invocation of the initializer or finalizer of the parent class; that would have to be called explicitly (except in the case of an "extension aggregate" expression, where an object of the parent type is created and then extended to an object of the derived class).

The Ada initializer and finalizer concept is subtly different from the C++ constructor and destructor concept. In Ada, the construction and destruction are considered to happen implicitly and automatically. The construction step can assign some initial values that can be defined by default (pointers default to null, for example) or can be specified for the type of the component in question, or can be defined for that component explicitly. For example:

type Down_Counter is range 0 .. 100 with Default_Value => 100;

type Zero_Handler is access procedure;

type Counter is record

   Running : Boolean := False;  -- Explicit init.
   At_Zero : Zero_Handler;      -- Default init to null.
end record;

Beyond that automatic construction step, the programmable initializer is used to perform further automatic activities that may further initialize the object, or may have some other effects. For example, we might want to automatically register every instance of a Counter (as above) with the kernel, and that would be done in the initializer. Conversely, the finalizer would then deregister the Counter, before the Counter is automatically destroyed (removed from the stack or from the heap).

So the Ada "initializer" is not like a C++ constructor, which in Ada corresponds more closely to a function returning an object of the class.

An Ada "finalizer" is more similar to a C++ destructor, taking care of any clean-up that is needed before the object disappears.

I won't try to write Ada equivalents of the above :-) though I have of course written much Ada code to manage and handle interrupts.

Here is the same in Ada. I chose to derive from Limited_Controlled because that makes it illegal to assign a Critical_Section value from one object to another.

-- Declaration of the type:

type Critical_Section is new Ada.Finalization.Limited_Controlled with record
   old_pri : Interfaces.Unsigned_32;
end record;

overriding procedure Initialize (This : in out Critical_Section);
overriding procedure Finalize   (This : in out Critical_Section);

-- Implementation of the operations:

procedure Initialize (This : in out Critical_Section) is
begin
   This.old_pri := disableGlobalInterrupts;
end Initialize;

procedure Finalize (This : in out Critical_Section) is
begin
   restoreGlobalInterrupts (This.old_pri);
end Finalize;

function Compare_and_Swap64 (
   p      : access Interfaces.Unsigned_64;
   old, x : in     Interfaces.Unsigned_64)
   return Boolean
is
   Lock : Critical_Section;
begin
   if p.all /= old then
      return False;
   else
      p.all := x;
      return True;
   end if;
end Compare_and_Swap64;

(I think there should be a "volatile" spec for the "p" object, don't you?)

See above. I don't have an Ada-to-Cortex-M compiler at hand to compare the target code, sorry.

But critical sections in Ada applications are more often written using the Ada "protected object" feature. Here is the same as a protected object "CS", with separate declaration and body as usual in Ada. Here I must write the operation as a procedure instead of a function, because protected objects have "single writer, multiple readers" semantics, and any function is considered a "reader" although it may have side effects:

protected CS with Priority => System.Interrupt_Priority'Last is

procedure Compare_and_Swap64 (
   p      : access Interfaces.Unsigned_64;
   old, x : in     Interfaces.Unsigned_64;
   result : out    Boolean);

end CS;

protected body CS is

procedure Compare_and_Swap64 (
   p      : access Interfaces.Unsigned_64;
   old, x : in     Interfaces.Unsigned_64;
   result : out    Boolean)
is
begin
   result := p.all = old;
   if result then
      p.all := x;
   end if;
end Compare_and_Swap64;

end CS;

However, it would be more in the style of Ada to focus on the thing that is being "compared and swapped", so that "p" would be either a discriminant of the protected object, or a component of the protected object, instead of a parameter to the copy-and-swap operation. But it would look similar to the above.

Ada does not have programmable implicit conversions, but one can override some innocuous operator, usually "+", to perform whatever conversions one wants. For example:

function "+" (Item : Boolean) return Float is (if Item then 1.0 else 0.0);

or more directly

function "+" (Item : Boolean) return Float is (Float (Boolean'Pos (Item)));

Ada also has some warts, but perhaps not as easily illustrated.

Reply to
Niklas Holsti

If the company trained only one person in the language, that was a stupid (risky) decision by the company, or they should not have started using that language at all.

During all my years (since about 1995) working on on-board SW for ESA spacecraft, the company hired one person with earlier experience in Ada, and that was I. All other hires working in Ada projects learned Ada on the job (and some became enthusiasts).

Sadly, some of the large aerospace "prime" companies in Europe are becoming unwilling to accept subcontracted SW products in Ada, for the reason discussed: because their HR departments say that they cannot find programmers trained in Ada. Bah, a competent programmer will pick up the core concepts quickly, says I.

Of course there are also training companies that offer Ada training courses.

Reply to
Niklas Holsti

This is valid not just for Ada. An experienced programmer will need days to adjust to this or that language. I guess most if not all of us have been through it.

Dimiter

======================================================
Dimiter Popoff, TGI
formatting link
======================================================

Reply to
Dimiter_Popoff

OK. Am I right in assuming the subobjects here also need to inherit from the "Finalization" types individually, in order to be automatically initialised in order?

Are there any overheads (other than in the source code) for all this inheriting? Ada (like C++) aims to be minimal overhead, AFAIUI, but it's worth checking.

C++ gives you the choice. You can do work in a constructor, or you can leave it as a minimal (often automatically generated) function. You can give default values to members. You can add "initialise" member functions as you like. You can have "factory functions" that generate instances. This lets you structure the code and split up functionality in whatever way suits your requirements.

For a well-structured class, the key point is that a constructor will always establish the class invariant. Any publicly accessible function will assume that invariant, and maintain it. Private functions might temporarily break the invariant - these are only accessible by code that "knows what it is doing". And the destructor will always clean up after the object, recycling any resources used.

Having C++-style constructors is not a requirement for having control of the class invariant, but they do make it more convenient and more efficient (both at run-time and in the source code) than separate minimal constructors (or default values) and initialisers.

I think Ada has built-in (or standard library) support for critical sections, does it not? But this is just an example, not necessarily something that would be directly useful. Obviously the code above is highly target-specific.

Are "Initialize" and "Finalize" overloaded global procedures, or is this the syntax always used for member functions?

It might be logical to make it volatile, but the code would not be different (the inline assembly has memory clobbers already, which force the memory accesses to be carried out without re-arrangements). But adding "volatile" would do no harm, and let the user of the function pass a volatile pointer.

The Ada and C++ code is basically the same here, which is nice.

How would it look with block scope?

extern int bar(int x);

int foo(volatile int * p, int x, int y, int z)
{
    int u = bar(x);
    {
        CriticalSectionLock lock;
        *p += z;
    }
    int v = bar(y);
    return v;
}

The point of this example is that the "*p += z;" line should be within the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but the calls to "bar" should be outside. This requires the lifetime of the lock variable to be more limited.

godbolt.org has Ada and gnat 10.2 too, but only for x86-64. The enable/restore interrupt functions could be changed to simply reading and writing a volatile int. Then you could compare the outputs of Ada and C++ for x86-64. If you have the time and inclination, it might be fun to see.

I think the idea of language support for protected sections is nice, but I'd be concerned about how efficiently it would map to the requirements of the program and the target. Such things are often a bit "brute force", because they have to emphasise "always works" over efficiency. For example, if you have a 64-bit atomic type (on a 32-bit device), you don't /always/ need to disable interrupts around it. If you are already in an interrupt routine and know that no higher priority interrupt accesses the data, no locking is needed. If you are in main thread code but only read the data, maybe repeatedly reading it until you get two reads with the same value is more efficient. Such shortcuts must, of course, be used with care.

In C and C++, there are atomic types (since C11/C++11). They require library support for different targets, which are (unfortunately) not always good. But certainly it is common in C++ to think of an atomic type here rather than atomic access functions, just as you describe in Ada.

A language without warts would be boring!

Thank you for the insights and updates to my Ada knowledge.

Reply to
David Brown

I agree with you there. But so many people learn "programming in C++" or "programming in Java", rather than learning "programming".

Reply to
David Brown

I just wonder if templates and C++ would be valid for the IOCCC contest. The example would be a good candidate.

--

-TV
Reply to
Tauno Voipio

I am sure that if the IOCCC were open to C++ entries, templates would be involved.

But that particular example is not hard to follow IMHO. The style is more like functional programming, with recursion and pattern matching rather than loops and conditionals, so that might make it difficult to understand at first.

Reply to
David Brown

Yes, if they need more initialization than provided by the automatic "construction" step (Default_Value etc.)

If the compiler sees the actual type of an object that needs finalization (as in the critical-section example) it can generate direct calls to Initialize and Finalize without dispatching. If the object is polymorphic (what in Ada is called a "class") the calls must go through a dispatch table according to the actual type of the object.

So, just as in Ada.

In Ada one can write the preconditions, invariants and postconditions in the source code itself, with standard "aspects" and Ada Boolean expressions, and have them either checked at run-time or verified by static analysis/proof.

Yes, "protected objects". See below.

They are operations ("member functions") of the Ada.Finalization.Limited_Controlled type, that are null (do nothing) for that (base) type, and here we override them for the derived Critical_Section type, to replace the inherited null operations.

The "overriding" keyword is optional (an unfortunate wart from history).

So you are relying on the C++ compiler actually respecting the "inline" directive? Are C++ compilers required to do that?

(Personally I dislike this style of critical sections. I think it is a confusing mis-use of local variables. Its only merit is that it ensures that the finalization occurs even in the case of an exception or other abnormal exit from the critical section.)

Much as you would expect; the block is

declare
   Lock : Critical_Section;
begin
   p.all := p.all + z;
end;

(In the next Ada standard -- probably Ada 2022 -- one can write such updating assignments more briefly, as

p.all := @ + z;

but the '@' can be anywhere in the right-hand-side expression, in one or more places, which is more flexible than the C/C++ combined assignment-operations like "+=".)

I haven't had any problems so far. Though in some highly stressed real-time applications I have resorted to communicating through shared atomic variables with lock-free protocols.

Interrupt handlers in Ada are written as procedures in protected objects, with the protected object given the appropriate priority. Other operations in that same protected object can then be executed in automatic mutual exclusion with the interrupt handler. The protected object can also provide one or more "entry" operations with Boolean "guards" on which tasks can wait, for example to wait for an interrupt to have occurred. I find this works very well in practice.

Sure.

The next Ada standard includes several generic atomic operations on typed objects in the standard package System.Atomic_Operations and its child packages. The proposal is at

formatting link

Follow the "Next" arrows to see all of it.

In summary, it seems to me that the main difference we have found in this discussion, so far, between Ada and C++ services in the areas we have looked at, is that Ada makes it simpler to define one's own scalar types, while C++ has more compile-time programmability like constexpr functions.

Reply to
Niklas Holsti

What do you suggest for a poor C embedded developer that wants to try C++ on the next project?

I would use gcc on Cortex-M MCUs.

Reply to
pozz

I'm not sure what kind of answer you are looking for, but I recommend the book "Effective Modern C++" by Scott Meyers. C++ is a mudball with many layers of cruft, that improved tremendously over the past few revisions. The book shows you how to do things the right way, using the improvements instead of the cruft.

Reply to
Paul Rubin

Choose a /very/ small project, and try to Get It Right (TM).

When you think there might be a better set of implementation cliches and design strategies, refactor bits of your code to investigate them.

Don't forget to use your favourite IDE to do the mechanics of that refactoring.

Reply to
Tom Gardner

Wow, that is disappointing. I had thought Ada access types were like C++ or ML references, i.e. they have to be initialized to point to a valid object, so they can never be null. I trust or at least hope that SPARK has good ways to ensure that a given access variable is always valid.

Debugging null pointer exceptions is a standard time-waster in most languages that have null pointers. Is it that way in Ada as well?

Fwiw, I'm not a C++ expert but I do use it. I try to write in a style that avoids pointers (e.g. by using references instead), and still find myself debugging stuff with invalid addresses that I think wouldn't happen in Ada. But I've never used Ada beyond some minor playing around. It seems like a big improvement over C. C++ it seems to me also improves on C, but by going in a different direction than Ada.

I plan to spend some time on Rust pretty soon. This is based on impression rather than experience, but ISTM that a lot of Rust is designed around managing dynamic memory allocation by ownership tracking, like C++ unique_ptr on steroids built into the language. That lets you write big applications that heavily use dynamic allocation while avoiding the usual malloc/free bugs and without using garbage collection. Ada on the other hand is built for high assurance embedded applications that don't use dynamic allocation much, except maybe at program initialization time. So Rust and Ada aim to solve different problems.

I like to think it is reasonable to write the outer layers of complex applications in garbage collected languages, with critical or realtime parts written in something like Ada. Tim Sweeney talks about this in his old slide deck "The Next Mainstream Programming Language":

formatting link

The above url currently throws an expired-certificate warning but it is ok to click past that.

Reply to
Paul Rubin

No it's much worse than that. First of all some languages are really different and take considerable conceptual adjustment: it took me quite a while as a C and Python programmer to become anywhere near clueful about Haskell. But understanding Haskell then demystified parts of C++ that had made no sense to me at all.

Secondly, being competent in a language now means far more than the language itself. There is also a culture and a code corpus out there which also have to be assimilated for each language. E.g. Ruby is a very simple language, but coming up to speed as a Ruby developer means getting used to a decade of Rails hacks, ORM internals, 100's of "gems" (packages) scattered over 100s of Github repositories, etc. It's the same way with Javascript and the NPM universe plus whatever framework-of-the-week your project is using. Python is not yet that bad, because it traditionally had a "batteries included" ethic that tried to standardize more useful functions than other languages did, but it seems to have given up on that in the past few years.

Maybe things are better in the embedded world, but in the internet world any significant application will have far too much internal functionality (dealing with complex network protocols, file formats, etc) for the developers to get anything done without bringing in a mass of external dependencies. A lot of dev work ISTM now is about understanding and managing those dependencies, and also in connecting to a wider dev community that you can exchange wisdom with. "Computer science", such as knowing how to balance binary trees, is now almost a useless subject. (On the other hand, math in general, particularly probability, has become a lot more useful. This is kind of satisfying for me since I studied a lot of it in school and then for many years never used it in programming.)

Reply to
Paul Rubin

Yes, I like that for Ada. These are on the drawing board for C++, but it will be a while yet before they are in place.

No, it is not relying on the "inline" at all - it is relying on the semantics of the inline assembly code (which is compiler-specific, though several major compilers support the gcc inline assembly syntax).

Compilers are required to support "inline" correctly, of course - but the keyword doesn't actually mean "generate this code inside the calling function". It is one of these historical oddities - it was originally conceived as a hint to the compiler for optimisation purposes, but what it /actually/ means is roughly "It's okay for there to be multiple definitions of this function in the program - I promise they will all do the same thing, so I don't mind which you use in any given case".

The compiler is likely to generate the code inline as part of normal optimisation, but it would do that anyway.

Fair enough - styles are personal things. And they are heavily influenced by what is convenient or idiomatic in the language(s) we commonly use (and vice versa).

Fair enough. (I expected there was some way to have smaller block-scope variables in Ada, though I didn't know how to write them. And it is not a given that the scope and the lifetime would be the same, though it looks like it is the case here.)

As a matter of style, I really do not like the "declare all variables at the start of the block" style, standard in Pascal, C90 (or older), badly written (IMHO) newer C, and apparently also Ada. I much prefer to avoid defining variables until I know what value they should hold, at least initially. Amongst other things, it means I can be much more generous about declaring them as "const", there are almost no risks of using uninitialised data, and the smaller scope means it is easier to see all use of the variable.

It may be flexible, but I'm not convinced it is clearer nor that it would often be useful. But I guess that will be highly related to familiarity.

These are certainly example differences. In general it would appear that most things that can be written in one language could be written in the other in roughly the same way (given appropriate libraries or type definitions).

And we can probably agree that both are better than plain old C in terms of expressibility and (in the right hands) writing safer code by reducing tedious and error-prone manual boilerplate.

Reply to
David Brown

I'm not entirely sure what you are asking - "gcc on Cortex-M" is, I would say, the right answer if you are asking about tools.

Go straight to a new C++ standard - C++17. (If you see anything that mentions C++98 or C++03, run away - it is pointless unless you have to maintain old code.) Lots of things got a lot easier in C++11, and have improved since. Unfortunately the law of backwards compatibility means old cruft still has to work, and is still there in the language. But that doesn't mean you have to use it.

Go straight to a newer gcc - gcc 10 from GNU Arm Embedded. The error messages are much better (or at least less horrendous), and the static checking is better. Be generous with your warnings, and use a good IDE with syntax highlighting and basic checking in the editor.

Disable exceptions and RTTI (-fno-exceptions -fno-rtti), and enable optimisation. C++ (used well) results in massive and incomprehensible assembly unless you have at least -O1.

Don't try and learn everything at once. Some things, like rvalue references, are hard and rarely useful unless you are writing serious template libraries. There are many features of C++ that are needed to write libraries rather than use them.

Don't be afraid of templates - they are great.

Be wary of the bigger parts of the C++ standard library - std::vector and std::unordered_map are very nice for PC programming, but are far too dynamic for small systems embedded programming. (std::array, however, is extremely useful. And I like std::optional.)

Think about how the code might be implemented - if it seems that a feature or class could be implemented in reasonably efficient object code on a Cortex-M, then it probably will be. If it looks like it will need dynamic memory, it probably does.

Big class inheritance hierarchies, especially with multiple inheritance, virtual functions, etc., are old-fashioned. Where you can, use compile-time (static) polymorphism rather than run-time polymorphism. That means templates, overloaded functions, CRTP, etc.

Keep handy. Same goes for .

And keep smiling! [](){}(); (That's the C++11 smiley - when you understand what it means, you're laughing!)

Reply to
David Brown

I mentioned the tools just as a starting point. I don't know almost anything about C++, but I have coded C for many years. I think there are some precautions to take when learning C++ after C, compared with learning C++ as a first language.

Dynamic memory... is it possible to have a C++ project without using heap at all?

Reply to
pozz

You can impose that constraint if you want to: if I had defined

type Zero_Handler is not null access procedure;

then the above declaration of the At_Zero component would be illegal, and the compiler would insist on a non-null initial value.

But IMO sometimes you need pointers that can be null, just as you sometimes need null values in a database.

There are also other means in Ada to force the explicit initialization of objects at declaration (the "unspecified discriminants" method).

The state of the art in Ada implementations of critical systems is slowly moving towards using static analysis and proof tools to verify that no run-time check failures, such as accessing a null pointer, can happen. That is already fairly easy to do with the AdaCore tools (CodePeer, SPARK and others). Proving functional correctness still remains hard.

There is a proposal and an implementation from AdaCore to augment Ada pointers with an "ownership" concept, as in Rust. I believe that SPARK supports that proposal. Again, in some cases you want shared ownership (multiple pointers to the same object), and then the Rust ownership concept is not enough, as I understand it (not expert at all).

Reply to
Niklas Holsti
