Absolute addressing on the ARM

Paul Rubin has already posted some links related to your question.

This is an area that has, to a greater or lesser extent, been part of my life for 30+ years.

Here are a few places to start.

An unambiguously defined language. In some ways SubC has that. Ada, Pascal and the like are better than C, and still far from perfect. Implementation-defined behaviour is convenient, but it has many side effects, most of which are unintended.

Focus on goals. The reason for my FDA reference was that they have goals based on product reliability and safety, but also recognize that the tools and the people who use them are far from perfect.

A lot more real research on software engineering and implementation practices. A simple example of this is tools that walk the generated code, applying normal reliability math with an assumed per-instruction reliability based on (pick one) a constant, size in bytes, or execution cycles. This will not give you an MTTF, but that isn't the goal. The goal is to independently evaluate comparative reliabilities: compare two implementations and it will reasonably accurately predict which will run more reliably. We have created and used such tools, and they have had a remarkably positive effect on the way we think about and plan software.
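
A toy sketch of that comparative idea (hypothetical code, not Byte Craft's tool; a real tool walks the actual generated object code):

    -- Weight each generated instruction by one of the suggested
    -- measures (here: size in bytes) and compare the totals of two
    -- implementations; the lower total is predicted to run more
    -- reliably. The numbers are comparative, not an MTTF.
    type Instruction = (String, Int)   -- (mnemonic, size in bytes)

    score :: [Instruction] -> Int
    score = sum . map snd

    moreReliable :: [Instruction] -> [Instruction] -> Ordering
    moreReliable a b = compare (score a) (score b)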

The topic list for software engineering, as opposed to software science, is now quite large. Some of the important things that I see our better customers using are:

- Application design documents.

- RAM, ROM, and execution-cycle budgets

- Formal function interface documentation (independent component for reliability analysis)

- Coding practice documents that support the application goals for a project

That's where I would start.

Walter Banks Byte Craft Limited

Reply to
Walter Banks

Some very bad tools have been used to create some very reliable application code. It has a lot to do with design, coding and testing practices. Focus on final product.

One thing I see is companies that standardize on a particular version of a tool: although the tool has evolved over many years, they keep using the version they are familiar with. The trade-off is that, as a product evolves, they deal with a single set of changes - those related to their product - and not with the side effects of new tool versions as well.

w..

Reply to
Walter Banks

Embedded is a pretty wide space, and in more critical applications there may be prescribed processes that I'd have to follow regardless of my preferences. In less critical applications, memory and realtime constraints can still be determining factors. Another issue is the skill sets of the programmers likely to work on the code. Using DSL's can require a level of PL geekery that a typical embedded developer would probably not have. I could imagine an approach of prototyping using a DSL, then conferring with the customer about whether they were comfortable staying with that approach or wanted a more traditional one. If I were the main consumer of the code, then as a PL geek I'd probably use DSL's rather freely.

I don't see why not, if the target CPU is powerful enough to run it, and there aren't realtime constraints that would preclude garbage collection. I've done embedded stuff in Python which amounts to the same thing.

The two Haskell EDSL's that I mentioned (Atom and ImProve) both compile to C or Ada. I've played with Atom. You write your program in a style that looks like multitasking (plus you get to use Haskell syntax and type safety), and then the Atom library converts it to a C-coded state machine with deterministic timing, that you run through a C compiler to produce target code. ImProve operates about the same way, though its target problem set is a bit different.
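
For flavour, a tiny Atom program might look roughly like this (written from memory of the atom package's combinators, so treat the exact names as assumptions and check the package documentation):

    import Language.Atom

    -- A state variable plus a periodic rule that toggles it; Atom
    -- compiles this into a statically scheduled C state machine.
    blink :: Atom ()
    blink = do
      led <- bool "led" False
      period 100 $ atom "toggle" $ do
        led <== not_ (value led)

    -- compile "blink" defaults blink   -- would emit blink.c/blink.h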

Strictly functional code can have side effects: they just have to be known to the type system. So if some function in your program wants to print something (or call some other function that prints stuff), it has to have "IO" in its type signature. The function output is e.g. a "print command" that gets handed off to the runtime library that does the printing. So the type system lets you segregate the side-effecting code from the "pure" code, which helps write in a style where most of the code is pure and easier to reason about.
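
A minimal sketch of that segregation (hypothetical function names):

    -- Pure: no IO in the type, so the type system guarantees this
    -- function cannot print, read, or otherwise touch the world.
    c2f :: Double -> Double
    c2f c = c * 9 / 5 + 32

    -- Side-effecting: IO appears in the signature, and anything
    -- that calls it must carry IO in its own signature too.
    report :: Double -> IO ()
    report c = putStrLn ("fahrenheit: " ++ show (c2f c))

    main :: IO ()
    main = report 21.5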

Reply to
Paul Rubin

Here's a talk about critical systems experience in Haskell:

(slides)

formatting link

(video)

formatting link

I haven't watched the video and I'm not sure if the slides and the video are from the exact same presentation, but they're both the same guy discussing the same subject matter.

Reply to
Paul Rubin

Try to minimize the total effort involved. Is the investment in tool building and learning (not only your learning, but of all people involved) compensated by the effort saved in using the tool and writing/reading the (much shorter/readable) solution(s)?

favours existing paradigms in an existing language:

- one-time, small problem

- many people need to understand it, especially if they need to understand only a small part, and already know the language/paradigms

- semantic difference between programming language/paradigms and problem domain is small

favours full special-purpose language:

- large recurring problems

- only a few people involved, and they all need to understand large amounts of it

- semantic difference is huge

IME there is a 'sweet spot' in between that is often a good choice: named parameters make complex 'specifications' (more) readable. My professional experience was with Ada, but I have also used Python for this purpose for more private projects.
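
A sketch of the same idea in Haskell (hypothetical names: record syntax plays the role that named association plays in Ada and keyword arguments play in Python):

    -- A hypothetical peripheral 'specification' as a record; the
    -- field names do the work of named parameters.
    data Parity = None | Even | Odd

    data UartConfig = UartConfig
      { baud     :: Int
      , dataBits :: Int
      , parity   :: Parity
      , stopBits :: Int
      }

    defaultUart :: UartConfig
    defaultUart = UartConfig { baud = 9600, dataBits = 8
                             , parity = None, stopBits = 1 }

    -- Record update reads like a call with named parameters:
    debugPort :: UartConfig
    debugPort = defaultUart { baud = 115200, parity = Even }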

Wouter

Reply to
Wouter van Ooijen

Just remember that people can write bad software in any language. A good choice of language and tools makes it easier to write good software and harder to write bad software, but it is no guarantee. And the language and tools are very much secondary to factors such as development methods, testing procedures, reviews, documentation, specifications, and everything else around the actual coding.

But of course there is a strong correlation - a company or organisation that is willing to invest in Ada development rather than C is likely to pay attention to the other parts of the development ecosystem.

Compilers are not bug-free - but bugs in the compiler users' code outweigh the bugs in compilers by many orders of magnitude. Since the end result is only bug-free if the entirety of the user code is bug-free and does not exercise any part of the compiler that contains a bug, it seems obvious that reducing the risk of user error (such as by using a "safer" language like Ada, or by restricting C to a carefully chosen subset) is far more important than worrying about the particular choice of toolchain.

Also, as noted before in this thread, simplicity or complexity of a toolchain is no indication of the quality or the risk of hitting a bug. What /does/ increase your chance of meeting a bug is using odd or complicated structures in your own code - it is not hard to write your C code in a way that significantly reduces the risk of hitting a compiler bug (while simultaneously making the code clearer and easier to write correctly).

Reply to
David Brown

I don't know, but I guess it might be something to do with the user-defined literal syntax in newer C++ (that lets you write things like "2_hours + 45_minutes" by defining a "_hours" operator to convert a number into a time class).

Reply to
David Brown

(Hey, I had that over twenty years ago! But mine are just scale factors, and are not connected to the literal:

2 m + 35 cm, 5 miles + 375 yds and so on (all these are converted to mm, as that was my basic unit).

As I implement it, the "m", "cm" etc don't interfere with the normal namespace. Originally used in an application scripting language, these days I mainly use the feature to be able to write 5 million, 1 billion and so on.)
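
The same effect falls out of ordinary definitions in most languages. For example, a sketch in Haskell with mm as the base unit, though it needs an explicit '*' where BartC's language allows juxtaposition:

    -- Scale factors as plain values in a base unit (mm), so unit
    -- expressions are ordinary arithmetic; nothing touches the
    -- literal and nothing pollutes the namespace beyond these names.
    mm, cm, m :: Int
    mm = 1
    cm = 10 * mm
    m  = 100 * cm

    million, billion :: Integer
    million = 10 ^ 6
    billion = 10 ^ 9

    width :: Int
    width = 2 * m + 35 * cm   -- 2350 (mm)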

But the ' character as separator is not that terrible, at least it's easy to type with no shift needed.

One extra literal connected thing they might look at, but it's probably too late, is the 'bug' where writing leading zeros such as 064 actually gives you the number 52 not 64. Data, especially from external sources, can sometimes have leading zeros. Perhaps 'avoid leading zeros' should be added to the lists that people have posted of how to ensure more bug-free code.

--
Bartc
Reply to
BartC

Those are two points that I think are critical: tools and people.

There are many very good toolsets (i.e. not just compilers) available for standard languages, and appropriate use can significantly improve the end result. Re-creating vaguely equivalent tools for a special-purpose language is an enormous burden, and I've never seen it happen. End result: you spend more time on the nitty-gritty crud that good tools make easy.

If your product is wildly successful then you will need to recruit more people to get the job done. By definition you can't hire people with experience, so you have to spend time and energy training them. Plus it is unlikely that really good people will want to develop their career around something that has no transferable skills.

But the key point is that /usually/ the combination of standard language plus DSLibrary gets /most/ of the benefits of a DSLanguage. Without the disadvantages of a DSLanguage.

In addition, my experience is that a small DSLanguage for limited purposes usually grows "organically" like topsy until nobody understands it any more! ("Organically" is clearly a euphemism, of course). Unfortunately some mainstream standard languages also suffer from that problem :(

Reply to
Tom Gardner

Agreed.

Sounds close to the "executable requirements specification" concept. I have no problems with that.

The main issue becomes ensuring the "executable requirements" are fully implemented in the standard language plus DSLibrary. I know aerospace companies go to considerable trouble to develop/use special-purpose tools to ensure such traceability.

Nagging doubt: the small, limited-purpose DSLanguages I've seen become successful all gradually evolved over the years until they were so large and complex that the designers didn't really understand them fully - let alone the people that used them.

Naturally DSLibraries also tend to have that problem, but at least there should be good tool support for understanding them.

Yeah, I'm getting old w.r.t. processor/memory capabilities. But my embedded projects have all had significant soft or hard real-time or low-power elements to them.

I'd need to glimpse the compelling advantages (vs standard language plus DSLibrary) of that before I invested my time.

When it suits me I become a purist, so I'll claim that's merely hiding the dirty stuff under the carpet, pretending it isn't there. But it is there, so it cannot be strictly functional.

Reply to
Tom Gardner

Just for your interest:

I'd rather have a _ as a separator than ', but it could be worse - someone mentioned another language that uses $.

Octal literals are intentional by design, but I too think that in most programs they are likely to be a mistake. The only common use I know of for them is POSIX file modes. Personally, I would far rather see leading zeros ignored, and 0o64 used for "octal 64", or something akin to Ada, like 8#64. But octal in C is well established - the best we can hope for is optional warning messages added to compilers.
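
For comparison, Haskell already behaves the way suggested above - a leading zero is still decimal, and octal needs an explicit prefix:

    main :: IO ()
    main = do
      print 064    -- prints 64: the leading zero is ignored
      print 0o64   -- prints 52: octal requires the explicit 0o prefix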

Reply to
David Brown

I've not yet seen anyone bring up the languages created for research purposes, which are designed to explore new scenarios for which the current languages would be considered stale or ossified.

A good example would be Wirth's research with the Oberon range of languages.

Simon.

PS: And while I am thinking about Wirth, don't forget that Pascal was originally a small scale specialised language designed only for teaching before some people decided it was unique enough and brought enough new ideas to the table to be developed into a successful commercial language.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

There's not any really strict definition of functional programming that's widely accepted. I'd say Haskell does sweep some stuff under the carpet, but the way it does i/o is purely functional in the sense that the programs are written as state transformers on the external world. If you look closely at the type signature of the "print" function, its input is an abstract data value of type RealWorld (it's actually written that way) and its output is another such value. So given a real world that contains a printer on your desk and a blank sheet of paper in the printer, the function produces a new real world, where the formerly blank paper now has stuff printed on it. This allows various theorems about functional programs to keep working, which is what makes it pure.

In the actual implementation, the i/o operations (such as printing) are done by the runtime environment, which is a separate entity from the (pure) program evaluator. Your program using the "print" function doesn't actually print anything. Instead it computes commands that it returns to the runtime environment, and the runtime environment prints stuff. It's not just a nomenclature thing--it has a fancy mathematical underpinning based on category theory, though programmers don't have to be directly concerned with that.
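
A toy model of that world-passing idea (an illustration only, not GHC's actual machinery, which threads an abstract RealWorld token):

    -- The world modelled as the lines sitting on the printer; every
    -- effect consumes one world and returns a new one, so printing
    -- stays an ordinary function.
    newtype World = World [String]

    printLn :: String -> World -> ((), World)
    printLn s (World out) = ((), World (out ++ [s]))

    -- Sequencing means threading the world through explicitly; IO's
    -- bind operator does exactly this threading for you.
    program :: World -> ((), World)
    program w0 =
      let (_, w1) = printLn "hello" w0
          (_, w2) = printLn "world" w1
      in  ((), w2)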

This is an old but fairly readable explanation:

formatting link

This goes into more (practical) detail:

formatting link

Reply to
Paul Rubin

I could probably make an argument along the lines that most DSLanguages are effectively just that - but for a very narrow domain.

Hardly a domain specific language!

Yes indeed. Somewhere I still have the red/silver (2nd edition?) version of the language definition, probably from c1977.

But it was never intended as a domain specific language either!

Reply to
Tom Gardner

Formalization (in the CompCert style) is still a very specialized topic that's completely outside the skill set of normal real-world embedded developers. Even at places that use it (e.g. Galois Corp.), I get the impression that it's a separate area of responsibility from the day-to-day code resident inside devices.

It's getting to be more accessible. The book "Software Foundations" is pretty readable and has good exercises:

formatting link

I haven't spent much time with it yet, but I want to.

Reply to
Paul Rubin

In many years the only time I've had occasion to deal with octal constants in C or C++ is when an unwanted leading zero snuck in somewhere causing a problem. Usually when a table of numbers was copied from an outside source.

A syntax like 0o123 would have been a far better choice (or an Ada-like "base#number"), but the leading-zero form of octal number specification predates hex ("0x") in C, so getting rid of it is likely to be impossible. It's even propagated into a number of other languages (Java, for example).

Some lints can warn about any octal usage, but I've not noticed that feature on any compiler except as part of MISRA checking (MISRA disallows octal constants and literals), and that's just too painful for most uses.

Reply to
Robert Wessel

Underscore might be a bit better, but the single quote is perfectly workable, and even looks a bit like a comma.

Reply to
Robert Wessel

If your 'vaguely equivalent' predicate is applicable, the semantic distance between the problem domain and the language/paradigm you have available is small. This of course votes heavily against anything specific, because the main advantage of something specific is that it can reduce (the cost associated with) that gap!

Wouter

Reply to
Wouter van Ooijen

Maybe this is less of an issue with embedded DSL's, where you can use the features of the host language as well as the EDSL.

I think it's fine to use gc'd languages for soft real time, in the typical case where deadlines are in the tens of msec and it's ok to miss one now and then. It's possible to keep GC latency at that level without using any fancy and inefficient methods. If you look at the literature on Erlang (which is basically a concurrent Lisp with Prolog-like syntax glued on), soft real time applications were the design target from the beginning. Hard real time applications (microsecond deadlines that must never be missed) are different, of course.

The blurbs for ImProve and Atom at

formatting link
might interest you. ImProve uses an SMT solver to statically verify that your code meets assertions that you specify. I guess SPARK/Ada or some of the tools for the new Ada 2012 design-by-contract stuff does similar things. Atom transmogrifies your program from one that appears to be written with multiple concurrent tasks, into one with a single outer loop that (on a realtime cpu) spends the exact same number of cycles in each iteration regardless of the data. It gets rid of the need for locks, semaphores, etc. while doing all task scheduling statically at compile time.

I found Atom to be reasonably easy to use, given that I was already familiar with Haskell. Haskell's learning curve is notoriously steep though, and it's not really possible to use Atom without at least some Haskell understanding.

Reply to
Paul Rubin

Possibly, but given my experience I'd like to see some!

Agreed, and I've done just that for telecom systems running on a server. Some people didn't believe it was possible even when they saw it running.

I'd have loved the opportunity to use Erlang. I first came across it in the late '80s/early '90s and thought its USPs were relevant and important to its market. But I'd find it a hard sell at board level, given the alternative ways of achieving the same objectives in more "traditional conservative" languages and environments. Shame.

Interesting, but that might be too general purpose to be a DSLanguage!

I'm not sure what a "real-time" CPU is anymore, unless you mean one that doesn't have any I/L caches!

Nearest I've come across are the XMOS processors, where the IDE can tell you the real-time performance before the code is executed.

The learning curve can't be ignored when you have to hire new bodies. Nor the desire that the bodies have to work on something which isn't seen as a "dead end" career path.

Reply to
Tom Gardner
