It has become more popular recently (I've used it for internet stuff) though it's kind of old-fashioned by today's standards.
No, it's a very special-purpose, hard real-time language. For example, it has no "if" statement that decides which of two code branches to execute. Instead it has "mux", as in x = mux(condition, expression1, expression2),
which means both branches are always evaluated (so the time spent is the same regardless of the condition) and the condition just selects which value to use.
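Roughly this idea, sketched in C (this is just an illustration of the semantics, not Atom's syntax or its generated code):

```c
#include <stdint.h>

/* Branchless mux: both argument expressions have already been
   evaluated by the time mux is called, and the condition only selects
   between the two values.  The bitmask trick avoids a data-dependent
   branch entirely, so the selection itself is constant-time too. */
static uint32_t mux(uint32_t cond, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)-(cond != 0);  /* all-ones if cond, else 0 */
    return (a & mask) | (b & ~mask);
}
```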
Yes, that's what I meant. Like the PRUs in the BeagleBone Black. They have no cache and no pipeline, so every instruction takes a constant number of cycles. You can use them for things like motor control, high-speed bit-banging protocols, etc.
Haskell isn't a dead end at all, it's tremendously empowering even if you don't use it for anything "real". Using it is a sign of a geeky and flexible programmer, which is a good thing from a hiring perspective. Lisp used to be like that too, though nowadays it may signal crustiness.
I don't think Atom's "state transformation" rules are like tasks. They are periodically polled guarded statements. They do not maintain any local control-flow state but execute as atomic actions when polled and enabled.
But that is not a critique of Atom, just a note on what it is AIUI.
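A minimal sketch of that polling model in C (the rules here are hypothetical, not anything Atom actually generates): each rule is a guard plus an atomic body, and one poll fires every enabled rule.

```c
#include <stdbool.h>

/* Hypothetical guarded-rule model: on each poll, every rule whose
   guard holds runs its body as one atomic action.  No rule keeps any
   control-flow state between polls -- all state lives in variables. */
static int level = 0;
static bool valve_open = false;

static bool guard_open(void)  { return level > 10; }
static void body_open(void)   { valve_open = true; }
static bool guard_drain(void) { return valve_open; }
static void body_drain(void)  { level -= 1; }

struct rule { bool (*guard)(void); void (*body)(void); };
static const struct rule rules[] = {
    { guard_open,  body_open  },
    { guard_drain, body_drain },
};

static void poll_once(void)
{
    for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (rules[i].guard())
            rules[i].body();
}
```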
I sort of doubt that any "exact same number of cycles" follows from Atom's C generation. From the video presentation, Atom somehow estimates (or gets as user input) how long each state-transformation takes (I assume, in the worst case) and then distributes the transformation polling into a major/minor frame static schedule to avoid overrunning any minor frame, even if all transformations polled in that frame actually execute. Good enough.
But these state-transformations contain guards, and most probably other conditionals and varying execution paths, which means that their execution times vary, on top of any execution-time variation due to the processor's caches and other hardware effects.
As I understand it, the single outer loop (main loop), which invokes the C functions generated for each minor frame, determines the actual timing. This loop has to contain some timers or delay functions to implement the required minor and major frame periods. The Atom scheduler just ensures that no minor frame overruns, assuming that Atom's idea of the WCET of each transformation is safe.
That is a general property of static scheduling, and of other coroutine-like task-switching systems with a single real thread. Good enough, again.
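The overall shape of such a main loop, sketched in C (all names here are made up for illustration; Atom's actual generated code will differ, and a real wait_for_minor_tick would block on a hardware timer):

```c
/* Cyclic-executive sketch: one major frame = MINOR_FRAMES minor
   frames.  The transformations the scheduler assigned to minor frame
   i run inside that frame's slot and must finish before the next
   tick; the tick wait pads each frame out to its full period.  The
   tick is a stub here so the structure is visible and testable. */
#define MINOR_FRAMES 4

static int runs[MINOR_FRAMES];           /* stand-in for real work */

static void wait_for_minor_tick(void) {} /* stub: real code blocks on a timer */

static void run_major_frame(void)        /* real code calls this forever */
{
    for (int i = 0; i < MINOR_FRAMES; i++) {
        wait_for_minor_tick();  /* start of the minor-frame period */
        runs[i]++;              /* minor frame i's transformations
                                   would execute here */
    }
}
```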
This really calls for some feedback into Atom of the WCETs measured or analysed for each state-transformation function. AbsInt has implemented such integration for some other synchronous languages and real-time SW design tools (see
The Atom presentation didn't explain how Atom gets its notion of the WCET of a state-transformation action.
I'm curious, did you use Atom for a real application? How did you find the WCETs for the state-transformation actions in this application?
Peter Puschner (TU Vienna) and colleagues have been proposing and exploring such "single-path programming" principles for several years. It works well if you do it on assembly level and if your processor has some special features such as a conditional store operation which updates the cache mapping in the same way whether the store is performed or not.
(If you put the code quoted above into a normal C compiler, the compiler's optimizations are quite likely to transform the code back into the standard conditional form where only one of the expressions is evaluated, depending on the condition, because this reduces the average execution time. You would have to make x1 and x2 "volatile".)
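Something like this, for instance (a guess at the shape of the problem, not Atom's actual output):

```c
#include <stdint.h>

/* Without volatile, an optimizing compiler may turn this back into an
   ordinary branch and evaluate only the chosen expression.  Making
   the temporaries volatile forces both evaluations to actually occur,
   preserving the "both branches always execute" timing property. */
int32_t mux_both_sides(int cond, int32_t a, int32_t b)
{
    volatile int32_t x1 = a * a + 1;   /* expression1: always computed */
    volatile int32_t x2 = b - 7;       /* expression2: always computed */
    return cond ? x1 : x2;
}
```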
Yes, that looks like a good target for single-path programming.
Atom's target applications were things like the multi-kilowatt hydraulic rams on garbage trucks. I doubt they cared if the microprocessor used an extra few mA.
The PRUs don't use the ARM architecture. Their instruction set is weird, annoying, and somewhat minimalistic. I suspect the PRU core was some macro cell TI had lurking in its library from an earlier project, so they tossed a couple of them onto the die alongside the big A8 core. I'm sure the PRUs are tiny by comparison.
Hmm, ok. I'm not sure the difference is significant, though, since you can't launch new ones (so any state can just be kept in variables). Mostly I described it as task-like as a matter of style, in the sense that you don't have to explicitly think about the state transitions all the time, and also because Atom's own blurbs describe it that way.
It would require a CPU with deterministic timing, i.e. no cache, no pipeline, etc. But CPUs like that do get made.
Yes, ok. That's a reasonable point, since any I/O has to happen there too, and that could potentially have variable timing. It's been a while since I messed with it.
So far I've only played with some toy examples on my laptop, and that was a few years ago. But I think I got a reasonable feel for it at the time.
The idea was just that on a deterministic processor, the transformations would run in constant time. For the applications I had in mind, observing this with a scope would have been good enough.
Yeah, I don't remember if the Atom output code did that or not (I expect it did). I think it wouldn't be too hard to modify Atom to generate assembly code directly though, and that would give more assurance about what the code was doing.
I think the idea is to use a cache-less processor.
And also require the single-path programming style (which I didn't see mentioned in the Atom presentations). But yes, this should bring it close to constant time. (In fact many pipelines are deterministic too, and such processors could be used.)