Software Architecture comparison, comments and suggestions...

Hi Niklas. Are you doing a lot of multicore stuff? I haven't had the pleasure yet, and that might be why we're missing each other. Multicore is certainly different. The systems I've worked on also used a minimal number of threads - usually having an additional thread meant we had a different interrupting device to manage.

I certainly appreciate your very well presented thoughts. "Highly granular" processing has been something of a deep assumption for a long time, and those are always good to challenge.

And it could be that I was simply corrupted by FPGA designers :)

Niklas Holsti wrote:

I should have left off "and waits".

Sure! I'm mainly thinking of how things are described in most academic literature in order to be as general as possible. Lots of ways to skin that cat.

Somewhat.

I think that is true. I'm not sure what to do about that, either :)

It has to get in, do a small task, then get out at each point in its state. "Circumspect" means "parsimonious" or "cheap" in this case - it must use the least CPU necessary to execute that state transition, and get back to a blocking call as quickly as it can.
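As a sketch of the "get in, do a small task, get out" style (my own toy illustration - the device, frame format, and names are assumptions, not from any system discussed here), each call performs one cheap state transition and returns immediately so the caller can get back to its blocking wait:

```c
/* Hypothetical byte-framed receiver: each call does one small,
   bounded piece of work and returns at once. */
typedef enum { IDLE, HEADER, PAYLOAD, DONE } RxState;

typedef struct {
    RxState state;
    int     count;   /* payload bytes remaining */
} RxMachine;

/* Handle one byte: one cheap transition, then return to the caller. */
void rx_step(RxMachine *m, unsigned char byte)
{
    switch (m->state) {
    case IDLE:
        if (byte == 0x7E) m->state = HEADER;     /* frame marker */
        break;
    case HEADER:
        m->count = byte;                          /* payload length */
        m->state = (m->count > 0) ? PAYLOAD : DONE;
        break;
    case PAYLOAD:
        if (--m->count == 0) m->state = DONE;     /* consume payload */
        break;
    case DONE:
        break;                                    /* wait for reset */
    }
}
```

Every path through `rx_step` is a few instructions, so the worst-case time spent away from the blocking call is tiny and easy to bound.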

Not universally; no. One realtime deadline may require many time quanta - a given thread may execute many times within a single deadline time period.

I think you are assuming a one-to-one map between *all* responses and that time quantum. So no. A single response may require multiple time quanta, still.

They varied. I don't know of a good working distinction between soft and hard realtime, so I can't speak to the last thing.

It's possible that what I mean lines up with that. Only a few had enforced time budgets. The reason I brought that up was that the failure modes were gentler - you got slow degradation of response rather than falling off the cliff.

That, of course, depends on what's desired of the system to start with...

That's the general idea. The overall idea is really to make a "loop" into a series (or cycle) of state transitions rather than use control constructs.
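A minimal sketch of what "a loop as state transitions" looks like (my own illustration, with made-up names and a made-up bound): the loop index becomes part of the state, and each call does one iteration before returning control:

```c
/* Instead of
       for (i = 0; i < N; i++) work(i);
   holding the CPU, make the index part of the state and do one
   iteration per call, returning between iterations. */
#define N 100

typedef struct {
    int i;      /* loop index, carried as state */
    int done;   /* set when the "loop" has completed */
} LoopState;

void loop_step(LoopState *s)
{
    if (s->i < N) {
        /* work(s->i) would go here - one small piece */
        s->i++;
    } else {
        s->done = 1;   /* transition to the "finished" state */
    }
}
```

Between any two calls to `loop_step`, the dispatcher is free to service other state machines, which is the whole point of avoiding the blocking control construct.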

Ick. What I mean really doesn't hurt that bad :) These systems didn't use a large number of threads.

My use of the paradigm precedes any of the object gurus, IMO. OO hadn't quite propagated to realtime in the '80s in a serious way. I have since used things like ObjecTime, Rose and Rhapsody, but we'd done things like this with nothing but a 'C' compiler on bare metal before.

Some of those things had hundreds of states (which may be what you are saying is the horror of it), but I did not see that as a curse. We were able to log events and state for testing, and never had a defect that wasn't 100% reproducible because of it...

I, unfortunately, don't really know what that means.

If for any case, any of that is true, then yes :) There's no crime in using whatever works.

In summary, though, my statement stands:

I do not see how having the system timer tick swap out a running thread improves the reliability or determinacy of a system, nor how it makes the design of a system easier.

-- Les Cargill


No... I have to admit that these days I don't do much application development at all, I mostly work on timing analysis tools.

But recently I have been involved peripherally with a couple of applications (satellite on-board SW), one with code generated from event-driven state-charts, the other using a minor/major-frame static, non-preemptive schedule. The latter system has lots of artificial splitting of large jobs into small pieces, giving the complex code that I have been warning about in this discussion.

Maybe. On the other hand, it is commonly held that multi-threaded, pre-emptive SW can run on (symmetric) multi-core machines with no changes, and make good use of the cores, since the SW is already prepared for threads to run concurrently, and is thus prepared for the real parallelism of a multi-core system.

If you use run-to-completion event-handling, you can run several state-machines concurrently within one thread, by interleaving their transitions and using a single, shared queue for incoming events to all these state-machines. For example, Rhapsody-in-C translates state-charts to this kind of C code. If that works with sufficient real-time performance, you can get by with a small number of threads.
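A minimal sketch of that pattern (this is my own illustration of the idea, not Rhapsody's actual generated code - the queue size, event layout, and dispatch are all assumptions): several machines share one event queue and one thread, and each event is handled run-to-completion before the next is popped.

```c
/* One shared queue of events, each tagged with its target machine.
   A single thread drains the queue, interleaving the machines'
   transitions. No overflow handling - this is a toy. */
typedef struct { int target; int payload; } Event;

#define QSIZE 16
typedef struct {
    Event buf[QSIZE];
    int head, tail;
} Queue;

void q_push(Queue *q, Event e)
{
    q->buf[q->tail] = e;
    q->tail = (q->tail + 1) % QSIZE;
}

int q_pop(Queue *q, Event *e)
{
    if (q->head == q->tail) return 0;   /* queue empty */
    *e = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    return 1;
}

typedef struct { int state; } Machine;

/* Stand-in for a real transition table: run to completion, no blocking. */
void machine_dispatch(Machine *m, const Event *e)
{
    m->state += e->payload;
}

/* The single-thread event loop: one event, one complete transition. */
void run_until_empty(Queue *q, Machine *machines)
{
    Event e;
    while (q_pop(q, &e))
        machine_dispatch(&machines[e.target], &e);
}
```

Because each `machine_dispatch` runs to completion, no machine's state is ever observed mid-transition by another, which is what makes the single-thread interleaving safe.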

Niklas:

Les:

What you say is all qualitative and not quantitative. Goals like "as quickly as it can" are typical of soft real-time systems (and non-real-time, throughput-oriented systems). Hard real-time systems must consider execution time quantitatively in the design.

Suppose the system has an event that causes a transition that takes 500 ms to execute, even when coded to be as quick as it can -- for example, some sensor-fusion or image-processing stuff that wrestles with large floating-point matrices. Is this fast enough? That depends on how much the system can afford to delay its response to *other* events. If a delay of 500 ms is tolerable, this design works, even without preemption. If some event requires a response time less than 500 ms, you must either let this event preempt the 500 ms transition, or split the 500 ms transition into smaller pieces, which I consider artificial.
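To make concrete what "splitting into smaller pieces" means here (a toy illustration of mine - the matrix dimensions and the row-sum "work" are made up), the long job is chopped so each call processes one row and then yields back to the dispatcher:

```c
/* Artificial split of a long matrix job: one row per call, so other
   events can be serviced between calls. Sizes are invented. */
#define ROWS 500
#define COLS 1000

typedef struct {
    float data[ROWS][COLS];
    int   row;    /* progress state: next row to process */
    float sum;
} BigJob;

/* Returns 1 while work remains, 0 once the whole job is finished. */
int bigjob_step(BigJob *j)
{
    int c;
    if (j->row >= ROWS) return 0;
    for (c = 0; c < COLS; c++)
        j->sum += j->data[j->row][c];   /* one row's worth of work */
    j->row++;
    return j->row < ROWS;
}
```

The progress variable `row` is exactly the kind of artificial state this splitting forces on you: it exists only to let the scheduler in, not because the application's requirements imply it.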

But that strengthens my argument: the time between reschedulings must then be less than the corresponding fraction of the smallest deadline. For example, if thread A has a deadline of 10 ms, and needs to execute 5 times in that time, you need to have at least 5 reschedulings in 10 ms, so no thread can use more than 2 ms between reschedulings, or thereabouts.
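The figures above reduce to a simple division, which can be stated as a one-line check (my own framing of the arithmetic):

```c
/* If a thread must run `runs_needed` times within `deadline_ms`,
   the gap between reschedulings can be at most their quotient:
   e.g. a 10 ms deadline needing 5 runs allows at most 2 ms. */
int max_quantum_ms(int deadline_ms, int runs_needed)
{
    return deadline_ms / runs_needed;
}
```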

Which makes the situation worse -- see above -- there must be even more frequent reschedulings, and even stronger constraints on the maximum execution time between reschedulings.

[snip]

Yep, it is about the only kind of concurrency you can do with just 'C' and no RTOS. The second application that I mentioned at the start of this post is built like that.

Lots of states are OK if they are implied by the requirements. But if you must split a single state into 10 states, just because the transition to this state would otherwise take too long (in a non-preemptive system), these 10 states are artificial and I don't like them.

It means that if you implement the state machines in their natural form, as implied by the application requirements, the state transitions are nevertheless fast enough and do not make the (non-preemptive) system too slow to respond.

It can let other threads meet their deadlines, even if the running thread exceeds its designed execution time, for some reason (unusual state, coding bug, bad luck with the cache, whatever).

At least in priority-based systems, it can reduce the jitter in the activation times of high-priority threads.

Without it, you may have to split long jobs (long state transitions) into smaller pieces, artificially, just to get frequent reschedulings.

But I have said all that before, so it is time to stop.

--
Niklas Holsti
Tidorum Ltd
niklas.holsti@tidorum.fi
