Software Architecture comparison, comments and suggestions...

I was thinking to myself today and I remembered an old division of software architectures for embedded systems:

  1. Round-robin
  2. Round-robin with interrupts
  3. Function-queue-scheduling
  4. Real-time Operating System

IMHO the first one is just a subset of the second, so I will just ignore it.

I was thinking about what kind of architecture I usually use in my projects (which typically range from a PIC16F/MSP430 up to an ARM CM4/TI C2000), and I concluded that I probably use something between 2 and 3. The architecture I use is basically inspired by a mechanism used in the FNET stack: it polls a list of tasks continuously. What I did was create several different handlers that can carry different tasks, each taking a void pointer as an argument. So in general I have a handler for timed tasks and a handler for async tasks, and I keep registering and unregistering services on both handlers. I must follow some rules to avoid bad code, like never using an unbounded while loop or blocking the code with NOPs, but that works in most cases and is very portable. To improve portability, most of the system is described by structures, and all the hardware-dependent functions are separated out and abstracted behind those structures. So in general all the logic is reusable, and all I have to change is the hardware-specific code, which is not much.
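
Roughly, it looks like the sketch below (just an illustration; the names, the fixed-size table and the millisecond timebase are mine, not the actual FNET code):

#include <stddef.h>
#include <stdint.h>

typedef void (*task_fn)(void *arg);

typedef struct {
    task_fn  fn;         /* task callback; NULL marks a free slot   */
    void    *arg;        /* opaque argument passed back to the task */
    uint32_t period_ms;  /* 0 = async task, run on every poll       */
    uint32_t next_ms;    /* next due time for timed tasks           */
} task_slot;

#define MAX_TASKS 8
static task_slot tasks[MAX_TASKS];

/* Register a service; returns its slot index, or -1 if the table is full. */
int task_register(task_fn fn, void *arg, uint32_t period_ms, uint32_t now_ms)
{
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].fn == NULL) {
            tasks[i].fn        = fn;
            tasks[i].arg       = arg;
            tasks[i].period_ms = period_ms;
            tasks[i].next_ms   = now_ms + period_ms;
            return i;
        }
    }
    return -1;
}

/* Unregister a previously registered service. */
void task_unregister(int slot)
{
    if (slot >= 0 && slot < MAX_TASKS)
        tasks[slot].fn = NULL;
}

/* Called continuously from main(); now_ms comes from whatever hardware
   timer the port provides, hidden behind the hardware-abstraction layer. */
void task_poll(uint32_t now_ms)
{
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].fn == NULL)
            continue;
        if (tasks[i].period_ms == 0 ||
            (int32_t)(now_ms - tasks[i].next_ms) >= 0) {
            tasks[i].next_ms = now_ms + tasks[i].period_ms;
            tasks[i].fn(tasks[i].arg);  /* must return quickly, never block */
        }
    }
}

The hardware-specific part then reduces to whatever provides now_ms plus the peripheral drivers the tasks call.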

However, these days using an RTOS is a real trend, and I have heard of good implementations of function-queue scheduling (though I have no example). Could you guys describe your experience with the software architectures you use? Comments on the benefits/drawbacks of each one? Especially regarding performance, scalability and reliability?

Thank you!

Reply to
Sink0

This is one of those fundamental questions that is asked so many times in so many different places, and (one of the) conversation(s) that I hate getting involved in [he says and then immediately gets involved].

The best solution for a software architecture is completely dependent on the application being implemented - and even then completely open to subjective opinion. The best way to learn different architectures is to read one of the many different books on the subject, try things out, get experience (no book is a replacement for experience), learn from your peers.

As an opener, which will get shot down in flames I'm sure, there are some reasons why an RTOS *might* be a good solution in *some* situations here:

formatting link

...note the comments at the top of the section linked to though.

Regards, Richard.

  • formatting link
    Designed for Microcontrollers. More than 7000 downloads per month.
Reply to
FreeRTOS info

I'm going to quote the first line of your page, because I think it just about sums it up:

"You do not need to use an RTOS to write good embedded software. At some point though, as your application grows in size or complexity, the services of an RTOS might become beneficial..."

I've found that a simple half-RTOS (i.e., a non-preemptive 'function caller') works well for the size of applications that I have been writing here. These are all one-man applications, so they only have a handful of separate 'tasks' to perform, and there's only one guy to assign priorities, and that guy understands that the slowest task can bog down the fastest (because it's non-preemptive).

I've also found that an RTOS makes a huge (beneficial) difference in development and maintenance effort when you've got an application big enough that you need more than one developer. Once you get the task priorities straightened out (by cooperation, or by a strong software lead), and once you've smacked anyone who turns off interrupts for more than a few clock cycles, you can spin the less demanding tasks off to less skilled designers; any mistakes they make (either outright bugs or simple lack of optimal performance) are confined to the work that they do, while the better developers can pursue the really challenging work without having to constantly worry that "Joe Slow" is going to muck up the whole thing by putting 1ms of unbroken processing time into a human interface task.

I've also stood by watching in horror (or had to adopt software and fix it) as an application was written by software leads who completely strangled the "real-timeliness" of an RTOS, by putting in seemingly-clever mechanisms that allowed slow tasks to block high-priority tasks.

So an RTOS doesn't mean that _everyone_ on the team can be a "Joe Slow", and it _does_ mean that if the software lead is one then the whole effort is bound to fail.

But anything that's not preemptive pretty much guarantees that _everyone_ on the team _must not_ be a "Joe Slow", which makes it harder to assemble a team, and means more work for everyone during development.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
Reply to
Tim Wescott

Tim, that's pretty much one of the reasons I am thinking of trying an RTOS. I can pretty much manage to get very good RT behavior when I am programming alone, but with a team it is much more complicated.

Still, I believe an RTOS is good for projects of reasonable complexity. The other architectures might be the best choice for small-to-medium complexity.

Anyway, do you guys have any good examples of projects where you had to choose between round-robin and function-queue scheduling?

Richard, I am going to give FreeRTOS a try. I am using Code Sourcery. Is there a good linker script example for Code Sourcery with the LPC17xx? I could find examples for other commercial GCC toolchains, but they seem to use different linker files.

Also, I use Eclipse as my IDE. How should I set up my project includes/linked folders? I tried to link the whole Source folder, but it does not work because there are many different ASM files for different architectures. Should I just copy and paste all the files into my project?

Thank you! Any other comment about the original question is appreciated.

Reply to
Sink0

The thing I see that was missed above is other operating system architectures like pre-emptive, fault-tolerant and failover, then categories such as single-processor/multi-processor/multi-core. Then we could consider real-time versions of them.

Whether it is real-time or not depends on the application; e.g. a temperature monitor using a loop of three jumped-to tasks can be deemed real-time if it ALWAYS responds within its SPECIFIED response time.

What are the applications and industry sectors? Will the product be reused at all (e.g. set-top box/router/mobile phone) and have version upgrades?

Having written code for one-off designs used for many years, there was one where I had to look at a software upgrade after 6 years, but it will never be used in another project for various reasons. Mainly, the processor core it was based on is not really going to be available within a few years, and the application will have reached the end of its product life cycle before then.

Like everything else, break down what the application is doing, what response times, accuracy and resolution are required, and how many people will work on the project, before even thinking about what type of task scheduling is required. There is no one OS or type to suit all applications; some OSes have requirements that realistically could be pointless and too costly for some applications. A bit like loading a full desktop OS, including GUI, onto an alarm sensor.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
 Timing Diagram Font
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
 For those web sites you hate
Reply to
Paul

Well, I haven't answered this because I'm not quite sure just what you mean by your first three definitions.

The other sort of solution I'm used to is called a "task loop": you have a loop (often in main), and you have external events (either pins or interrupt routines) that set flags. The task loop loops through a bunch of code that basically tests a flag and then does or does not execute the corresponding code, tests the next flag and then does or does not execute the corresponding code, etc.

Most of the ones that I worked with were just gargantuan if-then chains, or case statements. Every once in a while someone will make a more structured one, where all the tasks live in a structure and ISRs set the flags; then the main loop just queries each run flag in turn (or does so in order of priority, if someone's getting fancy).
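
In skeletal form, it looks something like this (all names are made up; the ISRs do nothing but set flags, and the loop does the actual work in priority order):

#include <stdbool.h>

static volatile bool uart_rx_flag;   /* set by the UART ISR  */
static volatile bool tick_flag;      /* set by the timer ISR */

static void handle_uart_rx(void) { /* process received data */ }
static void handle_tick(void)    { /* periodic housekeeping */ }

void uart_rx_isr(void)    { uart_rx_flag = true; }
void timer_tick_isr(void) { tick_flag = true; }

int main(void)
{
    for (;;) {
        if (uart_rx_flag) {          /* tested first = highest priority */
            uart_rx_flag = false;
            handle_uart_rx();
        }
        if (tick_flag) {
            tick_flag = false;
            handle_tick();
        }
        /* ...more flag/handler pairs; optionally sleep here until
           the next interrupt wakes the CPU. */
    }
}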

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
Reply to
Tim Wescott

When building embedded systems, you usually have full control over what is running on the hardware, so I do not understand why one would bother e.g. with round-robin. The situation might be different if the end user can launch unspecified programs at will, in which case round-robin might make sense.

Since the 1970s, I have used quite a few simple priority-based operating systems and I have never needed any fancy scheduling algorithms.

A few rules of thumb:

1.) Do as little as possible in a high-priority task.
2.) Always ask whether the priority of a certain task can be _lowered_; never ask whether some task's priority can be increased.

RT extensions typically use their own environment for normal RT activities, and when there is nothing valuable to do, the OS will run the null task, which might contain e.g. the Windows or Linux OS and their applications (but these are uninteresting from the RTOS point of view).

Reply to
upsidedown

Well, in the simplest of implementations nearly all scheduling schemes are round-robin in the respect that they have a list of tasks (or functions or routines) to perform; the list is gone through and then scanned from the beginning again.

I normally break tasks down so that they operate in a small portion of time and are called again in another time slice; if necessary, each task has its own state machine and init, stop and start functions. Often they make 'OS' calls to reschedule their next time point.
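
As a rough sketch (the names and the scheduler call are hypothetical, just to show the shape), one such task might be:

#include <stdint.h>

/* Hardware-specific hooks, stubbed here; real bodies are port-specific. */
static void adc_start_conversion(void) {}
static int  adc_conversion_done(void)  { return 1; }
static int  adc_read_result(void)      { return 0; }
static void process_sample(int s)      { (void)s; }

/* Provided by the 'OS': run 'task' again 'delay_ms' from now (assumed). */
extern void sched_reschedule(void (*task)(void), uint32_t delay_ms);

enum adc_state { ADC_START, ADC_WAIT, ADC_READ };
static enum adc_state state = ADC_START;

/* Each call does one small piece of work, then hands the CPU back. */
void adc_task(void)
{
    switch (state) {
    case ADC_START:
        adc_start_conversion();
        state = ADC_WAIT;
        sched_reschedule(adc_task, 1);     /* poll for completion in 1 ms */
        break;
    case ADC_WAIT:
        if (adc_conversion_done())
            state = ADC_READ;
        sched_reschedule(adc_task, 1);
        break;
    case ADC_READ:
        process_sample(adc_read_result());
        state = ADC_START;
        sched_reschedule(adc_task, 100);   /* next sample in 100 ms */
        break;
    }
}

The point is that the task never busy-waits; it records where it got to and asks to be called again.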

Depends on what the application is doing as to what is appropriate.

Even with outside comms and potential time hogs it is easier if you break down the processes as much as possible first before even looking at scheduling.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
 Timing Diagram Font
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
 For those web sites you hate
Reply to
Paul

Come to think of it, there's also

  0. Interrupts without round-robin.

Some of the simplest applications can be done entirely in interrupt handlers, and the main loop, if any, is just a pro-forma thing to make sure the processor stays in sleep mode.

Number 0 could be considered separate in cases where the interrupt handlers do non-trivial processing, beyond setting flags for a round-robin loop to poll.
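
A bare-bones sketch of that style (everything here is invented for illustration; the setup and sleep calls stand in for whatever the particular part needs):

#include <stdint.h>

/* Stand-ins for the hardware-specific bits (hypothetical). */
static void init_clock_and_timer(void) {}
static void enable_interrupts(void)    {}
static void enter_low_power_mode(void) {}
static void toggle_status_led(void)    {}

static volatile uint32_t tick_count;

/* All the real work happens here, in the timer interrupt. */
void timer_isr(void)
{
    tick_count++;
    if ((tick_count % 500u) == 0u)   /* e.g. blink every 500 ticks */
        toggle_status_led();
}

int main(void)
{
    init_clock_and_timer();
    enable_interrupts();
    for (;;)
        enter_low_power_mode();      /* pro-forma loop: just go back to sleep */
}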

Mel.

Reply to
Mel Wilson

There's no good general answer. Pick one and live with it. Most of what's the right answer depends on the services your app needs.

I've seen round-robin used with an IP stack, so ....

-- Les Cargill

Reply to
Les Cargill

yeah, well... person-task design is almost the whole point of being the Lead. Give Joe something with a readily provable interface, and involve him/her in the upper-level decision processes about that, and that's how (s)he stops being Joe Slow.

We're all here to learn, after all... so somebody has to teach. If you're the Lead, it's probably in your job description. Many are called, but few are chosen, so the chosen have to teach the hooples. Diploma-mill victims are just a fact...

understanding what priority means seems ... difficult.

Meh. The system is still a hunk of garbage if it depends on "preemptive". I've had systems where you could *configure* "preemptiveness"*. It's an eye-opener.

*enable/disable the task-swap callback in the timer...

Sometimes I think a semester with DOS in college would be a good thing... learn 'em that critical section flag and write "swap()" and you'll be better off. You still get a nice CRT and a keyboard fo' free... and serial ports.

teach 'em to fish.

-- Les Cargill

Reply to
Les Cargill

[snip]

[snip]

[snip]

That is a very surprising opinion. If a SW designer cannot depend on preemption happening as designed, the benefits of preemption for simple design of real-time behaviour are lost, and the SW has to be designed in a much more complex way.

IMO it is absolutely OK for the real-time correctness of a preemptive design to depend on preemption.

Perhaps you meant that the logical correctness (e.g. mutual exclusions) should not depend on preemption? I can agree with that, but I would still accept the use of ceiling priorities to implement mutual exclusion (that is, to depend on non-preemption of a task of higher priority by one of lower priority).

Sure there are kernels that can be configured like that. But if an application is designed to use preemption, it is unfair to expect it to have the same real-time behaviour when preemption is disabled, and to call it "garbage" if it fails when preemption is disabled.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

I don't think it's particularly "more complex" myself - it's just closer to being deterministic.

I don't believe that is the case. Maybe that's just me; dunno.

No, I mean that the design itself should behave as if the environment is not preemptive.

Perhaps "garbage" was too strong a word. How's "untrustworthy"?

Suit yourself; I believe that depending on preemption is a recipe for latent defects. But it might be good enough for the domain, and it might otherwise work out fine.

If I may... you seem to think that depending on preemption is somehow easier. That is an opinion I've seen before, but it doesn't seem to make much sense (to me). Even when I'm on a Linux system (embedded or desktop), I tend to write things to behave as if there were no preemption.

That means they hard block on an object like a semaphore/queue/spinlock/timer and quickly determine whether the conditions to execute based on that object are true.

For realtime especially, I think of things as being event driven. Events may be calculated from task loops and timer driven, but there's some sort of "regulator" ( think of an escapement on a pendulum clock ) and some fairly constant-time action regulated by that.

All this trouble is in the service of determinacy and correctness. And it seems to have paid off.

-- Les Cargill

Reply to
Les Cargill

That was my thought exactly: it's the simplest solution, so much so that you can adopt it without even thinking about it. If instead of thinking "round robin" you think "main loop" it instantly becomes a whole lot more familiar.

Not that there's anything wrong with it if it gets the job done: I've never seen the point of adding several layers of complexity just to satisfy someone's notion of what the "right" way is. Doing so here instantly moves you up from the most basic devices. For example, I don't see how you would do pre-emptive scheduling on the smaller PICs: sure, you can arrange the clock interrupt easily enough, but you can't diddle the function call stack once you are handling it.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

[snip]

[snip]

Still has to be motivated by an argument.

Yes, that is how one suspends a task. And when a task wakes up, it usually has to check the current state to decide what to do. But what has that to do with preemption?

Assume you have a simple system with two types of events. Event A occurs at most once per second, takes 0.5 s to process, with a deadline of 1 s. Event B happens at most once per 10 ms, takes 1 ms to process, with a deadline of 10 ms.

How do you handle the B events in time, without preempting the processing of the A events? You can perhaps handle the B's in an interrupt handler, but interrupts are just a HW form of preemption.

The only two methods I can think of are (1) to insert lots of polls for event B in the code that handles event A, making sure that no interval between polls is more than 9 ms, or (2) to split the processing of event A into many small sub-functions ("sub-events" if you like), each taking at most 9 ms to execute, and to have a main loop that calls each sub-function in turn and checks for events in between sub-functions.

Both methods complexify the code that processes A events. Method (1) becomes a horror when there are more than two events with different periods and deadlines. Method (2) becomes a horror when the processing algorithm for A involves much temporary data and control state that must be passed between the sub-functions. For one thing, the limit on the execution time of the sub-functions can force one to divide long loops into parts, for example a 1000-iteration loop into 10 x 100 iterations.
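
To make method (2) concrete, a rough sketch (hypothetical names; the bound on each sub-function's execution time is what keeps B's 10 ms deadline reachable):

#include <stdbool.h>

#define A_STEPS 100                 /* job A chopped into 100 small pieces */

static int a_step = A_STEPS;        /* A_STEPS = no A job in progress      */
static volatile bool event_b;       /* set from an ISR or polled input     */

static void process_a_step(int step) { (void)step; /* must stay under ~9 ms */ }
static void process_b(void)          { /* under ~1 ms of work */ }

void main_loop(void)
{
    for (;;) {
        if (event_b) {              /* checked between every piece of A,   */
            event_b = false;        /* so B waits at most one A step       */
            process_b();
        }
        if (a_step < A_STEPS) {     /* run one more piece of the long job; */
            process_a_step(a_step); /* an arriving event A would reset     */
            a_step++;               /* a_step to 0 (not shown)             */
        }
    }
}

Every piece of state that the A algorithm would normally keep in local variables now has to survive across calls, which is exactly the complexity being objected to here.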

Then consider what happens if the specs change so that event B occurs at 5 ms intervals, with a 5 ms deadline. In method (1) the number of polls must be doubled. In method (2) many sub-functions may have to be split into smaller sub-functions. With preemption, nothing in the code that processes event A has to be changed.

Both non-preemptive solutions introduce jitter in the processing of the B events: in method (1) because the interval between polls is hard to make constant, in method (2) because the execution time of the sub-functions is hard to make constant. With preemption, it is easier to compute the preemption latency and jitter from the execution time of the critical sections, which are typically few and typically simple.

If you are lucky enough to have a system in which the events, periods, deadlines, and processing algorithms are such that you can process any event to completion, before reacting to other events, and still meet all deadlines, you don't need preemption. In any other case, avoiding preemption is asking for trouble, IMO still.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

Ah! I see our disconnect.

I am referring to preemptive multitasking vs. "cooperative" multitasking.

Preemptive simply reruns the ready queue when the system clock timer ticks. Whoever is running when the system clock ticks gets put back on the ready queue and waits.

Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.

yes, I very much prefer run-to-completion for any kind of processing, but especially for realtime.

In thirty years, I've never seen a case where run to completion was more difficult than other paradigms. That does not mean other events were locked out; it simply means that the data for them was queued.

In a handful of cases, I was replacing unstable code that *wasn't* run to completion with code that was. Yeah, it took a bit more design but it was rock solid and stable after the change.

-- Les Cargill

Reply to
Les Cargill

Les Cargill:

Niklas Holsti:

Les:

[snip]

Niklas:

Les:

That describes preemptive time-sliced round-robin scheduling without priorities. I believe that tends to be used in soft real-time systems, not so much in hard real-time systems.

In priority-based preemptive scheduling, the running task keeps running until it suspends itself (waits for something), or until some task of higher priority becomes ready to run.

Which adds to the design constraints and makes the design of each thread more complex. Why should the code of thread A have to change, just because the period or deadline of thread B has changed?

Ok, you have been lucky. In more heavily stressed real-time systems, run-to-completion is a strait-jacket that forces the designer to chop large jobs into artificial, small ones, until the small ones can be said to "run to completion", although they really are just small steps in a larger job.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

No; the queue can be a priority queue. Soft vs. hard realtime is a rhetorical swamp :)

Right. In fact, the "swap()"* verb simply puts the last guy running back on (in all cases I know about) if he has the highest priority.

*defined as the code that exchanges register stacks between operating threads.

Erm.... it doesn't. That's rather the point....

What is even odder is: the heavily stressed systems I've seen were *mainly* the ones that *used* run to completion. You'd allow some events, the less important ones, to be dropped. Or go to a task-loop architecture. In either case, having good instrumentation to count dropped events is important.

Possibly. Although the approach makes it possible to control how the overall system fails. That's mainly what's good about it.

If you'll look at Bruce Powell Douglass' book, I believe it stresses that run to completion is a virtue in high-reliability systems. Not pushing that, but it's simply one book I know about.

-- Les Cargill

Reply to
Les Cargill

It's not that I missed those; this is a more high-level division, specific to RT embedded systems. I believe that division is mentioned in the book "An Embedded Software Primer", but I might be wrong. Still, round-robin with interrupts is preemptive, and can be fault-tolerant or not, as an example. The system can be multi-core, but I am disregarding multi-core-specific architectures as they have a much more limited range of applications.

I know, but in theory the choice should be made according to the application. However, that is not totally true and linear. Several applications can be implemented with any of the three choices (again considering 1 and 2 as the same), and the final choice is probably much related to the developer's previous experience. And that is the sort of comment and suggestion I am looking for.

Still, the discussion here is very productive for me. Different developers have very different experiences, so that might help others (like me) to open their minds about embedded software development. As an example, I tend to avoid COTS RTOSes, and most of the time I go for a semi-OS approach as Tim mentioned earlier. However, there are several arguments for why to use a COTS RTOS, and I want to hear others' experience with that.

Thank you everyone for the comments and suggestions.

Cya

Reply to
Sink0

You said that "whoever is running ... gets put back in the ready queue and waits", which is not priority scheduling.

In a priority-driven system, there is no need to mess with the ready queue at every clock tick, only when an event makes some waiting task ready to execute. (The better systems don't even waste time on handling periodic clock ticks, but program a HW timer to interrupt when the next timed event comes up, whenever that is.)

The distinction is fuzzy, but real.

We do not understand each other.

You say that each thread has to be "circumspect" in CPU usage. That is rather vague. If the system has real-time deadlines, but is not preemptive, it can work only if "circumspect" means that the thread execution times (between reschedulings) are smaller than the smallest required response time. Do you agree with this? (In reality the times must often be a lot smaller, if incoming events are sporadic without a fixed phasing.)

This means that the smallest required response time constrains the design of all threads, and therefore a reduction in the smallest required response time can force changes in the code of all threads, in a non-preemptive system. Do you agree?

Makes me suspect that they were not well designed, or were soft real-time systems.

Sounds more and more like soft real-time. If a hard-real-time system drops events, it is entering its abnormal, fault-tolerance mode. But dropping events can be normal for a soft-real-time system.

Here I can agree: when you have split the large jobs into several small pieces, and use some kind of scheduler to dispatch the pieces, it is easy to add some code that gets executed between pieces and can reorganize the sequence of pieces, for example aborting some long job after its current piece.

If you need to abort long jobs that have not been split into small pieces (because the system is preemptive), you either have to poll an "abort" flag frequently within the long job, or use kernel primitives to abort the whole thread, which can be messy. (I can't resist noting here that Ada has a nice mechanism for aborting computations, called "asynchronous transfer of control".)

The object-oriented gurus love run-to-completion because it makes it look as if the object-method-statechart structure is natural for real-time systems and lets one avoid the "difficulties" of preemption and critical sections. But in practice, in such designs it is often necessary to run different objects/statecharts in different threads, at different priorities, to get preemption and responsiveness.

Preemption brings some risks, since the programmers can mess up the inter-thread data sharing and synchronization. If your system can be designed in a natural way without preemption, do so. But if you can avoid preemption only by artificially slicing the longer jobs into small pieces, you introduce similar risks (the order of execution of the pieces, and their interactions, may be hard to foresee) and much unnecessary complexity of code.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti
