cooperative multitasking scheme

Why should the OS need to get regular momentary control of the CPU in a preemptive system? The only reason to reschedule the tasks is that a higher priority task has changed state from blocked (waiting for something) to runnable. There are mainly two situations in which this can happen: another task activates a signal or sends a message that the blocked task is waiting for, or an interrupt service routine satisfies a wait condition.

To simplify things further, the task-to-task communication can issue a software interrupt, so that the interrupt handler can be nearly identical in all cases.

At the end of each interrupt service routine, be it for a serial line interrupt, a timer interrupt or the software interrupt from task-to-task communication, jump to the scheduler, scan through the task list in priority order to find the highest priority runnable task, and switch to that task.

This is quite simple to implement in systems that always save the full hardware context (the CPU registers) at each interrupt routine entry, since the interrupt saves the context of task A at the beginning of the interrupt. If, after servicing the interrupt and checking the task list, a switch to task B should be performed, simply swap the stack pointers and perform the return from interrupt on stack B.

The selected task may of course be the interrupted task, if the interrupt did not make any higher priority task runnable. Since this is a common situation and no stack switching is required, some shortcuts can be applied, which speed up the return. This is particularly true on processors that do not save the whole HW context at interrupt service routine entry, since the partial context can be reloaded at return from interrupt. However, when a task switch must be done, the rest of the HW context must be saved onto stack A, then the stack pointer switched, then the main context of task B retrieved from stack B, and finally the return from interrupt restores the minimal context of task B.
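
To make the mechanism concrete, here is a rough sketch in C of such an interrupt-exit scheduler. Everything in it is invented for illustration (the tcb_t layout, the entry stub, isr_exit_to()); the actual stack-pointer swap is target-specific and normally a few lines of assembly.

/* Hypothetical task control block; the task list is kept sorted in
   priority order and ends with an always-runnable idle task. */
typedef struct tcb {
    void       *sp;        /* saved stack pointer while not running */
    int         runnable;  /* nonzero when ready to run             */
    struct tcb *next;
} tcb_t;

extern tcb_t *task_list;   /* all tasks, highest priority first     */
extern tcb_t *current;     /* task whose context is on the stack    */

/* Assumed assembly stub: load the given stack pointer, then execute
   the return-from-interrupt, which pops that task's saved context.  */
extern void isr_exit_to(void *sp);

/* Called at the tail of every ISR, after the entry stub has pushed
   the full register context of the interrupted task A.              */
void schedule_from_isr(void *interrupted_sp)
{
    tcb_t *t;

    current->sp = interrupted_sp;       /* remember task A's stack   */

    for (t = task_list; t != 0; t = t->next)
        if (t->runnable)
            break;                      /* first hit = highest prio  */

    current = t;                        /* may still be task A       */
    isr_exit_to(t->sp);                 /* "return" onto B's stack   */
}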

The timer interrupt (if used) does not require any special treatment, it can be handled as any other interrupt.

For instance, a simple system copying characters in both directions between two serial ports should be doable with the UART interrupts only. Of course a timer interrupt may be useful if some timeout control is required on the serial lines.

Paul

Reply to
Paul Keinanen

This may be true for very well-behaved tasks. But what if you have two or more equal-priority tasks that are all trying to be CPU hogs (say, because they need to poll for something that is not available as an interrupt)? It is possible that no event would occur for a long time that would change the state of any task. So without some form of time-slicing, the other task could be suspended indefinitely, even though it is ready to run. Although, strictly speaking, I guess the term "pre-emptive multitasking" does not imply "time-slicing".

-Robert Scott Ypsilanti, Michigan (Reply through this forum, not by direct e-mail to me, as automatic reply address is fake.)

Reply to
Robert Scott

I'm not sure what you imagine is meant by "preempt" then. In my mind it means "not cooperative."

That's cooperation. The software interrupt is nothing more than a call which is known to be able to switch tasks.

...

Before we go further, Paul, let me say a couple of things.

The main distinction to me between preemptive systems and cooperative ones, the thing that distinguishes them more than anything else I can think of, is the amount of process state that needs to be found and saved/restored. Preemptive systems can interrupt a process at *any* point in time, not just selected points, and because of this the amount of important state can be much, much greater. There may be important hardware states in semi-completed conditions that must be saved and restored, there may be important floating point static variables that need to be saved and restored, there will probably be more registers needing a save/restore, etc. A software event, such as a "software interrupt" which is nothing more than a software function call, does not usually imply any of these additional burdens. However, hardware interrupts usually do.

Basically, it's the complexity of the resulting product that is implied by a cooperative-only versus a preemptive system. Now, you can try to divide this along other lines of thought -- but then it's not along the line of the practical work needed. I tend to look at this from an implementer's point of view, as I implement these regularly and routinely. And from that viewpoint, the concept of preemption is about whether or not a process is interrupted at places where that process isn't consciously involved. When the current process executes a "software interrupt" event (for example, sending a message to a higher priority process), it is (in my mind) cooperatively relinquishing control. That is NOT preemption to me.

...

Here, to me, you are conflating several things. Serial line interrupts are, in fact, preemptive events. Software interrupts, such as message passing, are not. Conflating them just because English sadly uses the same word 'interrupt' for both does not make them the same thing inside the box, so to speak.

Regarding the preemptive parts of what you are talking about, for example the serial line interrupt or timer interrupt, you can indeed do what you are saying. In fact, one often does. A low-level serial interrupt may mean that a buffer goes from empty to not-empty and a process may be waiting on this event. Or the incoming serial character may be posted to a process as a message, causing it to "wake up." So, please don't misunderstand me here. I'm not suggesting that a timer is the ONLY way that an operating system may preempt a process.

I'm just saying that in the absence of interrupts from hardware functional units other than a timer, a timer will be needed in order to round-robin processes of equal priority (where implemented) or where priorities aren't specifically used.

But you are right in the sense that preemption can reasonably come from other sources. Sorry about not making that clear.

Jon

Reply to
Jonathan Kirwan

I think that's the question I was trying to ask. I didn't see where timers were needed to make a system preemptive. They're needed for time-slicing at a single priority, but the only systems I've used that used time-slicing are larger systems (mainframes, workstations, desktops), although I know it is used in some embedded systems.

On the other hand, every embedded system I've dealt with required a timer, whether it was pre-emptive, cooperative or just a big loop. While I can imagine systems (preemptive or otherwise) that don't need a timer, I keep wanting to add timeouts to deal with issues like corrupted data and broken communication lines. And the systems that don't require timeouts seem to require pacing.

Mostly I was just wondering if I'd missed a subtle point that forced pre-emptive systems to have a timer, or maybe a definition that was different from what I expected.

Robert

Reply to
R Adsett

Generally, you'll need timers for embedded operating systems. Almost every such system I've worked on required some kind of precision timing mechanism for certain kinds of processing. So it's almost always a good idea to plan for that. Using a timer is almost second nature to me, because I almost always implement sleep queues even when the system is running cooperatively and not preemptively. In cooperative systems, where I choose not to use preemption at all, the action of the timing event is to move the process from the sleep queue to the ready queue but not to yank control to the new process, even if it has a higher priority.
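
A sketch of that timing action, assuming a delta-list sleep queue (each node holds ticks relative to its predecessor); all the names here are invented for illustration, not taken from any particular kernel.

typedef struct proc {
    struct proc *next;
    unsigned     delta;    /* ticks remaining after predecessor wakes */
} proc_t;

extern proc_t *sleep_q;                /* sorted by cumulative wake time */
extern void make_ready(proc_t *p);     /* append to the ready queue      */

/* Timer ISR body: advance time and move expired sleepers to the ready
   queue. Note that it only *moves* processes; in a cooperative system
   the running process keeps the CPU until it calls into the O/S.       */
void tick(void)
{
    if (sleep_q == 0)
        return;

    if (sleep_q->delta > 0)
        --sleep_q->delta;

    while (sleep_q != 0 && sleep_q->delta == 0) {
        proc_t *p = sleep_q;
        sleep_q = p->next;
        make_ready(p);                 /* ready, but not started */
    }
}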

I use simple round-robin in preemptive embedded systems, often enough, where I don't really see a need for (or cannot afford the cost of) a priority system. A process does NOT have to use its entire slice, you know. For example, if I have a process to (1) update the display, (2) scan and poll a keyboard input, (3) handle RS-232 parsing and query response, and (4) perform rather lengthy computations on inputs and then update an output, then I may use no priorities at all but still use preemption so that I avoid excessively frequent use of switch() calls. If the O/S preempts every 20ms, for example, it may switch to the keyboard polling task, which may already be sitting in a scanning loop that just moves the scan line to the next one, checks for anything to do, and if nothing just calls switch() inside the loop before continuing again. In this way, that process gives up any remaining time in its slice, voluntarily, because it knows that there is nothing to do now. (The keyboard does not, in this case, generate any hardware interrupt events at all.) But the task that performs the long calculations doesn't need to salt those calculations with frequent switch() calls, because the 20ms quantum will interrupt an in-progress calculation to ensure that some time goes back to the polling keyboard routine every so often.

Probably a somewhat weak example, but perhaps it makes the idea clear enough for you to construct a better one in mind.
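
For what it's worth, the keyboard task in that example might look roughly like this; the scan helpers are invented, and switch() is spelled switch_away() here only because switch is a C keyword.

extern void switch_away(void);         /* cooperative yield              */
extern int  kbd_scan_next_line(void);  /* advance scan; key code or -1   */
extern void kbd_queue_key(int key);

void keyboard_task(void)
{
    for (;;) {
        int key = kbd_scan_next_line();
        if (key >= 0)
            kbd_queue_key(key);
        else
            switch_away();  /* nothing to do: hand back the rest of
                               the 20ms slice instead of burning it */
    }
}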

I very much appreciate having a sleep queue. A timer advances the time and may move a process from the sleep queue to the ready queue, and is needed for that purpose if you want any reasonable chance of having regular intervals for processes. On DSPs, I've used timing specs for a process as fine-grained as 2us, with no more than 100ns restart jitter. Decent for much work.

Hopefully, I relaxed your worries about that.

Jon

P.S. By the way, just by way of possible interest, here's the configuration header for the O/S I wrote for tiny embedded processors with scarce RAM. Mainly, I'm adding this so you can read the comments which explain some things it does:

/* File:           config.h
   Author:         Jonathan Dale Kirwan
   Creation Date:  Fri 24-Jan-2003 11:42:28
   Last Modified:  Sun 16-Nov-2003 12:37:21

   Copyright 2003, Jonathan Dale Kirwan, All Rights Reserved.

   DESCRIPTION

   This file defines the normal compile-time configuration criteria for the operating system and some general symbolic constants used throughout. For those configuration parameters requiring a yes/no answer, set them to zero for no or to disable them and any non-zero value for yes (preferably 1.)

   In addition, this file provides the type/size of certain key items in the operating system; such as the size of the sleep and semaphore counters.

   TARGET COMPILER

   This module is designed to be compiled with general purpose C compilers.

   MODIFICATIONS

   Original source.
*/

#ifndef SYS_KERNEL_DECL

/* SYS_RTCLOCK
   --------------------------------------------------------------------------
   Whether or not to include a real-time clock for the operating system is the central decision a user must make, in configuring this operating system. A real-time clock is not necessary, but without it time doesn't advance on its own, so there is no possibility for automatically moving processes from the sleep queue to the ready process queue or for pre-empting the currently running process. Without the real-time clock as a resource, there is no determinism for sleeping processes, no round robin sharing of CPU time, and no pre-emptive relinquishing of the CPU by a lower priority process to a higher priority process - it all becomes a matter of process co-operation.

Making a real-time clock available has a price to pay. Some CPU time is used by the clock event handler each time the clock interrupts the CPU. If the clock is operated too rapidly the time spent in its event handler can consume nearly 100% of the CPU time, leaving almost nothing left for normal process operations. So be careful about deciding the interval. It also uses up a hardware timer resource, which may be scarce or used for something else more important to the application. Further, any kind of interrupt handling may require some non-portable feature in C (a #pragma, for example, to designate a function as an interrupt function) or else some assembly language and linker support in order to properly locate code at the right 'vector address.'

So, enable this feature if either automatic movement of sleeping processes to the ready process queue or pre-emption of the currently running process is needed. If co-operating processes are sufficient to get the job done, this feature can be disabled.

*/
#ifndef SYS_RTCLOCK
#define SYS_RTCLOCK (1)
#endif

/* SYS_SLEEPQ
   --------------------------------------------------------------------------
   Whether or not providing a real-time clock enables automatic movement of sleeping processes, a user may choose to support the sleep queue. A sleep queue is simply a place for processes waiting until some period of time has expired. Processes are added into the sleep queue, in an order determined by the delay they specify.

If the real-time clock is available, a counter for the top process in the sleep queue is decremented and, when it reaches zero, the top process is moved from the sleep queue to the ready process queue. If there are any processes immediately following it in the sleep queue, which also have a zero counter, they are also moved at this time. (This does not cause a rescheduling event, so moving higher priority sleeping processes to the ready process queue doesn't preempt the currently running process.)

Supporting a sleep queue without a real-time clock is a bit unusual -- normally there is then no automatic change in the counters and no automatic motion of processes from the sleep queue to the ready process queue unless the currently running process performs these functions as it deems appropriate. The behavior of the operating system, in this case, is to automatically start the top entry in the sleep queue when there are no remaining processes running (the last running process puts itself to sleep or else waits on a semaphore or message.) This can actually be useful in cases where no preemption is desired but where some processes should run more frequently than others when cooperatively switching, using the delta time sleep queue as a way of achieving that process shuffling.

(If you are curious about how this may be achieved, consider the idea of four processes where P1 should run 4 out of 8 switches, P2 should run 2 out of 8, P3 should run 1 out of 8, and P4 should run 1 out of 8. If a delta sleep time for P1 is given as 1, a delta time for P2 as 2, a delta time for P3 as 4 and a delta time for P4 as 4 and then these processes use this associated value to sleep on when they want to cooperatively switch away, then the arrangement will work as desired.)

There is a secondary effect to specifying support for sleep queues. A sleep queue requires a counter for each process to support timing their motion back to the ready process queue. This will increase the RAM requirements. For systems with very little RAM, this must be a consciously considered decision.

*/
#ifndef SYS_SLEEPQ
#define SYS_SLEEPQ (1)
#endif

/* SYS_PRIORITIES
   --------------------------------------------------------------------------
   Enabling process priorities allows the operating system to organize the ready process queue by process priority. Disabling this feature means that all processes have, in effect, the same priority. The operating system doesn't prevent processes with equal priority -- processes of the same priority are simply added into the ready process queue after all processes of equal or higher priority.

Process priorities have their expected impact when preemption isn't turned on -- nothing takes place until the current process attempts to resume a process (sleeping, waiting on a semaphore, or waiting on a message) with a higher priority, tries to change the priority of another ready process to a value higher than its own, or else tries to cooperatively reschedule. It is only at these times that the operating system may switch to a higher priority process.

There is a secondary effect when enabling process priorities. The process priority feature requires adding a priority value for each process to support sorting them in the ready process queue. This will increase the RAM requirements. For systems with very little RAM, this must be a consciously considered decision.

*/
#ifndef SYS_PRIORITIES
#define SYS_PRIORITIES (1)
#endif

/* SYS_MESSAGING
   --------------------------------------------------------------------------
   Messages provide a method of unsynchronized process coordination. They are particularly useful, in contrast to semaphores, when a process doesn't know how many messages it will receive, when they will be sent, or which processes will send them.

These messages are posted directly to the process and are not of the type which are left at rendezvous points, since this operating system is designed for very small RAM requirements. Processes do not block when sending messages and only the first message sent to a process is retained, if several messages are sent to it before it can receive them.

Enabling this feature allocates room in each process node to hold the latest message as well as code space for the support routines (of course.)

*/
#ifndef SYS_MESSAGING
#define SYS_MESSAGING (1)
#endif

/* SYS_QUANTUM
   --------------------------------------------------------------------------
   Set this to zero in order to disable pre-emption of the currently running process. Any positive value will enable the feature and specify the number of real-time clock ticks required in order to generate a rescheduling event. A negative value has 'undefined' meaning.

Enabling pre-emption and the real-time clock allows the operating system to automatically select higher priority processes in the ready process queue and allows round robin sharing of CPU time for processes with equal priority. Without these features, the only way a higher priority process can get control is if the currently running process voluntarily requests rescheduling.

The real-time clock is required for automatic action by the operating system. Without the real-time clock, the quantum of the currently running process isn't automatically updated and thus cannot expire -- so the operating system cannot automatically generate a rescheduling event. Enabling pre-emption this way, without the real-time clock, is permitted but it isn't very useful.

A secondary effect of enabling pre-emption is that a memory location is allocated for the remaining quanta available to the currently running process. It's only a small addition, but on memory starved systems it may be important.

*/
#ifndef SYS_QUANTUM
#define SYS_QUANTUM (25)
#endif

/* SYS_PLIMIT
   --------------------------------------------------------------------------
   This value sets a limit on the number of allowable processes. Naturally, this affects the required RAM used to support them, as each process requires a queue node to keep track of which queue the process is in as well as space for process-specific state information. This value must be positive and probably greater than 1.

*/
#ifndef SYS_PLIMIT
#define SYS_PLIMIT (20)
#endif

/* SYS_SLIMIT
   --------------------------------------------------------------------------
   This value sets a limit on the number of distinct semaphore queues supported by the operating system. A zero value disables semaphore support, entirely.

Semaphores provide a simple method of process coordination, often used to synchronize their actions or cooperate in sharing common resources. Each semaphore consists of an integer count, conceptually, with wait(s) calls decrementing the count and signal(s) calls incrementing it. If the semaphore count goes negative due to a wait(s) call, the process is suspended or delayed. In that case, the next signal(s) call will release exactly one suspended process. Semaphores are a synchronized method of process coordination, essentially requiring one wait(s) for each signal(s) call.

Each semaphore queue requires a small amount of RAM for a queue head and tail node.

*/
#ifndef SYS_SLIMIT
#define SYS_SLIMIT (10)
#endif

/* SYS_DLINKEDQ
   --------------------------------------------------------------------------
   The system queues may be configured as either singly linked lists or doubly linked lists. Enabling the doubly linked lists provides a faster response from the operating system when moving processes from one queue to another. But it does so at the expense of an extra link in RAM for each queue node.

This is essentially a space versus speed option. Normally, doubly linked lists are desired. But if RAM is very tight, disabling this option could help. The amount of help will depend on the size of the qnode pointers (usually 1 byte) and the number of processes and queues. As I said, it is not a lot, but there are times when every byte counts.

*/
#ifndef SYS_DLINKEDQ
#define SYS_DLINKEDQ (1)
#endif

/* sys_status_t, SYS_FAIL, SYS_SUCCESS
   --------------------------------------------------------------------------
   Operating system function calls often return a success/fail status result.

*/

typedef int sys_status_t;

#define SYS_FAIL    (0)
#define SYS_SUCCESS (1)

/* sys_priority_t, SYS_PRIORITY_MIN, SYS_PRIORITY_MAX, SYS_PRIORITY_DEFAULT, PRIORITY_MAXKEY
   --------------------------------------------------------------------------
   Configure as appropriate for process priority support. These may use either signed or unsigned types, as desired.

SYS_PRIORITY_MIN and SYS_PRIORITY_MAX aren't used for more than some error checking, when calling system functions to set process priorities. Just make sure they are in the valid range of sys_priority_t and, of course, that the minimum value is less than the maximum value. The only other requirement is that PRIORITY_MAXKEY be greater than SYS_PRIORITY_MAX. PRIORITY_MAXKEY is used for the tail entry in queues, so it needs to be unique and larger than the greatest possible priority value.

SYS_PRIORITY_DEFAULT is only used when creating processes and shouldn't be relied on.

*/

#if SYS_PRIORITIES != 0
typedef signed char sys_priority_t;     /* process priorities             */
#define SYS_PRIORITY_MIN     (-100)     /* minimum allowed priority       */
#define SYS_PRIORITY_MAX     (100)      /* maximum allowed priority       */
#define SYS_PRIORITY_DEFAULT (0)        /* default priority, when created */
#define PRIORITY_MAXKEY      (127)      /* larger than highest priority   */
#endif

/* sys_sleeptimer_t, SYS_SLEEP_MAXKEY
   --------------------------------------------------------------------------
   Configure as appropriate for the dynamic range needed by the sleep queue. These values won't be allowed negative and are measured in timer ticks, so it's probably better to keep them as unsigned values. (But signed values will not harm anything.)

Remember that we are talking about timer ticks here, not seconds (unless your timer ticks away according to second intervals.)

*/

#if SYS_SLEEPQ != 0
typedef unsigned char sys_sleeptimer_t; /* delta-time value            */
#define SLEEP_MAXKEY (255)              /* larger than any valid delay */
#endif

#define SYS_KERNEL_DECL
#endif /* #ifndef SYS_KERNEL_DECL */
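
Since each setting above sits behind an #ifndef guard, a build can override the defaults without editing the file. A hypothetical example (the values are arbitrary; the same symbols could equally be set with -D on the compiler command line):

#define SYS_QUANTUM   (0)   /* disable pre-emption: cooperative only  */
#define SYS_MESSAGING (0)   /* no messages: save the per-node RAM     */
#define SYS_PLIMIT    (6)   /* six processes are plenty here          */

#include "config.h"         /* everything else takes its default      */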

Reply to
Jonathan Kirwan

This is the comp.arch.embedded newsgroup (not some Windows newsgroup :-), so one can expect that all the tasks in a system are _designed_ to be well behaved.

The general rule of thumb for any priority based pre-emptive system is that the higher the priority, the less work the task should be doing. It is quite common that the lowest priority null/idle task (a busy loop, or a loop containing just the wait-for-interrupt instruction) executes nearly constantly. In a typical priority based system, when you scan the task list, most of the tasks are idle, waiting for some event from an interrupt or from another task, and it would be quite rare to find more than one task in a runnable state.

It is generally a bad idea to have two or more tasks at the same priority. If you cannot decide any precedence between two tasks, there is usually something wrong with your design. The only exception is systems with fewer priority levels than tasks, in which case, unfortunately, you have to put multiple tasks at the same priority.

To implement such a polling system, make a _high_ priority task which consists only of a loop that tests the state of the hardware register (if some change occurred, it saves the data and sends a signal for a lower priority task to do the actual processing) and then goes to sleep until the next clock tick.

In this case, you also need a timer interrupt, which scans all tasks waiting for a timer tick, marks them runnable, then rescans the task list and performs a task switch to the runnable task with the highest priority.

If the high priority hardware poll task has detected that some work must be done, it signals a lower priority task, which immediately becomes runnable. When the poll routine goes to sleep and there are no other runnable high priority tasks, the lower priority working task starts to run; after each clock interrupt, any high priority tasks waiting for that interrupt can run quickly, and then control returns to the lower priority task.

This is just a matter of division of labour: the high priority task only checks the hardware and goes to sleep, while the lower priority task does the actual data processing.
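
A sketch of that division of labour, with invented kernel calls (signal_send, sleep_until_tick) standing in for whatever primitives the real system provides:

extern volatile unsigned char *hw_status;   /* device status register */
extern void signal_send(int task_id);       /* wake the worker task   */
extern void sleep_until_tick(void);         /* block until clock tick */
extern void save_sample(unsigned char s);

#define WORKER_TASK 3                       /* assumed task id        */

void hw_poll_task(void)                     /* runs at high priority  */
{
    unsigned char last = *hw_status;
    for (;;) {
        unsigned char now = *hw_status;
        if (now != last) {                  /* change detected: hand  */
            save_sample(now);               /* the real processing to */
            signal_send(WORKER_TASK);       /* the low priority task  */
            last = now;
        }
        sleep_until_tick();                 /* sleep to the next tick */
    }
}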

Time slicing usually implies that a minimum quantum is granted to each task at each clock tick. In a priority based pre-emptive system, the clock interrupt is just an opportunity for a reschedule (as are all the other interrupts).

Paul

Reply to
Paul Keinanen

If a new task starts to run immediately as a consequence of a device interrupt (and does not wait for the next time quantum or some cooperative yield operation), then the system is definitely pre-emptive. If there are additional methods to trigger this task switch (such as a cooperative message transmission between tasks), the system is still basically pre-emptive; i.e. the system must still be able to switch tasks between any two instructions, which requires the ability to save the full context of each task at any point.

This is a matter of semantics.

The task switching is still implemented in a pre-emptive way. This is easy to see in a multiprocessor system, where a task sending a signal on processor A can cause a completely unrelated task on processor B to be pre-empted, so that a high priority task on processor B, waiting for a signal from the process on processor A, starts to run.

It is of course true that in any priority based system it is up to the designer to ensure that a sensible amount of work is done at each priority level, so that all real-time deadlines are met. In this sense any priority based system is in a sense "cooperative", since you cannot let just anyone insert a random new task at a random priority level without first checking the general division of labour in that system.

In a single processor system, the only thing that can cause a task to be switched at any point is the interrupt and the state changes due to the interrupt.

On a single processor system, one task cannot pre-empt another task, since it is not running _at_that_point_of_time_; by definition, the process that sets the event etc. is the one running at that particular point of time.

On a multiprocessor system the pre-emption can happen at any time.

Interrupts usually do not automatically save the floating point registers.

In a pre-emptive system it is stupid to use static variables for holding floating point values etc. The only sensible place is the stack, so that when there is a task switch, the variables are already in the per-task stack and there is no need to save and restore them. On larger systems, it is common to allocate a per-task static area just above the stack; when doing a task switch, a base pointer register is set to the beginning of that per-task static area at the same time the stack pointer is switched. In addition to the HW register saves for task A, you just need to switch the stack pointer to the other task, which pops the static data pointer and the other HW registers for task B before continuing execution.

Paul

Reply to
Paul Keinanen

Indeed.

That's your definition for the term. It's not mine. I can implement a message pass which makes a process ready and current with less concern about hardware and process state than I can in the case of a hardware interrupt that does the same. Several examples of CPUs come to mind where the hardware state must be backtracked in the face of hardware preemption, and where I don't even have to consider worrying about this when a process calls to pass a message. Quite different things to an implementer.

I think you are looking at this more as a consumer, not a producer, of operating system software. I can accept your term for talking from that point of view, but in writing the core code (as I do), I use my meaning and not yours when I consider and lay out the work ahead.

No, it's a matter of what work I have to do in implementation. And that is not semantics to me, it's effort and the necessary diligence I have to apply. I can do a cooperative system in a matter of hours. Not a preemptive one (from my use of the terms.)

Luckily, it's been more than four years since I had to deal with multiprocessor (Intel 4-CPU) systems. In the small system embedded work I specialize in today, it's not in my vocabulary.

In the case you are talking about, it would come back again to whether or not the interrupted process cooperated in that interruption. The work involved differs some on this point. And making an O/S call that results in switching away is "cooperation" in my use of the term.

More conflation, I fear. But I won't bother teasing it apart, just now.

An interrupt will NOT switch processes in a system that switches only cooperatively. It may *move* processes from one state or queue to another, but the running process does not change until it makes a call to the O/S, in some form or another (for example, to send a message, change a priority of another process, switch() away, etc.) In a preemptive system, yes.

Not sure what I said that stimulated this. But ... sure.

Through hardware events, yes. If it is only a matter of cooperation, which it can be, then the message-receiving process on CPU B may be moved from a waiting condition to the ready queue, but not started. If preemption is permitted, then it may also be started. My use of terms is based on what I have to watch out for in writing the O/S, not what some user of the O/S imagines about what happens.

They do and don't. On Intel CPUs, they usually just set a bit and let the cost of saving the FP context be incurred on first FP use after the process restarts. But it sounds as though you haven't been using as many processors as I have for the last 30 some years.

Of course it is. But that doesn't stop it from occurring. If you have had much experience in the 1980s (which it sounds as though you have NOT) in writing an O/S for the Intel x86 with the various compilers that were available and for the various incarnations of real PCs which may or may not even have an FP CPU on board, then you'd know about the stupidities that had to be contended with for the existing FP techniques used then. And that wasn't the only situation I've encountered like that, before and after.

As I told many a compiler tool vendor on the phone.

Sounds like you live in a perfect world that I've not seen.

Well, you are just belaboring what "should be" and not what I've experienced, in practice writing O/S for actual systems. Sometimes, we are lucky enough. Sometimes, not.

But regardless, preemption means a very specific thing to me as an implementer. I wish you the best with your viewpoint. I won't be changing mine, though.

Jon

Reply to
Jonathan Kirwan

Yes, I have been mainly an OS service consumer for the last 20 years, but before that I also used and maintained small pre-emptive kernels for 8-bitters, so I know what is reasonable to expect and what is not.

No disagreement about this, but I did not comment on cooperative systems.

< a few chapters deleted, in which we apparently agree on the contents, but disagree about the naming of various things >

On 8-bitters such as the 8080/8085/Z80/680x, this was a real issue if you only had the object code for the library. Disassembling it and changing the addressing mode to be relative to some base register was not an option if the processor did not have a suitable addressing mode, or if all the registers capable of serving as base registers were used for something else. In these situations you really had to copy the local floating point "registers" to the stack.

However, with the x86 family with segmentation enabled, you should be able to use separate data and stack segments for each task and thus keep individual FP "registers" for each task in memory.

This is an issue if you emulate each FP opcode in an interrupt (trap) service routine whenever one is encountered on a CPU without FP hardware. The alternative is to use subroutine calls, and within the library code either execute the FP opcode when hardware is available or do the emulation in the local stack or local data segment when it is not. Many compilers could be forced to issue subroutine calls for floating point operations.

Paul

Reply to
Paul Keinanen

... snip ...

How little ingenuity shows up these days. Back when many systems had neither timers nor interrupts, the thing to do was to intercept a common system call, such as checking the status of an input keyboard, and count those calls to generate a timer tick.
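
A sketch of the trick, assuming the application's keyboard-status polling can be routed through one wrapper; the names and the calibration constant are of course invented.

extern int  raw_kbd_status(void);   /* the original status call        */
extern void timer_tick(void);       /* whatever a tick is meant to do  */

#define CALLS_PER_TICK 100          /* calibrate against the main loop */

int kbd_status(void)                /* applications call this instead  */
{
    static unsigned calls;
    if (++calls >= CALLS_PER_TICK) {
        calls = 0;
        timer_tick();               /* a crude, uneven, but free tick  */
    }
    return raw_kbd_status();
}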

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

OK, here's the project I'm working on right now: It is a three-station test stand that tests transmission solenoid valves in a factory production environment. The three stations run the same test sequence, but they run independently. This is naturally implemented as three equal-priority tasks. Is this a bad design? How would you do it differently?

-Robert Scott Ypsilanti, Michigan (Reply through this forum, not by direct e-mail to me, as automatic reply address is fake.)

Reply to
Robert Scott

That only shows up because it's generated by a hardware timer interrupt (probably one out of many interrupts). Don't let all the layers of software obscure the true inner workings.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

The main purpose of the timer interrupt is the representation of time and the release of periodic tasks (which are very common, e.g. control loops). A periodic task gives up execution after its work is done and blocks until the next period.

One common implementation is a clock tick. The interrupt occurs at a regular interval (e.g. 10 ms) and a decision has to be taken whether a task has to be released. This approach is simple to implement, but there are two major drawbacks: the resolution of timed events is bounded by the resolution of the clock tick, and clock ticks that cause no task switch are a waste of execution time.

A better approach is to generate timer interrupts at the release times of the tasks. The scheduler is then responsible for reprogramming the timer after each occurrence of a timer interrupt. The list of sleeping threads has to be searched to find the nearest future release time of a thread with a higher priority than the one being released now. This time is used for the next timer interrupt.
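
Roughly, in C, simplified to "nearest release time of any sleeper" and with invented names (hw_timer_set as a one-shot compare timer, make_ready to move a thread to the ready queue):

typedef unsigned long ticks_t;          /* absolute time in timer units */

typedef struct thread {
    struct thread *next;
    ticks_t        release;             /* absolute release time        */
} thread_t;

extern thread_t *sleeping;              /* sorted by release time       */
extern void hw_timer_set(ticks_t t);    /* one-shot interrupt at time t */
extern void make_ready(thread_t *t);

void timer_isr(ticks_t now)
{
    /* Release every thread whose time has come. */
    while (sleeping != 0 && sleeping->release <= now) {
        thread_t *t = sleeping;
        sleeping = t->next;
        make_ready(t);
    }

    /* Program the next interrupt for the nearest future release, so
       no ticks occur while there is nothing to do.                   */
    if (sleeping != 0)
        hw_timer_set(sleeping->release);

    /* ...then reschedule to the highest priority ready thread. */
}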

Martin

---------------------------------------------- JOP - a Java Processor core for FPGAs

Reply to
Martin Schoeberl

How does any scheme that relies on priority queues avoid starvation of low-priority tasks? This seems to be an inherent disadvantage versus the round-robin varying-time schemes.

Reply to
Elko Tchernev

In general? This cannot be answered. But in specific circumstances, it's rather easy to imagine useful cases. Usually, the higher priority processes will wait for messages or semaphores and are related to low-level hardware interrupt routines which may put something in a buffer to be serviced by the high priority routine (which the low-level code awakens when it captures something to be processed.) In this case, the reason they are high priority is so that they can pay attention to incoming data in a timely way and not so they can just hog the CPU. Until data arrives, they aren't in the ready queue.

Round robin is usually just time-based preemption when a quantum expires. Round robin can operate, even in cases where different priorities are permitted for processes, when more than one of the highest priority, ready-to-run processes have the same priority. Admittedly, round robin does depend on at least two processes having the same priority (or no priorities, at all) but round robin and priority support aren't inherently mutually exclusive, unless all processes always have different priorities assigned to them.

Jon

Reply to
Jonathan Kirwan

Why should it even attempt to do that?

It is up to the designer of the system to ensure that there is enough computing power available to handle all incoming events at the maximum possible rate. For instance, the maximum rate of UART interrupts depends on the serial line bit rate, the number of Ethernet end-of-frame interrupts depends on the speed of the link and the minimum message size, and the number of timer interrupts depends on the crystal frequency and the division ratios programmed into the counters.

If sufficient computing power cannot always be provided, the system designer has to decide _in_advance_ what functionality _is_ sacrificed during congestion. Typical examples of functionality to be sacrificed are various "nice to have" status displays etc., which do not affect the main purpose of the system.

Thus, such "nice to gave" features should be put on the lowest priority, so if there is a high demand for processing power, these functionalities are automatically sacrificed by starving of resources and no extra application logic is required for that.

Paul

Reply to
Paul Keinanen

The logical conclusion from what you say is that there are essentially two priority levels - the highest, which is the "must have" level, and all others, which become "nice to have" and _can_ be starved by the highest. Everything you say about computing power is true; however, sometimes you can't anticipate the actual computing requirements in the field, or maybe you _lack_ the computing power to handle the theoretical maximum throughput and are only able to handle the average (with buffering, let's say). In such cases, it seems to me that the priority queue approach will not degrade gracefully, but can shut down the low-priority tasks for long periods. Round-robin with priority (where priority determines how long a task can run before being pre-empted, or before it switches away on its own) offers more graceful degradation under high load. It has the flexibility of distributing the CPU time among low-priority, continuously-running background tasks, and is much simpler to implement.

Reply to
Elko Tchernev

I am talking about realtime systems, which is where you typically use strictly priority based scheduling. Quite a few embedded systems are realtime, although not all realtime systems are embedded.

In a realtime system, the program is faulty, if the results do not arrive in time when they are needed, no matter how correct the calculations themselves are.

When there are not enough resources, do you use some "fair" scheduling to avoid starving the low priority task, with the consequence that the high priority task does not meet its deadline? Such a system would be useless for the intended purpose.

In a typical realtime system, the lowest priority null (idle) task could consume more than 50 % of the CPU on average over a long period. At periods of high demand, the null task will be starved. You could replace the null task with some other low priority "nice to have" work that can be starved in the same way.

Buffering in a real time system is acceptable when the length of a load burst as well as the frequency of occurrence of such bursts are well defined.

For instance, it is OK in a half duplex serial protocol to receive bytes at a higher rate than the higher level can handle, since the maximum length of the received frame is known and thus a sufficiently large buffer can be allocated in the receiving routine to avoid overrun. In a typical half duplex protocol, the next request will not arrive before a response has been sent to the previous request.

In some situations it may even be acceptable to delay transmitting the response as a flow control method, but still the routines that handle the reception of individual bytes must be fast enough to receive them at the rate the line speed allows.

Such systems are usable only in situations in which you have _no_ firm deadlines.

Paul

Reply to
Paul Keinanen

I think you're missing the point. If the CPU needs of the high-priority tasks ever become so high that a strictly priority-oriented scheduler doesn't have any time left to distribute to tasks below a certain level, then there's no room left for gracefully doing something clever. You've just hit the limit, and there *is* no graceful way of doing that, any more than there is for, say, a race car to "gracefully" slide off the track and into the fenders because the driver tried to take a turn too fast. For those who watched TV in the 80's, the background story of a certain "Max Headroom" might ring a bell here.

The alternative to starving low-priority tasks in such a situation is to starve *high* priority ones. Which, if the word "priority" means anything at all, would be clearly even worse.

Actually, this is why one school of watchdog implementation says that the watchdog kicking task should have the lowest priority of all --- in a situation like this, it'll be starved, and the doggy will bite because there's nothing else left to be done that could make any sense.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

... snip ...

I don't think it is quite that simple. In general, low priority tasks are low priority because the presence of buffers means that they can wait. So the failure condition is not that the low priority task is waiting indefinitely; it is that the high priority task finds that no further buffer space is available.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer
