cooperative multitasking scheme - Page 2

Re: cooperative multitasking scheme

Quoted text here. Click to load it

I can't agree with this.  In fact, there are much cleaner ways than I saw there.
A matter of opinion, perhaps, though.

Quoted text here. Click to load it

How would recursion solve anything here?  Your saying this makes me think you
haven't really experienced a well-designed and clean approach.  Sad, as it is
very easy to achieve (just a few lines of assembly code.)

Quoted text here. Click to load it

I implement a much fuller (and, I imagine, more cleanly designed) operating system
than you've illustrated and I use very, very few bytes per process as
bookkeeping.  If you don't enable semaphores, I implement ready and sleep queues
with one byte per process (+1 extra.)  Semaphore queues do take another byte per
queue.  The stack space is entirely up to you, so that can be as little or much
as you want.  Task state depends on the processor and the C compiler, but it's
usually quite small as there is no need to save registers that the compiler
assumes are scratched across calls (in a cooperative system.)  Preemption is
supported, if you want it.  But it costs some more process state to do it, plus
a timer resource.

Quoted text here. Click to load it

To me, this is dead wrong.  It is **exactly** the desire to write cleaner
application code -- code that doesn't have to bend over backwards trying to
save... and then somehow restore... both the earlier state and the place in the
code where it left off -- that motivates me to use a cooperative (or preemptive,
I suppose) operating system in the first place.  And with cooperative-only mode,
it takes almost no
code and can be easily written from the ground up and then added to an existing
project in an afternoon.  And I'm not kidding about that.  It's that easy.

Some years ago, a programmer working at a place I was contracting at needed to
untangle code that monitored a manually scanned keyboard, updated a display,
operated serial port outputs, etc.  It was a ..nasty.. mess because the
keyboard, for example, was polled and had to be polled 'often enough' while at
the same time many other various polled operations were also competing.  The
resulting application was a disaster waiting to happen, with all kinds of
various calls salted everywhere and... worst of all... lots and lots of weird
code added to these routines to save some odd nested condition state so that
they could return to the "shift key seen, but no other key yet seen" state to
continue the polling process.  Since these routines not only had to restore
variables but also had to return to the deeply buried nested condition states
they were in the last time they saved themselves, it was a nightmare to read and
modify.

I suggested adding a cooperative tasker.  He'd never done one before and was
feeling a little cautious about my recommendation because of that.  But I sat
down with him one morning and explained some details to him and suggested a line
of reasoning.  I did NOT write any code, except as pseudo code examples.  By the
next morning he came back over to my office and was ecstatic!  It was working!
He'd spent an afternoon working on the code and by a few hours into the next
morning had a working switcher.  Over the next weeks, he and I worked to unravel
the nastinesses in the existing application.  We never had any troubles with his
code.

It's dirt easy and there is no justification I can think of for writing
something like what I saw at your site when the clear and distinct advantages of
having an operating system in the first place are starkly absent.  If all I
needed to do was cause various routines to execute at different times, I might
just as well simply salt in the calls in various places in the code rather than
go to the long trouble of writing an "operating system" which really isn't
anything more than an extra complication without a distinguishing purpose.

Quoted text here. Click to load it

Saving and restoring state is common practice when you don't have an operating
system around to help out.  That's why you get a cooperative operating system in
the first place -- to allow you to focus on proper design and structure of the
application, separated along functional lines, and without having to confound it
with weird save/restore rube-goldberg contraptions that grow without bound.

It's why you do an O/S -- to avoid this common practice and produce something
that is easily read and modified with a low risk of breaking things.

Quoted text here. Click to load it

Nope.  Not so.  In two ways.  One is that you can keep separate stacks (which
can be done quite simply with nothing more than a static array that gets divided
up, if you want.)  In that case, there is no need for a longjmp stack unwind.
Quite the opposite.  Which is why you do it.  Another is something called a
"thunk" which can allow you to do coroutines on a single stack, using a nested
stack context that looks almost the same as a local variable on a stack frame
but which allows you to ping-pong between two tasks without losing a beat or any
context.  Very fast, nice, clean.

Two reasons why your comment misses the reality.  And there are probably more.
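
(To make the separate-stack idea concrete, here is a rough sketch of two tasks
ping-ponging on their own stacks.  It is written against the POSIX ucontext
calls purely so it will compile and run on a desktop; a small embedded switcher
would do the equivalent with a few lines of assembly and a static array carved
into per-task stacks.  Names and stack sizes are arbitrary.)

/* Illustration only: two tasks on separate stacks, cooperatively
   ping-ponging with swapcontext().  ucontext is used here only so the
   sketch is runnable as-is. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ctx_a, ctx_b;
static char stack_a[16384], stack_b[16384];

static void task_a(void)
{
    int i;
    for (i = 0; i < 3; i++) {
        printf("task A, pass %d\n", i);
        swapcontext(&ctx_a, &ctx_b);        /* cooperative switch to B */
    }
    swapcontext(&ctx_a, &main_ctx);
}

static void task_b(void)
{
    int i;
    for (i = 0; i < 3; i++) {
        printf("task B, pass %d\n", i);
        swapcontext(&ctx_b, &ctx_a);        /* cooperative switch to A */
    }
    swapcontext(&ctx_b, &main_ctx);
}

int main(void)
{
    getcontext(&ctx_a);
    ctx_a.uc_stack.ss_sp   = stack_a;       /* give A its own stack */
    ctx_a.uc_stack.ss_size = sizeof stack_a;
    ctx_a.uc_link          = &main_ctx;
    makecontext(&ctx_a, task_a, 0);

    getcontext(&ctx_b);
    ctx_b.uc_stack.ss_sp   = stack_b;       /* give B its own stack */
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link          = &main_ctx;
    makecontext(&ctx_b, task_b, 0);

    swapcontext(&main_ctx, &ctx_a);         /* start the ping-pong */
    return 0;
}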

Quoted text here. Click to load it

There will be, of course, because anyone trying to add complexity will face
daunting hurdles that would give pause to the best of programmers.

I frankly see no real value in how you are approaching this which cannot already
be purchased far more cheaply and easily in other ways in an application.  But
that's just me.  You are, of course, entitled to enjoy this process and sharpen
your programming claws on it.  But it's nothing I'd enjoy seeing or using.

Jon

Re: cooperative multitasking scheme
snipped-for-privacy@easystreet.com says...
Quoted text here. Click to load it
Pre-emption requires a timer?  I understand time-slicing would but surely
you can do preemption w/o a timer.  Useful yes.  Necessary?

Robert

Re: cooperative multitasking scheme
On Sun, 26 Sep 2004 12:08:21 -0400, the renowned R Adsett

Quoted text here. Click to load it

I don't see how.  However, it need not consume the resource entirely -- other
stuff could be chained off the same timer interrupt.


Best regards,
Spehro Pefhany
--
"it's the network..."                          "The Journey is the reward"
snipped-for-privacy@interlog.com             Info for manufacturers: http://www.trexon.com
Re: cooperative multitasking scheme
On Sun, 26 Sep 2004 16:11:25 GMT, Spehro Pefhany

Quoted text here. Click to load it

Yup.  My preference usually is to assign a timer to the exclusive use of the
operating system, so that the timer can be adjusted without worrying about what
that might imply elsewhere.  But it's not a requirement by any stretch.

Jon

Re: cooperative multitasking scheme
Quoted text here. Click to load it
... snip ...
Quoted text here. Click to load it

How little ingenuity shows up these days.  Back when many systems
had neither timers nor interrupts, the thing to do was intercept a
common system call, such as checking the status of an input
keyboard, and count these calls to generate a timer tick.
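
(A rough sketch of that idea, with made-up names throughout: wrap the status
poll the application already calls constantly, and count the calls to
synthesize a tick.)

/* Sketch only -- kbd_status_raw() and POLLS_PER_TICK are invented here.
   The point: a status poll that is already called "often enough" can be
   counted to fake a timer tick on a system with no timer or interrupts. */

static int kbd_status_raw(void)     /* stand-in for the real hardware poll */
{
    return 0;                       /* pretend nothing is pending */
}

static unsigned long poll_count;
static unsigned long tick_count;

#define POLLS_PER_TICK  1000UL      /* calibrate against the real call rate */

int kbd_status(void)                /* the application calls this wrapper */
{
    if (++poll_count >= POLLS_PER_TICK) {
        poll_count = 0;
        tick_count++;               /* one synthetic clock tick has elapsed */
        /* anything normally chained off a timer interrupt can run here */
    }
    return kbd_status_raw();
}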

--
Chuck F ( snipped-for-privacy@yahoo.com) ( snipped-for-privacy@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
Re: cooperative multitasking scheme
On Mon, 27 Sep 2004 09:41:30 GMT, the renowned CBFalconer

Quoted text here. Click to load it

That only shows up because it's generated by a hardware timer interrupt
(probably one out of many interrupts).  Don't let all the layers of
software obscure the true inner workings.


Best regards,
Spehro Pefhany
--
"it's the network..."                          "The Journey is the reward"
snipped-for-privacy@interlog.com             Info for manufacturers: http://www.trexon.com
Re: cooperative multitasking scheme

Quoted text here. Click to load it

The essence of preemption is that there must be some method by which the
operating system gains momentary control of the CPU, without cooperation by the
current process.  I suppose you could use any interrupt event to force a task
switch, but I'm just not sure how useful it would be to use a serial port
interrupt for such a purpose (for example.)  Timers are the traditional (and, I
think, sensible) way.

Jon

Re: cooperative multitasking scheme
On Sun, 26 Sep 2004 18:03:37 GMT, Jonathan Kirwan

Quoted text here. Click to load it

Why should the OS need to get regular momentary control of the CPU in
a preemptive system?  The only reason to reschedule the tasks is
when a higher priority task has changed state from blocked (waiting
for something) to runnable.  There are mainly two situations in which
this can happen: another task activates a signal or sends a message that
the blocked task is waiting for, or an interrupt service routine satisfies
a wait condition.

To simplify things further, the task-to-task communication can issue a
software interrupt, so that the interrupt handler can be nearly identical
in all cases.
  
Quoted text here. Click to load it

At the end of each interrupt service routine, be it for a serial line
interrupt, a timer interrupt or the software interrupt from task-to-task
communication, jump to the scheduler and scan through the task list in
priority order to find the highest priority runnable task and switch
to that task.

This is quite simple to implement in systems that always save the
full hardware context (the CPU registers) at each interrupt routine entry,
since the interrupt saves the context of task A at the beginning of
the interrupt.  If, after interrupt servicing and checking the task list, a
switch to task B should be performed, simply swap the stack pointers
and perform the return from interrupt on stack B.

The selected task may of course be the interrupted task, if the
interrupt did not make any higher priority task runnable.  Since this
is a common situation and no stack switching is required, some
shortcuts can be applied, which speeds up the return.  This is
particularly true on processors that do not save the whole HW
context at interrupt service routine entry, since the partial context
can be reloaded at return from interrupt.  However, when the task
switch must be done, the rest of the HW context must be saved onto
stack A, then the stack pointer switched, then the main context of
task B must be retrieved from stack B, and finally the return from
interrupt restores the minimal context of task B.
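
(A sketch of that tail-of-the-interrupt scheduling pass, purely illustrative:
the task table, the runnable flag and switch_stacks() are all invented here,
and the real stack-pointer swap is a few lines of target-specific assembly.)

#define NTASKS  8

struct task {
    void         *sp;        /* saved stack pointer while not running  */
    unsigned char runnable;  /* nonzero when ready to run              */
};

static struct task task_tab[NTASKS];    /* index 0 = highest priority  */
static int current;                     /* index of the running task   */

extern void switch_stacks(void **save_sp, void *load_sp);  /* asm stub */

/* Called at the tail of every interrupt service routine, after the ISR
   body has (possibly) marked some task runnable. */
void schedule_from_isr(void)
{
    int i;

    for (i = 0; i < NTASKS; i++)        /* first runnable = highest priority */
        if (task_tab[i].runnable)
            break;

    if (i < NTASKS && i != current) {
        int prev = current;
        current = i;
        /* Save the interrupted task's stack pointer, load the new one; the
           return from interrupt then completes on the new task's stack. */
        switch_stacks(&task_tab[prev].sp, task_tab[i].sp);
    }
}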

The timer interrupt (if used) does not require any special treatment;
it can be handled like any other interrupt.

For instance a simple system copying characters in both directions
between two serial ports should be doable with the UART interrupts
only. Of course a timer interrupt may be useful, if some timeout
control is required on the serial lines.

Paul


Re: cooperative multitasking scheme

Quoted text here. Click to load it

This may be true for very well-behaved tasks.  But what if you have
several equal-priority tasks that are all trying to be CPU hogs?
(say, because they need to poll for something that is not available as
an interrupt).  It is possible that no event would occur for a long
time that would change the state of any task.  So without some form
of time-slicing, the other tasks could be suspended indefinitely, even
though they are ready to run.  Although, strictly speaking, I guess the
term "pre-emptive multitasking" does not imply "time-slicing".


-Robert Scott
 Ypsilanti, Michigan
(Reply through this forum, not by direct e-mail to me, as automatic reply
address is fake.)

Re: cooperative multitasking scheme
On Sun, 26 Sep 2004 20:09:37 GMT, no-one@dont-mail-me.com (Robert

Quoted text here. Click to load it

This is the comp.arch.embedded newsgroup (not some Windows newsgroup :-),
so one can expect that all the tasks in a system are _designed_
to be well behaved.

The general rule of thumb for any priority based pre-emptive system is
that the higher the priority, the less work it should be doing.  It is
quite common that the lowest priority null/idle task (a busy loop or a
loop containing just the wait-for-interrupt instruction) executes
nearly constantly.  In a typical priority based system, when you scan
the task list, most of the tasks are idle waiting for some event from
an interrupt or from another task, and it would be quite rare to find
more than one task in a runnable state.
 
Quoted text here. Click to load it

It is generally a bad idea to have two or more tasks at the same
priority.  If you cannot decide any precedence between these two
tasks, there is usually something wrong with your design.  The only
exception is systems with fewer priority levels than there are
tasks, in which case, unfortunately, you have to put multiple tasks at
the same priority.
 
Quoted text here. Click to load it

To implement such a polling system, make a _high_ priority task which
consists only of a loop which tests the state of the hardware register
(if some change occurred, save the data and send a signal to a lower
priority task to do the actual processing) and then goes to sleep until
the next clock tick.

In this case, you also need a timer interrupt, which scans all tasks
waiting for a timer tick, marks them runnable, then rescans the task
list and performs a task switch to run the runnable task with the
highest priority.

If the high priority hardware poll task has detected that some work must
be done, it signals a lower priority task, which immediately becomes
runnable.  When the poll routine goes to sleep, and if there are no
other runnable high priority tasks, the lower priority working task
starts to run; but after the next clock interrupt, any high priority tasks
waiting for that interrupt can quickly run and then control
returns to the lower priority task.

This is just a matter of division of labour.  The high priority tasks
only check the hardware and go back to sleep, and the lower priority task
does the actual data processing.
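
(A sketch of that division of labour; the OS calls, the task id and the device
register below are all assumed names, not from any particular kernel.)

/* Sketch only.  High priority task: check the hardware, hand off, sleep.
   A lower priority worker does the actual processing when signalled. */

#define WORKER_TASK  3                        /* made-up task id */

extern volatile unsigned char HW_STATUS;      /* pretend device register */
extern void sys_sleep(unsigned ticks);        /* block until n ticks pass */
extern void sys_signal(int task_id);          /* wake the given task */

static unsigned char last_status;
static unsigned char captured;                /* data handed to the worker */

void poll_task(void)          /* high priority: tiny amount of work per tick */
{
    for (;;) {
        unsigned char now = HW_STATUS;
        if (now != last_status) {
            captured    = now;                /* save the data...           */
            last_status = now;
            sys_signal(WORKER_TASK);          /* ...and wake the worker     */
        }
        sys_sleep(1);                         /* sleep until the next tick  */
    }
}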

Quoted text here. Click to load it

Time slicing usually implies that a minimum quantum is granted for
each task at each clock tick. In a priority based pre-emptive system,
the clock interrupt is just an opportunity for reschedule (as are all
the other interrupts).

Paul


Re: cooperative multitasking scheme

Quoted text here. Click to load it

OK, here's the project I'm working on right now:  It is a
three-station test stand that tests transmission solenoid valves in a
factory production environment.  The three stations run the same test
sequence, but they run independently.  This is naturally implemented
as three equal-priority tasks.  Is this a bad design?  How would you
do it differently?


-Robert Scott
 Ypsilanti, Michigan
(Reply through this forum, not by direct e-mail to me, as automatic reply
address is fake.)

Re: cooperative multitasking scheme

Quoted text here. Click to load it

I'm not sure what you imagine is meant by "preempt" then.  In my mind it means
"not cooperative."

Quoted text here. Click to load it

That's cooperation.  The software interrupt is nothing more than a call which is
known to be able to switch tasks.

...

Before we go further, Paul, let me say a couple of things.

The main distinction to me between preemptive systems and cooperative ones, the
thing that distinguishes them more than anything else I can think of, is the
amount of process state that needs to be found and saved/restored.  Preemptive
systems can interrupt a process at *any* point in time, not just selected
points, and because of this the amount of important state can be much, much
greater.  There may be important hardware states in semi-completed conditions
that must be saved and restored, there may be important floating point static
variables that need to be saved and restored, there will probably be more
registers needing a save/restore, etc.  A software event, such as a "software
interrupt" which is nothing more than a software function call, does not usually
imply any of these additional burdens.  However, hardware interrupts usually do.

Basically, it's the complexity of the resulting product that is implied by a
cooperative-only or a preemptive system.  Now, you can try to divide this along
other lines of thought -- but then it's not along the line of the practical work
needed.  I tend to look at this from an implementer's point of view, as I
implement these regularly and routinely.  And from that viewpoint, the concept
of preemption is about whether or not a process is interrupted at places where
that process isn't consciously involved.  When the current process executes a
"software interrupt" event (for example, sending a message to a higher priority
process), it is (in my mind) cooperatively relinquishing control.  That is NOT
preemption to me.

...
 
Quoted text here. Click to load it

Here, to me, you are conflating several things.  Serial line interrupts are, in
fact, preemptive events.  Software interrupts, such as message passing, are not.
To conflate them together just because English sadly uses the same word or term
in saying 'interrupt' does not make them the same thing, inside the box so to
speak.

Regarding the preemptive parts of what you are talking about, for example the
serial line interrupt or timer interrupt, you can indeed do what you are saying.
In fact, one often does.  A low-level serial interrupt may mean that a buffer
goes from empty to not-empty and a process may be waiting on this event.  Or the
incoming serial character may be posted to a process as a message, causing it to
"wake up."  So, please don't misunderstand me here.  I'm not suggesting that a
timer is the ONLY way that an operating system may preempt a process.

I'm just saying that in the absence of interrupts from hardware functional units
other than a timer, a timer will be needed in order to round-robin processes of
equal priority (where implemented) or where priorities aren't specifically used.

But you are right in the sense that preemption can reasonably come from other
sources.  Sorry about not making that clear.

Quoted text here. Click to load it

Jon

Re: cooperative multitasking scheme
snipped-for-privacy@easystreet.com says...
<snip>
I think that's the question I was trying to ask.  I didn't see where timers
were needed to make a system preemptive.  They're needed for time-slicing at a
single priority, but the only systems I've used that used timeslicing are
larger systems (mainframes, workstations, desktops), although I know they
are used in some embedded systems.

On the other hand every embedded system I've dealt with required a timer
whether it was pre-emptive, cooperative or just a big loop. While I can
imagine systems (preemptive or otherwise) that don't need a timer, I keep
wanting to add timeouts to deal with issues like corrupted data and
broken communication lines. And the systems that don't require timeouts
seem to require pacing.

Mostly I was just wondering if I'd missed a subtle point that forced pre-
emptive systems to have a timer, or maybe a definition that was different
than I expected.

Robert

Re: cooperative multitasking scheme

Quoted text here. Click to load it


Generally, you'll need timers for embedded operating systems.  Almost every such
system I've worked on required some kind of precision timing mechanism for
certain kinds of processing.  So it's almost always a good idea to plan for
that.  Using a timer is almost second nature to me, because I almost always
implement sleep queues even when the system is running cooperatively and not
preemptively.  In cooperative systems, where I choose not to use preemption at
all, the action of the timing event is to move the process from the sleep queue
to the ready queue but not to yank control to the new process, even if it has a
higher priority.
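
(To make that concrete, here is a sketch of a tick handler along those lines,
using an invented delta-list node layout.  Note that it only moves processes;
it never switches away from the one that is running.)

/* Sketch only: a delta sleep queue whose head counter is decremented each
   tick; expired entries move to the ready queue, but the running process
   is NOT preempted. */

struct pnode {
    struct pnode *next;
    unsigned char delta;       /* ticks remaining after the one before it */
};

static struct pnode *sleep_q;  /* sorted by cumulative delay (delta list) */
static struct pnode *ready_q;  /* here: a simple FIFO of ready processes  */

static void make_ready(struct pnode *p)      /* append to the ready queue */
{
    struct pnode *t;

    p->next = 0;
    if (!ready_q) {
        ready_q = p;
        return;
    }
    for (t = ready_q; t->next; t = t->next)
        ;
    t->next = p;
}

void clock_tick(void)                        /* called from the timer ISR */
{
    struct pnode *p;

    if (!sleep_q)
        return;
    if (sleep_q->delta)
        sleep_q->delta--;
    while (sleep_q && sleep_q->delta == 0) {
        p = sleep_q;                         /* expired: unlink it...     */
        sleep_q = p->next;
        make_ready(p);                       /* ...and make it ready      */
    }
    /* No rescheduling here: the current process keeps the CPU until it
       cooperatively calls into the O/S. */
}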

Quoted text here. Click to load it

I use simple round-robin in preemptive embedded systems, often enough, where I
don't really see a need for (or cannot afford the cost of) a priority system.  A
process does NOT have to use its entire slice, you know.  For example, if I have
a process to (1) update the display, (2) scan and poll a keyboard input, (3)
handle RS-232 parsing and query response, and (4) perform rather lengthy
computations on inputs and then update an output; then I may use no priorities
at all but still use preemption so that I avoid excessively frequent use of
switch() calls.  If the O/S preempts every 20ms, for example, it may switch to
the keyboard polling task, which may already be sitting in a scanning loop that
just moves the scan line to the next one, checks for anything to do, and if
nothing just calls switch() inside the loop before continuing again.  In this
way, that process gives up any remaining time in its slice, voluntarily, because
it knows that there is nothing to do now.  (The keyboard does not, in this case,
generate any hardware interrupt events at all.)  But the task that performs the
long calculations doesn't need to salt those calculations with frequent switch()
calls, because the 20ms quantum expiry will interrupt an in-progress calculation
to ensure that some time goes back to the polling keyboard routine every so often.

Probably a somewhat weak example, but perhaps it makes the idea clear enough for
you to construct a better one in mind.
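
(Roughly the shape of those two tasks, with invented names throughout -- the
yield call is spelled switch_() below only because switch is a C keyword.)

/* Sketch only: two tasks under a 20 ms round-robin quantum.  One yields
   early whenever it has nothing to do; the other relies on preemption. */

extern void switch_(void);              /* assumed cooperative yield call */
extern void kb_scan_next_line(void);    /* advance the keyboard scan      */
extern int  kb_pending(void);           /* anything captured to process?  */
extern void kb_process(void);
extern void compute_step(void);         /* one slice of the long math     */
extern int  compute_done(void);
extern void publish_result(void);

void keyboard_task(void)                /* yields early, almost always    */
{
    for (;;) {
        kb_scan_next_line();
        if (kb_pending())
            kb_process();
        else
            switch_();                  /* nothing to do: give up the rest
                                           of the 20 ms slice voluntarily */
    }
}

void compute_task(void)                 /* never yields explicitly; the   */
{                                       /* quantum expiry preempts it     */
    for (;;) {
        while (!compute_done())
            compute_step();             /* lengthy number crunching       */
        publish_result();               /* update the output, start over  */
    }
}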

Quoted text here. Click to load it

I very much appreciate having a sleep queue.  A timer advances the time and may
move a process from the sleep queue to the ready queue and is needed for that
purpose, if you want any reasonable chance at having regular intervals for
processes.  On DSPs, I've used timing specs for a process as fine-grained as
2us, with no more than 100ns restart jitter.  Decent for much work.

Quoted text here. Click to load it

Hopefully, I relaxed your worries about that.

Jon

P.S.  By the way, just by way of possible interest, here's the configuration
header for the O/S I wrote for tiny embedded processors with scarce RAM.
Mainly, I'm adding this so you can read the comments which explain some things
it does:

/*
    File:   config.h
    Author: Jonathan Dale Kirwan

    Creation Date:  Fri 24-Jan-2003 11:42:28
    Last Modified:  Sun 16-Nov-2003 12:37:21

    Copyright 2003, Jonathan Dale Kirwan, All Rights Reserved.


    DESCRIPTION

    This file defines the normal compile-time configuration criteria for the
    operating system and some general symbolic constants used throughout.  For
    those configuration parameters requiring a yes/no answer, set them to zero
    for no or to disable them and any non-zero value for yes (preferably 1.)

    In addition, this file provides the type/size of certain key items in the
    operating system; such as the size of the sleep and semaphore counters.


    TARGET COMPILER

    This module is designed to be compiled with general purpose C compilers.
 
 
    MODIFICATIONS
 
    Original source.
*/

#ifndef SYS_KERNEL_DECL


/*  SYS_RTCLOCK
    --------------------------------------------------------------------------
    Whether or not to include a real-time clock for the operating system is
    the central decision a user must make, in configuring this operating
    system.  A real-time clock is not necessary, but without it time doesn't
    advance on its own, so there is no possibility for automatically moving
    processes from the sleep queue to the ready process queue or for pre-
    empting the currently running process.  Without the real-time clock as a
    resource, there is no determinism for sleeping processes, no round robin
    sharing of CPU time, and no pre-emptive relinquishing of the CPU by a
    lower priority process to a higher priority process - it all becomes a
    matter of process co-operation.

    Making a real-time clock available has a price to pay.  Some CPU time is
    used by the clock event handler each time the clock interrupts the CPU.
    If the clock is operated too rapidly the time spent in its event handler
    can consume nearly 100% of the CPU time, leaving almost nothing left for
    normal process operations.  So be careful about deciding the interval.
    
    It also uses up a hardware timer resource, which may be scarce or used for
    something else more important to the application.  Further, any kind of
    interrupt handling may require some non-portable feature in C (a #pragma,
    for example, to designate a function as an interrupt function) or else
    some assembly language and linker support in order to properly locate code
    at the right 'vector address.'

    So, enable this feature if either automatic movement of sleeping processes
    to the ready process queue or pre-emption of the currently running process
    is needed.  If co-operating processes are sufficient to get the job done,
    this feature can be disabled.
*/
#ifndef SYS_RTCLOCK
#define SYS_RTCLOCK     (1)
#endif


/*  SYS_SLEEPQ
    --------------------------------------------------------------------------
    Whether or not providing a real-time clock enables automatic movement of
    sleeping processes, a user may choose to support the sleep queue.  A sleep
    queue is simply a place for processes waiting until some period of time
    has expired.  Processes are added into the sleep queue, in an order
    determined by the delay they specify.

    If the real-time clock is available, a counter for the top process in the
    sleep queue is decremented and, when it reaches zero, the top process is
    moved from the sleep queue to the ready process queue.  If there are any
    processes immediately following it in the sleep queue, which also have a
    zero counter, they are also moved at this time.  (This does not cause a
    rescheduling event, so moving higher priority sleeping processes to the
    ready process queue doesn't preempt the currently running process.)

    Supporting a sleep queue without a real-time clock is a bit unusual -- as
    normally there is no automatic change in their counters and no automatic
    motion of processes from the sleep queue to the ready process queue unless
    the currently running process performs these functions as it deems
    appropriate.  The behavior of the operating system, in this case, is to
    automatically start the top entry in the sleep queue when there are no
    remaining processes running (the last running process puts itself to sleep
    or else waits on a semaphore or message.)  This can actually be useful in
    cases where no preemption is desired but where some processes should run
    more frequently than others when cooperatively switching, using the delta
    time sleep queue as a way of achieving that process shuffling.

    (If you are curious about how this may be achieved, consider the idea of
    four processes where P1 should run 4 out of 8 switches, P2 should run 2
    out of 8, P3 should run 1 out of 8, and P4 should run 1 out of 8.  If a
    delta sleep time for P1 is given as 1, a delta time for P2 as 2, a delta
    time for P3 as 4 and a delta time for P4 as 4 and then these processes use
    this associated value to sleep on when they want to cooperatively switch
    away, then the arrangement will work as desired.)

    There is a secondary effect to specifying support for sleep queues.  A
    sleep queue requires a counter for each process to support timing their
    motion back to the ready process queue.  This will increase the RAM
    requirements.  For systems with very little RAM, this must be a
    consciously considered decision.
*/
#ifndef SYS_SLEEPQ
#define SYS_SLEEPQ      (1)
#endif


/*  SYS_PRIORITIES
    --------------------------------------------------------------------------
    Enabling process priorities allows the operating system to organize the
    ready process queue by process priority.  Disabling this feature means
    that all processes have, in effect, the same priority.  The operating
    system doesn't prevent processes from having equal priority -- processes of
    the same priority are simply added into the ready process queue after all
    processes of equal or higher priority.

    Process priorities have their expected impact when preemption isn't turned
    on -- nothing takes place until the current process attempts to resume a
    process (sleeping, waiting on a semaphore, or waiting on a message) with a
    higher priority, tries to change the priority of another ready process to
    a value higher than its own, or else tries to cooperatively reschedule.
    It is only at these times that the operating system may switch to a higher
    priority process.

    There is a secondary effect when enabling process priorities.  The process
    priority feature requires adding a priority value for each process to
    support sorting them in the ready process queue.  This will increase the
    RAM requirements.  For systems with very little RAM, this must be a
    consciously considered decision.
*/
#ifndef SYS_PRIORITIES
#define SYS_PRIORITIES  (1)
#endif


/*  SYS_MESSAGING
    --------------------------------------------------------------------------
    Messages provide a method of unsynchronized process coordination.  They
    are particularly useful, in contrast to semaphores, when a process
    doesn't know how many messages it will receive, when they will be sent, or
    which processes will send them.

    These messages are posted directly to the process and are not of the type
    which are left at rendezvous points, since this operating system is
    designed for very small RAM requirements.  Processes do not block when
    sending messages and only the first message sent to a process is retained,
    if several messages are sent to it before it can receive them.

    Enabling this feature allocates room in each process node to hold the
    latest message as well as code space for the support routines (of course.)
*/
#ifndef SYS_MESSAGING
#define SYS_MESSAGING   (1)
#endif


/*  SYS_QUANTUM
    --------------------------------------------------------------------------
    Set this to zero in order to disable pre-emption of the currently running
    process.  Any positive value will enable the feature and specify the
    number of real-time clock ticks required in order to generate a
    rescheduling event.  A negative value has 'undefined' meaning.

    Enabling pre-emption and the real-time clock allows the operating system
    to automatically select higher priority processes in the ready process
    queue and allows round robin sharing of CPU time for processes with equal
    priority.  Without these features, the only way a higher priority process
    can get control is if the currently running process voluntarily requests
    rescheduling.

    The real-time clock is required for automatic action by the operating
    system.  Without the real-time clock, the quantum of the currently running
    process isn't automatically updated and thus cannot expire -- so the
    operating system cannot automatically generate a rescheduling event.
    Enabling pre-emption this way, without the real-time clock, is permitted
    but it isn't very useful.

    A secondary effect of enabling pre-emption is that a memory location is
    allocated for the remaining quanta available to the currently running
    process.  It's only a small addition, but on memory starved systems it may
    be important.
*/
#ifndef SYS_QUANTUM
#define SYS_QUANTUM     (25)
#endif                  


/*  SYS_PLIMIT
    --------------------------------------------------------------------------
    This value sets a limit on the number of allowable processes.  Naturally,
    this affects the required RAM used to support them, as each process
    requires a queue node to keep track of which queue the process is in as
    well as space for process-specific state information.  This value must be
    positive and probably greater than 1.
*/
#ifndef SYS_PLIMIT
#define SYS_PLIMIT      (20)
#endif


/*  SYS_SLIMIT
    --------------------------------------------------------------------------
    This value sets a limit on the number of distinct semaphore queues
    supported by the operating system.  A zero value disables semaphore
    support, entirely.

    Semaphores provide a simple method of process coordination, often used to
    synchronize their actions or cooperate in sharing common resources.  Each
    semaphore consists of an integer count, conceptually, with wait(s) calls
    decrementing the count and signal(s) calls incrementing it.  If the
    semaphore count goes negative due to a wait(s) call, the process is
    suspended or delayed.  In that case, the next signal(s) call will release
    exactly one suspended process.  Semaphores are a synchronized method of
    process coordination, essentially requiring one wait(s) for each signal(s)
    call.

    Each semaphore queue requires a small amount of RAM for a queue head and
    tail node.
*/
#ifndef SYS_SLIMIT
#define SYS_SLIMIT      (10)
#endif


/*  SYS_DLINKEDQ
    --------------------------------------------------------------------------
    The system queues may be configured as either singly linked lists or
    doubly linked lists.  Enabling the doubly linked lists provides a faster
    response from the operating system when moving processes from one queue to
    another.  But it does so at the expense of an extra link in RAM for each
    queue node.

    This is essentially a space versus speed option.  Normally, doubly linked
    lists are desired.  But if RAM is very tight, disabling this option could
    help.  The amount of help will depend on the size of the qnode pointers
    (usually 1 byte) and the number of processes and queues.  As I said, it is
    not a lot, but there are times when every byte counts.
*/
#ifndef SYS_DLINKEDQ
#define SYS_DLINKEDQ    (1)
#endif


/*  sys_status_t, SYS_FAIL, SYS_SUCCESS
    --------------------------------------------------------------------------
    Operating system function calls often return a success/fail status result.
*/

typedef int sys_status_t;
#define SYS_FAIL            (0)
#define SYS_SUCCESS         (1)


/*  sys_priority_t, SYS_PRIORITY_MIN, SYS_PRIORITY_MAX, SYS_PRIORITY_DEFAULT
    PRIORITY_MAXKEY
    --------------------------------------------------------------------------
    Configure as appropriate for process priority support.  These may use
    either signed or unsigned types, as desired.

    SYS_PRIORITY_MIN and SYS_PRIORITY_MAX aren't used for more than some error
    checking, when calling system functions to set process priorities.  Just
    make sure they are in the valid range of sys_priority_t and, of course,
    that the minimum value is less than the maximum value.  The only other
    requirement is that PRIORITY_MAXKEY be greater than SYS_PRIORITY_MAX.
    PRIORITY_MAXKEY is used for the tail entry in queues, so it needs to be
    unique and larger than the greatest possible priority value.

    SYS_PRIORITY_DEFAULT is only used when creating processes and shouldn't be
    relied on.
*/

#if SYS_PRIORITIES != 0
typedef signed char sys_priority_t;     /* process priorities */
#define SYS_PRIORITY_MIN    (-100)      /* minimum allowed priority */
#define SYS_PRIORITY_MAX    (100)       /* maximum allowed priority */
#define SYS_PRIORITY_DEFAULT (0)        /* default priority, when created */
#define PRIORITY_MAXKEY     (127)       /* larger than highest priority */
#endif


/*  sys_sleeptimer_t, SLEEP_MAXKEY
    --------------------------------------------------------------------------
    Configure as appropriate for the dynamic range needed by the sleep queue.
    These values aren't allowed to be negative and are measured in timer ticks,
    so it's probably better to keep them as unsigned values.  (But signed values
    will not harm anything.)

    Remember that we are talking about timer ticks here, not seconds (unless
    your timer ticks away according to second intervals.)
*/

#if SYS_SLEEPQ != 0
typedef unsigned char sys_sleeptimer_t; /* delta-time value */
#define SLEEP_MAXKEY        (255)      /* larger than any valid delay */
#endif


#define SYS_KERNEL_DECL
#endif                  /* #ifndef SYS_KERNEL_DECL */

Re: cooperative multitasking scheme
On Sun, 26 Sep 2004 20:09:41 GMT, Jonathan Kirwan

Quoted text here. Click to load it


If a new task starts to run immediately as a consequence of a device
interrupt (and not by waiting for the next time quantum or some
cooperative yield operation) then the system is definitively
pre-emptive.  If there are additional methods to trigger this task
switch (such as a cooperative message transmission between tasks) the
system is still basically pre-emptive, i.e. the system must still be
able to switch tasks between any two instructions, which requires that
the full task context can be saved for each task at any point.
 
Quoted text here. Click to load it

This is a matter of semantics.

The task switching is still implemented in a pre-emptive way.  This is
easy to see in a multiprocessor system, when a task sends a signal on
processor A, which can cause a completely unrelated task on processor
B to be pre-empted so that a high priority task on processor B, waiting
for a signal from the process on processor A, starts to run.

It is of course true that in any priority based system, it is up to
the designer to ensure that a sensible amount of work is done at each
priority level, so that all real-time deadlines are met.  In this sense
any priority based system is "cooperative", since you cannot
let anyone insert a random new task at a random priority level
without first checking the general division of labour in that system.

Quoted text here. Click to load it

In a single processor system, the only thing that can cause a task to
be switched at any point is the interrupt and the state changes due to
the interrupt.

On a single processor system, one task cannot pre-empt another
task, since it is not running _at that point of time_; by
definition, the process that sets the event etc. is the one running at
that particular point of time.

On a multiprocessor system the pre-emption can happen at any time.
 
Quoted text here. Click to load it

Interrupts usually do not automatically save the floating point
registers.

In a pre-emptive system it is stupid to use static variables for
holding floating point values etc.  The only sensible place is the
stack, so when there is a task switch, the variables are already in
the per task stack and there is no need to save and restore them.  On
larger systems, it is common to allocate a per task static area just
above the stack; when doing a task switch, a base pointer register
is set to the beginning of the per task static area and the
stack pointer is switched as well.  In addition to the HW register
saves for task A, you just need to switch the stack pointer to the
other task, which pops the static data pointer and other HW
registers from stack B before continuing execution.

Paul


Re: cooperative multitasking scheme

Quoted text here. Click to load it

Indeed.


That's your definition for the term.  It's not mine.  I can implement a message
pass which makes a process ready and current with less concern about hardware
and process state than I can in the case of a hardware interrupt that does the
same.  There are several CPUs that come to mind, for example, where
the hardware state must be backtracked in the face of hardware preemption and
where I don't even have to consider worrying about this when a process calls to
pass a message.  Quite different things to an implementer.

I think you are looking at this more as a consumer, not a producer, of operating
system software.  I can accept your term for talking from that point of view,
but in writing the core code (as I do), I use my meaning and not yours when I
consider and lay out the work ahead.

Quoted text here. Click to load it

No, it's a matter of what work I have to do in implementation.  And that is not
semantics to me, it's effort and the necessary diligence I have to apply.  I can
do a cooperative system in a matter of hours.  Not a preemptive one (from my use
of the terms.)

Quoted text here. Click to load it

Luckily, it's been more than four years since I had to deal with multiprocessor
(Intel 4-CPU) systems.  In the small system embedded work I specialize in today,
it's not in my vocabulary.

In the case you are talking about, it would come back again to whether or not
the interrupted process cooperated in that interruption.  The work involved
differs some on this point.  And making an O/S call that results in switching
away is "cooperation" in my use of the term.

Quoted text here. Click to load it

More conflation, I fear.  But I won't bother teasing it apart, just now.

Quoted text here. Click to load it

An interrupt will NOT switch processes in a system that switches only
cooperatively.  It may *move* processes from one state or queue to another, but
the running process does not change until it makes a call to the O/S, in some
form or another (for example, to send a message, change a priority of another
process, switch() away, etc.)  In a preemptive system, yes.

Quoted text here. Click to load it

Not sure what I said that stimulated this.  But ... sure.

Quoted text here. Click to load it

Through hardware events, yes.  If it is only a matter of cooperation, which it
can be, then the message-receiving process on CPU B may be moved from a waiting
condition to the ready queue, but not started.  If preemption is permitted, then
it may also start, as well.  My use of terms is based on what I have to watch
out for in writing the O/S, not what some user of the O/S imagines about what
happens.

Quoted text here. Click to load it

They do and don't.  On Intel CPUs, they usually just set a bit and allow an
interrupt to incur the cost of saving on first re-start in the process.  But it
sounds as though you haven't been using as many processors as I have for the
last 30 some years.

Quoted text here. Click to load it

Of course it is.  But that doesn't stop it from occurring.  If you have had much
experience in the 1980s (which it sounds as though you have NOT) in writing an
O/S for the Intel x86 with the various compilers that were available and for the
various incarnations of real PCs which may or may not even have an FP CPU on
board, then you'd know about the stupidities that had to be contended with for
the existing FP techniques used then.  And that wasn't the only situation I've
encountered like that, before and after.

Quoted text here. Click to load it

As I told many a compiler tool vendor on the phone.

Quoted text here. Click to load it

Sounds like you live in a perfect world that I've not seen.

Quoted text here. Click to load it

Well, you are just belaboring what "should be" and not what I've experienced, in
practice writing O/S for actual systems.  Sometimes, we are lucky enough.
Sometimes, not.

But regardless, preemption means a very specific thing to me as an implementer.
I wish you the best with your viewpoint.  I won't be changing mine, though.

Jon

Re: cooperative multitasking scheme
On Mon, 27 Sep 2004 06:37:28 GMT, Jonathan Kirwan



Quoted text here. Click to load it

Yes, I have been mainly an OS service consumer for the last 20 years,
but before that I also used and maintained small pre-emptive kernels
for 8 bitters, so I know what is reasonable to expect and what is not.

Quoted text here. Click to load it

No disagreement about this, but I did not comment on cooperative
systems.

Quoted text here. Click to load it

yes.

< a few chapters deleted, in which we apparently agree on the
contents, but have a disagreement about the naming of various things >
 

Quoted text here. Click to load it

On 8 bitters such as the 8080/8085/Z80/680x, this was a real issue if you
only had the object code for the library.  Disassembling and changing
the addressing mode to be relative to some base register was not an
option if the processor did not have a suitable addressing mode, or if
all the available registers capable of being used as base registers
were used for something else.  In these situations you really had to
copy the local floating point registers to the stack.

However, with the x86 family with segmentation enabled, you should be
able to use separate data and stack segments for each task and thus
have individual FP registers for each task in memory.

Quoted text here. Click to load it

This is an issue if you emulate each FP opcode by an interrupt (trap)
service each time one is encountered on a non-FP CPU.  Using
subroutine calls instead, the library code can select either the FP
opcode when available or emulation in the local stack or local
data segment if no FP support is available.  Many compilers could be
forced to issue subroutine calls for floating point operations.

Paul


Re: cooperative multitasking scheme
Quoted text here. Click to load it

The main purpose of the timer interrupt is the representation of time and
the release of periodic tasks (which are very common, e.g. control loops).  A
periodic task gives up execution after its work is done and blocks until
the next period.

One common implementation is a clock tick.  The interrupt occurs at a
regular interval (e.g. 10 ms) and a decision has to be taken whether a
task has to be released.  This approach is simple to implement, but there
are two major drawbacks: the resolution of timed events is bound by the
resolution of the clock tick, and clock ticks without a task switch are a
waste of execution time.

A better approach is to generate timer interrupts at the release times of
the tasks.  The scheduler is then responsible for reprogramming the timer after
each occurrence of a timer interrupt.  The list of sleeping threads has to
be searched to find the nearest release time in the future of a higher
priority thread than the one that is being released now.  This time is used
for the next timer interrupt.
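
(A sketch of that reprogramming step, simplified to skip the higher-priority
filter; the thread table and the timer_arm() call are invented names.)

/* Sketch only: after servicing a timer interrupt, find the nearest
   future release time among the sleeping threads and arm a one-shot
   timer for it. */
#include <limits.h>

#define NTHREADS 8

struct thread {
    unsigned long release_time;     /* absolute time of next release */
    unsigned char sleeping;         /* nonzero while waiting for release */
};

extern struct thread threads[NTHREADS];
extern void timer_arm(unsigned long at);    /* program the one-shot timer */

void reprogram_timer(unsigned long now)
{
    unsigned long next = ULONG_MAX;
    int i;

    for (i = 0; i < NTHREADS; i++)
        if (threads[i].sleeping
            && threads[i].release_time > now
            && threads[i].release_time < next)
            next = threads[i].release_time;

    if (next != ULONG_MAX)
        timer_arm(next);            /* no interrupts until something is due */
}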

Martin
----------------------------------------------
JOP - a Java Processor core for FPGAs:
http://www.jopdesign.com/




Re: cooperative multitasking scheme

Quoted text here. Click to load it
     How does any scheme that relies on priority queues avoid starvation
of low-priority tasks? This seems to be an inherent disadvantage versus
the round-robin varying-time schemes.


Re: cooperative multitasking scheme

Quoted text here. Click to load it

In general?  This cannot be answered.  But in specific circumstances, it's
rather easy to imagine useful cases.  Usually, the higher priority processes
will wait for messages or semaphores and are related to low-level hardware
interrupt routines which may put something in a buffer to be serviced by the
high priority routine (which the low-level code awakens when it captures
something to be processed.)  In this case, the reason they are high priority is
so that they can pay attention to incoming data in a timely way and not so they
can just hog the CPU.  Until data arrives, they aren't in the ready queue.

Quoted text here. Click to load it

Round robin is usually just time-based preemption when a quantum expires.  Round
robin can operate, even in cases where different priorities are permitted for
processes, when more than one of the highest priority, ready-to-run processes
have the same priority.  Admittedly, round robin does depend on at least two
processes having the same priority (or no priorities, at all) but round robin
and priority support aren't inherently mutually exclusive, unless all processes
always have different priorities assigned to them.
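
(For what it's worth, a sketch of that interplay: on quantum expiry the ready
queue, assumed sorted by priority, is rotated only among the processes sharing
the top priority.  The structures and names are invented.)

/* Sketch only: rotate the ready queue among equal-priority processes when
   the running process (assumed still at the head) exhausts its quantum. */

struct proc {
    struct proc *next;
    signed char  priority;
};

static struct proc *ready_q;      /* head = highest priority, kept sorted */

struct proc *round_robin_pick(struct proc *just_ran)
{
    struct proc *head = ready_q;

    if (head && head == just_ran && head->next
        && head->next->priority == head->priority) {
        /* Move the head behind the last entry of equal priority, so the
           next process at the same priority gets the CPU. */
        struct proc *p = head;
        while (p->next && p->next->priority == head->priority)
            p = p->next;
        ready_q    = head->next;
        head->next = p->next;
        p->next    = head;
    }
    return ready_q;               /* next process to run */
}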

Jon
