Has this wheel already been invented?

There are all sorts of preemptive kernels, RTOSes, and frameworks out there, open source and commercial. There are priority-driven preemptive systems, time-sliced preemptive systems, and others.

But preemptive systems have their costs, namely context-switch time and the RAM needed to store each task's context. Even cooperative coroutines have context-switch and space overhead.

On the other end of the scale is pure cooperative multitasking, as exemplified by the classic "super loop":

while (1)
{
    task1();
    task2();
    /* ... */
    task99();
    task100();
}

In most cases, each of these tasks calls some lower-level routine (timer, communication interface, whatever) to see if there's actually any work for it to do. If not, it just returns. The advantage, of course, is that there is no context overhead, either in space or time.

This can be detrimental to response time if an event comes in for a task just after it has had its turn in the super loop. It won't see its event until after every other task has been called, whether the other tasks really have any useful work to do or not.

What I need, and I am planning to develop, is a cooperative task scheduler to replace the super loop. Inter task communication, and ISR to task communication, will go through a kernel to set event flags, timer flags, put messages in queues, etc., and the kernel will track which tasks are actually ready because they have pending events.

After any task returns, the scheduler will execute the highest priority ready task.
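Something along these lines, as a very rough sketch only (all names and sizes below are placeholders, nothing is tested):

#include <stdint.h>

#define NUM_TASKS 8                     /* fixed at build time */

typedef void (*task_fn)(void);

static task_fn task_table[NUM_TASKS];   /* index == priority, 0 highest; filled at init */
static volatile uint8_t ready_flags;    /* one bit per task, set via kernel calls */

/* Called (with interrupts masked, in real code) by ISRs or tasks when
   they post an event, timer flag or message to 'task_index'. */
void kernel_set_ready(uint8_t task_index)
{
    ready_flags |= (uint8_t)(1u << task_index);
}

/* Replacement for the super loop: after any task returns, run the
   highest-priority task that has a pending event.  Nothing is ever
   preempted; a task always runs to completion and returns here. */
void scheduler(void)
{
    for (;;) {
        uint8_t t;

        for (t = 0; t < NUM_TASKS; t++) {
            if (ready_flags & (1u << t)) {
                ready_flags &= (uint8_t)~(1u << t);  /* mask interrupts around this in real code */
                task_table[t]();
                break;          /* rescan from the top (highest priority) */
            }
        }
        /* If no bits were set: idle, or sleep until an interrupt sets one. */
    }
}

The real version would also put event counters, timer flags and message queues behind kernel_set_ready(), but the scheduling decision itself stays this small.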

My question is, does anyone know of any kernel, RTOS, whatever, that's implemented this way? I'd look at the documentation and API details for any that exist, not their source code.

Knowing whether or not this wheel has already been invented will be handy when I reinvent it.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein


Salvo?

formatting link

--
Dan Henry
Reply to
Dan Henry

This sounds like the 'task queue' mechanism that Linux has. Check this link for an explanation:

formatting link

Reply to
Arlet

This is basically a non-preemptive scheduler. Any RTOS can give you the exact behavior you desire with no hassle. Look at uC/OS-II for an example; it has its source available and a book.

---Matthew Hicks

Reply to
Matthew Hicks

Have you looked at the co-routine functionality in FreeRTOS.org? FreeRTOS.org has the following concepts:

Co-routines - These are prioritised lightweight tasks that share a stack, and there is no context switch as such. The overhead is no more than a function call (a rough sketch follows below).

Cooperative tasks - basically tasks but with the kernel set to cooperative mode. Each task maintains its own stack and a full context switch is performed to switch from task to task.

Pre-emptive tasks - as cooperative tasks but with the kernel set to pre-emptive mode.
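For what a co-routine looks like, roughly (written from memory, so check croutine.h and the co-routine examples in the download for the exact names; co-routines also need configUSE_CO_ROUTINES set to 1):

#include "FreeRTOS.h"
#include "croutine.h"

/* A co-routine is an ordinary C function built around the crSTART/crEND
   macros.  Locals are not preserved across a blocking call such as
   crDELAY(), so anything that must survive one has to be static. */
void vExampleCoRoutine( CoRoutineHandle_t xHandle, UBaseType_t uxIndex )
{
    crSTART( xHandle );

    for( ;; )
    {
        /* ... do a slice of work here ... */
        crDELAY( xHandle, 50 );   /* yield, run again 50 ticks later */
    }

    crEND();
}

void vCreateCoRoutines( void )
{
    /* Created once at start-up: priority 0, index 0. */
    xCoRoutineCreate( vExampleCoRoutine, 0, 0 );
}

/* vCoRoutineSchedule() is then called repeatedly, typically from the
   idle task hook, and runs whichever co-routine is ready. */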

If you are looking for something really lightweight, then take a look at

formatting link

--
Regards,
Richard.

+ http://www.FreeRTOS.org
14 official architecture ports, 1000 downloads per week.

+ http://www.SafeRTOS.com
Certified by TÜV as meeting the requirements for safety related systems.
Reply to
FreeRTOS.org

I did something along these lines about 10 years ago on a PIC. The system was simplified by requiring that every called subroutine exit within a fixed period, I think 100 uSec or so. That limit was set by the slowest peripheral, the display, which could require a wait of about 50 uSec before becoming non-busy and accepting the next character. Thus the character output mechanism squirted one character and returned. There was no minimum time requirement.

When some routine required service more often, I simply called it more often. For example, I could have a routine needing service every 300 uSec (svc0) and four others with no particular speed requirement (svc1 thru svc4). Then I called:

do {
    svc0();
    svc1();
    svc2();
    svc0();    /* again */
    svc3();
    svc4();
} while (running);

This also had the advantage, for PICs, that interrupts were normally disabled, and only enabled once in the complete loop. That controlled the stack usage etc.

This handled concurrent systems quite nicely.

--
 Chuck F (cbfalconer at maineline dot net)
   
Reply to
CBFalconer

In most embedded systems, the number of possible tasks is fixed (as in your example above). Often, their priorities do not change: they too are fixed by the system design. When this is so, an efficient scheduler is just a list of machine instructions, which are altered as needed. Using Z80 instructions as an example, you can have something like this:

        .section ROM
Scheduler:
21 tasktab      ld   hl,tasktab     ; Table of JP instructions
11 03 00        ld   de,3           ; Length of a JP instruction
CD TaskList     call TaskList       ; Return (here) is stacked
   ...

tasktab:                            ; These do not change
C3 Task_0       jp   Task_0
C3 Task_1       jp   Task_1
   ...

Task_0:                             ; Typical of each task
   ...
C9              ret

        .section RAM
TaskList:                           ; These are altered to de/activate tasks
19              add  hl,de          ; Task 0 cannot run
E9              jp   (hl)           ; Task 1 can run
   ...
C9              ret                 ; Nothing can run (end marker)

Each TaskList element is 1 byte, being a 1-byte Z80 instruction. You set a task active or inactive by changing that byte to either "jp (hl)" or "add hl,de". When the table is executed, the first (highest-priority) available task will run, as its jump will be taken.

Obviously, alter to taste for other CPUs. [I first saw this trick used in 1972, on the GEC 2050 mini.]
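In C, much the same idea can be approximated with a table of function pointers, where a null entry plays the part of the skipping "add hl,de" (a rough, untested sketch with made-up names):

typedef void (*task_fn)(void);

/* Dummy tasks for illustration. */
static void task_0(void) { /* ... */ }
static void task_1(void) { /* ... */ }

#define NUM_TASKS 2

/* Fixed table in priority order (the ROM "tasktab"). */
static task_fn const all_tasks[NUM_TASKS] = { task_0, task_1 };

/* Alterable table (the RAM "TaskList"): a null pointer stands for
   "add hl,de" (skip this task), a copy of the task's pointer stands
   for "jp (hl)" (run it). */
static task_fn active[NUM_TASKS];

void set_task_active(unsigned n, int runnable)
{
    active[n] = runnable ? all_tasks[n] : 0;
}

/* Run the first (highest-priority) active task, if any, then return. */
void schedule(void)
{
    unsigned n;

    for (n = 0; n < NUM_TASKS; n++) {
        if (active[n]) {
            active[n]();
            return;
        }
    }
}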

Reply to
David R Brooks

Some years back Klaus Schleisik-Kern presented a paper at a EuroForth Conference with a mechanism that seemed to have similar aims to your idea. I'll try and locate the paper and give you a reference to it. If you are lucky it may be available on the Bournemouth web-site. If not I'll see about sending a copy to you.

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

I woke up one morning a couple of years ago with an idea for improving the "classic super loop". I coded it up and call it "prioritized jobbing".

Each job to be executed is described by an entry in an array of data structures that includes a job-ready boolean and pointers to the job's initialization and executive functions. The initialization functions are called once, early in main(). Both functions are passed their job's index.

Event service functions associated with jobs indicate that their jobs have become ready by passing their jobs' indices to a job-scheduling function that sets the indicated job's ready flag.

main() scans the job data, and jobs' executive functions are called only when their associated jobs are ready. When an executive function returns, scanning restarts from the beginning of the job data array. In this way, higher priority is given to jobs with lower index values.
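In outline, something like this (a sketch reconstructed from the description above rather than actual project code; the job names are invented):

#include <stdbool.h>

typedef void (*job_fn)(unsigned index);

struct job {
    volatile bool ready;    /* set by the job scheduling function      */
    job_fn        init;     /* called once, early in main()            */
    job_fn        exec;     /* called whenever the job becomes ready   */
};

/* Dummy jobs for illustration only. */
static void job0_init(unsigned i) { (void)i; }
static void job0_exec(unsigned i) { (void)i; }
static void job1_init(unsigned i) { (void)i; }
static void job1_exec(unsigned i) { (void)i; }

static struct job jobs[] = {
    /* lower index == higher priority */
    { false, job0_init, job0_exec },
    { false, job1_init, job1_exec },
};
#define NUM_JOBS (sizeof jobs / sizeof jobs[0])

/* Called from event service functions (often ISRs) with the job's index. */
void schedule_job(unsigned index)
{
    jobs[index].ready = true;
}

int main(void)
{
    unsigned i;

    for (i = 0; i < NUM_JOBS; i++)
        jobs[i].init(i);

    for (;;) {
        for (i = 0; i < NUM_JOBS; i++) {
            if (jobs[i].ready) {
                jobs[i].ready = false;
                jobs[i].exec(i);
                break;          /* restart the scan from index 0 */
            }
        }
    }
}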

Not being preemptive, it's not deterministic, but it is simple, small, and is an improvement over simple looping.

I expect to be told that I have reinvented something that my lack of formal software engineering education has left me unaware of. ;-)

--
========================================================================
          Michael Kesti            |  "And like, one and one don't make
                                   |   two, one and one make one."
    mrkesti at hotmail dot com     |          - The Who, Bargain
Reply to
Michael R. Kesti

It's been invented many times.

Here are a couple approaches I thought were interesting:

formatting link

This is a threading model, but doesn't deal with scheduling. I think you could run it on top of something like this to deal with scheduling:

formatting link

There are also many implementations of coroutines and cooperative multi-tasking available.

--
Grant
Reply to
Grant Edwards

[...]

[...]

What for?

Windows 3.x
NetWare 2.x
Microsoft .NET Micro

:-)

Sure. There is plenty.

This is a special wheel of a square shape, for those who don't like to be comfortable.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I've looked at plenty of RTOSs etc and have always come back, for embedded work, to a variation of the humble round-robin. It's not a million miles away from your list approach, but does have better latency.

It depends on each task returning promptly; if there is complex processing to do, split it up into a state machine, and keep track of progress. Each task returns a boolean which is TRUE if significant time has been spent, or FALSE if none.

while ( TRUE )
{
    do_background_tasks();      // maintain time etc
    if ( task_1() ) continue;
    if ( task_2() ) continue;
    if ( task_3() ) continue;
}
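Each task_N() above can then be a small state machine, along these lines (illustrative only; the work functions are made-up placeholders):

#include <stdbool.h>

/* Placeholders standing in for real hardware polling and work. */
static bool work_pending(void)   { return false; }
static void do_first_chunk(void) { }
static void do_last_chunk(void)  { }

/* One task split into a state machine: each call does at most one
   bounded chunk of work and returns true only if it spent
   significant time. */
static bool task_1(void)
{
    static enum { IDLE, STARTED, FINISHING } state = IDLE;

    switch (state) {
    case IDLE:
        if (work_pending())     /* cheap poll */
            state = STARTED;
        return false;

    case STARTED:
        do_first_chunk();
        state = FINISHING;
        return true;

    case FINISHING:
        do_last_chunk();
        state = IDLE;
        return true;
    }
    return false;
}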

There are many possible enhancements: tasks can alternate rather than be sequenced. (Note that higher-priority tasks can "task-hog" and prevent lower-priority ones from ever running in the scheme above; again, there are many ways around this.) The whole thing can be turned into a task table, perhaps with the "do I need to do anything" test separated from the "do something" action. Etc, etc, etc.

I like to add a watchdog kick at the end of the loop, and the unkick in the regular timer interrupt; watching this pin on a 'scope can be instructive.

Finally: I'm a big believer in cooperative multitasking (synchronous multitasking, actually) in the context of embedded systems. Pre-emptive OSs have their place: multi-user multi-function systems ;).

In haste,

Steve

formatting link

Reply to
Steve at fivetrees

If the system is cooperative, why would there be a priority scheme?

Cooperative multitasking has a fatal flaw, especially for real-time systems: every task must be aware of the time constraints of the others. IOW, you make what seems to be a simple programming change in task99 and suddenly task100 starts failing to meet its deadline intermittently. This isn't a failure of the super loop, but of the whole cooperative multitasking model.

You might want to check some OS theory before starting your development.

Ed

Reply to
Ed Prochak

That is not an uncommon solution to the flaw in cooperative multitasking. It does require more testing to verify that each service sticks to the time limit. Guaranteeing that each service routine always meets that limit can be a difficult process.

Ed

--
I do not like them Sam I AM. I do not like green eggs and ham!
Reply to
Ed Prochak

I have always favored preemptive multitasking. It serves larger projects much better in that each task does not have to be programmed with an awareness of the other tasks. When the development of each task is all done by one or two people, cooperative tasking can work out, but when you have 10 major tasks being written by a team of 16 developers, it is a lot easier to go preemptive (though you then have to haggle over priorities and make sure you got them right by testing).

Ed

Reply to
Ed Prochak

Hi, I am sorry, but it looks to me that your explanation resembles a pre-emptive kernel. Can you please help us understand how it's different from a pre-emptive kernel? Mainly the following sentence from your mail resembles a pre-emptive kernel to me:

"What I need, and I am planning to develop, is a cooperative task scheduler to replace the super loop. Inter task communication, and ISR to task communication, will go through a kernel to set event flags, timer flags, put messages in queues, etc., and the kernel will track which tasks are actually ready because they have pending events. After any task returns, the scheduler will execute the highest priority ready task."

From what you say, is it possible to pre-empt a task in the middle of its execution, or not?

Regards, s.subbarayan

Reply to
ssubbarayan


Hi Richard,
With the co-routines, how does the OS prevent corruption of data structures, given that the lightweight tasks share the same stack? Will there be some sort of segregation between the tasks within the stack?

I just want to understand this; I'm asking out of curiosity.

Regards, s.subbarayan

Reply to
ssubbarayan

So when the running task yields, the next one to run is the one with the highest priority.

Preemptive multitasking has a fatal flaw, especially for small systems: it requires a separate stack for each task (at least that's how it's always done). If you've got 10 tasks to run and only 256 bytes of RAM, then suddenly you've got stack overflows.

--
Grant Edwards                   grante             Yow!  Yow! Is my fallout
                                  at               shelter termite proof?
                               visi.com
Reply to
Grant Edwards

I suppose, but attempting to execute 10 tasks on a system with only 256 bytes of RAM is certainly a fatally flawed design!
--
========================================================================
          Michael Kesti            |  "And like, one and one don't make
                                   |   two, one and one make one."
    mrkesti at hotmail dot com     |          - The Who, Bargain
Reply to
Michael R. Kesti

"Jack Klein" skrev i meddelandet news: snipped-for-privacy@4ax.com...

Something like this will do it (there may be some syntax errors; I did not test it):

typedef void (*task)(void);

task tasktab[MAXTASK];

while (1)
{
    (tasktab[get_next_task()])();
}

Each task is implemented as a state machine which sends and receives messages. When a task is waiting for a message it is not runnable, and will not be executed. When a message arrives, the task processes it, possibly changes state, and then exits.

Everything can be implemented in C, since they share a single stack. Interrupts will send messages as well, so you have a pretty clean implementation.

I have successfully used this approach on designs down to a few kbytes of flash and quite limited SRAM (256-512 bytes).

You may further limit the size by having a single, statically allocated "in" message per task, and by blocking any task that tries to write to the in-buffer while the previous message has not yet been processed.
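A minimal shape for that single in-message scheme might be (again untested, names invented):

#include <stdint.h>
#include <stdbool.h>

#define MAXTASK 4

/* One statically allocated "in" message per task. */
typedef struct {
    volatile bool full;   /* an unprocessed message is waiting */
    uint8_t       type;   /* message identifier                */
    uint8_t       data;   /* small payload                     */
} mailbox;

static mailbox inbox[MAXTASK];

/* Usable from tasks and from interrupt handlers.  Returns false if the
   destination has not yet processed its previous message, so the sender
   has to hold the message and retry - the "blocking" described above. */
bool send_msg(uint8_t dest, uint8_t type, uint8_t data)
{
    if (inbox[dest].full)
        return false;
    inbox[dest].type = type;
    inbox[dest].data = data;
    inbox[dest].full = true;
    return true;
}

/* get_next_task() in the loop above then just returns the lowest index
   whose inbox is full; the task clears .full when it has handled it. */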

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson
