Parallax Propeller

With only 10-50 cores, you would still have to put multiple threads on each core. I wonder if you could make a pre-emptive kernel without interrupts? You would end up with some ugly Win 3.x style co-operative kernel. At least you would need some kind of timer interrupt to handle runaway threads.
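
A rough sketch of where that leads (all names hypothetical, and note that the timer can only *detect* a runaway task after it finally returns -- without preemption it cannot stop one):

/* Cooperative round-robin with a timer acting as a runaway-task watchdog.
   Illustrative only; timer setup and the task bodies are assumed. */
#include <stdint.h>

#define NUM_TASKS        4
#define MAX_SLICE_TICKS  100            /* per-task budget, in timer ticks */

typedef void (*task_fn)(void);
extern task_fn task_table[NUM_TASKS];   /* each task must return (yield) promptly */
extern void log_overrun(int task);      /* hypothetical error hook */

static volatile uint32_t tick;

void timer_isr(void)                    /* the one interrupt we could not avoid */
{
    tick++;
}

void scheduler(void)
{
    for (;;) {
        for (int i = 0; i < NUM_TASKS; i++) {
            uint32_t start = tick;
            task_table[i]();            /* cooperative: runs until it returns */
            if (tick - start > MAX_SLICE_TICKS)
                log_overrun(i);         /* noticed only after the fact */
        }
    }
}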

Reply to
upsidedown

I think we are talking about an embedded microcontroller connected to a few sensors and pushbuttons or something like that. Not a workstation or internet server. 8 tasks are probably plenty for many things.

Reply to
Paul Rubin

You can still design them as if each were in its own thread and run them in "series". The now-old CASE tool ObjecTime had a separate set of screen widgets for organizing separate "executables" into threads.

In non-CASE-tool systems, you simply run them one after the other, either round-robin or through a more sophisticated arrangement.
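
For example, each "thread" can be written as a state machine whose step function returns quickly, and main() just calls them in turn. A minimal sketch, with all the helper names (led_on, blink_elapsed, button_task, ...) being hypothetical application code:

#include <stdbool.h>

extern void led_on(void);
extern void led_off(void);
extern bool blink_elapsed(void);     /* true once per blink period */
extern void button_task(void);
extern void comms_task(void);

static void blink_task(void)
{
    static bool lit = false;         /* the "thread's" private state */

    if (blink_elapsed()) {
        lit = !lit;
        if (lit) led_on(); else led_off();
    }
}

int main(void)
{
    for (;;) {                       /* plain round-robin, no interrupts */
        blink_task();
        button_task();
        comms_task();
    }
}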

An RTOS can also be a purely software object. Do you mean essentially register bank switching?

Shared memory has been around since System V, so...

--
Les Cargill
Reply to
Les Cargill

Does it really need to be preemptive?

There is nothing particularly wrong with cooperative multitasking, especially for embedded systems.

Or you could consider runaway threads to be defects.

--
Les Cargill
Reply to
Les Cargill

When considering that each external connection will need a dedicated task, there are not many tasks left for the actual application.

Anyway, the I/O task needs to inform the main application that the I/O operation (such as reading a complete frame from the serial port) has ended. Of course, the main task could poll some shared memory location(s), burning a lot of power doing that.

Some low-power "wait for interrupt" style instruction would help reduce the power consumption and avoid the busy polling (especially if multiple signal sources need to be polled).

Alternatively, the main task requesting a service (such as receiving a frame) needs to send the request to the I/O task and then go to a low-power halt state. Upon completion, the I/O task would have to power up the halted task (hopefully from the same location where it was halted and not from the restart point :-). Things get ugly if two or more waiting tasks need to be informed.
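
In shared memory that pattern might look roughly like this; the mailbox layout and the wait_for_event() halt instruction are assumptions, not a real API:

#include <stdint.h>

struct mailbox {
    volatile uint32_t request;      /* written by the main task */
    volatile uint32_t done;         /* set by the I/O core when finished */
    uint8_t           frame[256];   /* response data */
};

extern struct mailbox io_mbox;      /* shared between the two cores */
extern void wait_for_event(void);   /* hypothetical low-power halt */

static void request_frame(void)
{
    io_mbox.done = 0;
    io_mbox.request = 1;            /* kick the I/O core */

    while (!io_mbox.done)           /* halt instead of busy polling; the */
        wait_for_event();           /* I/O core's completion wakes us up */

    /* io_mbox.frame[] now holds the received frame */
}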

Reply to
upsidedown

I use timer interrupts mainly to have many nice accurate timers.

It makes things so easy. I could use polled timers in main() but I really prefer the ease of an interrupt, and if it doesn't require any context switching, what could be easier?
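
For what it's worth, the pattern is just one tick interrupt fanning out to several software timers; the tick period and timer count here are arbitrary:

#include <stdint.h>

#define NUM_TIMERS 8

static volatile uint32_t soft_timer[NUM_TIMERS];    /* remaining ticks */

void timer_tick_isr(void)               /* e.g. every 1 ms */
{
    for (int i = 0; i < NUM_TIMERS; i++)
        if (soft_timer[i])
            soft_timer[i]--;
}

/* Application code just starts and checks them; no context switch involved */
void start_timeout(unsigned id, uint32_t ticks) { soft_timer[id] = ticks; }
int  timeout_expired(unsigned id)               { return soft_timer[id] == 0; }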

boB

Reply to
bob

Forget about I/O -- in modern systems I/O devices frequently are like specialized processors. Think about coordination between different processors -- you need to get some info from one processor to another ASAP, and the info is non-atomic, so just writing it to shared RAM will not do. And you need an interrupt even in the case of atomic info if the target processor is doing something else and would not look at the info otherwise.
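
One common way to handle the non-atomic case is to publish the record under a sequence counter and then ring a doorbell interrupt so the other core actually looks. A sketch, where the doorbell register address and the barrier function are assumptions:

#include <stdint.h>

struct shared_msg {
    volatile uint32_t seq;          /* odd while the writer is mid-update */
    uint32_t          payload[4];
};

extern struct shared_msg msg;                             /* in shared RAM */
#define DOORBELL_REG (*(volatile uint32_t *)0x40001000u)  /* hypothetical */
extern void memory_barrier(void);

void publish(const uint32_t p[4])
{
    msg.seq++;                      /* now odd: readers must retry */
    memory_barrier();
    for (int i = 0; i < 4; i++)
        msg.payload[i] = p[i];
    memory_barrier();
    msg.seq++;                      /* even again: record is consistent */
    DOORBELL_REG = 1;               /* interrupt the other core */
}

/* The reader re-reads until it sees the same even seq before and after
   copying the payload. */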

--
                              Waldek Hebisch 
hebisch@math.uni.wroc.pl
Reply to
Waldek Hebisch

I don't 100% agree with that description! If you write your code like that, you end up with code written in a parallel style, but that executes in a serial (i.e. sequential) way!

It's far better if the task that is waiting for a response from the serial I/O task goes off and does some other work while it is waiting. Otherwise what you have in reality is a sequential process written in a parallel style; in other words, it's far more complicated than it needs to be.

If there's no other work to do, and you put the main task to sleep, then it's possible that the overall software 'solution' didn't need to be parallel in the first place - time to re-think the design and simplify.

Or, to put it another way, if the 'main-task' has to go to sleep while the serial task is running, then I would say the software has been designed "the wrong way around" - the *serial task* is the main task, and the programmer needs to rotate his perception of the problem he is trying to solve by 180 degrees! He has it backwards.

Okay, enough pedantry from me!

Reply to
Mark Wills

Of course it would be preferable if the tasks could do other work, but that requires an interrupt mechanism if you want to handle external events with low and predictable latency.

Reply to
Arlet Ottens

While this is true, quite a lot of protocols are essentially half-duplex request/response protocols (such as Modbus). While for instance TCP/IP is capable of full-duplex communication, many protocols built on TCP/IP are essentially half duplex request/response (such as http). Of course, the I/O core should be programmed to handle sending the request and assembling the response in true IBM mainframe I/O processor SNA style :-)

If the main task is doing something useful between issuing the request and processing the result, then the question is _how_ and _when_ the main task is going to process the I/O response without interrupts. Of course, the main task could poll the I/O status, say, every few hundred microseconds, but this would force you to insert those polls all around your application code, making it hard to maintain.

However, if the I/O task (core) is polled at some slow, application-convenient rate (say 10 ms), the buffering requirements would be quite large. For a half-duplex protocol, the extra latencies due to polling will kill the throughput (especially at high signaling rates).
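
In other words, without interrupts the main task ends up in something like this (status word, tick rate and helper names are all hypothetical), and on a half-duplex link every poll period is added straight onto the round-trip time:

#include <stdint.h>

extern volatile uint32_t io_status;      /* written by the I/O core */
extern void sleep_ms(uint32_t ms);       /* hypothetical */
extern void send_modbus_request(void);
extern void process_modbus_reply(void);

void poll_loop(void)
{
    for (;;) {
        send_modbus_request();
        while (!(io_status & 1u))        /* reply-complete bit */
            sleep_ms(10);                /* up to 10 ms extra latency per poll */
        process_modbus_reply();
    }
}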


The original question was about how to use multiple cores (=tasks in traditional RTOS speak) without interrupts.

While I understand the problem of implementing proper interrupt processing in current processors with long pipelines and large caches, I still think that at least some kind of "wait for interrupt" mechanism is needed (i.e. flushing the pipeline, invalidating the cache and going to a low-power sleep mode), plus some mechanism to reactivate the code by an external pin or by some other core writing a value to a power-up register. If you do not want to call this an "interrupt", then that is another story.

Reply to
upsidedown

A system can be implemented in various ways. If the intertask synchronization and communication is very primitive (e.g. implemented in hardware only), you easily end up with dozens or even hundreds of very simple tasks doing a huge number of random communications with each other, making it hard to manage.

On the other hand, if you are forced into a single task (old-style Unix), you easily end up building some huge select() calls, including file descriptors from actual files, sockets and serial lines. This also makes the system hard to maintain.
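
The style in question (POSIX select(); only two descriptors shown, but real systems of this kind grow many more, plus timeouts):

#include <sys/select.h>

extern void handle_serial(int fd);       /* hypothetical handlers */
extern void handle_socket(int fd);

void event_loop(int serial_fd, int socket_fd)
{
    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(serial_fd, &rd);
        FD_SET(socket_fd, &rd);

        int maxfd = (serial_fd > socket_fd ? serial_fd : socket_fd) + 1;
        if (select(maxfd, &rd, NULL, NULL, NULL) < 0)
            continue;                    /* e.g. interrupted by a signal */

        if (FD_ISSET(serial_fd, &rd))
            handle_serial(serial_fd);
        if (FD_ISSET(socket_fd, &rd))
            handle_socket(socket_fd);
    }
}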

There should be some sweet spot between these two extremes. A sufficiently usable intertask communication system (typically using interrupts) should be able to handle this. I prefer 5-10 different tasks, so my fingers are sufficient to keep track of them, without having to take my socks off to use my toes for counting the rest :-).

Reply to
upsidedown

The BIOS only shows the IRQ lines available to the original PC, not those on the "extended" interrupt controller or multiplexed interrupt sources.

A quick check on my PC shows 36 interrupts in use, with four cores. So my first guess was a bit of an exaggeration - though I don't know how many interrupt sources are available. But it is also not hard to find large microcontrollers with hundreds of interrupt sources and only one or two cores, giving a ratio of much more than 50 to 1.

Of course, any given system is unlikely to use more than a small fraction of the interrupt sources - but even something as simple as a UART can use three interrupts plus some sort of timer, and that is still a lot more than the number of cores.
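
For illustration, a single UART already wants a handful of vectors (register and handler names here are made up):

#include <stdint.h>

extern volatile uint8_t uart_data_reg;          /* hypothetical hardware regs */
extern volatile uint8_t uart_error_reg;

static volatile uint8_t  rx_buf[64];
static volatile uint8_t  rx_head;
static volatile uint16_t rx_errors;
static volatile uint8_t  frame_ready;

void uart_rx_isr(void)    { rx_buf[rx_head++ % 64] = uart_data_reg; }
void uart_tx_isr(void)    { /* feed the next byte of the TX buffer */ }
void uart_error_isr(void) { rx_errors++; (void)uart_error_reg; /* read clears */ }
void idle_timer_isr(void) { frame_ready = 1; }  /* inter-frame gap elapsed */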

Reply to
David Brown

I remember reading an article in Nuts & Volts on the propeller which discussed the interrupt thing. Digging into the archives...

"There were only a few rules: it had to be fast, it had to be relatively easy to program, and it had to be able to do multiple tasks without using interrupts -- the bane of all but the heartiest programmers." April 2006, page 16.

In other words, interrupts are too hard.

By this standard, I am a hearty programmer.

--
David W. Schultz 
http://home.earthlink.net/~david.schultz 
Returned for Regrooving
Reply to
David Schultz

And FORTH will rise again!

Sorry, couldn't help myself...

PS: Yes, I have programmed complicated IO processing on a minicomputer without interrupts. And my next task was adding an interrupt controller ;-)

Reply to
Dave Nadler

You can certainly do that - but then you don't get the dedication of a core that lets you get low-latency reactions.

Bank switching is part of it, but to make a "hardware RTOS" you need a system that switches tasks (register banks) automatically.

There is a lot more to inter-process communication than shared memory. The XMOS implements a message passing system with CSP-style synchronisation.
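
The XMOS channels are hardware, but the CSP idea itself (a rendezvous: sender and receiver both block until the other side arrives) can be emulated crudely in shared memory. Purely illustrative, not the XMOS API, and it ignores the memory-ordering issues a real multi-core implementation would have to address:

#include <stdint.h>

struct channel {
    volatile uint32_t data;
    volatile int      full;     /* 1 = data written, not yet taken */
};

void chan_send(struct channel *c, uint32_t v)
{
    while (c->full)             /* wait for the previous value to be taken */
        ;
    c->data = v;
    c->full = 1;
    while (c->full)             /* rendezvous: block until the receiver has it */
        ;
}

uint32_t chan_recv(struct channel *c)
{
    while (!c->full)
        ;
    uint32_t v = c->data;
    c->full = 0;                /* releases the sender */
    return v;
}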

Reply to
David Brown

Interrupts are not going away anytime soon.

There are event-driven processors that are essentially all interrupts.

Add run-to-completion (to eliminate preemption overhead) and multiple cores so interrupts can use the next available execution unit, and a lot of processing overhead goes away, with a comparable reduction in software complexity.

w..

Reply to
Walter Banks

To be sure.

I presume the code store is shared, so it's a matter of assigning the "task" represented by an entry point to a core.

Very nice indeed, then. CSP is the right abstraction.

--
Les Cargill
Reply to
Les Cargill

There are no spelling mistakes. Only unexpected meanings. Not even unexpected, sometimes.

Mel.

Reply to
Mel Wilson

I have often written programs where the main loop contains nothing but a "sleep until next interrupt" instruction - it is not uncommon for low-power systems.

If you can eliminate all forms of pre-emption, you can keep many things simpler - there is no need to worry about how you share data or resources amongst threads, for example. But of course you can no longer have low-latency reactions to events - you need to make sure these are handled in hardware.
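
The whole main() of such a design can be as small as this; the wait_for_interrupt() name is a stand-in for whatever the target provides (e.g. the __WFI() intrinsic on a Cortex-M):

extern void configure_peripherals_and_interrupts(void);  /* hypothetical setup */
extern void wait_for_interrupt(void);                     /* target-specific */

int main(void)
{
    configure_peripherals_and_interrupts();

    for (;;)
        wait_for_interrupt();   /* everything else happens in the ISRs */
}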

Reply to
David Brown

Latency in these systems is reduced in two ways: pre-computing responses delivered on an event (your comment), and multiple execution units, so code is not pre-empted but is immediately executed if there is an available execution unit.

Hardware-arbitrated execution priority levels are also a feature of these processors.

The rest is software design to make sure the response time meets the application requirements. The resulting code has better average response time and that can be important as well.

w..

Reply to
Walter Banks
