What really happened on Mars? Priority Inversion and Mars Pathfinder

Notes on the history of the Priority Inversion problem

Yodaiken: Against Priority Inheritance

Rebuttal to Yodaiken

What is Priority Inversion? Why You Care and What to Do About It

Wikipedia: the Dining Philosophers Problem

Consider the problem that the Mars Pathfinder had (priority inversion) and whether it is possible to solve it without using the solution that the Mars Pathfinder team used (priority inheritance).

From the URL above:

"Pathfinder contained an 'information bus', which you can think of as a shared memory area used for passing information between different components of the spacecraft. A bus management task ran frequently with high priority to move certain kinds of data in and out of the information bus. Access to the bus was synchronized with mutual exclusion locks (mutexes).

"The meteorological data gathering task ran as an infrequent, low priority thread, and used the information bus to publish its data. When publishing its data, it would acquire a mutex, do writes to the bus, and release the mutex. If an interrupt caused the information bus thread to be scheduled while this mutex was held, and if the information bus thread then attempted to acquire this same mutex in order to retrieve published data, this would cause it to block on the mutex, waiting until the meteorological thread released the mutex before it could continue. The spacecraft also contained a communications task that ran with medium priority.

"Most of the time this combination worked fine. However, very infrequently it was possible for an interrupt to occur that caused the (medium priority) communications task to be scheduled during the short interval while the (high priority) information bus thread was blocked waiting for the (low priority) meteorological data thread. In this case, the long-running communications task, having higher priority than the meteorological task, would prevent it from running, consequently preventing the blocked information bus task from running. After some time had passed, a watchdog timer would go off, notice that the data bus task had not been executed for some time, conclude that something had gone drastically wrong, and initiate a total system reset.

"This scenario is a classic case of priority inversion."

Can the above priority inversion problem be solved without using priority inheritance?

--
Guy Macon (hardware engineer, not software...)
Reply to
Guy Macon

Sure. No task priorities, scheduling by time share. Although it is a non-RTOS approach, it could be quite good for many cases.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

Reply to
Vladimir Vassilevsky

Of course. KISS ;).

(Longer answer: there are tasks that are best left to an RTOS [bean-counting etc]. There are tasks that are best handled by a cooperative scheduler, i.e. controlled serialism rather than parallelism. This story [in terms of paradigm SNAFU vs costs/disbenefits] is a *beautiful* example of why I think this ;).)

Sheeit. Haven't been here for a while. First thread I read, Guy has left me a corker. Only another 312 threads to read. Eeep.

Steve

Reply to
Steve at fivetrees

On Mon, 15 Oct 2007 22:27:37 GMT, Vladimir Vassilevsky wrote in comp.arch.embedded:

Actually it can be an RTOS approach, as long as it is designed that way. You just need more processor overhead.

In fact, it is probably the simplest possible real time multitasking system that you can prove by testing will absolutely always meet all of its timing requirements.

I once designed a system with five tasks, time-sliced in a timer interrupt. This was back in the old 8-bit days, on a Z80 running at less than 4 MHz. The task-switching code took up 20% of the CPU cycles, so each task had 1/5 of the remaining 80%: 16% each.

Each task completely owned its 16% of the CPU, whether it was idle or busy, and no matter what any other task was doing.

I could test each execution path of each task to prove that it met its timing requirements; there weren't that many. And because it was pure time slicing, with each task always getting its fixed time slice, it made no difference what the other tasks were doing.

Once I proved that every execution path of a particular task met its requirements with the other tasks idle, it would meet them no matter what the load on the other tasks was.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein

I am surprised this sort of hard time-slicing is not found in more embedded cores. I think Ubicom have chips in this area, and the Propeller chips take it further, with a small CPU per task.

FPGA soft CPUs lend themselves to easy hard time-slicing.

I guess most of the 'mindset' comes from the microprocessor field, and not from examples like the one above (which required knowing you were going to allocate a fixed number of tasks, e.g. 5, at the start of the design).

-jg

Reply to
Jim Granville

One, admittedly ugly, solution is to have the medium-priority thread periodically acquire and immediately release the mutex. If the high-priority thread is waiting for the low-priority thread to finish, the medium-priority thread will be suspended until the low-priority thread releases the mutex, after which the high-priority task acquires and releases it.
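
A rough sketch of that idea, assuming POSIX threads and fixed-priority (SCHED_FIFO) scheduling; the names are hypothetical and the details would depend on the RTOS at hand:

/* The medium-priority communications loop periodically locks and
 * immediately unlocks the same bus mutex.  If an inversion is in
 * progress, the lock attempt blocks the communications task, which lets
 * the low-priority holder finish its critical section and unblock the
 * waiting high-priority task. */
#include <pthread.h>

extern pthread_mutex_t bus_mutex;    /* the mutex the other tasks share */

void comms_task_loop(void)
{
    for (;;) {
        /* ... a bounded chunk of communications work ... */

        pthread_mutex_lock(&bus_mutex);    /* blocks only if the low-  */
        pthread_mutex_unlock(&bus_mutex);  /* priority task holds it   */
    }
}

The cost is that the high-priority task can still be delayed by up to one chunk of communications work plus the critical section, so the chunks have to be kept short.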

--
Thad
Reply to
Thad Smith

There is a somewhat better way: each task is assigned a maximum amount of time, after which it will be preempted by the next task. If the task has nothing to do or can't acquire the lock, control is transferred to the next task immediately. So the time is scheduled dynamically; it is much more efficient on the CPU, and the average latency is much lower. However, the worst-case latency is the same as with fixed slicing. The whole task loop can be made to roll at a fixed rate; in the extra time the CPU can be put into an idle state or do low-priority work. I had that in a previous version of my OS.

And the time critical stuff can be done in the interrupts, as usual.
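
As a rough sketch, the tick/yield logic looks like this (the context-switch helper is hypothetical and target-specific; a real implementation also needs per-task stacks and register save/restore):

#include <stdint.h>

#define NUM_TASKS    4
#define TASK_BUDGET  5    /* max timer ticks a task may keep the CPU */

extern void switch_to(uint8_t task_id);  /* hypothetical context switch */

static uint8_t current_task;
static uint8_t ticks_used;

static void advance(void)
{
    ticks_used   = 0;
    current_task = (uint8_t)((current_task + 1u) % NUM_TASKS);
    switch_to(current_task);
}

/* Timer interrupt: preempt only when the task's budget is exhausted. */
void scheduler_tick(void)
{
    if (++ticks_used >= TASK_BUDGET)
        advance();
}

/* Called by a task that has nothing to do or failed to acquire a lock,
 * so its unused time goes to the next task immediately. */
void task_yield(void)
{
    advance();
}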

I am thinking of a good way to combine the advantages of time slicing and priority scheduling. Can't find an elegant solution yet.

Vladimir Vassilevsky DSP and Mixed Signal Consultant

Reply to
Vladimir Vassilevsky

Now that's interesting...

I, being a hardware engineer who hacks around with software on occasion, handle real-time requirements in one of two ways:

[1] I do everything in one big loop, with the occasional use of an interrupt to handle a task that always takes the same short amount of time (a minimal sketch of this style follows below).
[2] I realize that the simple but reliable scheme isn't enough to do what I need to get done, and hire someone who knows what they are doing to do it right.

You have to know your own limitations...
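
For what it's worth, a minimal sketch of style [1], with entirely hypothetical names: the interrupt does only a short, constant amount of work, and the big loop does everything else.

#include <stdint.h>

static volatile uint8_t  sample_ready;   /* set by the ISR       */
static volatile uint16_t latest_sample;  /* captured by the ISR  */

/* Periodic interrupt: always the same short, bounded amount of work. */
void sample_isr(void)
{
    latest_sample = 0;   /* would be an ADC or port read on real hardware */
    sample_ready  = 1;
}

int main(void)
{
    for (;;) {                            /* the one big loop            */
        if (sample_ready) {
            sample_ready = 0;
            /* ... process latest_sample ... */
        }
        /* ... poll inputs, update outputs, kick the watchdog ... */
    }
}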

--
Guy Macon
Reply to
Guy Macon

On Mon, 15 Oct 2007 22:05:25 -0500, Jack Klein wrote in comp.arch.embedded:

To follow up on my own post, based on responses by Vladimir, Jim, and Guy...

The system I described above, developed almost 25 years ago, was my first embedded system with real multitasking.

It is not processor efficient, since each task always receives a big enough percentage of CPU time for its worst-case load, even when it has nothing to do. But then again, however I ran the system, whether as priority-based preemptive multitasking or just a super loop, there had to be enough CPU cycles for the worst-case load, and there would be unused cycles at less than full load.

In this particular case, defining the five tasks was extremely easy. The board sat in a PC on the ISA bus. It contained four optically isolated current loop serial ports for talking to industrial machines, through boxes on the machines that were also part of our product.

There were four tasks running the identical code, one for each serial port, where the variables (including the port addresses in the Z80 DARTs) for each task resided in a separate RAM block, pointed to by the Z80's IX index register. And of course each task had its own stack.

The fifth task monitored the bus interface, mainly a FIFO, where the PC sent commands and read or wrote data to be transmitted/received to and from the machines. There were a few megabytes (this was the mid-1980s) of banked DRAM for temporary data storage, files from the PC to be transmitted, or files received from the machines until the PC program got around to reading them and storing them to disk files.

Transmitting data was not a performance issue; receiving at full speed on a channel was. Once I proved by testing that the serial task could handle full-speed receiving, I knew that all four could do so simultaneously.

Having this architecture and the Z80 code that implemented the task switching, I used it in several other embedded Z80 products in the '80s and early 1990s.

There was even a way to emulate higher priority.

I did one industrial control product which had 3 tasks.

The UI task communicated with a little industrial serial terminal (membrane keypad and 2 x 24 LCD).

The IO task monitored electronic signals, debounced mechanical inputs, and watched a few analog voltages via ADC.

The control task actually ran a PID control algorithm, which really ate CPU cycles on a Z80.

So I used four fixed time-sliced slots for the three tasks, like this:

  1. Control Task
  2. UI Task
  3. Control Task
  4. IO Task

This gave the control task twice the execution time of the other two tasks, with no priority levels, mutexes, or any other fancy mechanisms.
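
A cooperative, run-to-completion sketch of that slot table (the original preemptively switched full Z80 task contexts in the timer interrupt; that context-switch code is target-specific and not shown, and the step functions here are hypothetical):

#include <stdint.h>

static void control_task_step(void) { /* ... one PID iteration ...         */ }
static void ui_task_step(void)      { /* ... service the terminal ...      */ }
static void io_task_step(void)      { /* ... scan inputs, read the ADC ... */ }

typedef void (*task_fn)(void);

/* Fixed slot table: 1. control, 2. UI, 3. control, 4. I/O */
static const task_fn slot_table[4] = {
    control_task_step, ui_task_step, control_task_step, io_task_step
};

static volatile uint8_t current_slot;

/* Periodic timer interrupt: advance to the next slot unconditionally. */
void timer_tick_isr(void)
{
    current_slot = (uint8_t)((current_slot + 1u) % 4u);
}

int main(void)
{
    for (;;)
        slot_table[current_slot]();  /* run whichever task owns the slot */
}

In this cooperative form each step has to finish well within one slot; the preemptive original did not have that restriction.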

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein

Vladimir Vassilevsky wrote:

The firm I work with developed (and I tweaked) a nice RT kernel many years ago (for the 8086) that combined both. It was basically a priority-based scheduler with a fixed amount of time devoted to time slicing. It had a fixed number of tasks (eight), and you could configure which tasks could run in the fixed time slices. When I began to investigate other RTOSes, I was surprised not to find similar approaches out there.

Reply to
Ignacio G.T.

If my understanding of your approach is correct, a higher-priority task can be preempted by a lower-priority task, but only for a timeslice? In other RTOSes, it is possible to implement that policy by adjusting the task priorities with a timer.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

Reply to
Vladimir Vassilevsky

Vladimir Vassilevsky wrote:

Yes. Each quantum (10 ms) was divided into two sections. The first was devoted to time slicing for a given task; the second was devoted to priority scheduling. You could configure which task would run in the time-slicing part of the first quantum, which one would run in the time-slicing part of the second quantum, and so on, until a given number of quanta was reached and the cycle restarted.

During the time-slicing part, the scheduler was prevented from giving control to higher-priority tasks. Of course, this introduces a task-switch latency of at least the duration of the time-slicing section, but that was perfectly acceptable for our applications.
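
Roughly, the tick logic looked like the following sketch (the helper functions and the numbers here are hypothetical placeholders; the real kernel was 8086 assembly with eight fixed tasks):

#include <stdint.h>

#define NUM_QUANTA   4    /* quanta per cycle (configurable)             */
#define TICKS_PER_Q  10   /* timer ticks per 10 ms quantum               */
#define SLICE_TICKS  4    /* ticks of each quantum given to time slicing */

/* Which task owns the time-sliced section of each quantum. */
static const uint8_t slice_owner[NUM_QUANTA] = { 0, 1, 0, 2 };

extern uint8_t highest_priority_ready_task(void);  /* hypothetical helper */
extern void    switch_to(uint8_t task_id);         /* hypothetical helper */

void scheduler_tick(void)
{
    static uint8_t quantum, tick;

    if (tick < SLICE_TICKS) {
        /* Time-sliced section: the configured task runs, and higher-
         * priority tasks are deliberately held off (this is the extra
         * switch latency mentioned above). */
        switch_to(slice_owner[quantum]);
    } else {
        /* Priority section: normal preemptive priority scheduling. */
        switch_to(highest_priority_ready_task());
    }

    if (++tick >= TICKS_PER_Q) {
        tick = 0;
        quantum = (uint8_t)((quantum + 1u) % NUM_QUANTA);
    }
}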

There were other subtleties, but space and time are short. Let's say that we didn't have the resources to upgrade it to i386 protected-mode (it was made in assembly) and we no longer maintain it, although we continue to develop new versions of old embedded applications for it...

Reply to
Ignacio G.T.
