Worst case execution time problem

Hi,

In real-time systems, the worst-case execution time (WCET) is an important issue. In the literature I found the statement that its calculation is undecidable in general. Why? I would appreciate a detailed explanation.

Regards, Chris

Reply to
Christian Christmann

Because _in_general_, it's impossible even to decide whether a given program will *ever* finish. This is known as the Halting Problem, and its undecidability was proven by Turing long before most of us were born.

If you have to be able to calculate the WCET and be sure of the result, that requirement restricts the choice of programs you can write. That is, it's a loss of generality.
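
To make the abstract point concrete, here is a hypothetical sketch (mine, not from any WCET paper Chris may have read): a loop whose termination for every input is the open Collatz conjecture, so no analyzer known today can assign it a finite bound.

/* Hypothetical illustration: a loop nobody can currently bound.
 * Whether this terminates for every n > 0 is the open Collatz
 * conjecture (overflow ignored for the sake of the example). */
unsigned long collatz_steps(unsigned long n)
{
    unsigned long steps = 0;
    while (n != 1) {                      /* always terminates? unknown */
        n = (n & 1) ? 3 * n + 1 : n / 2;
        steps++;
    }
    return steps;
}

A WCET tool pointed at code like this can only give up; restricting yourself to loops with statically known bounds is exactly the loss of generality described above.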

Reply to
Hans-Bernhard Bröker

Because it is not yet even known generally if Saddam Hussein is in U.S. or Iraqi custody.

Reply to
larwe

Oo, ouch.

Steve (applauding the quick wit while wincing)


Reply to
Steve at fivetrees

If one wants to create structured, modular code, timing IS deterministic. But I code it into the kernel, so it calculates it... it knows exactly when to do every task...

Even in the von Neumann memory model of the ARM7, it is deterministic, if you discipline yourself to structured code.

But you must not do this yourself; let the kernel calculate and control all the idle time... Computers aren't like humans; they won't "create" idle time, like a Trade Unionist!!

BTW, coming soon to a theater near you: a free OS for ARM. It's so structured that it takes minutes to figure out everything in the kernel, then use those patterns to disassemble any code!!


Reply to
werty

He is now either in the hands of God, or no longer exists, depending on your world view.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

And oddly enough, nothing else in the world has changed, for better or worse.

Reply to
larwe

I believe that some people would find Gehenna a suitable place. He always wanted to go to Jerusalem.

--
Best Regards,
Ulf Samuelsson
Reply to
Ulf Samuelsson

Regardless of that, it looks like the time of execution can't be any worse.

VLV

Reply to
Vladimir Vassilevsky

True, unlike the U.S. judicial system, wherein EVERY non-masked event is a high-priority interrupt and every single task contains interminable, nonpreemptible busy-waits with interrupts disabled.

Reply to
larwe

Everything is undecidable in general. But a real hunk of well-designed embedded code will have a fairly obvious worst-case path, and its execution time can be calculated or measured. If its execution time is undecidable, it's bad code and doesn't belong in an embedded system.

I generally use an oscilloscope. Pull up a test point at the start of a routine, drop it at the end, set the scope to infinite persistence, and go away for a while. Do whatever things cause all the various paths to be taken, and observe the various execution times.
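
A minimal sketch of that test-point trick, assuming a generic memory-mapped GPIO (TESTPOINT_PORT and TESTPOINT_BIT are placeholder names for whatever your MCU provides):

/* Sketch of the scope measurement above; register names are placeholders. */
extern volatile unsigned char TESTPOINT_PORT;   /* hypothetical GPIO register */
#define TESTPOINT_BIT 0x01

void routine_under_test(void)
{
    TESTPOINT_PORT |= TESTPOINT_BIT;    /* pin high: pulse width = exec time */
    /* ... the routine being measured ... */
    TESTPOINT_PORT &= ~TESTPOINT_BIT;   /* pin low; infinite persistence
                                           accumulates every path's time */
}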

John

Reply to
John Larkin

... snip ...

I disagree. Counterexample: a thermostat. Design it with a timer interrupt, and a watchdog for the interrupt. On interrupt, it examines the setpoint and the actual temperature, and turns the heater on or off. So far all this code has worst-case execution times. However, it is normally in the 'set' state, awaiting keypad input for the setpoint. The timing of this routine is totally unknown; it depends on external happenings. This may be a state machine (await 1st digit, 2nd digit, ..., backup, cancel, execute).

This can be a very solid design, executing in a few hundred bytes of object code and using very little storage, i.e., ideal for a PIC. The problems will be in the keypad-reading code, which may need another state machine. It can easily be gussied up to display the current setpoint, current temperature, and partial input value, or to use up/down setpoint-adjustment buttons for a simpler but more awkward interface.
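
A minimal sketch of such a keypad state machine; the names, the KEY_ENTER code, and the two-digit setpoint format are my assumptions, not CBFalconer's:

/* Hedged sketch of the keypad state machine; names and the two-digit
 * setpoint format are assumptions for illustration. */
enum kp_state { AWAIT_DIG1, AWAIT_DIG2, AWAIT_EXECUTE };
#define KEY_ENTER 0x0E                   /* hypothetical key code */

extern volatile unsigned char setpoint;  /* read by the timer interrupt */

void keypad_event(unsigned char key)     /* one debounced keypress per call */
{
    static enum kp_state state = AWAIT_DIG1;
    static unsigned char entry;          /* partial input value */

    switch (state) {
    case AWAIT_DIG1:
        if (key <= 9) { entry = key; state = AWAIT_DIG2; }
        break;
    case AWAIT_DIG2:
        if (key <= 9) { entry = 10 * entry + key; state = AWAIT_EXECUTE; }
        else state = AWAIT_DIG1;                    /* cancel */
        break;
    case AWAIT_EXECUTE:
        if (key == KEY_ENTER) setpoint = entry;     /* execute */
        state = AWAIT_DIG1;
        break;
    }
}

Each call has a small, bounded execution time; the unbounded part is merely how long the system idles between keypresses, which is exactly the "timing is totally unknown" point above.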

--
Merry Christmas, Happy Hanukah, Happy New Year
        Joyeux Noel, Bonne Annee.
Reply to
CBFalconer

Either approach will work, but you are making some added assumptions. First, that power consumption is important. Second, that something is present to create that keypad interrupt. The system may need to be polling a set of lines for changes and keeping a history (via the one timer interrupt) to detect keyup/down/debounce etc. The up/down setpoint version needs only two input bits. I can see the thing needing only 5 pins, plus another for programming (2 input, 1 output, 2 power) before output display control. Your version will be complicated if an output display showing actual temp and setpoint is needed.

In fact, my version can dispense with any interrupt, by simply polling the output of a timer. Less to go wrong = added reliability.
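
A sketch of that interrupt-free variant, assuming a free-running timer with a pollable overflow flag (all names are hypothetical):

/* Hedged sketch of the interrupt-free main loop; TIMER_FLAG is a
 * placeholder for your MCU's timer-overflow status bit. */
extern volatile unsigned char TIMER_FLAG;  /* hypothetical overflow bit */
void control_step(void);                   /* compare temp, drive heater */
void keypad_poll(void);                    /* sample buttons, debounce */

int main(void)
{
    for (;;) {
        if (TIMER_FLAG) {            /* timer period elapsed? */
            TIMER_FLAG = 0;          /* clear per your part's rules */
            control_step();
            keypad_poll();
        }
        /* nothing blocks, so each pass has a calculable worst case */
    }
}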

--
Merry Christmas, Happy Hanukah, Happy New Year
        Joyeux Noel, Bonne Annee.
Reply to
CBFalconer

You have a choice in situations like this: either do the keypad input as the mainline program and do the temperature control as an interrupt-driven state machine, or do both as interrupt-driven (or otherwise periodically executed) state machines. In the first case, the mainline routine literally runs forever (is "forever" decidable?) and we don't mind a bit. In the second case, each state machine runs for a predictable (and probably short) amount of time and then gets the hell out.

My embedded products often follow a common structure: a mainline loop that does serial i/o (or some similar function which likes to be classic procedural code) and a timer-driven interrupt that dispatches to various state machine snippets on some sort of rotating basis, doing all the realtime stuff. You can scope the IRQ execution time and see all the state machines executing in progression, and see each execution time jump around depending on the path through each snippet... fun to watch. A robust design won't care if an occasional snippet runs too long and a periodic interrupt is missed, although most designs try to avoid this situation.

Most deep-embedded things work very well this way and don't need an RTOS, queues, semaphores, or any of that fancy stuff that tends to make WCET hard or impossible to predict. WCET must be considered, but is often not a big deal if you keep the state blocks rip-right-through-simple and don't mind wasting a bit of available CPU power.

Small state machines can do complex stuff. They tend to be more reliable than classic procedural code where the system "state" is the program counter's current location inside a maze of branches and subroutines. A state machine, essentially, keeps coming up for air.
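
A minimal sketch of that structure, with every name invented here (John's actual code surely differs); each snippet does a bounded chunk of work and returns:

/* Hedged sketch of the mainline-plus-timer-IRQ structure; all names
 * are assumptions for illustration. */
void adc_poll(void);    void heater_poll(void);
void display_poll(void); void keypad_poll(void);
void serial_poll(void);

static void (* const snippet[])(void) = {
    adc_poll, heater_poll, display_poll, keypad_poll
};
#define NSNIP (sizeof snippet / sizeof snippet[0])

void timer_irq(void)                 /* periodic tick, e.g. every 1 ms */
{
    static unsigned i;
    snippet[i]();                    /* one snippet per tick, rotating */
    if (++i >= NSNIP) i = 0;
}

int main(void)
{
    for (;;)
        serial_poll();               /* classic procedural mainline work */
}

Scoping a test point around timer_irq (per the oscilloscope trick earlier in the thread) would show each snippet's execution time in rotation.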

John

Reply to
John Larkin

Yep, absolutely agreed.

Just to add a bit of emphasis - nowadays I never have code that just waits for something to happen. It seems odd to me now that anybody would, in the same way that the use of globals seems odd to me now. Instead I use state machines and keep right on going.

Steve


Reply to
Steve at fivetrees

Further musing: in a procedural program the system "state" is the value of the pc compounded by the reverse path back up to the start, which includes not only the flow-chart type logic back to the surface, but the values of all the subroutine addresses on the stack. It's amazing that big programs work at all. Actually, most of them don't.

Yeah, another rule: failure of a hardware operation to complete should not stall the code. So if a SAR ADC has a DONE flag, ignore it. Trigger the sucker, wait 18 microseconds, and read it, or trigger it and read it next pass through.
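
A sketch of that trigger-and-read-next-pass pattern (the ADC register names are placeholders, not any particular part's):

/* Hedged sketch of the non-stalling ADC pattern; ADC_START and ADC_DATA
 * are placeholders for your converter's registers. */
extern volatile unsigned ADC_START, ADC_DATA;   /* hypothetical registers */
static unsigned latest_sample;

void adc_poll(void)              /* called periodically, never busy-waits */
{
    static int triggered;
    if (!triggered) {
        ADC_START = 1;           /* kick off a conversion this pass */
        triggered = 1;
    } else {
        latest_sample = ADC_DATA; /* finished long before the next pass */
        triggered = 0;            /* no DONE flag consulted, no stall */
    }
}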

John

Reply to
John Larkin

Interesting. Is the idea better bad data than no data?

Robert

Reply to
Robert Adsett

Bad data can be observed and/or checked. If an embedded program hangs in the field, it's going to be very hard to figure out why. A user can call me and say "the temperature reads 655.35 degrees C" or he can say "the whole box just hung up."

John

Reply to
John Larkin

Apparently you have a relatively benign application environment for which that's reasonable.

Imagine instead that it's something like the speed feedback from a milling head. If you accept a bad value, it could end up careening at high speed in the wrong direction (or, almost as bad, in the right direction).

I do agree that having the process just sit forever with the same output would be as bad, but what about simply taking a value from a peripheral that indicates the results are not yet valid?

Robert

Reply to
Robert Adsett

I make, among other things, NMR sample temperature controllers. If one messes up, it could easily poach an enzyme sample that a thousand rabbits died to make. So I check for open thermocouples, shorted thermocouples, unreasonable reference-junction temps, unreasonable actual feedback temps, open/shorted heater, loss of air flow, idiotic customer requests, and inconsistencies between sensed temperature and how much power we're dumping into the heater. I can't do any of that if the code isn't running.

The alternative is to let things hang if the hardware doesn't cooperate, and let a watchdog timer shut things down. That's safe, but delivers zero diagnostic data. Actually, I use two watchdogs, one the normal CPU thing and a second, a one-shot that operates a relay in series with the heater.

And I could also check the DONE bit *after* the 18 us wait, or loop on the done bit with a parallel timeout. But the code shouldn't hang.
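
A sketch of that bounded-wait variant (the register names and the loop limit are placeholders): a broken converter yields an error code instead of a hung loop.

/* Hedged sketch of polling DONE with a parallel timeout; all names
 * are placeholders for your converter's registers. */
extern volatile unsigned ADC_START, ADC_DONE, ADC_DATA;
#define ADC_TIMEOUT 1000     /* worst case: a known, finite loop count */

int adc_read_checked(unsigned *result)
{
    unsigned t;
    ADC_START = 1;
    for (t = 0; t < ADC_TIMEOUT; t++) {
        if (ADC_DONE) {
            *result = ADC_DATA;
            return 0;        /* good conversion */
        }
    }
    return -1;               /* report the fault; don't stall the code */
}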

John

Reply to
John Larkin
