instructions - operations


I'm reading some papers about a tool that estimates the
worst-case execution time of a task. The authors
distinguish between the terms "instruction" and
"operation".

I'm not sure about the difference. Could you explain it
to me?

Thank you for your answers.


Re: instructions - operations


The best source is the authors.  They must have something specific in
mind.  My guess is that the difference comes up for instructions that
perform multiple low-level operations.  A simple example would be
decrement-and-skip-if-zero.  Another example might be an 80x86 string
instruction that performs the same operation on each byte of an
array.  DSPs are famous for allowing multiple operations per
instruction.
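
The instruction/operation distinction can be sketched with a toy model
(the opcode names and operation counts below are illustrative
assumptions, not any real ISA):

```python
# Toy model: each instruction expands into one or more low-level
# operations.  Opcodes and counts are hypothetical, for illustration.

MICRO_OPS = {
    "add": 1,   # simple ALU instruction: one operation
    "dsz": 2,   # decrement-and-skip-if-zero: decrement, then test/branch
}

def operation_count(program, array_len=0):
    """Count low-level operations (not instructions) in a program."""
    total = 0
    for insn in program:
        if insn == "rep_movsb":
            # A string instruction repeats the same operation per byte.
            total += array_len
        else:
            total += MICRO_OPS[insn]
    return total

program = ["add", "dsz", "rep_movsb"]
print(len(program))                  # 3 instructions...
print(operation_count(program, 16))  # ...but 1 + 2 + 16 = 19 operations
```

Counting operations rather than instructions is what matters for a
worst-case timing bound, since the string instruction's cost scales
with the array length.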


Re: instructions - operations

I am just guessing what might have been meant in the
context that you reference, but worst-case estimation
is an important component in design.  My first thought
was of the complications on some processors due to
pipeline stalls and memory cache misses.

Dr. Philip Koopman wrote an excellent article for Embedded
Systems a few years ago about the problem of worst-case
execution timing on non-deterministic processors.  One
can't simply say that a given instruction takes so many
cycles, except in the context of which other instructions
are executed, and when, by unpredictable interrupt-driven
routines.

His bottom line was that, one time in a thousand, a given
routine would take 100 times longer to execute than its
average execution time on the non-deterministic Intel
processor he was using for his examples.  So to meet
realtime requirements 100% of the time instead of 99.9%
of the time, one had to use hardware 100 times
faster than what was needed to meet the requirement 99.9%
of the time.
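
The sizing argument can be written out as a few lines of arithmetic
(the numbers are the ones from the article as recalled above, not
fresh measurements):

```python
# Sketch of the hardware-sizing argument.  Values are normalized:
# the routine's average execution time is 1.0 time unit.
average_time = 1.0
tail_fraction = 1 / 1000            # how often the slow case occurs
worst_case_time = 100.0 * average_time  # the rare slow execution

# Hardware sized for the average case meets the deadline only in the
# common case; the tail executions blow through it.
deadline = 1.0
fraction_met = 1.0 - tail_fraction      # 0.999 with average-sized hardware
speedup_for_100_percent = worst_case_time / deadline

print(fraction_met)             # 0.999
print(speedup_for_100_percent)  # 100.0
```

So the last 0.1% of deadline coverage is what costs the factor of 100
in hardware speed.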

One might use the terms that you mentioned to distinguish
between the time listed in an instruction cycle table and
the time required to get that instruction through the
pipelines and multi-layered memory caches in the presence
of an unpredictable mix of other interrupt-driven
instruction sequences.  On some processors there is
pretty much a one-to-one relationship between instruction
and operation.  But on other processors an instruction
might execute in parallel with other instructions, or it
might stall, or be stalled by some other instruction's
use of shared resources.
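
The gap between the cycle-table number and the actual time can be
modeled in a few lines (the cycle counts and miss penalty below are
made-up illustrative values, not figures for any real processor):

```python
# Toy timing model: cycle-table time vs. time through a pipeline when
# some memory accesses miss the cache.  All numbers are hypothetical.

TABLE_CYCLES = {"load": 2, "add": 1, "store": 2}
CACHE_MISS_PENALTY = 20  # extra cycles for a memory access that misses

def table_cycles(program):
    """The optimistic answer: just sum the instruction cycle table."""
    return sum(TABLE_CYCLES[insn] for insn in program)

def actual_cycles(program, misses):
    """Add the miss penalty for memory instructions that miss the cache."""
    total = 0
    for i, insn in enumerate(program):
        total += TABLE_CYCLES[insn]
        if insn in ("load", "store") and i in misses:
            total += CACHE_MISS_PENALTY
    return total

program = ["load", "add", "store"]
print(table_cycles(program))               # 5: the cycle-table answer
print(actual_cycles(program, misses={0}))  # 25: same code, one cache miss
```

Even one miss makes the real time several times the tabulated time,
and an interrupt arriving mid-sequence can evict cache lines and turn
hits into misses, which is exactly why the table alone can't give a
worst-case bound.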

The complexity of predicting the effects of instruction
sequences in pipelines and caches is one of the main
reasons for the complexity of some compilers today.  And
an unpredicted sequence of instructions being run by
an interrupt in the middle of any sequence throws all
of that out.  So in my opinion one simply has to test for
the low-frequency cases that may push the edge of meeting
any hard realtime requirements.  Worst-case
estimation is important at design time.

You probably know whether the context for the use of
the terms in question was about processors
that have these non-deterministic timing problems
due to pipelines, cache effects, or coprocessors
stealing memory cycles from the bus, etc.

Best Wishes
