interrupt latency

Just a generic question regarding interrupt latency. I am using a periodic timer which generates an interrupt at every clock tick. The question I have is: will the interrupt latency cause the timer interrupt service routine to run at a reduced frequency? What I mean is as follows. Consider a simple ISR as shown below:

volatile uint32_t t;                 /* written by the ISR, read elsewhere */
void isr(void) { t = timestamp(); }  /* timestamp() reads some time source */

Assume the above code runs twice. The question I have is: will the time elapsed between the two calls to timestamp() in this ISR include the interrupt latency, or will it correspond to the original timer frequency (i.e. time_elapsed = 1/timer_freq)? It would make sense that the ISR still runs at the original timer frequency and the interrupt latency only delays the calling of the ISR, not the frequency at which it runs. However, from my measurements that doesn't appear to be the case. Any help is appreciated.
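For concreteness, here is a minimal sketch of the measurement I mean (timestamp() and its tick width are assumptions; it is taken to read a free-running counter):

#include <stdint.h>

extern uint32_t timestamp(void);      /* assumed: reads a free-running counter */

static volatile uint32_t t_prev, t_curr;
static volatile uint32_t delta;       /* measured spacing between ticks */

void isr(void)
{
    t_prev = t_curr;
    t_curr = timestamp();
    delta  = t_curr - t_prev;         /* compared against 1/timer_freq */
}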

Reply to
wamba.kuete

No, as long as the interrupt routine is short.

Maybe. Over time it should, but there is no a priori reason for the latency to be constant.

Robert

Reply to
Robert Adsett

OK, so it's fair to say the following is not correct:

    interrupt latency = (time elapsed between timestamp values in the ISR) - timer_period, where timer_period = 1/timer_freq

Reply to
wamba.kuete

Another way of phrasing the above question is as follows:

interrupt latency != (time elapsed between two calls to timestamp() in the code above) - timer_period (i.e. 1/timer_freq)

Reply to
wamba.kuete

That depends on the source of the interrupt. If it comes from a timer that continues to run after an interrupt occurs, and you do not change the timer value, the interrupt rate should be fixed. If instead you restart the timer within the interrupt, then the time between the timer interrupt and the restart is lost (although it may be partially compensated for).

For many applications I use a continuously running timer that generates an interrupt on overflow. I form a high resolution run timer from a combination of the software interrupt count and the timer value. Care has to be taken to handle timer rollovers properly.
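A sketch of that scheme, assuming a 16-bit free-running timer whose count is read through TIMER_COUNT (a hypothetical register name and address) and which interrupts on overflow:

#include <stdint.h>

#define TIMER_COUNT (*(volatile uint16_t *)0x40001000)  /* hypothetical address */

static volatile uint32_t overflow_count;   /* software extension of the timer */

void overflow_isr(void)                    /* runs once per hardware overflow */
{
    overflow_count++;
}

/* High-resolution time: overflow count in the high bits, hardware count
 * in the low bits.  Rereading the overflow count catches a rollover that
 * lands between the two reads. */
uint64_t now_ticks(void)
{
    uint32_t hi;
    uint16_t lo;

    do {
        hi = overflow_count;
        lo = TIMER_COUNT;
    } while (hi != overflow_count);

    return ((uint64_t)hi << 16) | lo;
}

(This assumes now_ticks() is called where the overflow interrupt can still fire; inside an equal- or higher-priority ISR you would check the timer's pending-overflow flag instead.)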

--
Thad
Reply to
Thad Smith

If your system does not have to process other (higher-priority) interrupts and has no areas in the code where interrupts are disabled, then the time between two calls to timestamp() will be constant, as long as you have a timer interrupt that restarts itself (as indicated by others as well). If you do have higher-priority interrupts and/or interrupt-disabled areas, then the moment timestamp() is called can be delayed by a total of (time needed in interrupt-disabled areas) + (total time of all higher-priority ISRs). So, depending on the system configuration, you may or may not see variation in the time between two calls to timestamp().
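(As a rough bound, in my formulation: the start of the ISR can be late by at most (longest interrupt-disabled area) + (sum of the higher-priority ISR execution times that can occur back to back), so consecutive timestamp() spacings can deviate from timer_period by up to that amount in either direction.)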

Kind regards, Johan Borkhuis

Reply to
borkhuis

Not necessarily. Many systems, maybe even most, have varying instruction execution times depending on the instruction, what memory it is executing from, and what memory is being accessed. That's without considering instructions that automatically disable interrupts for the next instruction or two.

Robert

Reply to
Robert Adsett

Yes, it might give you the jitter in the period, though. I think that would be simpler to measure on a storage oscilloscope.

As has been pointed out, if you use a free running timer for your timestamp and a comparator off of that same timer for the interrupt you can measure the latency directly.
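A sketch of that direct measurement, assuming a free-running timer count at TIMER_COUNT and the compare register TIMER_COMPARE that raised this interrupt (both names and addresses hypothetical):

#include <stdint.h>

#define TIMER_COUNT   (*(volatile uint16_t *)0x40001000)  /* hypothetical */
#define TIMER_COMPARE (*(volatile uint16_t *)0x40001004)  /* hypothetical */

void compare_isr(void)
{
    /* The timer kept counting after the match, so the distance from the
     * compare value to the current count is the entry latency in ticks. */
    uint16_t latency_ticks = (uint16_t)(TIMER_COUNT - TIMER_COMPARE);
    (void)latency_ticks;   /* record or accumulate as needed */
}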

Robert

Reply to
Robert Adsett

Good point. I was assuming either an automatic reload or a free-running timer with an adjustable comparator, without any justification.

Robert

Reply to
Robert Adsett

Just as a note: these latency effects are known as jitter. The frequency of the timer interrupt remains constant, but the time between individual interrupts varies. This assumes that the total latency is less than the period of the interrupt.
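In symbols (my notation): if T = 1/timer_freq and d_n is the entry latency of the nth interrupt, the measured spacing is

    t_(n+1) - t_n = T + d_(n+1) - d_n

so each interval deviates from T only by the difference of consecutive latencies, and the long-run average stays T (again, provided the latency never exceeds T).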

Reply to
Michael N. Moran

Assuming, as another poster reminded me, that you don't reload the timer in the interrupt. If you do, you get drift added to the jitter.

Robert

Reply to
Robert Adsett

You may, or you may not. There are techniques that make it possible in many cases to reload a timer without causing drift (add to or subtract from the current timer value).
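For example, a sketch assuming an up-counting timer whose count register stays writable and keeps running (TIMER_COUNT, its address, and PERIOD_TICKS are hypothetical names):

#include <stdint.h>

#define TIMER_COUNT  (*(volatile uint16_t *)0x40001000)  /* hypothetical */
#define PERIOD_TICKS 1000u                               /* hypothetical period */

void timer_isr(void)
{
    /* Subtract the period instead of writing zero: any ticks that elapsed
     * between the match and this write are kept, so latency shows up as
     * jitter rather than as accumulated drift. */
    TIMER_COUNT -= PERIOD_TICKS;
}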

Gerhard

Reply to
Gerhard Fiedler

Doesn't this require the timer to be able to perform the add/subtract or the CPU to perform read-add-write with predictable timing?

--
Peter
Reply to
Peter Dickerson

Yes. The read-add/subtract-write sequence often does have predictable timing, or can be made to have it (with interrupts disabled -- which will often already be the case in this scenario).
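For instance, extending the earlier sketch (RMW_TICKS is an assumed constant: the timer ticks consumed by this read-modify-write on the given part):

/* Interrupts are already disabled inside the ISR, so the sequence below
 * always takes the same time; adding the known cost back compensates for
 * the ticks that elapse between the read and the write of TIMER_COUNT. */
TIMER_COUNT = TIMER_COUNT - PERIOD_TICKS + RMW_TICKS;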

Gerhard

Reply to
Gerhard Fiedler

And a timer that continues running after a match. I was thinking of the other case but Gerhard raises a good point.

Robert

Reply to
Robert Adsett
