Hi:
On the TMS320F2812 DSC I'm using to synthesize digital output words, I've discovered that the CPU pipeline appears to cause jitter in the interrupt latency. The basic program structure is:
interrupt timer_isr()
{
    diagnostic_IO_port_bit = 1;   // scope this diagnostic pulse
    calculate_and_output_some_data_words();
    diagnostic_IO_port_bit = 0;
}

main()
{
    make_CPU_timer_interrupt_vector_point_to_timer_isr();
    init_CPU_timer_to_cause_periodic_interrupts();
    clear_CPU_interrupt_mask_bit();   // enable global interrupts

    while(1)
    {
        do_something_preemptable_less_important_than_timer_isr_repeatedly();
    }
}
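For concreteness, here is roughly how that skeleton looks with TI's DSP281x header files. Treat it as a sketch: GPIOA0 as the diagnostic pin, the 1 ms period, and the two helper functions are placeholders of mine, not my exact code.

/* Sketch using TI's DSP281x peripheral headers; pin, period, and the
   placeholder helpers are assumptions for illustration. */
#include "DSP281x_Device.h"
#include "DSP281x_Examples.h"

void calculate_and_output_some_data_words(void);  /* placeholder */
void do_something_preemptable(void);              /* placeholder */

interrupt void cpu_timer0_isr(void)
{
    GpioDataRegs.GPASET.bit.GPIOA0 = 1;      // raise diagnostic pulse
    calculate_and_output_some_data_words();
    GpioDataRegs.GPACLEAR.bit.GPIOA0 = 1;    // drop diagnostic pulse
    PieCtrlRegs.PIEACK.all = PIEACK_GROUP1;  // re-arm PIE group 1 (TINT0)
}

void main(void)
{
    InitSysCtrl();                           // PLL, watchdog, clocks
    InitPieCtrl();
    InitPieVectTable();

    EALLOW;
    GpioMuxRegs.GPADIR.bit.GPIOA0 = 1;       // GPIOA0 as output
    PieVectTable.TINT0 = &cpu_timer0_isr;    // vector -> our ISR
    EDIS;

    InitCpuTimers();
    ConfigCpuTimer(&CpuTimer0, 150, 1000);   // 150 MHz CPU, 1000 us period
    StartCpuTimer0();

    PieCtrlRegs.PIEIER1.bit.INTx7 = 1;       // TINT0 is PIE INT1.7
    IER |= M_INT1;                           // enable CPU INT1
    EINT;                                    // clear INTM (global enable)

    for(;;)
    {
        do_something_preemptable();          // low-priority background work
    }
}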
I set my scope to expand the time scale around the *next* rising edge of the diagnostic pulse, to observe the jitter in the timing of the periodic interrupt service routine.
What I observe is (you will need fixed font for this):
     ____                              ______
____/    \______________________///////      \____ ...
trig^                           ^-----^
    ^-------timer period--------^ jitter
The time spacing between the jitter edges is one CPU cycle (6.67 ns at the F2812's 150 MHz clock).
Interestingly (and understandably), enabling the C2000's real-time debug mode and halting the CPU, so that the peripherals continue operating and the interrupt keeps getting serviced, makes the jitter disappear.
The jitter width varies with the code executing in the main loop. In the simplest case of a while(1); main loop the jitter is 6 cycles (40 ns), but fairly complex main-loop code can produce 10 cycles (66.7 ns) of jitter.
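Presumably the interrupt can only be taken on an instruction boundary, so whatever instruction happens to be in the pipeline has to retire first. A crude way to probe that (a sketch, not a known worst case) is to swap between an empty spin loop and a background loop stuffed with multi-cycle work, and compare the jitter bands on the scope:

volatile long acc;   /* volatile so the compiler keeps the loop */

/* Background loop meant to keep longer-running instructions in
   flight; the 32x32 multiply is just an illustrative guess at
   "fairly complex" code, not a measured worst case. */
void busy_background(void)
{
    long i;
    for(;;)
    {
        for(i = 0; i < 1000; i++)
        {
            acc += i * i;   /* compare the jitter band against while(1); */
        }
    }
}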
Fortunately I can tolerate this level of jitter in my application. (In practice it is a little worse, because I have another, lesser interrupt that can block the important one for a few cycles, until I re-enable interrupts in the lesser ISR.)
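The way I keep that blocking window short is to re-enable interrupts near the top of the lesser ISR, roughly like this (the PIE group and the helper name are illustrative, and this assumes the timer interrupt lives in a different PIE group so only INTM blocks it):

void do_low_priority_work(void);             /* placeholder */

interrupt void lesser_isr(void)
{
    PieCtrlRegs.PIEACK.all = PIEACK_GROUP2;  // ack this ISR's own group
    EINT;   // clear INTM: from here on the timer ISR can preempt us

    do_low_priority_work();

    DINT;   // set INTM again before returning; IRET restores its old state
}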
I wonder whether the new SHARC DSPs, which I recall reading are not pipelined, could produce perfectly jitter-free signal synthesis?
Comments appreciated.