Somewhat OT: Efficient timing under Linux

I'm programming a somewhat compute-intensive, time-critical application on an embedded ARM9 system running Linux. I need a clock in the low tens of milliseconds for triggering synchronous functions, i.e.

while (1) {
    if (timeout()) {      /* say 30 ms */
        reset timer;
        do some stuff;
    }
}

The standard Linux itimer functions and callback mechanism will serve, but I suspect (without much data to back it up) that there may be significant overhead associated with them. Any suggestions on how to do better in terms of overhead?

I appreciate that this sort of thing can be done easily at the hardware level, just a few instructions in a timer ISR. For now, I'd prefer to stay within the Linux API, and also avoid realtime OS extensions.

As an aside, can anyone suggest a good Linux forum for real time programming?

Reply to
Bruce Varley

Usually signal handling is fast: context switching and event delivery are in the microsecond range, depending on your system clock. But I don't know how well the itimer mechanism performs.

I've done a similar loop once in a user-mode app with the help of my own Linux driver: the app calls an ioctl into the driver, which sleeps, and the process is woken by an interrupt every 16 ms for VSync. This worked even with multiple threads in the user-mode app, provided you raise the priority of the loop thread. Jitter is below 1 ms most of the time (this depends on the other drivers in your system).

But of course, Linux doesn't guarantee it, so it might occasionally wait longer. If you can live with a delay of a few extra milliseconds now and then, it's workable. Don't do it if you want to control a motor or anything like that.

--
Frank Buss, http://www.frank-buss.de 
electronics and more: http://www.youtube.com/user/frankbuss
Reply to
Frank Buss

itimers devolve to spinlock() calls; I would consider them pretty low-overhead. Frankly, I find select()/pselect() with the timeout argument filled in good enough, but I'm not in a position where jitter matters much, nor where CPU utilization is much at issue.

You can always write a device driver that owns the hardware timer and invokes callbacks. I would be more concerned about the nonblocking nature of the loop above... I figure on at most one nonblocking timer-poll loop per CPU.

As a gross generalization, these questions quickly turn into pthreads discussions.

--
Les Cargill
Reply to
Les Cargill

At the "low tens of milliseconds" level, the context-switching overhead shouldn't be significant. Typical time slices are between 10ms (for servers and systems with relatively expensive context switches) and 1ms (for low-latency desktop systems).

Reply to
Nobody

I worked on a project that seemed to do well with significant data-mangling across multiple processes in a 5-millisecond loop. It differed from yours in that the 5 milliseconds was based on hardware interrupts from the device that generated the data. I haven't seen it scaled up to a full workload, but the simulations we had running at the time suggested that things were good. It ran stock Debian 5 on an 800 MHz x86. Extra-low niceness was granted to the processes that had to be fast.

Mel.

Reply to
Mel Wilson

Right, and 10 ms is common for some embedded systems. And if no process is at 100% load, processes that wait for I/O or events are woken much faster than the time-slice interval, as soon as data arrives or an event is generated.

--
Frank Buss, http://www.frank-buss.de 
electronics and more: http://www.youtube.com/user/frankbuss
Reply to
Frank Buss
