timer implementation

I'm seeking links/suggestions about the implementation of general purpose timers. The usage of the timers would look something like:

id = start_timer(wait_time, func_called_when_time_up); stop_timer(id);

There would also be a function called by the system timer tick, which would call functions passed as "func_called_when_time_up" as appropriate. (It's assumed the calling code handles all mutual exclusion issues that may result from the time-up function being called. There would also likely be some way to pass id-specific data to the time-up function, but I don't think that would impact the broad outlines of the design.) I'm looking for an approach that could scale to 100s or more timers.

So far the best approach I can think of is to keep the timers in an AVL tree where (absolute) expiration time is the search key. The big plus is controlled worst-case behavior. Is there an approach that has at least O(log n) worst-case and O(1) average time for both stopping a timer and the case where the timer runs to expiration?
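
To make the intended interface concrete, here is a rough sketch of what I have in mind; all of these names are placeholders, not an existing library:

/* Rough sketch of the intended interface; all names are placeholders. */

typedef unsigned timer_id_t;
typedef void (*timer_func_t)(void *user_data);

/* Schedule func to run after wait_time ticks.  Returns an id that can
   later be passed to stop_timer(). */
timer_id_t start_timer(unsigned wait_time, timer_func_t func, void *user_data);

/* Cancel a pending timer; no effect if it has already expired. */
void stop_timer(timer_id_t id);

/* Called from the system timer tick; runs (and removes) every timer
   whose expiration time has been reached. */
void timer_tick(void);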

Reply to
wkaras

On 14 Aug 2006 15:21:13 -0700, snipped-for-privacy@yahoo.com wrote:

Without getting into exactly why you want to do all this (you're sweeping a lot of important details under the rug of "calling code handles all mutual exclusion issues," which suggests you haven't thought that part through yet), there is a short answer you may like: delta queues.

Usually, your timer tick is pre-emptive. (In your case, I assume you're asking this without an existing O/S, since an O/S usually provides this service even if it does little else.) Since it's pre-emptive and the test happens on every tick, the usual desire (at the expense of other desires) is to make the test for whether or not to launch a function call (or process) as fast as possible. Timer events eat up available CPU, so that is usually an important consideration. A lookup tree is not the fastest option in that regard.

If you insert into your queue using a method like this:

qnodeptr_t qinsertd( queueptr_t queue, qnodeptr_t node, qsleepkey_t key )
/* --------------------------------------------------------------------------
   Link the given node into the given queue, as a delta-key entry.  Here, the
   key is a time delay.  The first entry in the queue has the least time
   remaining to wait.

   As the list of processes is scanned from first to last, the time value of
   each node earlier in the queue is subtracted from the current, desired
   delay.  In this way, the current delay being tracked only holds the time
   remaining after the earlier processes have exhausted their time delays.
   Once the right insertion point is found, the time left on this current
   delay value is used as the key for the entry to be inserted.  At that
   point, the node is inserted and the time value for the next entry which
   then follows it is updated.
*/
{
    qnodeptr_t  prv, nxt;
    qsleepkey_t nxtkey;

    /* Scan for the insertion point, reducing the key by each delta passed.
       (The tail entry presumably carries a maximal key so the scan stops.) */
    prv = qheadptr( queue );
    for ( nxt = qnext( prv ); (nxtkey = qsleepkey( nxt )) < key;
          prv = nxt, nxt = qnext( nxt ) )
        key -= nxtkey;

    /* The entry that now follows the new node keeps only its remaining delta. */
    if ( nxt != qtailptr( queue ) )
        qsetsleepkey( nxt, nxtkey - key );

    qsetsleepkey( node, key );
    qsetprev( node, prv );
    qsetnext( node, nxt );
    qsetprev( nxt, node );
    qsetnext( prv, node );

    return node;
}

I'm not expecting you to follow all the details; I just wanted to show some real code that actually does the job with a delta queue, so that you can work through any failure on my part to describe it adequately.

But with this method, your timer tick does NOT need to look at all the various things waiting for time, only the top entry. Since all the other entries are relative to that first one, counting down its time effectively counts down all the rest of them, too. When the top entry's counter reaches zero, you run it, along with any following entries whose keys are also zero (those are due at the same moment).
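
To make the tick side concrete as well, here is a rough sketch of what the matching tick routine might look like, reusing the accessor names from the insertion code above; qremove() and run_entry() are assumed helpers that aren't shown:

/* Sketch of the per-tick work for a delta queue: only the first entry's
   delta is decremented, and entries whose delta reaches zero are run.
   qremove() and run_entry() are assumed helpers, not shown elsewhere. */
void qtick( queueptr_t queue )
{
    qnodeptr_t nxt = qnext( qheadptr( queue ) );

    if ( nxt == qtailptr( queue ) )
        return;                          /* nothing waiting */

    qsetsleepkey( nxt, qsleepkey( nxt ) - 1 );

    /* Run every leading entry whose remaining delta is now zero. */
    while ( nxt != qtailptr( queue ) && qsleepkey( nxt ) == 0 )
    {
        qnodeptr_t expired = nxt;
        nxt = qnext( nxt );
        qremove( queue, expired );       /* unlink from the queue     */
        run_entry( expired );            /* call the time-up function */
    }
}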

Hope that helps, Jon

Reply to
Jonathan Kirwan

I assume you want at most O(log n) worst-case. You can do that with some sort of balanced tree or skip list. A priority heap works well for this, but any balanced structure may have to do some rebalancing on removal, so it can't offer O(1) removal time. When you stop a timer, you can either adjust the tree/list at that point, or wait for the old expiration time to come up and discard the entry then.
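
For illustration only, a minimal sketch of the priority-heap variant, keyed by absolute expiration tick; the names and the fixed-size array are placeholders, and the stop/cancel path and overflow checks are omitted:

#include <stddef.h>

/* Minimal binary min-heap of timers keyed by absolute expiration tick.
   Illustrative only; overflow checks and cancellation are omitted. */

typedef struct {
    unsigned long expires;                /* absolute expiration tick */
    void (*func)(void *);                 /* time-up callback         */
    void *data;
} heap_timer_t;

#define HEAP_MAX 1024
static heap_timer_t heap[HEAP_MAX];
static size_t heap_len;

static void heap_push(heap_timer_t t)     /* start a timer, O(log n) */
{
    size_t i = heap_len++;
    while (i > 0 && heap[(i - 1) / 2].expires > t.expires) {
        heap[i] = heap[(i - 1) / 2];      /* sift the parent down */
        i = (i - 1) / 2;
    }
    heap[i] = t;
}

static void heap_pop(void)                /* remove the earliest timer, O(log n) */
{
    heap_timer_t last = heap[--heap_len];
    size_t i = 0, c;
    while ((c = 2 * i + 1) < heap_len) {
        if (c + 1 < heap_len && heap[c + 1].expires < heap[c].expires)
            c++;                          /* pick the smaller child */
        if (last.expires <= heap[c].expires)
            break;
        heap[i] = heap[c];
        i = c;
    }
    heap[i] = last;
}

void heap_tick(unsigned long now)         /* per tick: fire everything due */
{
    while (heap_len > 0 && heap[0].expires <= now) {
        heap_timer_t t = heap[0];
        heap_pop();
        t.func(t.data);
    }
}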

--
Thad
Reply to
Thad Smith

This is usually done with a list in which only the difference to the next timer is stored, so when the first timer is decremented, all the others are effectively decremented as well.

Rene

Reply to
Rene Tschaggelar

The concern I have with this approach is that starting a timer has time complexity O(n) worst-case, whereas the AVL tree approach has O(log n) worst-case.

Reply to
wkaras

The insertion time usually isn't the critical side of it, since it happens only once per insertion and is part of each process's normal use of the CPU. In any case, you can add extra structures to reduce insertion time if it's important. What usually winds up being more important, though, is the amount of time taken on each timer event. That usually needs to be very fast, especially if you want good timer precision and have fast ticks, because every timer event costs CPU. With the technique above, the time used per timer event is completely independent of the number of waiting events; since you mentioned hundreds of timers, I thought it worth mentioning.

Having said that, I've said enough. It's up to you what your own needs are and where the important issues lie. I leave that to you. But this is a common technique that works well in real systems. I use it, for example, in a fast system with 25ns cycle times and 2us timer event resolution (a 9-cycle, or 225ns, O/S delay to launch the process), which was important for the application.

I've tried more complex methods, too. Just so you know.

Jon

Reply to
Jonathan Kirwan

Assuming you have a fixed tick on which you will dispatch timer events, a slightly different (and more-or-less O(1)) approach is to store your timers in buckets (using a simple linked list for each bucket), the bucket being selected based on when the timer fires.

Start with 128 buckets of one tick each. These are the highest-resolution items, and on each tick you just fire all the events in the next bucket. If you set a timer that fires within the next 128 ticks, that's all there is to it. Next you have a set of 128 buckets, each covering 64 ticks. These hold timers longer than those in the first set. Once every 64 ticks, you grab the next bucket from the second tier and redistribute all of its items into the first-tier buckets, where you'll eventually dispatch them.

You repeat that out to the longest intervals you want to time, with subsequent tiers of buckets covering 4096, 256K, 16M, 1G... ticks each. Seven tiers of buckets and 1ms ticks covers you for 250+ years, and timers longer than a couple of years would actually get moved six times in their lifetime.

Each timer is processed exactly once if it's set to expire in the next 128 ticks; longer timers are also copied once for each tier of buckets they are away from the first (roughly lg(ticks)/6). Adding a timer to the first tier is simply a range check plus a modulo-128 index calculation to find the bucket and an add onto its linked list. For higher tiers the range check is a bit more complex, but it's still just a basic indexing operation to determine where the timer goes.

Obviously there's nothing magic about 64 as a factor. Smaller buckets will reduce the number of list heads, but increase the number of moves.

FWIW, the work per timer is effectively O(log t), where t is the duration of the timer, and it does not depend on the number of timers at all. Since that's always going to be a very small number, it's effectively O(1).
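
For illustration, a minimal two-tier sketch of this bucketing, using singly linked lists for the buckets; all names here are invented, cancellation is omitted, and further tiers would follow the same pattern:

/* Sketch of a two-tier timing wheel: 128 one-tick buckets plus 128
   sixty-four-tick buckets.  Timers further out than 128*64 ticks would
   need a third tier (not shown).  All names are invented for illustration. */

#define NBUCKETS  128
#define T2_SPAN    64                     /* ticks covered by one tier-2 bucket */

typedef struct wtimer {
    struct wtimer *next;
    unsigned long  expires;               /* absolute expiration tick */
    void         (*func)(void *);
    void          *data;
} wtimer_t;

static wtimer_t *tier1[NBUCKETS];         /* 1 tick per bucket   */
static wtimer_t *tier2[NBUCKETS];         /* 64 ticks per bucket */
static unsigned long now;                 /* current tick        */

static void enqueue(wtimer_t **bucket, wtimer_t *t)
{
    t->next = *bucket;
    *bucket = t;
}

void wheel_add(wtimer_t *t)               /* start a timer */
{
    if (t->expires - now < NBUCKETS)      /* fires soon: straight into tier 1 */
        enqueue(&tier1[t->expires % NBUCKETS], t);
    else                                  /* later: park it in tier 2 */
        enqueue(&tier2[(t->expires / T2_SPAN) % NBUCKETS], t);
}

void wheel_tick(void)                     /* called once per tick */
{
    wtimer_t *t, *next;

    now++;

    /* Every 64 ticks, cascade the next tier-2 bucket down into tier 1. */
    if (now % T2_SPAN == 0) {
        unsigned idx = (unsigned)((now / T2_SPAN) % NBUCKETS);
        t = tier2[idx];
        tier2[idx] = NULL;
        for (; t != NULL; t = next) {
            next = t->next;
            enqueue(&tier1[t->expires % NBUCKETS], t);
        }
    }

    /* Fire everything in the current one-tick bucket. */
    t = tier1[now % NBUCKETS];
    tier1[now % NBUCKETS] = NULL;
    for (; t != NULL; t = next) {
        next = t->next;
        t->func(t->data);
    }
}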

Reply to
robertwessel2

This has a lot of advantages. The one quibble would be that you get "hiccups" where timer management may spend a lot of time when crossing tier boundaries, particularly when it has to cascade at multiple tier levels.

Reply to
wkaras

True, you do have a chunk of work to do when each of the higher tier intervals wraps, but you have a considerable amount of time to get the work done. Moving a bucket from the second tier to the first has to be completed in 63 ticks, from the third tier to the second in 4032 ticks. And you can lengthen that interval by increasing the number of buckets in a tier.

So you can at least stagger some of the work, and/or spread it out over multiple ticks (e.g. if you have 500 timers in a second-tier bucket to move in 63 ticks, do 8 in each tick). Spreading the work over multiple ticks reduces the maximum amount of work needing to be done in a single tick to the sum of ceil(n/t) for each tier, where n is the maximum number of items you'll ever have in a bucket in that tier and t is the number of ticks you have to move it. That's likely to be a fairly small number: 500 timers in a tier-two bucket implies (assuming 1ms ticks) that you're creating 8000 timers per second.
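
As a sketch of what spreading the cascade over several ticks might look like, reusing the names from the wheel sketch earlier in the thread; the drain pointer and per-tick limit are invented, and the drain must of course finish before any of its timers comes due:

/* Sketch: incremental cascade.  When a tier-2 bucket's window begins, point
   'draining' at its list instead of moving it all at once, then move at most
   MOVES_PER_TICK entries on each subsequent tick.  Names reuse the wheel
   sketch above and are invented for illustration. */

#define MOVES_PER_TICK 8

static wtimer_t *draining;                /* tier-2 bucket currently being drained */

void cascade_step(void)                   /* call once per tick */
{
    int i;
    for (i = 0; i < MOVES_PER_TICK && draining != NULL; i++) {
        wtimer_t *t = draining;
        draining = t->next;
        enqueue(&tier1[t->expires % NBUCKETS], t);   /* move down one tier */
    }
}
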
Reply to
robertwessel2
