small footprint priority queue

Hello,

I am maintaining a C application running on a tiny embedded platform with only a few hundred bytes of RAM. The engine of this application is an event queue, which is used for all communication between interrupt handlers, the main code and the various modules. Because of the small amount of RAM available, I'd rather not use malloc and friends. Therefore, the queue is now implemented as a statically allocated ringbuffer, something like this:

-------------------------------------------------------------------

enum event_type { EV_TYPE_A, EV_TYPE_B };

/* The various event types */

struct event_type_a { enum event_type type; int some_data; };

struct event_type_b { enum event_type type; char other_data[10]; };

/* The main event union holding all possible structs */

union event { enum event_type type; struct event_type_a a; struct event_type_b b; };

/* The ringbuffer */

#define QUEUE_SIZE 16

struct eventqueue_t { int head; int tail; union event event_list[QUEUE_SIZE]; };

static struct eventqueue_t eq;

-------------------------------------------------

Events are put in the queue by filling the event_list array at the 'head' position, and retrieved from the queue by reading event_list at the 'tail' position. Nothing special here.

void event_push(union event *ev) { eq.event_list[eq.head] = *ev; eq.head = (eq.head + 1) % QUEUE_SIZE; }

et cetera.
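
For completeness, the pop side is just the mirror image (the caller checks for an empty queue first):

void event_pop(union event *ev) { *ev = eq.event_list[eq.tail]; eq.tail = (eq.tail + 1) % QUEUE_SIZE; }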

This all works like a charm, but for a new application I need some kind of priority mechanism: events with low priority are just added to the end of the queue, but high-prio events should be inserted ahead of lower-priority events already waiting in the queue. This would be no problem if the queue were implemented as some kind of dynamic data structure (linked list, etc.), but as I said before, that is not an option because of the memory restrictions.

With the current queue implementation I would have to shuffle data around in the array whenever a higher-priority event is inserted, to make room for it at the right position, which is obviously not optimal. My current solution is to use separate queues for the different priorities, but that tends to waste a lot of memory.

Does anybody have ideas on how to create a tiny-footprint, efficient priority queue without using any dynamic memory allocation? Any ideas or pointers to books/documentation are welcome.

Thank you,

Reply to
Gerr

"Gerr" schreef in bericht news: snipped-for-privacy@g44g2000cwa.googlegroups.com...

What about filling the queue backwards when a high-priority event needs to be pushed?
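
Something along these lines, perhaps, on top of your eq structure (just a sketch, untested):

/* High-priority push: step the tail backwards and put the event there,
   so it is the next one to be popped; normal events keep going in at
   the head as before. */
void event_push_urgent(union event *ev)
{
    eq.tail = (eq.tail + QUEUE_SIZE - 1) % QUEUE_SIZE;
    eq.event_list[eq.tail] = *ev;
}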

--
Thanks, Frank.
(remove 'q' and '.invalid' when replying by email)
Reply to
Frank Bemelman

Linked lists do not require a dynamic memory allocation system.

Simply put all free memory into a pool of equal-size blocks and initially put them all into the 'Free' list. Since the blocks are all the same size, there is no point in storing a full pointer to the next block; just store the index of the next block. With 16 entries, try to find 4 free bits in each entry for that 'pointer' value. That way you can walk the priority queue.

In your case, the priority queue cost would be 4 bits/element.

I do not know your configuration, but I would create task-specific queues (with each element linked by the 4-bit index field) and a pseudo-task called 'Free'. Do you really need a priority queue?
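
Roughly like this, for illustration (names and details made up; a separate next[] array is shown, but the links could just as well live in 4 spare bits of each entry):

#define POOL_SIZE 16
#define NIL       0xFF                  /* end-of-list marker */

static union event   pool[POOL_SIZE];   /* the actual event storage */
static unsigned char next[POOL_SIZE];   /* link to the next entry   */
static unsigned char free_head;         /* head of the 'Free' list  */

void pool_init(void)
{
    unsigned char i;
    for (i = 0; i < POOL_SIZE - 1; i++)
        next[i] = i + 1;
    next[POOL_SIZE - 1] = NIL;
    free_head = 0;
}

/* Take a block from the 'Free' list; returns NIL if the pool is exhausted. */
unsigned char pool_alloc(void)
{
    unsigned char i = free_head;
    if (i != NIL)
        free_head = next[i];
    return i;
}

/* Return a block to the 'Free' list. */
void pool_free(unsigned char i)
{
    next[i] = free_head;
    free_head = i;
}

Each priority (or task) queue is then just a head/tail pair of such indices, and moving an event from one list to another is a couple of index assignments.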

Paul

Reply to
Paul Keinanen

True.

This is an alternative I already considered. But before re-inventing the wheel (I happen to do that every now and then), I wanted to hear some other opinions about possible solutions.

Maybe I don't.

What I need is a way to make sure some events are handled with a higher priority than others (in this case, received network messages are considered 'more important' than keyboard presses). Since the system already uses a single queue, the obvious solution was to rework the queue mechanism so that high-prio events are inserted ahead of others instead of being put at the end of the queue. This would probably have the least impact on the rest of the project, since all the changes stay inside the event-queue handling code. Only the queue itself needs to know the priority of each kind of event; the rest of the modules can be oblivious to it.
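
I'm thinking of something like a small table inside the queue code, say:

/* Known only to the queue module: priority per event type
   (values made up here; 1 = high, 0 = normal). */
static const unsigned char event_priority[] = {
    1,  /* EV_TYPE_A: e.g. a network message */
    0   /* EV_TYPE_B: e.g. a key press       */
};

event_push() can then look up event_priority[ev->type] and decide where to insert; none of the callers need to change.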

The pool thing you mentioned seems to be a nice solution for this. I will give it a try; it shouldn't be too hard.

Thanks for your time,

Reply to
Gerr

In that case the order would not be preserved, but it should be. I forgot to mention that in my original post.

Thanks,

Reply to
Gerr

Gerr:

How many distinct priorities?

How big can the total number of entries get? Is there a separate per-priority limit at all?

How much time can you spare, assuming you can trade space for time?

--
Charles Allen
Reply to
Charles Allen

Agreed. In the OP, the entry type was shown as a separate enum with 2 defined values. He could fit a 4-bit type code (up to 16 types) and a 4-bit pointer into the same entry. If each type has its own priority, then eliminate the type code and place each type into its own queue!
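
For example, something like this (sketch only; the names and macros are made up):

/* Pack the 4-bit type code and the 4-bit 'next' index into one byte. */
struct packed_event {
    unsigned char type_next;   /* low nibble: type, high nibble: next index */
    union {
        int  some_data;        /* payload for EV_TYPE_A */
        char other_data[10];   /* payload for EV_TYPE_B */
    } u;
};

#define ENTRY_TYPE(e)       ((e)->type_next & 0x0F)
#define ENTRY_NEXT(e)       ((e)->type_next >> 4)
#define ENTRY_SET(e, t, n)  ((e)->type_next = (unsigned char)(((n) << 4) | ((t) & 0x0F)))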

As an alternative, if there is enough time, simply add the new entry to any empty location in the array. When removing, scan all 16 to find the highest priority, copy that entry out, and mark its location unused.

--
Thad
Reply to
Thad Smith

The use of the word "therefore" in that last sentence is a mistake: the absence of dynamic memory allocation by no means forces you to use a ringbuffer.

[...]

Whoever taught you that you can't have linked lists without dynamic memory?

No, you don't have to reshuffle the actual data in the collection of events. Only the priority queue itself needs to be shuffled. Nothing forces you to make those two a single monolithic data structure.

For full generality, I'd use a heap for the priority queue, holding indices into the event array, and keep references to unused entries in the back part of the heap array, behind the actual heap. That's just 68 bits of storage for the entire thing. If you can't afford even 68 bits, you'll have to shuffle the actual data instead, but the heap structure will keep that less painful than it may appear.
--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

An unordered priority queue is pretty simple, and there are only 16 entries maximum to worry about. Keep a count of the number of entries. New entries go to Queue[Cnt++]. Then that's the number of entries you have to scan to find the highest priority in the queue. Return Queue[Highest] and set Queue[Highest] = Queue[--Cnt].
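
In code, roughly (untested; reuses the OP's union event and QUEUE_SIZE, prio_of() stands for however you map an event to its priority, and the caller checks for full/empty):

static union event Queue[QUEUE_SIZE];
static int Cnt;

void pq_insert(const union event *ev)
{
    Queue[Cnt++] = *ev;
}

void pq_remove(union event *ev)
{
    int i, Highest = 0;

    /* Scan the used part of the array for the highest priority. */
    for (i = 1; i < Cnt; i++)
        if (prio_of(&Queue[i]) > prio_of(&Queue[Highest]))
            Highest = i;

    *ev = Queue[Highest];
    Queue[Highest] = Queue[--Cnt];   /* fill the hole with the last entry */
}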

- Anton Treuenfels

Reply to
Anton Treuenfels

Clever! Then we don't have to mark or skip unused entries. Sounds like a winner to me.

--
Thad
Reply to
Thad Smith

You are right, I am by no means forced to use a ringbuffer because of the lack of dynamic memory allocation, and of course linked lists are also possible. I did not express myself clearly in the original post. My main reason for not wanting to create a linked-list-like data structure was the extra memory it would take for each element, holding either pointers or indices.

I have now implemented the queue as a static array of indices into the actual events. On insertion, this array is shuffled to make room for the new event at the proper location. I will do some tests tomorrow; if performance is sufficient, this will probably be good enough for now.

Thank you all for your comments,

Reply to
Gerr

The OP later added that he needs FIFO order for entries of the same priority, which (unless I misunderstand) may be destroyed with either of these suggestions.

Paul beat me to suggesting the linked list solution; Frank's has the advantage of smaller space overhead but only works with two priority levels and seems trickier to implement.

Alex

Reply to
Alex Fraser

It's not clear how the size 16 was picked. Could it be that it was chosen because this is the maximum number of event "sources"? (I.e. you have one event for the keyboard, one for network messages, etc., adding up to 16 possible.) If that is the case, then split into N queues, one per priority level, sizing each according to how many event "sources" there are at that priority level.

Reply to
CTips

Huh. Okay, how about this: keep the queue in sorted order, with the highest-priority element as the "topmost" element. Add new elements such that the new element always winds up "behind" (or "under") all elements of equal or higher priority already in the queue, so FIFO holds. The highest-priority element is always the topmost one, so finding and removing it is trivial.

In effect this swaps where the effort occurs compared to my first suggestion, from removal to insertion. It probably does more work on average, though, since while the correct insertion point can be found in logarithmic time, there's still the matter of at least occasionally moving several elements per insertion (versus only one element moved per removal in my first suggestion).

- Anton Treuenfels

Reply to
Anton Treuenfels

That would work, may give a memory saving over the multiple-linked-list approach (if the number of priorities is less than the queue size), and dequeueing is fast. But as you pointed out, the enqueue operation becomes potentially expensive, and its worst-case performance may be critical. I think an efficient implementation would be more complex than multiple linked lists, too.

Alex

Reply to
Alex Fraser

Maybe, maybe not. Certainly enqueueing is more expensive than dequeueing, but consider that the queue is always sorted before each item is added. A simple insertion sort is essentially linear time in that case, maintains FIFO, and there are (still) only 16 elements to check in the worst case.

I got a little side-tracked because I was also thinking about a variant of binary search that can find the proper FIFO insertion point within a sorted list in logarithmic time, after which any elements that need to be moved could be moved. But good lord, there are (still) only 16 elements max! Insertion sort might be good enough.
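
Something like this for the insert, say (untested; reuses the OP's union event and QUEUE_SIZE, prio_of() again stands for whatever maps an event to its priority, and the caller checks for room):

static union event queue[QUEUE_SIZE];   /* queue[0] is the front (next out) */
static int count;

void pq_insert(const union event *ev)
{
    int i = count;

    /* Shift strictly lower-priority items back one slot, so the new item
       ends up behind everything of equal or higher priority (FIFO holds). */
    while (i > 0 && prio_of(&queue[i - 1]) < prio_of(ev)) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i] = *ev;
    count++;
}

Removal is then just taking queue[0] and sliding the rest forward (or keeping a separate head index).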

- Anton Treuenfels

Reply to
Anton Treuenfels

[snip code]

You could use a heap. One way is to associate two fields with an event, a priority and a sequence number. You can get sequence numbers by having a counter that is updated with each entry. The top of the heap has the highest priority and, among those with that priority, the lowest sequence number. Using a heap (common for priority queues) is O(log n). However ...

Given that you need a small footprint, you must also consider code size. If the queue size is only 16, it is almost certainly better to use very simple code that might be O(n). For example, if you have two priority levels, low and high, search the buffer for the earliest high-priority event, emit it, and move everything before it up one position.
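
In outline, something like this on top of the OP's eq ring buffer (untested; is_high() stands for whatever priority test you use, and the caller checks that the queue is not empty):

void event_pop_prio(union event *ev)
{
    int i, pos = eq.tail;

    /* Find the earliest (oldest) high-priority event, if any;
       otherwise fall back to the oldest event at the tail. */
    for (i = eq.tail; i != eq.head; i = (i + 1) % QUEUE_SIZE) {
        if (is_high(&eq.event_list[i])) {
            pos = i;
            break;
        }
    }
    *ev = eq.event_list[pos];

    /* Move everything before it up one position, then advance the tail. */
    while (pos != eq.tail) {
        int prev = (pos + QUEUE_SIZE - 1) % QUEUE_SIZE;
        eq.event_list[pos] = eq.event_list[prev];
        pos = prev;
    }
    eq.tail = (eq.tail + 1) % QUEUE_SIZE;
}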

Richard Harter, snipped-for-privacy@tiac.net

The eternal revolution has its needs - its slogans to be chanted, its demonstrations to be held, its children to eat.

Reply to
Richard Harter
[...]

Sigh. Doesn't *anyone* own an algorithms textbook any more, to be consulted before posting to Usenet? This is so basic a question even Sedgewick has the solution for you ;->.

I've posted this before, but nobody seems to have listened. If you want a priority queue, use a heap. If you can afford about N*ceil(log2(N)) bits of extra memory, use an indirect heap; otherwise, make a heap out of the actual data. It's not unduly complex to implement, and free-storage bookkeeping comes for free.
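
For instance, an indirect heap over the 16-entry event array could look roughly like this (sketch only, untested; higher_rank(a, b) is an assumed helper that says whether event slot a should be dispatched before event slot b, i.e. higher priority, or, if you want FIFO within a priority, an older sequence number):

#define HEAP_MAX 16

static unsigned char heap[HEAP_MAX];   /* indices into the event array */
static int heap_n;

static int higher_rank(unsigned char a, unsigned char b);   /* assumed helper */

static void heap_push(unsigned char idx)
{
    int i = heap_n++;

    /* Sift up: pull parents down until idx fits. */
    while (i > 0 && higher_rank(idx, heap[(i - 1) / 2])) {
        heap[i] = heap[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    heap[i] = idx;
}

static unsigned char heap_pop(void)
{
    unsigned char top = heap[0];
    unsigned char last = heap[--heap_n];
    int i = 0, child;

    /* Sift down: move the higher-ranked child up until 'last' fits. */
    while ((child = 2 * i + 1) < heap_n) {
        if (child + 1 < heap_n && higher_rank(heap[child + 1], heap[child]))
            child++;
        if (!higher_rank(heap[child], last))
            break;
        heap[i] = heap[child];
        i = child;
    }
    heap[i] = last;

    heap[heap_n] = top;   /* the freed event slot now sits just past the heap */
    return top;
}

A new event then takes its storage slot from heap[heap_n] before being pushed, which is where the free-storage bookkeeping comes from.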

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

This is a long solved problem. Look up heaps, heapify, upheap, downheap. Insertion into or deletion from the queue is O(log N). Access to the top priority is O(1). Space required is O(N). Heapsort is another application of heaps.

--
"If you want to post a followup via groups.google.com, don't use
 the broken "Reply" link at the bottom of the article.  Click on 
 "show options" at the top of the article, then click on the 
 "Reply" at the bottom of the article headers." - Keith Thompson
Reply to
CBFalconer

... snip ...

You and I seem to have a similar pain threshold on seeing this thrashing about! I posted the equivalent reply two minutes ago. :-)

Reply to
CBFalconer
