in my software design I have a set of processes - state machines - which are running in turn, each of them performing a relatively small step and then returning control to a main loop which allocates execution time to every process.
Some of the processes may have a packet to transmit to an output fifo, and the packet sizes may vary a lot among processes. Since I want to avoid starvation I thought about 'token passing', where each process gets a token and, if it has something to transmit, it keeps the fifo busy; otherwise it passes the token to the next process.
I'm not sure, though, that this is the 'best' way to go. Could any of you advise on scheduling disciplines and things to pay attention to? There's very little specification on this part, but I can say that some packets are certainly more important than others, so I need a mechanism that keeps those processes from being clogged by the others.
I searched the web a bit but haven't found an appealing article that suited my needs. Thanks a lot,
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
You can have a few FIFOs, say 3, with fixed priorities and use them accordingly. This of course may mean that if you have too much high priority traffic the lower priority ones will get starved and possibly blocked completely.
The way I do it is to leave it to the OS - that is, how good the chances of a given task are to queue a packet for output depends on that task's priority. Given that the OS does its job properly, things will be OK. But this implies using DPS, which you do not have, and I do not know how others work in sufficient detail to be able to comment on them.
Hey, I had just begun to think usenet had grown past its zealotish phase :-). Naive me, uh? :D
"To answer the above objections, several alternatives to priority-based pre-emptive scheduling have begun to appear on the RTOS horizon. The first is called "Deadline Scheduling". In this approach, the RTOS kernel's task scheduler is provided with information about task deadlines, and it temporarily raises the priorities of tasks as they approach their deadlines if they have not yet run. In this way, the deadline scheduler urgently strives to get tasks to run before they miss their deadlines, by pre-emptively "borrowing" running time from tasks that normally have higher priorities. [Hopefully, this will not cause the "lender" tasks to then miss their own time deadlines.]"
Step back a bit and look at what you are trying to do. Conceptually, you are trying to route packets from multiple input ports to a single output port with a fifo on the output port. There are lots of papers and articles on this subject.
Just for the sake of the example. Pick the number which suits your goal.
Put packets you want to go out with higher priority into the higher priority FIFO and so on down. Then while there are packets in a higher priority FIFO do not send out packets from those with lower priority. Standard fixed priority, nothing needing an explanation really.
With fixed priority, as long as you have packets of higher priority, those with the lower one will have to wait. It only takes enough high-priority traffic to completely block the low-priority one.
That was clear enough, this is what a "packet" would mean in this context.
Well, given that the OS does the prioritizing in a sufficiently good manner. There are lots of schemes for that; the one I have implemented has served me well for the last 18 years. Besides getting system time proportional to their priority, tasks are also guaranteed not to be left waiting forever. If you find an OS which works for you and does that, things will be much easier afterwards.
I'm assuming that this is the software-implemented FIFO of bytes that features in the other thread on interrupt nesting.
Sounds rather complex, but a bit similar to the "bandwidth budget" method, on which more below.
As others have suggested, one common approach is to make a number of packet queues (these are also FIFOs, but the basic unit is a pointer to a packet, not a byte). You have as many packet queues as you have levels of packet priority.
A process creates a packet (memory management is an interesting question, but perhaps for another thread) and puts it in the packet queue corresponding to the priority of this packet.
A dedicated process takes one packet at a time from the packet queues, following the priority order, and plugs all the bytes from that packet into the byte FIFO for transmission. When that packet has been transmitted, the next packet from the queues is processed in the same way.
Of course you can instead have a single queue of packets, sorted by priority. There are special data structures for such priority queues.
A problem with this pure priority approach is that if more packets are created than can be transmitted, some packets must be discarded before transmission. The easiest implementation is to put a size limit on each packet queue and discard a new packet when the queue for its priority is full, but this means that newer data are discarded while older data (already in the queue) are kept. In most cases it would be better to discard the old data and transmit the new data. One way is for the packet-creating process to discard packets from the head of the queue until there is room for the new packet.
Another idea is to assign a "bandwidth budget" to each process (or each packet priority level), for example in bytes per second. When a process wants to create a packet, it must subtract the packet size from its current budget; if there is not enough budget left, the packet is not created at all. Periodically, perhaps once a second, some process fills up ("replenishes") the budgets, and packets can again be created as long as the budgets last.
I do not use an OS; the software runs on bare metal, but thanks for the pointer. Deadline Scheduling looks interesting, but I don't see how it fits my case, since I do not have deadlines to meet.
My goal is to maximize throughput while preventing starvation. I may have come up with something, though I have difficulty formalizing it.
The key point is a 'request queue'. Every process which needs to send a packet sets a request in the queue, with the number of bytes it needs to send. Then a 'scheduler' goes through the queue and allows each process to write when possible. I think this is a 'First Come, First Served' approach, which will certainly prevent starvation.
assumption correct, same guy, different problem ;-)
My problem with priorities is that as soon as somebody with high priority starts to have a lot of packets, the others will simply not have the resources to transmit.
I'm more interested in something that gives every process the possibility to send a packet, therefore I thought about a 'request queue' which will hold the requests from all the needing processes, and then a separate process will take care of assigning 'write access' to the fifo in a First Come, First Served manner. [...]
They may come in handy one day; could you elaborate on those 'special data structures'?
In this case I think I may throttle the time allocated to the processes: run the more essential ones more, and the ones which provide only 'housekeeping information' (like statuses or debug information) less.
My main loop is currently something like this:
where 'process' is a structure which contains several elements:
act : set if process has to run (active)
pty : the smaller the number, the higher the time allocated
fun : the function call - is a FSM returning after each state
At this point I may decrease the priority and have less packet traffic from it.
This is an interesting scheme. An even budget for all the processes gives everyone the right to send a packet, preventing starvation, and it may also allow control over the overall bandwidth, since that is just the sum of the budgets.
I can in principle even throttle the overall bandwidth while keeping the budget ratios the same, so that everyone gets more or less in proportion. I may need this feature since it is not yet clear to me whether the receiver side has enough resources to keep pace with the maximum speed the hardware allows.
Either that one process _has_ higher priority, or it doesn't. So either you act against your own reasons for giving that process higher priority in the first place (e.g. by raising the effective priority of every process that has lost arbitration, until eventually it wins), or you rob the lower-priority one of the resource.
Priorities are what you use if you _cannot_ avoid starving somebody. They're how you decide which user to starve, not whether you can still avoid starvation at all.
But that's not really solving the problem --- that's just organizing it. The primary problem is how you decide which packet to send next, not what to do with the ones you won't be sending just now.
What about assigning to each "process"* a maximum number of packets to send?
The higher the process priority, the greater the number.
You could call each process in turn, and each queues as many packets as it has to send, up to its maximum number. That way, the low-priority processes still get to send their packets, just at a lower rate.
* Since you don't have an OS, the term "process" is a bit misleading. I understand you mean functions that are called in turn by a main loop.