I read an online tutorial on RTOS; the relevant passage is below the dotted line. I am not clear about what the word 'queue' means in the first line. Does it mean a task queue?
Thanks in advance. .................... The use of a queue allows the medium priority task to block until an event causes data to be available - and then immediately jump to the relevant function to handle the event. This prevents wasted processor cycles - in contrast to the infinite loop implementation whereby an event will only be processed once the loop cycles to the appropriate handler.
Generally, RTOS implementations don't bind a queue to any specific task. Rather, the developer does so: anything can put messages into a queue, anything can take messages off of a queue, and anything can pend on a queue. If the developer is wise, only one task takes things off the queue, and there is a well-defined, small number (one is best) of sources that put things on the queue.
You arrange things so that the task that depends on the queue needs to run if and only if there's a message on the queue, and you have the task pend on the queue having a message available.
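As a minimal sketch of that arrangement (Python rather than RTOS C, with the standard library's thread-safe `queue.Queue` standing in for an RTOS message queue — names here are illustrative, not from any particular RTOS):

```python
import queue
import threading

msg_q = queue.Queue()          # stand-in for an RTOS message queue
received = []

def consumer_task():
    # The task pends here: get() blocks until a message is available,
    # so the task runs if and only if there's something on the queue.
    while True:
        msg = msg_q.get()
        if msg is None:        # sentinel used here just to shut the task down
            break
        received.append(msg)

task = threading.Thread(target=consumer_task)
task.start()

# Any other context may put messages on the queue...
for event in ("ADC_DONE", "UART_RX"):
    msg_q.put(event)
msg_q.put(None)                # ...including the shutdown sentinel
task.join()

print(received)                # -> ['ADC_DONE', 'UART_RX']
```

Note that, as described above, nothing binds the queue to the consumer: the discipline of "one reader, few writers" is the developer's, not the library's.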
(sigh) How NOT to ask questions! :( You've taken this entirely out of context and expect folks to GUESS (or invest effort trying to guess) at that surrounding context.
"Online tutorial"? Hmm.... perhaps you could have provided a pointer (URL) to that tutorial so folks would be able to *see* the context that you've omitted??
Be that as it may...
So, the queue allows tasks to wait -- in an ordered fashion (i.e., "I got here first! The rest of you will have to wait!") -- until the resource ("data", in this case) is available.
[The "infinite loop implementation" has to refer to the archaic approach of "one big loop" that repeatedly tries to check for everything and anything that might be able to "proceed" in its computation.]
It appears the author is trying to espouse the advantages of pending on an event/resource over that of repeatedly *polling* for that resource/event.
So, as an example, instead of checking a UART (directly *or* a FIFO/buffer that the UART ISR maintains) for "available received data" which you can then "process", a more elegant/efficient approach is to tell the OS that you are waiting for data to be available.
The OS then suspends your task (marks it as not ready to run so it no longer consumes CPU cycles... that would be wasted repeatedly checking for data that is NOT YET AVAILABLE) *at* the point where you invoked "wait_for_data/event". I.e., the subroutine/function "doesn't RETURN" until the condition is satisfied!
To the programmer, this makes life easy: the OS does the "checking for available data" ON BEHALF OF the task that requires it.
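A timing sketch of that "doesn't RETURN until the condition is satisfied" behavior (Python stand-in; the `uart_isr_stub` thread and the 0.2 s delay are hypothetical, simulating a UART ISR that eventually delivers a byte):

```python
import queue
import threading
import time

data_q = queue.Queue()

def uart_isr_stub():
    # Hypothetical stand-in for a UART ISR: a byte "arrives" after 0.2 s.
    time.sleep(0.2)
    data_q.put(b"\x41")

threading.Thread(target=uart_isr_stub).start()

start = time.monotonic()
byte = data_q.get()            # "wait_for_data": does not return until data exists
elapsed = time.monotonic() - start

# The caller was suspended (no busy-polling) for roughly the 0.2 s
# it took for data to become available.
print(byte, round(elapsed, 1))
```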
It also allows a definite ordering of "consumers" to be imposed. It may be something as simple as "first come, first served" for THAT particular resource. Or, "most important goes first" for some other type of resource/event.
E.g., a task that monitors the charging of a battery probably is concerned with knowing the status of primary power -- the charger only works when power *is* available! So, a power fail event would be of interest to that charger task -- at the very least, it would be able to update its estimate of when charging will be COMPLETE to reflect "never"! :>
OTOH, another task that is responsible for copying key configuration parameters from (volatile!) RAM into FLASH/NVRAM would probably be MORE concerned with that event! It would want to be able to ensure that activity is performed regardless of how long the device can "stay up" after the event is signaled.
A FIFO ordering might give the battery charger task first crack at the event -- depending on the order of execution of those two tasks (charger & NVRAM) -- even though it is far less "important" to the operation of the device!
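The two orderings can be sketched side by side (Python stand-in; the task names model the charger/NVRAM example above, and the priority numbers are an assumption, lower meaning more important):

```python
import queue

# Model which waiting consumer is served a "power fail" event first.
# Plain FIFO: whoever started waiting first is served first.
fifo = queue.Queue()
fifo.put("charger")     # the charger task happened to get there first
fifo.put("nvram")
print(fifo.get())       # -> charger  (first come, first served)

# Priority ordering: the most important consumer goes first,
# regardless of arrival order.
by_priority = queue.PriorityQueue()
by_priority.put((2, "charger"))   # lower number = more important
by_priority.put((1, "nvram"))     # saving config to FLASH outranks charging
print(by_priority.get()[1])       # -> nvram
```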
Trying to do this prioritization in a "big loop" approach means everything needs to know about everything else! I.e., the battery charger can check for the power fail event... but, if it happens to see it first, it needs to check to see if the NVRAM task needs to respond to that instead or first!
This leads to clumsy and brittle implementations -- because you have to distribute and replicate operational decisions in many places (information hiding being a win in most cases!)
Does this make sense in the context of your INTENDED question?
I think that in this case your explanation below is correct. In other contexts, "task queue" might refer to a data structure internal to an RTOS scheduler that defines an ordered set of the runnable tasks (the order could be determined strictly by task priority or by some other equitable-round-robin type scheme). It's a queue that contains tasks as its data elements.
Grant Edwards
Referring to the labels within the code fragment above:
A: The task first blocks waiting for a communications event. The block time is relatively short.
B: The do-while loop executes until no data remains in the queue. This implementation would have to be modified if data arrives too quickly for the queue to ever be completely empty.
C: Either the queue has been emptied of all data, or no data arrived within the specified blocking period. The maximum time that can be spent blocked waiting for data is short enough to ensure the keypad is scanned frequently enough to meet the specified timing constraints.
D: Check to see if it is time to flash the LED. There will be some jitter in the frequency at which this line executes, but the LED timing requirements are flexible enough to be met by this implementation.
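The code fragment the labels refer to isn't quoted here, but the structure they describe can be roughly reconstructed as follows (a Python sketch, not the tutorial's actual code; all names are hypothetical, and the LED check at D is reduced to a comment since its timing is handled outside this function):

```python
import queue
import time

def controller_cycle(comms_q, handle_event, scan_keypad, block_time=0.05):
    """One pass of the loop described by labels A-C above."""
    try:
        event = comms_q.get(timeout=block_time)   # A: block briefly for an event
        while True:                                # B: drain until queue is empty
            handle_event(event)
            try:
                event = comms_q.get_nowait()
            except queue.Empty:
                break
    except queue.Empty:
        pass                                       # C: timed out with no data
    scan_keypad()     # keypad still scanned every pass, thanks to the short block
    # D would follow here: check elapsed time and flash the LED if due.

handled, scans = [], []
q = queue.Queue()
for e in ("KEY_EVT", "RX_EVT", "TX_EVT"):
    q.put(e)
controller_cycle(q, handled.append, lambda: scans.append(time.monotonic()))
print(handled, len(scans))   # -> ['KEY_EVT', 'RX_EVT', 'TX_EVT'] 1
```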
One common method of referring to task/thread eligibility to run is to have a "ready queue" or a "waiting queue". This may not actually be a queue; it can be nothing more than a state in the set of task control blocks.
In a typical RT system, most (and sometimes all) tasks are in a Wait_For_XX state.
When a significant event occurs, the task scheduler is restarted; it scans the task list in priority order to find the first (highest-priority) task that has become Runnable due to the significant event, saves the context of the old task, and starts running that new task.
A significant event could be e.g.
Completion of a clock interrupt
Completion of some other interrupt, e.g. a serial line
Writing data to some queue
Setting some event flag (single bit messages)
The currently Running task goes to sleep
Of course, the last three actions must go through some OS routine to do the actual operation and then kick the scheduler to search for tasks that might have become Runnable due to the event.
If all tasks are in some Wait_For_.. state, the scheduler falls through to the NULL task loop, which should preferably be implemented with some low power consumption WaitForInterrupt instruction in the NULL task loop.
Then I have no hope of finding what he's talking about.
1) "The use of a queue allows the medium priority task to block until an event causes data to be available" - mention of blocking, which is very much a "waiting queue"/"ready queue" sort of thing, although you have to wonder why medium priority matters.
The big ambiguity is whether or not the queue is a data source/buffer or simply a wait/block structure.
2) The compare/contrast with The Big Loop.
wait/ready queuing is as close as I can get with that mess.
The OP refers to it in another sub-thread: it's the FreeRTOS "queue" entity, which is a typical RTOS queue that you stuff messages into from a source, and block on pending message availability in some task.