What does the word 'queue' mean in this RTOS description?

Hi,

I read an online tutorial on RTOS; see below the dotted line, please. I am not clear about what the word 'queue' means in the first line. Does it mean a task queue?

Thanks in advance.

....................

The use of a queue allows the medium priority task to block until an event causes data to be available - and then immediately jump to the relevant function to handle the event. This prevents wasted processor cycles - in contrast to the infinite loop implementation whereby an event will only be processed once the loop cycles to the appropriate handler.

Reply to
Robert Willy

I do not see how it could mean anything else.

--
Les Cargill
Reply to
Les Cargill

I'm not sure what you mean by "task queue".

Generally, RTOS implementations don't bind a queue to any specific task. Rather, the developer does so: anything can put messages into a queue, anything can take messages off of a queue, and anything can pend on a queue. If the developer is wise, only one task takes things off the queue, and there is a well-defined, small number (one is best) of sources that put things on the queue.

You arrange things so that the task that depends on the queue needs to run if and only if there's a message on the queue, and you have the task pend on the queue having a message available.
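The discipline above ("anything can send, one task receives") can be sketched in plain C. This is a toy ring buffer standing in for the RTOS-provided queue object; all names (msg_queue, q_send, q_receive) and the capacity are invented for illustration, and a real RTOS q_receive would additionally *block* the calling task while the queue is empty rather than return immediately:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical fixed-size message queue -- a plain ring buffer standing in
 * for the RTOS object.  In a real RTOS, q_receive() would PEND (suspend the
 * caller) while the queue is empty instead of returning 0. */
#define Q_CAPACITY 8

typedef struct {
    int    items[Q_CAPACITY];
    size_t head, tail, count;
} msg_queue;

/* Any context (task or ISR) may produce... */
static int q_send(msg_queue *q, int msg)
{
    if (q->count == Q_CAPACITY)
        return 0;                       /* full: caller decides what to do */
    q->items[q->tail] = msg;
    q->tail = (q->tail + 1) % Q_CAPACITY;
    q->count++;
    return 1;
}

/* ...but ideally exactly ONE task consumes. */
static int q_receive(msg_queue *q, int *msg)
{
    if (q->count == 0)
        return 0;                       /* RTOS version would pend here */
    *msg = q->items[q->head];
    q->head = (q->head + 1) % Q_CAPACITY;
    q->count--;
    return 1;
}
```

Keeping the producers small in number and the consumer singular means the queue's contents have one well-defined interpretation, which is the "wise developer" point above.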

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

(sigh) How NOT to ask questions! :( You've taken this entirely out of context and expect folks to GUESS (or, invest effort to try to guess) at the surrounding context.

"Online tutorial"? Hmm.... perhaps you could have provided a pointer (URL) to that tutorial so folks would be able to *see* the context that you've omitted??

Be that as it may...

So, the queue allows tasks to wait -- in an ordered fashion (i.e., "I got here first! The rest of you will have to wait!") -- until something ("data", in this case) is available.

[The "infinite loop implementation" has to refer to the archaic approach of "one big loop" that repeatedly tries to check for everything and anything that might be able to "proceed" in its computation.]

It appears this passage is trying to espouse the advantages of pending on an event/resource over that of repeatedly *polling* for that resource/event.

So, as an example, instead of checking a UART (directly *or* a FIFO/buffer that the UART ISR maintains) for "available received data" which you can then "process", a more elegant/efficient approach is to tell the OS that you are waiting for data to be available.

The OS then suspends your task (marks it as not ready to run so it no longer consumes CPU cycles... that would be wasted repeatedly checking for data that is NOT YET AVAILABLE) *at* the point where you invoked "wait_for_data/event". I.e., the subroutine/function "doesn't RETURN" until the condition is satisfied!

To the programmer, this makes life easy: the OS does the "checking for available data" ON BEHALF OF the task that requires it.
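To make the "wasted cycles" point concrete, here is a toy single-threaded simulation (NOT real RTOS code; DATA_READY_TICK, the counters, and both functions are invented for illustration). The polled task burns one check per loop pass until the data finally shows up; the pended task is simply not run until the OS sees the data, so it performs exactly one (successful) check:

```c
#include <assert.h>

/* Tick at which the (simulated) UART interrupt makes data available. */
enum { DATA_READY_TICK = 5 };

static int data_available(int now) { return now >= DATA_READY_TICK; }

/* "Big loop" style: re-check on every pass, yielding in between. */
static int polled_checks(void)
{
    int checks = 0;
    for (int tick = 0; ; tick++) {
        checks++;               /* CPU spent just to discover "not yet" */
        if (data_available(tick))
            return checks;
        /* reschedule() -- yield to other tasks, come back, re-check... */
    }
}

/* Queue/pend style: the OS wakes the task only once data exists, so the
 * task itself performs exactly one check -- the one that succeeds. */
static int blocking_checks(void)
{
    /* wait_for_data() "doesn't RETURN" until DATA_READY_TICK */
    return 1;
}
```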

It also allows a definite ordering of "consumers" to be imposed. It may be something as simple as "first come, first served" for THAT particular resource. Or, "most important goes first" for some other type of resource/event.

E.g., a task that monitors the charging of a battery probably is concerned with knowing the status of primary power -- the charger only works when power *is* available! So, a power fail event would be of interest to that charger task -- at the very least, it would be able to update its estimate of when charging will be COMPLETE to reflect "never"! :>

OTOH, another task that is responsible for copying key configuration parameters from (volatile!) RAM into FLASH/NVRAM would probably be MORE concerned with that event! It would want to be able to ensure that activity is performed regardless of how long the device can "stay up" after the event is signaled.

A FIFO ordering might give the battery charger task first crack at the event -- depending on the order of execution of those two tasks (charger & NVRAM) -- even though it is far less "important" to the operation of the device!
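That hazard can be shown with a toy sketch (the names, the pend/signal API, and the wake-one-waiter policy are all invented for illustration): with first-come-first-served delivery, whichever task happened to pend first gets the event, importance be damned.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Waiters queued in ARRIVAL order -- no notion of importance. */
#define MAX_WAITERS 4

typedef struct {
    const char *waiters[MAX_WAITERS];
    size_t      count;
} event;

/* A task blocks on the event; it joins the back of the line. */
static void pend(event *e, const char *task)
{
    if (e->count < MAX_WAITERS)
        e->waiters[e->count++] = task;
}

/* Signaling wakes the EARLIEST waiter -- "I got here first!" */
static const char *signal_event(event *e)
{
    if (e->count == 0)
        return NULL;
    const char *first = e->waiters[0];
    memmove(&e->waiters[0], &e->waiters[1],
            (--e->count) * sizeof e->waiters[0]);
    return first;
}
```

If the NVRAM task must act first, the delivery policy (or the task priorities around it) has to encode that, which is exactly the design decision being discussed above.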

Trying to do this prioritization in a "big loop" approach means everything needs to know about everything else! I.e., the battery charger can check for the power fail event... but, if it happens to see it first, it needs to check to see if the NVRAM task needs to respond to that instead or first!

This leads to clumsy and brittle implementations -- because you have to distribute and replicate operational decisions in many places (information hiding being a win in most cases!)

Does this make sense in the context of your INTENDED question?

Reply to
Don Y

I think that in this case your explanation below is correct. In other contexts, "task queue" might refer to a data structure internal to an RTOS scheduler that defines an ordered set of the runnable tasks (the order could be determined strictly by task priority or by some other equitable-round-robin type scheme). It's a queue that contains tasks as its data elements.

--
Grant Edwards               grant.b.edwards        Yow! Pardon me, but do you 
                                  at               know what it means to be 
                              gmail.com            TRULY ONE with your BOOTH!
Reply to
Grant Edwards

Excuse me for not giving full information about my question. The link for the excerpted tutorial is:

formatting link

below title: Concept of Operation

I had thought about it, but no answer was satisfying to me. Thank you all for the explanations.

Reply to
Robert Willy

From the cited link (comments from my previous post interspersed):

-----8<-----
>> The OS then suspends your task (marks it as not ready to run so it no
>> longer consumes CPU cycles... that would be wasted repeatedly checking
>> for data that is NOT YET AVAILABLE) *at* the point where you invoked
>> "wait_for_data/event". I.e., the subroutine/function "doesn't RETURN"
>> until the condition is satisfied!
>>
>> To the programmer, this makes life easy: the OS does the "checking for
>> available data" ON BEHALF OF the task that requires it.

    do {
        // A
        if( xQueueReceive( xCommsQueue, &Data, ... ) ) {

-- I think what you are missing is that xQueueReceive() can "hang" indefinitely
-- waiting for the arrival of data in that queue (or, at least until the
-- timeout expires, FORCING it to return). During the time while it is
-- "hung", other tasks are using the processor. *This* task isn't
-- wasting any CPU cycles doing something silly like:
--
--     if (!data_available) {
--         reschedule();    // i.e., yield CPU to other tasks
--     } else {
--         get_data();
--     }
--
-- which is what it would do in the "big loop" approach.

            ProcessRS232Characters( Data.Value );

-- Having successfully returned from xQueueReceive() (an UNsuccessful return
-- would be one where some parameter was in error or the timeout expired before
-- data was available in the queue), the code now processes the data extracted
-- from the queue (i.e., made available by xQueueReceive() in the
-- buffer/variable referenced in the xQueueReceive() invocation ("&Data"))

        }
    // B
    } while ( uxQueueMessagesWaiting( xCommsQueue ) );

    // C
    if( ScanKeypad() ) {
        UpdateLCD();
    }

    // D
    if( ( xTaskGetTickCount() - FlashTime ) >= FLASH_RATE ) {
        FlashTime = xTaskGetTickCount();
        UpdateLED();
    }
  }

  return 0;
}

Referring to the labels within the code fragment above:

A: The task first blocks waiting for a communications event. The block time is relatively short.

B: The do-while loop executes until no data remains in the queue. This implementation would have to be modified if data arrives too quickly for the queue to ever be completely empty.

C: Either the queue has been emptied of all data, or no data arrived within the specified blocking period. The maximum time that can be spent blocked waiting for data is short enough to ensure the keypad is scanned frequently enough to meet the specified timing constraints.

D: Check to see if it is time to flash the LED. There will be some jitter in the frequency at which this line executes, but the LED timing requirements are flexible enough to be met by this implementation.

-----8<-----

Reply to
Don Y

One common method of referring to task/thread eligibility to run is to have a "ready queue" or a "waiting queue". This may not actually be a queue; it can be nothing more than a state in the set of task control blocks.
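In other words, the "queue" may exist only implicitly. A minimal sketch of that representation (the type and field names here are invented): the "ready queue" is just the set of TCBs whose state field says READY, with no list structure anywhere.

```c
#include <assert.h>

/* The "ready queue" is implicit: it is simply the set of task control
 * blocks whose state field is TASK_READY.  No linked list of waiting
 * tasks exists anywhere. */
typedef enum { TASK_READY, TASK_WAITING, TASK_RUNNING } task_state;

typedef struct {
    const char *name;
    task_state  state;
} tcb;

/* Membership test for the implicit ready "queue". */
static int on_ready_queue(const tcb *t)
{
    return t->state == TASK_READY;
}
```

"Enqueuing" a task is then nothing more than flipping its state field; the scheduler discovers the queue's contents by scanning the TCBs.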

V

formatting link

--
Les Cargill
Reply to
Les Cargill

That's not what the OP is talking about, but yes, I had forgotten that terminology (please, please do not ask me why).

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

In a typical RT system, most (and sometimes all) tasks are in a Wait_For_XX state.

When a significant event occurs, the task scheduler is restarted: it scans the task list in priority order to find the first (highest priority) task that has become Runnable due to the significant event, saves the context of the old task, and starts to run that new task.

A significant event could be e.g.

  • Completion of a clock interrupt
  • Completion of some other interrupt, e.g. a serial line
  • Writing data to some queue
  • Setting some event flag (single-bit messages)
  • The currently Running task goes to sleep

Of course the last three actions must go through some OS routines to do the actual operation and then kick the scheduler to search for tasks that might have become Runnable due to the event.

If all tasks are in some Wait_For_.. state, the scheduler falls through to the NULL task loop, which should preferably be implemented with some low-power WaitForInterrupt instruction.
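The scan described above can be sketched in plain C (a toy: one flat array ordered by priority, names invented; a real scheduler would also save and restore task contexts). Index 0 is the highest priority; a NULL result means "fall through to the NULL/idle task", where the real loop would sit in a low-power wait-for-interrupt:

```c
#include <assert.h>
#include <stddef.h>

typedef enum { WAIT_FOR_EVENT, RUNNABLE } tstate;

typedef struct {
    const char *name;
    tstate      state;
} task;

/* Scan the task list in priority order (index 0 = highest priority) and
 * return the first Runnable task.  NULL means every task is waiting, so
 * the caller should run the NULL/idle task -- preferably a low-power
 * WaitForInterrupt loop. */
static const task *pick_next(const task *tasks, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (tasks[i].state == RUNNABLE)
            return &tasks[i];
    return NULL;    /* all tasks in Wait_For_.. : idle until an interrupt */
}
```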

Reply to
upsidedown

Then I have no hope of finding what he's talking about.

1) "The use of a queue allows the medium priority task to block until an event causes data to be available" - mention of blocking, which is very much a "waiting queue"/"ready queue" sort of thing, although you have to wonder why medium priority matters.

The big ambiguity is whether or not the queue is a data source/buffer or simply a wait/block structure.

2) The compare/contrast with The Big Loop.

wait/ready queuing is as close as I can get with that mess.

:)

--
Les Cargill
Reply to
Les Cargill

The OP mistakenly called it a "task queue". It's actually a "data FIFO" supported as a first-class object by the OS. As such, a *task* can pend on it "efficiently" (more so than spinning on it!)

Read the cited example.

Read the examples *preceding* the cited example. :>

Reply to
Don Y

The OP refers to it in another sub-thread: it's the FreeRTOS "queue" entity, which is a typical RTOS queue that you stuff messages into from a source, and block on pending message availability in some task.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott
