Semaphore services in embedded RTOSes

I am looking for information on how semaphores work in embedded RTOSes like VxWorks or INTEGRITY. Is there a minimum standard set of services required? Is there a standard way for a task to wait on a semaphore or be woken up? When a semaphore is released, are all waiting tasks transferred to the ready queue?

Reply to
Banh_2000

Your question is a little vague, as semaphores can be used for a number of different things. For example, binary semaphores can be used for signalling tasks, counting semaphores can be used for managing finite resources, mutex type semaphores can be used to serialise access to shared resources, etc.

The best thing you can do is read the documentation for the RTOS you are using, but in general terms the theoretical answer (and the way FreeRTOS works) is "no". If multiple tasks are waiting on a single semaphore, then moving all the waiting tasks to the ready queue when the semaphore becomes available would be extremely wasteful. Only one task will be able to obtain the single semaphore, so only one task should be moved into the ready queue.

The question then is which task?

If you are using a priority based scheduler then it should be the highest priority waiting task. If there is more than one task at the highest priority, then out of those tasks, it should be the task that has been waiting the longest.
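That selection policy can be sketched in a few lines of C. This is a toy model of a semaphore's wait queue, not any real RTOS's internals; the `struct waiter` fields, the "higher number = higher priority" convention, and the function name are all illustrative assumptions.

```c
#include <stddef.h>

/* Hypothetical model of a semaphore's wait queue -- not a real RTOS API.
 * Each waiter records its priority and the order in which it arrived. */
struct waiter {
    int priority;   /* higher number = higher priority (assumption) */
    int arrival;    /* monotonically increasing enqueue counter     */
};

/* Pick the single task to make ready when the semaphore is given:
 * the highest-priority waiter; among equals, the longest-waiting one. */
int pick_waiter(const struct waiter *w, size_t n)
{
    if (n == 0)
        return -1;                  /* nobody waiting */
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (w[i].priority > w[best].priority ||
            (w[i].priority == w[best].priority &&
             w[i].arrival < w[best].arrival))
            best = i;
    }
    return (int)best;
}
```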

Regards, Richard.

Reply to
FreeRTOS info

When you speak of semaphores, many different concepts can come into play (depending on your intent). And, there are many different ways to implement each depending on the resources and complexity the OS designer wishes to devote to the implementation.

Roughly speaking, a semaphore is an integer variable that is *conceptually* managed by the OS. A binary semaphore has two states: "held" (by some task) and "released" (i.e., not held by any task). A request for a binary semaphore causes the requesting task to block/wait if the semaphore is currently held (by another task). If the semaphore is available, it is "taken" by the requesting task and the task continues execution.
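The "just an integer variable" view can be made concrete with a toy binary semaphore. The names (`bsem_t`, `bsem_try_take`, `bsem_give`) are invented for illustration; a real RTOS would block the caller instead of returning failure.

```c
#include <stdbool.h>

/* Toy model of a binary semaphore: conceptually just a variable. */
typedef struct { bool held; } bsem_t;

/* Try to take the semaphore: succeed only if it is not currently held.
 * A real RTOS would block/pend the caller here until it becomes free. */
bool bsem_try_take(bsem_t *s)
{
    if (s->held)
        return false;   /* already held by some task: caller would block */
    s->held = true;
    return true;
}

/* Release the semaphore. Note nothing records *who* held it. */
void bsem_give(bsem_t *s)
{
    s->held = false;
}
```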

In most implementations, nothing is ever really "transferred" to the task; the system has no idea who "holds" the semaphore. It is a lightweight primitive that really only exists as a variable and an associated queue (which need not be implemented as a semaphore-specific queue). As a result, if the task holding the semaphore "dies" while holding it, a deadlock can occur -- if the holding task was expected to eventually release it!

A *counting* semaphore is a more general mechanism that allows the semaphore (variable) to take on multiple values (besides the two-valued 'HELD' and 'RELEASED') up to a maximum count which *may* be defined/enforced by the system at configure/run-time. If any count remains in the semaphore (typically >0), then when a task requests to "take" the semaphore (possibly with a "take N" option), the variable is decremented (by N) and, if the result is not "ALL_HELD" (i.e., if the result is not negative), the task is allowed to continue. Conversely, if the request exhausts the "supply" of that semaphore, then the task blocks waiting on it.
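A counting semaphore with a "take N" option can be modelled the same way. Again, the type and function names are illustrative, the non-blocking style stands in for a real pend, and the maximum-count check is one possible policy (some OSs don't enforce a ceiling at all).

```c
/* Toy counting semaphore with a "take N" operation. */
typedef struct {
    int count;      /* how many units are currently available     */
    int max_count;  /* optional ceiling, enforced here on release */
} csem_t;

/* Take n units only if the full amount is available; otherwise the
 * caller would block (here we just report failure). */
int csem_try_take(csem_t *s, int n)
{
    if (s->count < n)
        return 0;   /* "supply" exhausted: task would pend */
    s->count -= n;
    return 1;
}

/* Release m units, clamped to the configured maximum. */
int csem_give(csem_t *s, int m)
{
    s->count += m;
    if (s->count > s->max_count)
        s->count = s->max_count;
    return s->count;
}
```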

How the task is enqueued depends on the semantics implemented by the OS designer. This can be FIFO order (so, the first task that has to wait on the semaphore sits at the head of the queue) or priority order (assuming the OS supports the notion of task priority). The latter case can be further qualified to dynamically adjust queuing based on the *instantaneous*, dynamic priority of a waiting task (priority can change while task is blocked!). Systems that do not include this provision can manifest peculiar behaviors based on whether or not a particular task is queued, etc.

When a task releases a semaphore (N.B. In some OS's, a task may release a semaphore THAT IT DOESN'T HOLD!), the variable is increased (potentially by some number M). At that point, the task at the head of the queue is allowed to resume its request for the semaphore (recall, it may have some *number* that it seeks -- so, this resumption might not satisfy all of that task's needs!). When it can successfully acquire (take) the count that it desires, the task is made ready and any remaining count on the semaphore is used to satisfy the *next* task enqueued behind that first task.

As such, releasing a count of 'M' can potentially make M enqueued tasks "ready" (assuming each was only looking for a count of '1').
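That release behaviour can be sketched too. This models one common policy described above -- waiters are served strictly in queue order, and a head-of-queue waiter whose full request cannot yet be met blocks everyone behind it; other OSs may skip ahead to smaller requests. All names are hypothetical.

```c
#include <stddef.h>

/* Release m units into a semaphore on which nwait tasks pend, each
 * wanting wanted[i] units, queued in index order. Returns the number
 * of tasks made ready. */
size_t sem_release(int *count, int m, const int *wanted, size_t nwait)
{
    *count += m;
    size_t ready = 0;
    while (ready < nwait && *count >= wanted[ready]) {
        *count -= wanted[ready];  /* this waiter's request is satisfied */
        ready++;
    }
    return ready;  /* remaining waiters stay blocked */
}
```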

[If you imagine tasks as *physically* holding semaphores, then the number of semaphores held by task(s) plus the number remaining "available" (>0) at any time should be a constant.]

A mutex is a different type of synchronization primitive with a bit more overhead. Conceptually, it resembles a binary semaphore in that it can be held ("LOCKED") or released ("UNLOCKED"). But, unlike a semaphore, it is actually *tracked* by the OS and is considered to have an "owner".

The immediate consequence of this is that ONLY the owner can release a mutex that *it* holds (unlike semaphores which can be released by anyone that knows the name of the semaphore!).

Some mutex implementations will allow you to take another lock on a mutex you currently hold -- because the OS *knows* that you are the owner! And, "do the right thing" when you nest such lock/unlock invocations:

mutex_t aMutex;

operation1() { ... lock(aMutex); ... operation2(); ... unlock(aMutex); ... }

operation2() { ... lock(aMutex); ... operation3(); ... unlock(aMutex); ... }

operation3() { ... lock(aMutex); ... operation4(); ... unlock(aMutex); ... }

...

I.e., when operation1() is invoked, the invoking task will block until it can take the lock on the specified mutex ("aMutex"). Once it holds that lock, the attempt to take it *again* in operation2() will immediately succeed. As will the attempt in operation3(), etc.

FURTHERMORE, when operation3() eventually unlocks(aMutex), the task will *still* hold the lock because of the successful lock() in operation2()! Only when *all* nested lock()-s are unlock()-ed will the mutex be available to other tasks. [You can't (easily) do this with a semaphore.]

Because the owner of a mutex is known to the system, other protocols can be employed to help minimize the priority inversion problem with various types of priority inheritance (priority ceiling, etc.) protocols.

[This quickly gets hairy and many OSs don't fully implement this mechanism beyond a "token" implementation.]

Another useful semaphore-ish concept is that of "events". Events cause ALL tasks waiting for a particular event to be made ready at the instant the event is signalled. It is a lightweight concept -- like semaphores -- and really only requires a queue in which tasks awaiting a particular (set of!) events "pend". Typically, events are non-latching, so any task waiting for an event at the time it is signalled is made ready; a task that elects to wait for that event any time *after* it has been signalled ("raised") will be forced to wait until the NEXT time it is signalled.

[These can be very effective in lightweight "cooperative" OS's but leave much of the onus on the task developer to "get it right"]
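The wake-all, non-latching behaviour is the key contrast with the semaphore, and a toy model makes it plain. The pend queue is reduced to a counter, and all names are illustrative:

```c
#include <stddef.h>

/* Toy model of a non-latching event: the event stores nothing between
 * signals, only who is currently pending on it. */
struct event {
    size_t nwaiting;   /* tasks currently pending on this event */
};

/* A task that waits simply joins the pend queue (modelled as a count). */
void event_wait(struct event *e)
{
    e->nwaiting++;
}

/* Signal the event: ALL current waiters become ready at once (contrast
 * with the semaphore, which readies one). Returns how many were woken;
 * non-latching, so nothing is remembered for later arrivals. */
size_t event_signal(struct event *e)
{
    size_t woken = e->nwaiting;
    e->nwaiting = 0;
    return woken;
}
```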

You could also research "condition variables"...

Reply to
Don Y

ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.