How is a thread/process awakened after it sleeps?

Hi, I came across this question about multitasking in operating systems: let's say I use a semaphore to control multiple tasks' read/write access to a shared resource. When a thread or process does not get the lock, it suspends itself; basically, it calls sleep().

Then how is it awakened?

I can think of 2 implementations:

  1. The semaphore maintains a list of all the threads or processes that did not get the lock, and once the lock is available, the semaphore wakes them.
  2. When the thread/process calls sleep(), it sets a timer, say every 1 ms, to wake itself up and check the lock.

What is the real solution then?

Reply to
linq936

Hi, First of all, the task which did not get the lock goes to the pended state, not the delayed state as you mentioned. However, it may opt to pend for a specific period of time (pending with a timeout option). In that case, if the task does not get the lock within the specified period, it returns an error and moves out of the pended state. Your implementation number 1 is the appropriate way to wake a blocked task, though it may not be the only way; this may be OS-specific.

In the case of pending with a timeout, once the timeout happens the timer expires, and there is no re-running of it.

All of the above is specific to the corresponding OS, even though it can be thought of as generic. If you mention the OS you are referring to, you may get a correct answer to your question.

Regards, s.subbarayan

Reply to
ssubbarayan

Bad.

Bad.

Here is the real solution:

When the task suspends itself because it did not get the lock, the task state in the TSS of that task is changed to SUSPEND_BY_SEMAPHORE and the ID of that semaphore is remembered. After the semaphore is released by the other task, the task scheduler is started. The scheduler checks all of the TSSs for tasks that were waiting for that semaphore and activates one.
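A minimal sketch of the scheme described above, assuming an array of per-task state blocks; the names (struct tss, SUSPEND_BY_SEMAPHORE, sem_release) are invented for illustration and don't come from any real kernel:

```c
#include <stddef.h>

#define MAX_TASKS 32

enum task_state { READY, RUNNING, SUSPEND_BY_SEMAPHORE };

struct tss {                 /* per-task state block */
    enum task_state state;
    int waiting_on;          /* ID of the semaphore we blocked on */
};

struct tss tasks[MAX_TASKS];

/* Called when a task fails to acquire semaphore sem_id. */
void block_on_semaphore(struct tss *t, int sem_id)
{
    t->state = SUSPEND_BY_SEMAPHORE;
    t->waiting_on = sem_id;
    /* ... then yield to the scheduler ... */
}

/* Called on release: the scheduler scans every TSS for waiters. */
void sem_release(int sem_id)
{
    for (size_t i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].state == SUSPEND_BY_SEMAPHORE &&
            tasks[i].waiting_on == sem_id) {
            tasks[i].state = READY;  /* scheduler then picks by priority */
        }
    }
}
```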

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Why is this bad?

--
Paul
Reply to
Paul Black

Textbook.

This would work if there is an absolute task hierarchy with no shared priority levels. Otherwise it seems as if tasks could starve. (Unless you have a clever and "fair" way to vary the order in which the check is done.)

-rick-

Reply to
Rick Lones

  1. Dynamic lists.
  2. Unnecessary entities.
  3. Cross dependencies. What if a thread has to be terminated, or its priority changed?

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

If there is not enough of a resource for all of the tasks with equal priorities, a first-come-first-served policy is not going to help either.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Dynamic in what sense? If you mean memory allocations, dynamic memory is not required to implement a list.

No idea what you mean by that.

Then the semaphore that it is blocking on is updated.

Plenty of semaphores are implemented this way so it's clearly not as simple as "bad".

--
Paul
Reply to
Paul Black

Threads cannot be terminated safely in most systems unless they are holding no resources, as there's no way to know what state they might leave those in. Even if the system is aware of all resources held, a caller of the kill method cannot just wait around until the thread releases all its resources, as there's no way to tell when, or if, that will ever happen. That's why the POSIX threads API doesn't even define a thread kill/terminate function. The only way of terminating a thread is to signal it to terminate itself.

Further to that, any semaphore or condition variable that needs to implement fair queueing *requires* a list (linked or otherwise) to do it. Doubly-linked lists implemented using memory that's reserved in the thread structure use only a very few instructions to insert or delete anyhow.
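A sketch of the intrusive doubly-linked list described above: the link fields live inside the thread structure itself, so blocking allocates nothing, and unlinking any thread is a handful of pointer moves. All names here are made up for illustration.

```c
#include <stddef.h>

struct thread {
    struct thread *wait_prev;   /* links reserved in the thread struct */
    struct thread *wait_next;
    int id;
};

struct wait_list {
    struct thread *head, *tail;
};

/* Append at the tail: newly blocked threads queue behind earlier ones. */
void wait_enqueue(struct wait_list *q, struct thread *t)
{
    t->wait_next = NULL;
    t->wait_prev = q->tail;
    if (q->tail) q->tail->wait_next = t;
    else         q->head = t;
    q->tail = t;
}

/* Unlink any thread in O(1): no traversal of the list is needed. */
void wait_remove(struct wait_list *q, struct thread *t)
{
    if (t->wait_prev) t->wait_prev->wait_next = t->wait_next;
    else              q->head = t->wait_next;
    if (t->wait_next) t->wait_next->wait_prev = t->wait_prev;
    else              q->tail = t->wait_prev;
}
```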

Any other fallacious objections?

Reply to
Clifford Heath

Although "first come first served" policy is unnecessary, one can implement it by storing the current semaphore lock counter in the TSS, without use of any lists.

100000 lemmings can't be wrong.

VLV

Reply to
Vladimir Vassilevsky

How do you confirm that this method is the appropriate way? Is this actually the correct method? Have you seen any such method in any of the OSes you have used?

But the timer can be reset to the original desired timeout value at the end of every timeout. What are you actually trying to convey here? Why is there no re-running?

Thanks in advance, Karthik Balaguru

Reply to
karthikbalaguru

I am sorry, I am confused by this thread. Could someone here kindly tell me the correct method?

Karthik Balaguru

Reply to
karthikbalaguru

There is no correct method to implement semaphore scheduling.

By which I mean, if more than one thread blocks on a semaphore, some implementations require that the first to block is the next to be scheduled, and some don't. This is known as fair queuing (or not). First you have to decide whether you need fair queuing. Vladimir claims that FIFO is "not needed", when he means that he hasn't needed it. Some applications do!

Then you have to implement it. Vladimir suggested, but didn't really describe, a method using counters. I think his method involves a thread that blocks on a semaphore "taking a ticket", so that when the semaphore is released, each thread blocked on that semaphore awakes and if theirs isn't the next ticket to be served, goes to sleep again. I'm sure he'll correct me if his method is different.

The other method is to put the thread on a linked wait-list on the semaphore, so that when it gets signalled, the next thread gets taken off the list and marked runnable (or put on a runnable list). Note that with a doubly-linked list, there's never a need to traverse the whole list, though you pay a bit more in storage.

Vladimir's method, if I understand him correctly, has higher overhead because every waiting thread must be woken, and all but one will go to sleep again, which is usually a greater cost than that of managing the linked list.

Over to you, Vladimir. Please try not to make unnecessarily offensive comments.

Clifford Heath.

Reply to
Clifford Heath

My philosophy is to keep all data related to the task and its OS interaction in one place, i.e. in the TSS of that task. Why? Because it is easy to find the relevant data without scanning all sorts of dynamic lists of all sorts of OS objects.

In this implementation, a semaphore has a counter of how many tasks are waiting for the release of the resource. When a task attempts to lock the semaphore, the counter value is saved to the TSS. When the semaphore is released, the task scheduler checks the TSSs of all tasks suspended by this semaphore to determine which task should run now and get the resource. Normally, it is the task with the highest priority. If the priorities are equal, it could be any one of the waiting tasks. If you insist on a "first come first served" policy, then the counter values saved in the TSSs are used to determine which task is first in line.

Isn't it simple, fast and straightforward?
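As I read the counter scheme described above, it might be sketched like this. All names are invented, and the arrival counter here only ever increases (later in the thread it is agreed that a non-decrementing counter is needed for this to work):

```c
#define MAX_TASKS 16

struct sem {
    int count;           /* resource count */
    unsigned arrivals;   /* bumped each time a task blocks; never decremented */
};

struct tss {
    int priority;        /* higher value = higher priority (assumption) */
    int blocked_on;      /* ID of the semaphore we wait on */
    unsigned ticket;     /* arrival order, saved at block time */
};

struct tss tasks[MAX_TASKS];

/* Record in the TSS which semaphore we wait on and our place in line. */
void sem_block(struct sem *s, int sem_id, struct tss *t)
{
    t->blocked_on = sem_id;
    t->ticket = s->arrivals++;
}

/* Scan all TSSs: highest priority wins; earliest arrival breaks ties. */
int sem_pick_next(int sem_id)
{
    int best = -1;
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].blocked_on != sem_id) continue;
        if (best < 0 ||
            tasks[i].priority > tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].ticket < tasks[best].ticket))
            best = i;
    }
    return best;   /* -1 if nobody is waiting */
}
```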

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

The idea of using double-linked lists is that you don't need to scan them. You arrange things so that you know either what thread you want but not what list it's in, or what list to look in where the thread you want is at the head. In either case, with 2 indexed MOVs you can remove it, and with 4 more you can, if needed, add it to another list. It doesn't get much quicker than that.

So you have to scan the TSSs of all threads. That's fine for 10 threads, but doesn't scale to 1000. Unix used your approach through Version 7, but at 40-50 processes it starts to become a serious overhead. All that stuff was thrown away in later versions and most re-implementations.

No. It's just simple and straightforward.

Clifford Heath.

Reply to
Clifford Heath

Oh, now I see why you don't like fair queuing - this method requires you to renumber all the waiting threads every time you schedule one like this. Bad idea... better to have an always-increasing ticket number and schedule the lowest-numbered ticket... or just use a list.

Clifford Heath.

Reply to
Clifford Heath

There is no need to renumber; from the tasks of equal priority, the scheduler has to pick the one with the minimum value of the counter.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

That doesn't work. If you have say 3 threads waiting, numbered 1, 2, 3, then the semaphore gets released twice, threads 1 and 2 are now running and the thread counter is 1. A new thread blocks on the semaphore and gets number 1 - so it's next to be scheduled, ahead of 3 that was waiting before it.

Clifford Heath.

Reply to
Clifford Heath

Yes. On the release of the lock, the scheduler has to run a loop through the array of all TSSs, checking one variable: the ID of the semaphore which was released.

1000 threads looks like a little too much, doesn't it?

I am running ~20 threads on a ~600MHz CPU. The scheduler is called every 100us on average; there is also a lot of other system work, such as dispatching the messages. All of that takes about 2% of the CPU in the worst case, i.e. ~12 MIPS.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

That can work if the counter is never decremented; then rollover is not a problem.

VLV

Reply to
Vladimir Vassilevsky
