Sharing the timeslice

Hi,

Consider 3 processes (p1, p2 and p3). The process p1 has 2 threads (t1, t2); let's assume that t1 takes 15 ms and t2 takes 10 ms. The timeslice allocated to every process by RR (round-robin) scheduling is 5 ms. In this scenario, when will p2 start to execute?

That is, will it follow any of the sequences of execution below?

1) t1,t2,p2,p3,t1,t2,p2,p3 (t1 and t2 are executed first in place of p1, followed by the execution of p2 and p3), i.e. the 5 ms timeslice allocated to p1 is shared between the threads t1 and t2 (say, 2.5 ms for t1 and 2.5 ms for t2). Any ideas/links? Is it possible to allocate 3 ms to t1 and 2 ms to t2? Any links that discuss this scheme in detail?

Or 2) t1,p2,p3,t2,p2,p3,t1,p2,p3 (t1 is executed during the first cycle of RR, t2 in the next cycle, and so on). I think this delays the execution of t2.

Or 3) t1,p2,p3,t1,p2,p3,t1,p2,p3,t2,p2,p3,t2,p2,p3 (t1 is executed in the first cycle of RR and in consecutive cycles as well, until t1 is finished; only after t1 finishes does t2 run). I think this method will not be followed, as it would lead to starvation of thread t2.

I use Redhat 9.0 (Linux). Is there any link that explains the scheduling between the threads in a process, with clear illustrations?

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

You DO realise where you've posted, don't you? COLA is the last place you'd want to post for this kind of detailed technical info...

Try comp.os.linux.development.system or its sister group, development.apps.

Errr... Well, the kernel documentation's fairly thorough in many areas... And the scheduler source itself is commented. One thing though...

Redhat 9? That must be getting onto at least 5 years out of date by now.

Consider upgrading to a new distro. For one thing, new kernels have different schedulers you can choose from, including a fully pre-emptive one and a real-time one, so if one doesn't suit your needs, another might.

--
|   spike1@freenet.co.uk   |                                                 |
|   Andrew Halliwell BSc   | "ARSE! GERLS!! DRINK! DRINK! DRINK!!!"          |
Reply to
Andrew Halliwell
  • karthikbalaguru peremptorily fired off this memo:

I don't think there is any way of determining (predicting) what runs in what order, in general.

Start with "man pthread_attr_init" and "man sched_setscheduler".
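
For what it's worth, here is a minimal, untested sketch of what those man pages lead to: creating one thread with the SCHED_RR policy via the pthread attribute calls. The priority value is arbitrary, worker() is just a placeholder, and setting a real-time policy normally requires root. Build with something like: gcc rr.c -lpthread

    /* Hypothetical sketch: request SCHED_RR for a new thread via attributes. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)          /* placeholder thread body */
    {
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp;
        int rc;

        sp.sched_priority = 10;             /* SCHED_RR priorities run 1..99 */
        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_RR);
        pthread_attr_setschedparam(&attr, &sp);

        rc = pthread_create(&tid, &attr, worker, NULL);
        if (rc != 0)
            fprintf(stderr, "pthread_create failed: %d\n", rc);  /* EPERM if not root */
        else
            pthread_join(tid, NULL);
        return 0;
    }

sched_setscheduler() is the equivalent call for changing the policy of an already-running process/thread.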

--
DOS is ugly and interferes with users' experience.
   -- Bill Gates
Reply to
Linonut

But why is it not possible to predict the order of execution?

I am interested to know whether the following happens (assuming p2 takes 25 ms and p3 takes 30 ms): t1,t2,p2,p3,t1,t2,p2,p3,t1,p2,p3,p2,p3,p2,p3,p3

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

Because it is dependent on other system load.

--
This year with the release of XP, they are actually behind. The end days
are near for the BIOS reading inferior OS. It is inevitable.
Reply to
Hadron

But how does system load impact the scenario below?

There are 3 processes (p1, p2 and p3). Here, p2 takes 25 ms and p3 takes 30 ms. The process p1 also takes 25 ms, but p1 has 2 threads (t1, t2): t1 takes 15 ms and t2 takes 10 ms. The timeslice allocated to every process by SCHED_RR (round-robin) is 5 ms.

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

You tell me. What is 25 ms here? Real time? CPU time? What? You should really take this to the correct newsgroups, as indicated by Andrew Halliwell. Kernel scheduling is neither simple nor on topic here. This is a Microsoft hate group ...

Reply to
Hadron

In general it is impossible to predict. It depends on any number of different factors, some of which are impossible to know beforehand. You may as well call the scheduling random, because it is pretty much impossible to predict.

Some of the factors that may affect the scheduling process:

  1. The condition of various caches.
  2. Other activity on the system.
  3. The precise point in the HDD's rotation when the programs start.
  4. The precise hardware it is running on.

In short you have a form of butterfly effect going on, where seemingly trivial details can drastically affect the outcome. Forget about attempting to predict things.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

OK, interesting. I will stop trying to predict in this scenario.

Can you let me know whether the threads created for a process share the timeslice of the process, OR whether the threads are each treated like a process (having the same timeslice as the process)?

That is, does P1 = 5 ms become t1 = 5 ms and t2 = 5 ms? OR does P1 = 5 ms become t1 = 2.5 ms and t2 = 2.5 ms?

I am interested in knowing how the threads created by a process will be handled by the scheduler.

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

It sounds more like you are looking for some sort of fixed slot scheduling rather than round robin.

Robert

Reply to
Robert Adsett

Why do you want to know this ?

If you _need_ a specific thread execution order, you should use proper synchronization and usually also priority-based scheduling. Trying to use round robin at a single priority level is not going to give any predictable result. A sketch of what such synchronization can look like follows below.
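
By way of illustration only, a minimal sketch of the kind of synchronization meant here: t2 blocks on a condition variable until t1 signals that it has finished, so the execution order no longer depends on how the scheduler interleaves the threads. All names are illustrative.

    /* Untested sketch: force "t1 before t2" with a mutex and condition variable. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int t1_done = 0;

    static void *t1_func(void *arg)
    {
        printf("t1: running\n");
        pthread_mutex_lock(&lock);
        t1_done = 1;                         /* announce completion */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *t2_func(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!t1_done)                     /* wait until t1 has finished */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        printf("t2: running after t1\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, t1_func, NULL);
        pthread_create(&b, NULL, t2_func, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }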

Paul

Reply to
Paul Keinanen

Hi, While what the other experts suggested here is valid, why not try adding printfs in every thread to see in which sequence it executes? You can create a sample program to emulate the scenario of your project. Simulators would come in handy too, though I am not sure whether Red Hat has any simulators for your target. Adding printfs can give you the sequence of execution, though there may be delays from the timing aspect.

Regards, s.subbarayan

Reply to
ssubbarayan

OK, I am no longer trying to predict the sequence of execution.

Can you let me know whether the threads created for a process share the timeslice of the process, OR whether the threads are each treated like a process (having the same timeslice as the process)?

That is, does P1 = 5 ms become t1 = 5 ms and t2 = 5 ms? OR does P1 = 5 ms become t1 = 2.5 ms and t2 = 2.5 ms?

I am interested in knowing how the threads created by a process will be handled by the scheduler.

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru
  • karthikbalaguru peremptorily fired off this memo:

Too many variables. Probably even if you know the scheduler's exact algorithm for thread selection.

But, rather than rely on me coming up with some definitive statement:

  1. Check out the book "Linux Kernel Development", 2nd edition, by Robert Love, Novell Press.
  2. Code it up and see how repeatable it is. But be careful -- even the small slow-down caused by outputting to the console is enough to act as a crude form of synchronization. Furthermore, various consoles differ in their speed.

Code it up!

By the way, if you use the pthreads API, there is a nice pthreads-w32 port so you can try the same stuff in Windows. (However, the DOS console is slow as hell.)
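
A rough, untested sketch of the "code it up" experiment: two CPU-bound threads, each printing a tag every so often, so the interleaving chosen by the scheduler is visible (and will differ from run to run). The loop counts are arbitrary.

    /* Illustrative only: watch how the scheduler interleaves two busy threads. */
    #include <pthread.h>
    #include <stdio.h>

    static void *burn(void *tag)
    {
        volatile unsigned long x = 0;
        unsigned long j;
        int i;

        for (i = 0; i < 10; i++) {
            for (j = 0; j < 10000000UL; j++)
                x += j;                              /* pure busy work */
            printf("%s: chunk %d done\n", (const char *)tag, i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, burn, (void *)"t1");
        pthread_create(&t2, NULL, burn, (void *)"t2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }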

--
Microsoft looks at new ideas, they don't evaluate whether the idea will move
the industry forward, they ask, 'how will it help us sell more copies of
Reply to
Linonut
  • ssubbarayan peremptorily fired off this memo:

Even those can affect behavior. The tightest would be to write some audit info to memory, and then dump it out after the fact.

And, if you do use printf(), consider instead using fprintf() and trying it with stdout and with stderr. The former is buffered, the latter is not. You can also use setvbuf() to change the buffering.
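
A loose sketch of the "audit info to memory" idea (gettimeofday() is assumed to be precise enough for the purpose): each thread stamps events into its own slot of a preallocated array, and nothing touches the console until both threads are done. setvbuf() is included only to show switching stdout to unbuffered, as mentioned above.

    /* Untested sketch: in-memory event trace, dumped after the threads finish. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define EVENTS 100

    struct event { int step; struct timeval tv; };
    static struct event trace[2][EVENTS];

    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        int i;

        for (i = 0; i < EVENTS; i++) {
            trace[id][i].step = i;
            gettimeofday(&trace[id][i].tv, NULL);    /* cheap in-memory logging */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        int id0 = 0, id1 = 1;
        int t, i;

        setvbuf(stdout, NULL, _IONBF, 0);            /* unbuffered, like stderr */

        pthread_create(&a, NULL, worker, &id0);
        pthread_create(&b, NULL, worker, &id1);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        for (t = 0; t < 2; t++)                      /* dump after the fact */
            for (i = 0; i < EVENTS; i++)
                printf("t%d step %3d at %ld.%06ld\n", t + 1, trace[t][i].step,
                       (long)trace[t][i].tv.tv_sec, (long)trace[t][i].tv.tv_usec);
        return 0;
    }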

I'm reworking some old code to synchronize console output, and I'm having a hell of a time, this time around, getting the console output from various threads to intermix (in stderr), so that I can test the synchronizer.

--
As we look ahead into the next century, leaders will be those who empower
others.
Reply to
Linonut

Small slow down? Outputting to the console makes a HUGE difference.

Reply to
Hadron

It should be the simplest thing in the world. A FIFO queue running on another thread.
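
Something like the following untested sketch: worker threads push lines onto a mutex-protected FIFO and a single logger thread drains it, so only one thread ever writes to the console. Overflow and error handling are omitted for brevity; all names are illustrative.

    /* Illustrative sketch: a FIFO log queue serviced by a dedicated thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    #define QSIZE 64

    static char queue[QSIZE][80];
    static int head, tail, done;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

    static void log_msg(const char *msg)
    {
        pthread_mutex_lock(&qlock);
        strncpy(queue[tail % QSIZE], msg, 79);
        queue[tail % QSIZE][79] = '\0';
        tail++;
        pthread_cond_signal(&qcond);
        pthread_mutex_unlock(&qlock);
    }

    static void *logger(void *arg)
    {
        pthread_mutex_lock(&qlock);
        while (!done || head != tail) {
            while (head == tail && !done)
                pthread_cond_wait(&qcond, &qlock);
            while (head != tail) {                   /* drain the queue */
                printf("%s\n", queue[head % QSIZE]);
                head++;
            }
        }
        pthread_mutex_unlock(&qlock);
        return NULL;
    }

    static void *worker(void *tag)
    {
        char buf[80];
        int i;
        for (i = 0; i < 5; i++) {
            snprintf(buf, sizeof buf, "%s: message %d", (const char *)tag, i);
            log_msg(buf);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t lt, w1, w2;

        pthread_create(&lt, NULL, logger, NULL);
        pthread_create(&w1, NULL, worker, (void *)"t1");
        pthread_create(&w2, NULL, worker, (void *)"t2");
        pthread_join(w1, NULL);
        pthread_join(w2, NULL);

        pthread_mutex_lock(&qlock);                  /* tell the logger to finish */
        done = 1;
        pthread_cond_signal(&qcond);
        pthread_mutex_unlock(&qlock);
        pthread_join(lt, NULL);
        return 0;
    }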

Reply to
Hadron

It can be handled either way, and it varies. AFAIK with Windows threads have always been handled by the kernel. Unix traditionally delegated thread management to the application, but over the last 10 years or so, and with the introduction of POSIX threads, the tendency now is to move thread handling into the kernel.

This isn't a straightforward subject though and there are all kinds of dispute over which thread system is best (the biggie at the moment is whether a 1:1 or m:n model is better). A detailed critique of the issues involved would be quite lengthy and not really suited to a news post. I suggest you read a good and up-to-date book on operating system design for a comprehensive overview of the area.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

But it should be one or the other, shouldn't it?

Yeah, I understand that thread handling has now been shifted to the kernel.

Interesting !! :):)

I understand the generic OS concepts. But here I am looking for some different info.

I am using Linux (Redhat 9.0).

Can you please let me know whether the threads created for a process share the timeslice of the process, OR whether each thread created has the same timeslice as the process?

That is, which of the following scenarios is possible in Linux (consider that 2 threads (t1, t2) are created for process P1)?

1) P1 = 5 ms becomes t1 (5 ms) + t2 (5 ms) = 10 ms.
2) P1 = 5 ms becomes t1 (2.5 ms) + t2 (2.5 ms) = 5 ms.

Any ideas/tips? Is there any link that discusses the above in detail?

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

Well, m:n threading models are exactly the opposite, in that they do both. This approach has several theoretical advantages but at the cost of much more complexity. NetBSD and FreeBSD both used m:n threading but are moving to 1:1 threading, because the consensus now seems to be that the complexity isn't worth its theoretical benefits.

Linux uses 1:1 exclusively. That is to say that if process A has three threads and process B four threads, then all seven threads are handled by the kernel. Each thread gets its own timeslice, assuming that it is runnable and stays runnable for the duration of its timeslice.
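
A quick illustrative sketch of what 1:1 means in practice: each pthread has its own kernel thread ID, i.e. its own schedulable entity. This assumes Linux; older glibc has no gettid() wrapper, hence the raw syscall().

    /* Untested sketch: show that each pthread is a separate kernel thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *show_tid(void *arg)
    {
        printf("pid %d, kernel tid %ld\n",
               (int)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, show_tid, NULL);
        pthread_create(&t2, NULL, show_tid, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* The two tids differ, since the kernel schedules each thread
         * as a separate 1:1 entity. */
        return 0;
    }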

Although I've never studied the Linux threading model in detail, I would hope that it attempts to keep the different threads of a process together, scheduling them one after the other as far as possible. There are several advantages to this, in that VM page tables do not need reloading, and various caches and the translation lookaside buffer do not need invalidating.

Whole books have been written that deal solely with threads. Consult one of them or a general operating systems text. As I pointed out in my earlier post, this is a complex area and you aren't going to learn about it on a piecemeal basis from a few Usenet posts.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw
