Sharing the timeslice

This is true for Linux 2.6.x, but as far as I understand, the kernel did not manage individual threads in Linux 2.4.x, just processes.

Paul

Reply to
Paul Keinanen

Partly true. The kernel only handled processes and not threads, but up until that time threads were implemented as separate processes created by the clone() system call. This was basically threading on the cheap, and did have the potential to cause problems (e.g. different 'threads' had different PIDs). It's only recently that Linux acquired a proper threading implementation.

The 1:1 behaviour still holds though, since the threads _were_ handled by the kernel, even if they weren't actually threads in the usual meaning of the term.
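A minimal sketch of how that difference shows up in practice (illustrative only, not taken from any particular system): under the old clone()-based LinuxThreads each "thread" was a separate process and reported its own PID, while under NPTL all threads report the same PID.

/* Print getpid() from several threads.  Under clone()-based LinuxThreads
 * each thread printed a different PID; under NPTL they all match.
 * Build with:  gcc pid_demo.c -o pid_demo -lpthread  */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    printf("thread %ld sees pid %d\n", (long)arg, (int)getpid());
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}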

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

Hi, we used the technique you suggested to debug one of the crucial issues with our project and made good progress. I gave the printf suggestion just to play around and get a feel for the sequence. In a real project, I would try the technique you suggested before resorting to printf.

Regards, s.subbarayan

Reply to
ssubbarayan
ssubbarayan peremptorily fired off this memo:

It's a useful technique, but it has its own issues. So I use functions that report only errors, unless you elevate the verbosity at run time.

And, if you want the fastest code, a configure-time switch disables those functions completely (converting them to no-op macros).
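Something along these lines would do it (the macro and symbol names here are purely illustrative, not from any particular project):

/* Verbosity-controlled error/debug reporting that a configure-time switch
 * (NDEBUG_LOG here, illustrative) can compile away to a no-op. */
#include <stdio.h>

#ifdef NDEBUG_LOG
#define dbg_printf(level, ...) ((void)0)    /* disappears entirely */
#else
static int dbg_level = 0;                   /* 0 = errors only; raise at run time */
#define dbg_printf(level, ...) \
    do { if ((level) <= dbg_level) fprintf(stderr, __VA_ARGS__); } while (0)
#endif

/* usage: dbg_printf(0, "error: %s\n", msg);   always shown
 *        dbg_printf(2, "trace: x=%d\n", x);   only when dbg_level >= 2  */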

By the way, I'm getting the feeling that the latest libc has somehow made output to stderr "thread-safe". I can't get two threads to mingle their output! I wish I had an older kernel handy.

I'll have to try it on Windows and see what happens.
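A quick test along these lines (just a sketch, not my exact code) shows the behaviour; with glibc each stdio call locks the stream, so a single fprintf() normally comes out whole, and only separate calls interleave:

/* Two threads hammering stderr.  Each fprintf() call locks the FILE, so
 * individual lines should come out intact; whole lines from A and B will
 * still interleave with each other. */
#include <pthread.h>
#include <stdio.h>

static void *spam(void *tag)
{
    for (int i = 0; i < 1000; i++)
        fprintf(stderr, "%s: iteration %d with a reasonably long line of text\n",
                (const char *)tag, i);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, spam, "A");
    pthread_create(&b, NULL, spam, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}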

--
Intellectual property has the shelf life of a banana.
   -- Bill Gates
Reply to
Linonut

Even if you knew the exact algorithm, that probably wouldn't help with a modern multi-core CPU.

Reply to
Phil Da Lick!

Here's the bottom line. Even if you did figure out what the exact timeslice methodology and sequence was, what you would have in the end is a bunch of code that makes too many assumptions to be reliable.

What exactly are you writing where the *exact* sequence of thread execution is so important?

It is a *BIG* mistake to assume the order in which threads will execute. What if someone runs this on a quad-core system? Or a machine with 32 processors? Write generic code that *WORKS*, and don't fuss over assumptions about which thread will run in what order and for how long. What you're trying to do is not portable, not reliable, and NOT guaranteed to work across a variety of machines and kernels.

I can't think of a single valid reason why it is so critical to know the exact order of execution. Unless you can explain why something *must* happen in some specific order you are just complicating the hell out of this for no valid reason. Once again... write generic code that will run properly and don't make it machine and kernel specific because no matter what you figure out there is basically zero chance that it will work reliably outside of your specific development environment.

Reply to
Ezekiel

Thx for that info. Interesting to know that Linux acquired a proper threading implementation only recently :):)

But how did they manage with only processes till now? Wasn't it a bad design?

Thx in advance, Karthik Balaguru

Reply to
karthikbalaguru

clone() is a special system call that creates a new child process. However, unlike fork(), which at least creates the appearance of the child process having its own address space, a process created by clone() can share its parent's space. This means that both processes have access to the same data structures, and changes made by one process show up in the corresponding structure in the other. This gives the same semantics (at least for that area) as conventional threads.

Why do it this way? Well, I've never checked the source (I don't really use Linux myself), but I assume that it is much simpler in terms of the amount of code that needs to be added to at least give the appearance of supporting threads. All that is needed is a relatively small amount of code to create a process but share the address space. Creating a fully fledged threading implementation, OTOH, is much more work.

However, it does go against some of the reasons for using threads in the first place. Sometimes threads are used because they are supposed to be 'lighter weight' (faster) than a whole new process. Obviously if they are in fact separate processes then this advantage is not apparent.
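To make the mechanism concrete, here is a stripped-down sketch of the kind of thing that approach amounts to (illustrative, not the actual LinuxThreads source): clone() with CLONE_VM gives the child its parent's address space, so both see the same data, yet each still has its own PID.

/* "Threading on the cheap": clone() with CLONE_VM shares the address
 * space, so shared_counter is visible to both, but the child is still a
 * separate process with its own PID.  Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter = 0;          /* seen by parent and child */

static int child(void *arg)
{
    shared_counter++;                   /* change shows up in the parent */
    printf("child  pid %d, counter %d\n", (int)getpid(), shared_counter);
    return 0;
}

int main(void)
{
    char *stack = malloc(64 * 1024);
    /* stack grows down, so pass its top; SIGCHLD lets the parent wait() */
    pid_t pid = clone(child, stack + 64 * 1024, CLONE_VM | SIGCHLD, NULL);
    waitpid(pid, NULL, 0);
    printf("parent pid %d, counter %d\n", (int)getpid(), shared_counter);
    free(stack);
    return 0;
}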

-- Andrew Smallshaw snipped-for-privacy@sdf.lonestar.org

Reply to
Andrew Smallshaw


I got the below info from internet. Interesting :):)

The scope of a thread can only be specified before the thread is created.

PTHREAD_SCOPE_SYSTEM
A thread that has a scope of PTHREAD_SCOPE_SYSTEM will contend with other processes and other PTHREAD_SCOPE_SYSTEM threads for the CPU. That is, if there is one process P1 with 10 threads with scope PTHREAD_SCOPE_SYSTEM and a single-threaded process P2, P2 will get one timeslice out of 11 and every thread in P1 will get one timeslice out of 11. I.e. P1 will get 10 times more timeslices than P2.

PTHREAD_SCOPE_PROCESS
All threads of a process that have a scope of PTHREAD_SCOPE_PROCESS will be grouped together, and this group of threads contends for the CPU. If there is a process with 4 PTHREAD_SCOPE_PROCESS threads and 4 PTHREAD_SCOPE_SYSTEM threads, then each of the PTHREAD_SCOPE_SYSTEM threads will get a fifth of the CPU and the other 4 PTHREAD_SCOPE_PROCESS threads will share the remaining fifth of the CPU. How the PTHREAD_SCOPE_PROCESS threads share their fifth of the CPU among themselves is determined by the scheduling policy and the threads' priorities.

If there are other processes running, then every PTHREAD_SCOPE_SYSTEM thread and every group of PTHREAD_SCOPE_PROCESS threads (i.e. every process with PTHREAD_SCOPE_PROCESS threads) will be handled like a separate process by the system scheduler.
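For reference, the scope is selected on the attribute object before pthread_create(), roughly like this (a sketch; note that Linux's NPTL only supports PTHREAD_SCOPE_SYSTEM, so asking for process scope may fail with ENOTSUP):

/* Set contention scope before creating the thread. */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) { return NULL; }

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        fprintf(stderr, "process scope not supported here, falling back\n");

    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}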

Priorities and Scheduling Policy
A PTHREAD_SCOPE_PROCESS thread has a priority. Whenever a thread is runnable and no other thread (of this process) has a higher priority, the thread will get the CPU. Note that this might lead to starvation of other threads. When two or more runnable threads have the same priority and no other runnable thread has a higher priority, then the scheduling policy will determine which of these highest-priority threads to run.

The priority is assigned statically with pthread_setschedparam(). The scheduler will not change the priority of a thread.

The scheduling policy can be either SCHED_FIFO or SCHED_RR. FIFO is a first-come, first-served policy. RR is a round-robin policy that might preempt threads. But again, the policy only affects threads that have the same priority.
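Setting the priority and policy looks roughly like this (a sketch; realtime policies such as SCHED_FIFO normally need root or CAP_SYS_NICE):

/* Give the calling thread a static priority under SCHED_FIFO. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };   /* 1..99 for FIFO/RR */
    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    if (err != 0)
        fprintf(stderr, "pthread_setschedparam failed: %d\n", err);
    return 0;
}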

A more extensive description of priorities and policies can be found in [1] and [2]. Note that these documents discuss process scheduling, but the principle is the same.

Note: The priority and scheduling policy settings are meaningless when a thread has scope PTHREAD_SCOPE_SYSTEM.

Realtime Process Scheduling
It is also possible to do realtime process scheduling. [2] explains how realtime process scheduling works. sched_setscheduler() is used to set the process scheduling parameters.
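A minimal example of that call (a sketch; again it needs root or CAP_SYS_NICE):

/* Put the whole process under the SCHED_RR realtime policy. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };   /* 1..99 */
    if (sched_setscheduler(0, SCHED_RR, &sp) == -1)     /* 0 = this process */
        perror("sched_setscheduler");
    return 0;
}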

Nice Values
The nice value of a process also influences the scheduling behaviour. A process (and the threads therein) with a lower nice value (i.e., higher priority) will get a higher share of the CPU time. Starting a program with nice works as expected. Using the nice() system call from a threaded program has not been tested (the question is: does a nice() call affect the whole process or only the current thread? This may well depend on the pthread implementation and scope).
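That open question could be probed with a small test along these lines (a sketch, not taken from the text above); for what it's worth, on Linux the nice value is kept per kernel task, i.e. per thread, which is a known deviation from POSIX, so only the calling thread would be expected to change:

/* Call nice() in the main thread, then read both threads' nice values. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *other(void *arg)
{
    sleep(1);                            /* let main call nice() first */
    printf("other thread nice: %d\n",
           getpriority(PRIO_PROCESS, (id_t)syscall(SYS_gettid)));
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, other, NULL);
    nice(5);                             /* raise this thread's nice value */
    printf("main  thread nice: %d\n",
           getpriority(PRIO_PROCESS, (id_t)syscall(SYS_gettid)));
    pthread_join(t, NULL);
    return 0;
}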

Reply to
karthikbalaguru
