factors affecting context switch time

Re: factors affecting context switch time



    Just ignore ludicrously high values.

    DS



Re: factors affecting context switch time


[ I limited this post to the "embedded" and "realtime" groups. ]

You can find some references to some such tools at
http://www.tidorum.fi/bound-t/timing-tools.html . Don't know if
they are exactly the kind of thing you need, though; mostly they
are aimed at smaller systems with small real-time kernels.

--
Niklas Holsti
Tidorum Ltd
Re: factors affecting context switch time

The effect would be similar to what cache/TLB flushing during
context switch does: The observed end-to-end context switch time
(i.e. time between last instruction in one task and first instruction
in another) would not be affected, but execution of user-level code
would slow down, or, more precisely, execution of user-level code
would pause for some time at unpredictable points in time. Compared
to the effects of TLB/cache reloading, delays caused by demand paging
tend to have a more coarse granularity, so are more visible, but the
effect is more or less the same (user code slowing down).

Obviously, this can only happen in an MMU based system which, as
you suggested earlier, does not seem to be your primary focus (but
then, why are you posting to so many linux related newsgroups?).
Also, in Linux, there are the mlock() and mlockall() syscalls
which can be used to avoid delays due to demand paging.




What is the goal here? I suspect that the motivation for such a
benchmark is to be able to predict the timing behavior of some
piece of code, but for that, you have to consider the real-world
conditions for which this prediction should be made. Also, what
exactly do you want to determine: worst-case execution time or
some average execution time?

To determine average execution time for a piece of code, just run it
many times so the presumably few cases of the code being interrupted
are evened out. Or, as David suggested, ignore the "freak" values.
(But here we go again: is being interrupted occasionally part of
the real-world situation the code will eventually run in? If so,
ignoring those "freak" values would be wrong.) The average time
that you can determine this way may be used as an input for some
heuristic reasoning, but keep in mind that the worst-case execution
time that same piece of code may exhibit can easily exceed the average
by several orders of magnitude!

Rob

--
Robert Kaiser                     email: rkaiser AT sysgo DOT com
SYSGO AG                          http://www.elinos.com
Re: factors affecting context switch time

All very interesting, but not as enlightening as browsing through the pages
you get from tossing "rdtsc instruction" at a search engine.
--
JosephKK
Gegen dummheit kampfen Die Gotter Selbst, vergebens.  
Re: factors affecting context switch time

http://i30www.ira.uka.de/research/publications/papers/index.php?lid=de&docid64%1

Context switching times should still be a minor overhead, even in
realtime systems. The difference in this case, I would argue, is the
OS, not the hardware. A good RTOS that performs well on one hardware
platform will likely perform well on another platform. But a poor RTOS
on one platform will not improve much even when moved to a "better,
faster" platform.

If you are just talking about an embedded realtime system, then I think
the cross posts to LINUX OS groups are not appropriate. I'll go over to
the realtime group to follow this further.
    Ed


Re: factors affecting context switch time


Just playing devil's advocate here - if your application is likely to be
that sensitive to context switch time then maybe you need to think more
about its overall architecture. After all, context switch time ought to be a
second-order effect in a robust design.

Ian

Re: factors affecting context switch time

Supposedly the most important factor: the number of CPUs

-Michael

Re: factors affecting context switch time


All of the above!! Kernel version is the most important, then CPU.

Basically you need to measure.

Amongst the tools are the "Linux Trace Toolkit", which will give a detailed
trace of the kernel; "Oprofile", which will give a more summary/statistics
view; and a user-side app called "Hourglass" which might be close to what
you need (if you use 64 bit, you will have to fix it).

I did some measurements just today and I am not sure I believe them yet:
according to those, it takes 50 usec on a 4-way Opteron box. Hmmm, nah, no
way!!! Or maybe that is why the papers on latency dried up around 2003-2004
as the 2.6 kernels began to work. The problem went away.

http://www.cs.utah.edu/flux/papers/hourglass-usenix02/
http://www.opersys.com/LTT/
http://oprofile.sourceforge.net/news/

maybe you want schedutils to play with cpu affinity

http://www.advogato.org/proj/schedutils/ (it is a package for
redhat/debian/suse)

irq affinity can be set by catting a CPU mask to "/proc/irq/<no>/smp_affinity"

hope this helps.


