I don't use an RTOS because...



Re: So... What are the alternatives? Was: I don't use an RTOS because...

I was trying to remember one of these articles just before I posted. Now
I found it:

http://www-128.ibm.com/developerworks/library/pa-migrate/

Regards.


Re: So... What are the alternatives? Was: I don't use an RTOS because...

Hey, people read it! I'm flattered :)


Re: So... What are the alternatives? Was: I don't use an RTOS because...

I hadn't paid attention to the author's name until now. :-) My fault
though - a matter of bad habit, not looking at articles' authors' names. I
look forward to reading the sequel. It brought the PPC to my attention as a
possible candidate for the main processor (though not necessarily for
realtime tasks) in a new architecture. It's a pity it lacks a built-in
LCD controller.

Regards.

Elder.


Re: So... What are the alternatives? Was: I don't use an RTOS because...

The article series was tentatively set to be ten pieces long. To give
you a sneak preview: The next article coming up (early Feb release
date) talks about differences between x86 and PPC Linux startup. It
also suggests a few different layouts for both the software bundle and
the flash.

Article #3 goes into details about the Linux distro shipped with the
Kuro Box, and also how to upgrade it (some).

Article #4 talks about building a web-administerable backend from a
beginner's perspective (it sounds like a digression from the primary
theme, but really it's not).

Looks like they will be publishing them once a month, or thereabouts.
I'm trying to keep at least two ahead.


Re: So... What are the alternatives? Was: I don't use an RTOS because...


I guess that was the attitude back in the early 80's as almost everyone was
trying to cram computing abilities into almost everything. Even today I am
quite happy to do things in relay logic when that is the simplest solution.

One can only discover the right approach by taking the time to explore the
various ways of providing a solution and selecting the best in terms of
apparent simplicity, time, costs and quality. Usually, when you get to
systems that require you to deal with more than 20 I/O points, you should
really start looking for the natural architecture of the problem. You can
still look for the architecture of the problem below that level, but it is
usually very apparent. As I have indicated, I usually find that this is
closely allied to the actuator distribution and groupings.


It is quite a scalable approach, too.


As the low-end processing silicon gets less and less expensive to develop
for, and the communication capabilities between processors improve, it
becomes easier and easier to support the strategy. I have been advocating
the processor-per-drive strategy for 20+ years now, especially when you
realise that for almost any actuator-based process or machine control
solution there are only 28 types of control block. That is not a great many
to have to develop strategies for.
 

As already stated, the problem space needs analysis in order to determine
what the structure of the problem's architecture is, what tasks are necessary
to accomplish the system goals, and what risks are posed by the solution's
activities. Once you have looked at and thought about all of that, you can
start forming the simplest and most appropriate strategy for achieving your
solution.

It is quite often easy to see where one of the 28 control blocks would
likely fit in without expending too much brain power on it.

Re: So... What are the alternatives? Was: I don't use an RTOS because...


...for the same reasons that Object Oriented Programs are easier
to fully understand, easier to test for compliance and easier to
maintain.  Alas, just like OOP, it is possible for a sufficiently
clueless engineer/programmer to make a bad product using good tools.
That's no reason not to have good tools, though.


--
Guy Macon <http://www.guymacon.com/>


Re: So... What are the alternatives? Was: I don't use an RTOS because...


Absolutely!!

Re: So... What are the alternatives? Was: I don't use an RTOS because...


Like everything in the world, the answers depend on specific circumstances.

In embedded systems with a single main purpose to them, the result without an
O/S may very well be much better -- in terms of safety, maintenance, size of
code and data, and pretty much any other useful measure.

As 'Steve at five trees' mentions, state machines are a common approach.  For
example, I used them in a system that handled three background communication
tasks -- (1) supporting functions to read and write from a serial EEPROM; and
(2) supporting functions to send and receive commands, queries, and data between
itself and an external DSP, where the DSP initiated all transfers; and (3)
supporting functions to update serial loaded external DACs.  To handle this, I
used three separate software module files, one for each of the functions.  A single
timer was used and set to interrupt at one tick every 200us.  On each tick, the
timer event called three state-machine-driving functions, one each in the three
modules.  Each state machine driving function would give time to the current
state, which would either remain in that state or transition to a new one.  This
allowed the primary purpose of measurement to take place, while all these neatly
delineated and modularized functional areas still operated quite well in the
background.
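
A minimal sketch of that kind of arrangement, in C, might look like the
following -- the function and state names are invented for illustration and
are not the actual project code:

    /* Each background module exposes one function that advances its state
       machine by a single, short step.  In the real project these would
       live in three separate source files. */
    static void eeprom_tick(void) { /* step the serial EEPROM transfer */ }
    static void dsp_tick(void)    { /* step the DSP command/data exchange */ }

    static void dac_tick(void)
    {
        /* Each case does a small, non-blocking piece of work and either
           stays in its state or transitions to a new one. */
        static enum { DAC_IDLE, DAC_SHIFT, DAC_LATCH } state = DAC_IDLE;

        switch (state) {
        case DAC_IDLE:                     break;  /* nothing pending     */
        case DAC_SHIFT: state = DAC_LATCH; break;  /* clock out next bits */
        case DAC_LATCH: state = DAC_IDLE;  break;  /* pulse the latch pin */
        }
    }

    /* Called from the 200us timer event: each state machine gets one slice. */
    void timer_tick_200us(void)
    {
        eeprom_tick();
        dsp_tick();
        dac_tick();
    }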

Now, if you don't like state machine organizations (if seeing a list of state
functions, for example as one per state, bothers you and is hard to interpret or
modify), then this may be "ugly."  But I also keep a nicely-done state machine
diagram for each that is easily referred to.  And with it in hand, the code is
quite easy to understand.  Separate them, and yes there would be "some
difficulty" in reconstructing the original diagram from the code.  But not so
much.

I use whatever makes sense, though.  However, I usually do not use other
operating systems unless it is a requirement of the application or the libraries
in some operating system would really help out.  Most of my applications have a
wide variety of requirements, such as low cost, small footprint, low power
consumption, safety-critical/medical, etc., so I don't usually have the luxury
of just "tossing in an outside, 3rd party O/S."  The operating system I've
written is described in some detail below, in the postscript.

Jon

P.S.  What I wrote for myself allows me to statically configure these:

    SYS_PREEMPT
    --------------------------------------------------------------------------
    This configuration parameter decides whether or not preemption is allowed.
    See the 'readme' for a detailed explanation of the implications, here.
        0   Cooperative
        1   Preemptive

    SYS_RTCLOCK
    --------------------------------------------------------------------------
    This configuration parameter decides if there will be a real time clock.
    See the 'readme' for a detailed explanation of the implications, here.
        0   Disabled
        1   Enabled

    SYS_SLEEPQ
    --------------------------------------------------------------------------
    This configuration parameter decides if there is a delta-queue, usually
    used for processes sleeping on some time parameter.
        0   Disabled
        1   Enabled

    SYS_PRIORITIES
    --------------------------------------------------------------------------
    This configuration parameter decides if distinct process priorities are
    used in sorting the ready queue.  If disabled, all processes in the ready
    queue will be treated as though they have equal priority.
        0   Disabled
        1   Enabled

    SYS_MESSAGING
    --------------------------------------------------------------------------
    This configuration parameter decides if asynchronous process messaging is
    supported.
        0   Disabled
        1   Enabled

    SYS_QUANTUM
    --------------------------------------------------------------------------
    This configuration parameter decides if there is a quantum associated with
    the current process.  Quantums are usually used for round robin sharing of
    the CPU time.  But they do not serve much of a purpose unless preemption
    is enabled and the real-time clock is also enabled -- without a real-time
    clock the current thread quantum cannot time out and without preemption
    enabled no switching to the next ready thread of equal priority can occur
    (assuming no higher priority thread is ready, of course.)
        0   Disabled/Unlimited
        >0  Number of clock ticks before current process times out

    SYS_PLIMIT
    --------------------------------------------------------------------------
    This configuration parameter decides how many process slots are available.
        >0  Number of allocated thread slots (sets maximum number of threads)

    SYS_SLIMIT
    --------------------------------------------------------------------------
    This configuration parameter decides how many semaphore queues there are.
        0   Disabled (no semaphore support)
        >0  Number of allocated semaphore queues (sets maximum)

    SYS_DLINKEDQ
    --------------------------------------------------------------------------
    This configuration parameter decides how the processes are linked into the
    queues.  Singly linked takes about 1/2 the RAM, but operates more slowly.
        0   Singly linked
        1   Doubly linked
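
As an illustration only (the header name and the chosen values are invented,
though the parameter names are the ones described above), a small cooperative
build might be configured like so:

    /* sys_config.h -- example build configuration (illustrative only)      */
    #define SYS_PREEMPT     0   /* cooperative switching only               */
    #define SYS_RTCLOCK     1   /* real-time clock tick available           */
    #define SYS_SLEEPQ      1   /* threads may sleep on tick counts         */
    #define SYS_PRIORITIES  0   /* all threads treated as equal priority    */
    #define SYS_MESSAGING   0   /* no asynchronous messages                 */
    #define SYS_QUANTUM     0   /* no round-robin time-out                  */
    #define SYS_PLIMIT      4   /* at most four threads                     */
    #define SYS_SLIMIT      2   /* two semaphore queues                     */
    #define SYS_DLINKEDQ    0   /* singly linked queues to save RAM         */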

The operating system provides thread semantics on the smaller and more difficult
microcontroller targets and it works well in freestanding environments typically
found in embedded applications.  A principal feature of this operating system is
its compile-time configurability.  When a feature is not enabled or selected,
the code and data that is associated with it is eliminated from the resulting
object code.  It's a prime goal that adding features to the operating system's
source code is fine to do, so long as there is no cost if an application doesn't
require them and deselects them in the compile-time configurations.  A minimum
operating system does NOT have a real-time clock, semaphores, messages, thread
priorities, sleeping threads, and so on.  It's just a cooperative switcher, just
able to keep the stacks separated (and that's about it.)
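
One way to picture how a deselected feature drops out of the object code
entirely is ordinary conditional compilation -- this is an invented
illustration, not the real source:

    /* With SYS_MESSAGING configured to 0, neither the message slot nor the
       messaging call makes it into the build.                              */
    struct tcb {
        void          *stack_ptr;
    #if SYS_MESSAGING
        unsigned       msg;         /* single per-thread message slot       */
        unsigned char  msg_full;
    #endif
    };

    #if SYS_MESSAGING
    int sys_send(struct tcb *to, unsigned msg);  /* absent when disabled    */
    #endif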

It supports both Harvard and von Neumann architectures and the kinds of memory
systems that common microcontrollers provide (such as read/write and read-only
for either code or data.)  For example, it is also an explicit design goal to
isolate those portions of the thread/process data structures which can be
arranged into read-only memory (flash, for example) from those portions which
require read/write access during operation.  And processes (threads) can be
defined at compile-time, not just at run-time.
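
A sketch of what that read-only/read-write split can look like (the structure
and field names here are invented, not taken from the actual source):

    struct thread_def {                 /* read-only: const, can sit in flash */
        void         (*entry)(void);
        void          *stack_base;      /* statically allocated stack         */
        unsigned       stack_size;
        unsigned char  priority;
    };

    struct thread_state {               /* read/write: must live in RAM       */
        void          *sp;              /* saved stack pointer                */
        unsigned       delay;           /* sleep-queue delta counter          */
        unsigned char  queue;           /* which queue the thread is on       */
    };

    static void blink_task(void) { for (;;) { /* toggle LED, sleep, ... */ } }
    static unsigned char blink_stack[128];

    /* A thread defined entirely at compile time, not created at run time.   */
    static const struct thread_def thread_defs[] = {
        { blink_task, blink_stack, sizeof blink_stack, 1 },
    };
    static struct thread_state thread_states[1];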

More data space is required for pre-emption support, of course.

The real-time clock serves two primary purposes in my O/S.  First, the real-time
clock can count down a delay so that threads in the sleep queue can be awakened
(moved to the ready queue) when their time delays expire.  You can have a sleep
queue in this O/S even if you don't use a real-time clock, but then there will
be no way to automatically count down time delays and thus appropriately move
such threads to the ready queue when their delay expires.  Second, the real-time
clock can time out the currently running thread if quantums are enabled.  This
won't do anything if preemption is disabled.  But with preemption, this will
cause the O/S to reschedule the current thread and, if other processes have the
same priority, to round robin share the CPU time.
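
Roughly, a tick handler doing those two jobs could be shaped like this -- an
invented sketch (not the actual code), with made-up array-based queues and the
SYS_PLIMIT value from the example configuration above:

    #define NO_THREAD  0xFF

    static unsigned char sleep_head = NO_THREAD;     /* head of delta queue   */
    static unsigned char next_in_sleepq[SYS_PLIMIT]; /* singly linked chain   */
    static unsigned      delta[SYS_PLIMIT];          /* ticks behind the one ahead */
    static unsigned      quantum;                    /* running thread's quantum   */

    static void make_ready(unsigned char t) { (void)t; /* append to ready queue */ }
    static void reschedule(void)            { /* pick the next ready thread */ }

    /* Timer interrupt handler, called once per clock tick.                  */
    void rtc_tick(void)
    {
        /* Purpose 1: count down the head of the sleep queue and wake it,
           plus any entries directly behind it whose delta is already zero.  */
        if (sleep_head != NO_THREAD && delta[sleep_head] > 0)
            delta[sleep_head]--;
        while (sleep_head != NO_THREAD && delta[sleep_head] == 0) {
            unsigned char t = sleep_head;
            sleep_head = next_in_sleepq[t];
            make_ready(t);
        }

        /* Purpose 2: time out the running thread's quantum.  Only with
           preemption enabled would this actually reschedule from here.      */
        if (quantum > 0 && --quantum == 0)
            reschedule();
    }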

Including real-time clock support does NOT necessarily imply preemption.  If
preemption is disabled but the real-time clock is used, this only means that
threads can move from the sleep queue to the ready queue on timed intervals and
that the current thread can have its quantum counted down.  But if preemption is
disabled, then that is all that happens.  No change in the current thread will
be considered until the current thread makes a system call of some kind,
directly or indirectly, that may reschedule.

A real-time clock is essentially a hardware interrupt event.  Disabling
preemption means only that this real-time clock event isn't permitted to cause
rescheduling.  The remaining actions continue.  Enabling preemption means that
rescheduling is permitted when the interrupt occurs and this implies that thread
motion from the sleep queue to the ready queue or else thread quantum time-out
can cause the current process to change, if there is an equal or higher priority
process ready to run.  [The real-time clock may also be used to keep track of
elapsed times or other similar purposes (wall clock time, up time, various
metrics, etc.)]

Quantums are used for round robin sharing of the CPU time -- if
preemption is enabled and the real-time clock is also enabled.  Without a
real-time clock the current thread quantum cannot time out and without
preemption enabled no switching to the next ready thread of equal priority can
occur (assuming no higher priority thread is ready, of course.)  Quantums in the
O/S can be enabled even if there is no preemption and even if there is no real
time clock event.  This parameter allocates RAM for a quantum value for the
current process.  If there is no real time clock, then the quantum isn't
automatically updated.  And if there is no preemption, then there is no
rescheduling when the quantum reaches zero.  But no conflict inherently arises
from the lack of either one of these associated facilities -- or from the lack
of both of them.

A sleep queue in this O/S is simply a 'delta' queue which schedules threads on
the basis of a relative number.  Usually, this number represents how many clock
ticks to wait -- especially when the real-time clock is enabled -- but it can
also be used for other purposes.  Threads are inserted in order of their
remaining delay, with the soonest at the queue head.  Others are added after
it, each holding only the number of clock ticks that remain once the thread
ahead of it wakes.
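
Continuing with the invented arrays from the tick sketch above, that insertion
rule might be sketched as:

    /* Put thread 't' to sleep for 'ticks' ticks: walk the queue, consuming
       the deltas of threads that wake sooner, and store only the remainder. */
    static void sleep_insert(unsigned char t, unsigned ticks)
    {
        unsigned char *link = &sleep_head;

        while (*link != NO_THREAD && ticks >= delta[*link]) {
            ticks -= delta[*link];           /* time already covered ahead   */
            link = &next_in_sleepq[*link];
        }

        delta[t] = ticks;                    /* remainder relative to ahead  */
        next_in_sleepq[t] = *link;
        if (*link != NO_THREAD)
            delta[*link] -= ticks;           /* the one behind is now relative to us */
        *link = t;
    }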

If the real-time clock is available, a counter for the top thread in the sleep
queue is decremented and, when it reaches zero, the top thread is moved from the
sleep queue to the ready thread queue.  If there are any threads immediately
following it in the sleep queue, which also have a zero counter, they are also
moved at this time.  With preemption enabled, this movement causes rescheduling.
Thus, moving higher priority sleeping threads to the ready process queue does
preempt the currently running thread.
    
If the real-time clock is not available, there is no timer interrupt and
therefore threads simply stay in the sleep queue for an indeterminate period of
time.  The only way they move from the sleep queue in such cases is if some
other thread wakes them or else there are no remaining threads available to run
-- which causes the O/S to move the head of the sleep queue to the ready queue
and to restart it.

But supporting a sleep queue without a real-time clock is unusual.  I suppose it
can be used in cooperative arrangements where some threads should run more
frequently than others (kind of a modified round robin scheme), using the
delta-time sleep queue as a way of achieving non-equal thread shuffling.

(If you are curious about how this may be achieved, consider the idea of four
processes where P1 should run 4 out of 8 switches, P2 should run 2 out of 8, P3
should run 1 out of 8, and P4 should run 1 out of 8.  If a delta sleep time for
P1 is given as 1, a delta time for P2 as 2, a delta time for P3 as 4 and a delta
time for P4 as 4 and then these processes use this associated value to sleep on
when they want to cooperatively switch away, then the arrangement will work as
desired.)

A sleep queue requires a counter for each process to support timing their motion
back to the ready process queue.  This does increase RAM requirements.

Obviously, enabling priorities lets the O/S organize the ready queue by
individual thread priority.  Disabling it means all threads have the same
priority.  The O/S doesn't hinder threads with equal priority -- threads with
the same priority are simply added into the ready process queue after all
threads of equal or higher priority (which yields round robin behavior.)

Priorities do add a priority value for each thread.  This increases the RAM
requirements.
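
A fuller version of the make_ready() stub from the tick sketch above, following
that "after all threads of equal or higher priority" rule (still invented code,
and it assumes a larger number means a higher priority):

    static unsigned char ready_head = NO_THREAD;
    static unsigned char next_in_readyq[SYS_PLIMIT];
    static unsigned char priority[SYS_PLIMIT];   /* larger value = higher priority */

    /* Insert 't' after every thread of equal or higher priority already in
       the ready queue; among equals this yields round robin behavior.       */
    static void make_ready(unsigned char t)
    {
        unsigned char *link = &ready_head;

        while (*link != NO_THREAD && priority[*link] >= priority[t])
            link = &next_in_readyq[*link];
        next_in_readyq[t] = *link;
        *link = t;
    }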

The semaphores (and messages) provide a simple method of thread coordination,
often used to synchronize their actions or cooperate in sharing common
resources.  Each semaphore consists of an integer count, conceptually, with
wait(s) calls decrementing the count and signal(s) calls incrementing it.  If
the count goes negative due to a wait(s) call, the thread is suspended or
delayed.  In that case, the next signal(s) call will release exactly one
suspended thread.  Semaphores are a synchronized method of general process
coordination, essentially requiring one wait(s) for each signal(s) call.  Each
semaphore queue requires a small amount of RAM for a queue head and tail node.
This again increases the RAM requirements.
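
The wait(s)/signal(s) rule as described might be sketched like this (invented
code; in a real kernel both calls would run with interrupts masked):

    struct sem {
        int            count;      /* negative count = number of waiters    */
        unsigned char  head, tail; /* queue of suspended threads            */
    };

    static void suspend_current_on(struct sem *s) { (void)s; /* block caller */ }
    static void release_one_from(struct sem *s)   { (void)s; /* wake one     */ }

    void sem_wait(struct sem *s)       /* wait(s): take one unit or block    */
    {
        if (--s->count < 0)
            suspend_current_on(s);
    }

    void sem_signal(struct sem *s)     /* signal(s): give one unit back      */
    {
        if (++s->count <= 0)
            release_one_from(s);       /* exactly one waiter is released     */
    }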

Different from the semaphores, the messages provide a method of unsynchronized
thread coordination.  They are particularly useful, in contrast to semaphores,
when a thread doesn't know how many messages it will receive, when they will be
sent, or which thread will send them.  These messages are posted directly to the
thread and are not of the type which are left at rendezvous points (since this
O/S supports very small RAM requirements.)  Threads do not block when sending
messages and only the first message sent to a thread is retained, if several are
sent to it before it can receive them.  Enabling this feature allocates room in
each process node to hold the latest message.  This once again increases the RAM
requirements.
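
A sketch of that single-slot, non-blocking behaviour, reusing the invented
thread numbering and the SYS_PLIMIT limit from the sketches above:

    struct mailbox {
        unsigned       msg;        /* room for exactly one message           */
        unsigned char  full;
    };

    static struct mailbox mbox[SYS_PLIMIT];

    /* Non-blocking send: if the receiver already holds an unread message,
       the new one is dropped (only the first message is retained).          */
    void msg_send(unsigned char to, unsigned msg)
    {
        if (!mbox[to].full) {
            mbox[to].msg  = msg;
            mbox[to].full = 1;
            /* a real kernel would also make 'to' ready here if it was
               blocked waiting for a message */
        }
    }

    /* Returns 1 and fills *msg if a message was pending, 0 otherwise.       */
    int msg_receive(unsigned char self, unsigned *msg)
    {
        if (!mbox[self].full)
            return 0;
        *msg = mbox[self].msg;
        mbox[self].full = 0;
        return 1;
    }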

The O/S queues may be configured as either singly linked lists or doubly linked
lists.  Enabling the doubly linked lists provides a faster response from the
operating system when moving threads from one queue to another.  But it does so
at the expense of an extra link in RAM for each queue node.  It is a space
versus speed option.  Normally, doubly linked lists are desired.  But if RAM is
very tight, disabling this option could help.
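
The trade-off amounts to one extra link field per node; sketched with the
SYS_DLINKEDQ parameter from above (the node layout itself is invented):

    /* One queue node per thread; the 'prev' link only exists in the doubly
       linked configuration, and that is where the extra RAM goes.           */
    struct qnode {
        unsigned char  next;
    #if SYS_DLINKEDQ
        unsigned char  prev;       /* lets a node unlink itself directly     */
    #endif
    };

With only the 'next' link, removing a node means first walking from the queue
head to find its predecessor, which is where the speed penalty of the singly
linked option comes from.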
 
