I don't use an RTOS because...

Who said anything about C? Real programmers can write assembly in any language :-)

Even if you're using C, a non-hosted environment doesn't guarantee you the existence of libc, and for lots of embedded stuff you just don't need that sort of thing. If you're coding to the bare metal, then the choice of libraries is entirely up to you. An RTOS is really just another library choice.

--
Andrew
Reply to
Andrew Reilly

:-))

That's what I meant to say :-) Or let me put it the other way: if someone trusts libc to be reliable, then why not trust the library containing the (RT)OS (especially if it comes in pure assembly source)?

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Use @epost.de instead !
Reply to
42Bastian Schick

The difference between C libraries and an RTOS is that most C library functions can be simply documented and understood. But giving overall control to an RTOS requires a more thorough understanding of the interface. It is usually more than a series of simple function calls. It is more like completely understanding all the ins and outs of overlapped I/O operations in Win32 programming.

-Robert Scott Ypsilanti, Michigan (Reply through this forum, not by direct e-mail to me, as automatic reply address is fake.)

Reply to
Robert Scott

99.999% of C library functions are irrelevant for small embedded systems, so what's to write?

Ian

--
Ian Bell
Reply to
Ian Bell

Until you can define what you want in more specific terms than "more complex", you'll just have to keep wondering.

Reply to
CBarn24050

Take a look at

"Practical Statecharts in C and C++: Quantum Programming for Embedded Systems", Samek, Miro

Regards,

Richard.


Reply to
Richard

Building a simple executive is quite straightforward and will handle a very large number of applications. One of its simplest forms is a time-triggered executive. Have a look at 'Patterns for Time-Triggered Embedded Systems' by Michael J. Pont for details.
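
For instance, a bare-bones time-triggered executive can look something like the sketch below. The task table, task names, and tick source are invented for illustration; it assumes a timer ISR sets a flag once per tick.

/* Minimal time-triggered cooperative executive (illustrative sketch). */
#include <stdint.h>

typedef struct {
    void (*run)(void);     /* task function                     */
    uint16_t period;       /* release the task every N ticks    */
    uint16_t countdown;    /* ticks remaining to next release   */
} task_t;

static void task_sample_adc(void)     { /* read sensor */ }
static void task_update_display(void) { /* refresh LCD */ }

static volatile uint8_t tick_flag;    /* set by the timer ISR */

static task_t tasks[] = {
    { task_sample_adc,      1,  1 },  /* every tick     */
    { task_update_display, 50, 50 },  /* every 50 ticks */
};
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

int main(void)
{
    uint8_t i;

    for (;;) {
        if (tick_flag) {
            tick_flag = 0;
            for (i = 0; i < NUM_TASKS; i++) {
                if (--tasks[i].countdown == 0) {
                    tasks[i].countdown = tasks[i].period;
                    tasks[i].run();   /* runs to completion, never preempted */
                }
            }
        }
        /* the idle loop could sleep the CPU until the next interrupt */
    }
}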

Ian

Reply to
Ian Bell

For the vast majority of embedded system control tasks you are likely not to really need an RTOS. Others have hinted that cooperative schedulers are simple to implement and can provide what is necessary for a wide range of applications. I will add that progressing to a pre-emptive scheduler is not that much more difficult either (maybe 4 or 5 pages of code rather than just one).

Alternatively, you can take a harder look at the application requirements and decide to distribute the problem over more processors (as one other poster has already indicated). This keeps each individual embedded controller simpler no matter how complex the application seemed to be. This sort of system factoring can be quite a big win. It may not suit every part of the application, but the parts it doesn't suit can be hived off into another networked unit that contains a suitable RTOS for that part of the application.

The wins in factoring the system across multiple platforms are that, with an adequately resilient interface specification for each unit, you can individually test the sub-functions in isolation to ensure they work before bringing them together. It also helps by allowing multiple small (surgical) teams to work towards the common overall system goal.

--
********************************************************************
Paul E. Bennett ....................
Forth based HIDECS Consultancy .....
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

Agreed.

I very occasionally use sprintf if there is an LCD but otherwise can't think of anything.

Mike Harding

Reply to
Mike Harding

Like everything in the world, the answers depend on specific circumstances.

In embedded systems with a single main purpose to them, the result without an O/S may very well be much better -- in terms of safety, maintenance, size of code and data, and pretty much any other useful measure.

As 'Steve at five trees' mentions, state machines are a common approach. For example, I used them in a system that handled three background communication tasks: (1) supporting functions to read and write a serial EEPROM; (2) supporting functions to send and receive commands, queries, and data between itself and an external DSP, where the DSP initiated all transfers; and (3) supporting functions to update serially loaded external DACs. To handle this, I used three separate software module files, one for each of these functions. A single timer was set to interrupt at one tick each 200us. On each tick, the timer event called three state-machine-driving functions, one in each of the three modules. Each driving function would give time to the current state, which would either remain in that state or transition to a new one. This allowed the primary purpose of measurement to take place while all these neatly delineated and modularized functional areas still operated quite well in the background.
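
Roughly, the structure looked like the sketch below; the module names, states, and the ISR name are invented here for illustration only.

/* Illustrative outline only -- not the actual source. */
#include <stdint.h>

typedef enum { EE_IDLE, EE_ADDR, EE_DATA } ee_state_t;
static ee_state_t ee_state = EE_IDLE;

void eeprom_tick(void)          /* module 1: serial EEPROM transfers */
{
    switch (ee_state) {
    case EE_IDLE: /* start a pending read/write, if any       */ break;
    case EE_ADDR: /* clock out address bits  */ ee_state = EE_DATA; break;
    case EE_DATA: /* clock data bits, finish */ ee_state = EE_IDLE; break;
    }
}

void dsp_tick(void) { /* module 2: command/data exchange with the DSP */ }
void dac_tick(void) { /* module 3: serial DAC updates                 */ }

void timer_isr(void)            /* one tick each 200us */
{
    eeprom_tick();
    dsp_tick();
    dac_tick();
}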

Now, if you don't like state machine organizations (if seeing a list of state functions, for example as one per state, bothers you and is hard to interpret or modify), then this may be "ugly." But I also keep a nicely-done state machine diagram for each that is easily referred to. And with it in hand, the code is quite easy to understand. Separate them, and yes there would be "some difficulty" in reconstructing the original diagram from the code. But not so much.

I use whatever makes sense, though. However, I usually do not use other operating systems unless it is a requirement of the application or the libraries in some operating system would really help out. Most of my applications have a wide variety of requirements, such as low cost, small footprint, low power consumption, safety-critical/medical, etc., so I don't usually have the luxury of just "tossing in an outside, 3rd party O/S." The operating system I've written is described in some detail below, in the postscript.

Jon

P.S. What I wrote for myself allows me to statically configure these:

SYS_PREEMPT
    This configuration parameter decides whether or not preemption is allowed. See the 'readme' for a detailed explanation of the implications.
        0   Cooperative
        1   Preemptive

SYS_RTCLOCK
    This configuration parameter decides if there will be a real-time clock. See the 'readme' for a detailed explanation of the implications.
        0   Disabled
        1   Enabled

SYS_SLEEPQ
    This configuration parameter decides if there is a delta-queue, usually used for processes sleeping on some time parameter.
        0   Disabled
        1   Enabled

SYS_PRIORITIES
    This configuration parameter decides if distinct process priorities are used in sorting the ready queue. If disabled, all processes in the ready queue will be treated as though they have equal priority.
        0   Disabled
        1   Enabled

SYS_MESSAGING
    This configuration parameter decides if asynchronous process messaging is supported.
        0   Disabled
        1   Enabled

SYS_QUANTUM
    This configuration parameter decides if there is a quantum associated with the current process. Quantums are usually used for round-robin sharing of the CPU time, but they do not serve much of a purpose unless preemption is enabled and the real-time clock is also enabled -- without a real-time clock the current thread quantum cannot time out, and without preemption enabled no switching to the next ready thread of equal priority can occur (assuming no higher priority thread is ready, of course.)
        0   Disabled/Unlimited
        >0  Number of clock ticks before the current process times out

SYS_PLIMIT
    This configuration parameter decides how many process slots are available.
        >0  Number of allocated thread slots (sets maximum number of threads)

SYS_SLIMIT
    This configuration parameter decides how many semaphore queues there are.
        0   Disabled (no semaphore support)
        >0  Number of allocated semaphore queues (sets maximum)

SYS_DLINKEDQ
    This configuration parameter decides how the processes are linked into the queues. Singly linked takes about 1/2 the RAM, but operates more slowly.
        0   Singly linked
        1   Doubly linked
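
As a rough illustration of how that kind of static configuration reads in practice (the header file and its comments below are invented; only the parameter names come from the list above):

/* sysconfig.h -- hypothetical example configuration for one application */
#define SYS_PREEMPT     0   /* cooperative switching only            */
#define SYS_RTCLOCK     1   /* a timer tick is available             */
#define SYS_SLEEPQ      1   /* threads may sleep on tick counts      */
#define SYS_PRIORITIES  0   /* all threads equal priority            */
#define SYS_MESSAGING   0   /* no asynchronous messages              */
#define SYS_QUANTUM     0   /* no round-robin time slice             */
#define SYS_PLIMIT      4   /* at most four threads                  */
#define SYS_SLIMIT      2   /* two semaphore queues                  */
#define SYS_DLINKEDQ    0   /* singly linked queues to save RAM      */

/* Elsewhere, deselected features compile away completely, e.g.: */
#if SYS_SLEEPQ
void sleep_ticks(unsigned ticks);
#endif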

The operating system provides thread semantics on the smaller and more difficult microcontroller targets, and it works well in the freestanding environments typically found in embedded applications. A principal feature of this operating system is its compile-time configurability. When a feature is not enabled or selected, the code and data associated with it are eliminated from the resulting object code. A prime goal is that features can be added to the operating system's source code so long as there is no cost to an application that doesn't require them and deselects them in the compile-time configuration. A minimum operating system does NOT have a real-time clock, semaphores, messages, thread priorities, sleeping threads, and so on. It's just a cooperative switcher, just able to keep the stacks separated (and that's about it.)

It supports both Harvard and von Neumann architectures and the kinds of memory systems that common microcontrollers provide (such as read/write and read-only for either code or data.) For example, it is also an explicit design goal to isolate those portions of the thread/process data structures which can be arranged into read-only memory (flash, for example) from those portions which require read/write access during operation. And processes (threads) can be defined at compile-time, not just at run-time.

More data space is required for pre-emption support, of course.

The real-time clock serves two primary purposes in my O/S. First, the real-time clock can count down a delay so that threads in the sleep queue can be awakened (moved to the ready queue) when their time delays expire. You can have a sleep queue in this O/S even if you don't use a real-time clock, but then there will be no way to automatically count down time delays and thus move such threads to the ready queue when their delay expires. Second, the real-time clock can time out the currently running thread if quantums are enabled. This won't do anything if preemption is disabled. But with preemption, this will cause the O/S to reschedule the current thread and, if other processes have the same priority, to round-robin share the CPU time.

Including real-time clock support does NOT necessarily imply preemption. If preemption is disabled but the real-time clock is used, this only means that threads can move from the sleep queue to the ready queue on timed intervals and that the current thread can have its quantum counted down. But if preemption is disabled, then that is all that happens. No change in the current thread will be considered until the current thread makes a system call of some kind, directly or indirectly, that may reschedule.

A real-time clock is essentially a hardware interrupt event. Disabling preemption means only that this real-time clock event isn't permitted to cause rescheduling. The remaining actions continue. Enabling preemption means that rescheduling is permitted when the interrupt occurs and this implies that thread motion from the sleep queue to the ready queue or else thread quantum time-out can cause the current process to change, if there is an equal or higher priority process ready to run. [The real-time clock may also be used to keep track of elapsed times or other similar purposes (wall clock time, up time, various metrics, etc.)]

Quantums are used for round-robin sharing of the CPU time -- if preemption is enabled and the real-time clock is also enabled. Without a real-time clock the current thread quantum cannot time out, and without preemption enabled no switching to the next ready thread of equal priority can occur (assuming no higher priority thread is ready, of course.) Quantums in the O/S can be enabled even if there is no preemption and even if there is no real-time clock event. This parameter allocates RAM for a quantum value for the current process. If there is no real-time clock, then the quantum isn't automatically updated. And if there is no preemption, then there is no rescheduling when the quantum reaches zero. But no conflict inherently arises from the lack of either one of these associated facilities -- or from the lack of both of them.
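
A sketch of how those options interact inside the clock tick is below. The function and field names are invented; only the SYS_* flags correspond to the configuration list above.

#include <stdint.h>

#define SYS_RTCLOCK  1
#define SYS_SLEEPQ   1
#define SYS_QUANTUM  1
#define SYS_PREEMPT  1

typedef struct thread { uint8_t quantum; } thread_t;

static thread_t thread_table[4];
static thread_t *current = &thread_table[0];

static void sleepq_tick(void) { /* decrement head of delta queue, wake on zero */ }
static void resched(void)     { /* switch to next ready thread of >= priority  */ }

#if SYS_RTCLOCK
void clock_tick(void)             /* called from the timer interrupt */
{
#if SYS_SLEEPQ
    sleepq_tick();
#endif
#if SYS_QUANTUM
    if (current->quantum > 0 && --current->quantum == 0) {
#if SYS_PREEMPT
        resched();                /* time slice expired: round-robin switch */
#else
        /* no preemption: the expired quantum is merely noted */
#endif
    }
#endif
}
#endif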

A sleep queue in this O/S is simply a 'delta' queue which schedules threads on the basis of a relative number. Usually, this number represents how many clock ticks to wait -- especially when the real-time clock is enabled -- but it can also be used for other purposes. Threads are inserted in order of their remaining delay, with the soonest at the queue head. Each later thread keeps only the number of clock ticks remaining after the thread ahead of it wakes.
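
A minimal sketch of such a delta-queue insertion is below (invented structure and function names, not the actual source):

#include <stddef.h>
#include <stdint.h>

typedef struct dnode {
    struct dnode *next;
    uint16_t      delta;   /* ticks remaining after the predecessor wakes */
} dnode_t;

static dnode_t *sleepq;    /* head of the queue sleeps soonest */

/* Insert 'n' so that it wakes 'ticks' clock ticks from now. */
void sleepq_insert(dnode_t *n, uint16_t ticks)
{
    dnode_t **pp = &sleepq;

    while (*pp != NULL && (*pp)->delta <= ticks) {
        ticks -= (*pp)->delta;       /* consume each predecessor's share */
        pp = &(*pp)->next;
    }
    n->delta = ticks;
    n->next  = *pp;
    if (*pp != NULL)
        (*pp)->delta -= ticks;       /* successor is now measured from us */
    *pp = n;
}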

If the real-time clock is available, a counter for the top thread in the sleep queue is decremented and, when it reaches zero, the top thread is moved from the sleep queue to the ready thread queue. If there are any threads immediately following it in the sleep queue, which also have a zero counter, they are also moved at this time. With preemption enabled, this movement causes rescheduling. Thus, moving higher priority sleeping threads to the ready process queue does preempt the currently running thread. If the real-time clock is not available, there is no timer interrupt and therefore threads simply stay in the sleep queue for an indeterminate period of time. The only way they move from the sleep queue in such cases is if some other thread wakes them or else there are no remaining threads available to run -- which causes the O/S to move the head of the sleep queue to the ready queue and to restart it.

But supporting a sleep queue without a real-time clock is unusual. I suppose it can be used in cooperative arrangements where some threads should run more frequently than others (kind of a modified round robin scheme), using the delta-time sleep queue as a way of achieving non-equal thread shuffling.

(If you are curious about how this may be achieved, consider the idea of four processes where P1 should run 4 out of 8 switches, P2 should run 2 out of 8, P3 should run 1 out of 8, and P4 should run 1 out of 8. If a delta sleep time for P1 is given as 1, a delta time for P2 as 2, a delta time for P3 as 4 and a delta time for P4 as 4 and then these processes use this associated value to sleep on when they want to cooperatively switch away, then the arrangement will work as desired.)

A sleep queue requires a counter for each process to support timing their motion back to the ready process queue. This does increase RAM requirements.

Obviously, enabling priorities lets the O/S organize the ready queue by individual thread priority. Disabling it means all threads have the same priority. The O/S doesn't hinder threads with equal priority -- threads with the same priority are simply added into the ready process queue after all threads of equal or higher priority (which yields round robin behavior.)

Priorities add a priority value for each thread. This increases the RAM requirements.

The semaphores (and messages) provide a simple method of thread coordination, often used to synchronize their actions or to cooperate in sharing common resources. Each semaphore conceptually consists of an integer count, with wait(s) calls decrementing the count and signal(s) calls incrementing it. If the count goes negative due to a wait(s) call, the thread is suspended or delayed. In that case, the next signal(s) call will release exactly one suspended thread. Semaphores are a synchronized method of general process coordination, essentially requiring one wait(s) for each signal(s) call. Each semaphore queue requires a small amount of RAM for a queue head and tail node. This again increases the RAM requirements.
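
Conceptually the two calls look something like this (a sketch only; the queue handling and interrupt masking are left as comments, and the names are invented):

typedef struct {
    int count;              /* negative => that many threads are waiting */
    /* the queue of suspended threads would hang off here */
} semaphore_t;

void sem_wait(semaphore_t *s)
{
    /* interrupts are disabled around this in the real implementation */
    if (--s->count < 0) {
        /* move the current thread from the ready queue onto this
           semaphore's queue, then reschedule */
    }
}

void sem_signal(semaphore_t *s)
{
    if (++s->count <= 0) {
        /* move exactly one suspended thread back to the ready queue */
    }
}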

Different from the semaphores, the messages provide a method of unsynchronized thread coordination. They are particularly useful, in contrast to semaphores, when a thread doesn't know how many messages it will receive, when they will be sent, or which thread will send them. These messages are posted directly to the thread and are not of the type which are left at rendezvous points (since this O/S supports very small RAM requirements.) Threads do not block when sending messages and only the first message sent to a thread is retained, if several are sent to it before it can receive them. Enabling this feature allocates room in each process node to hold the latest message. This once again increases the RAM requirements.
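
A sketch of that kind of single-slot, non-blocking mailbox is below; the names are invented and the retention policy (first message kept, later ones dropped) follows the description above.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uintptr_t msg;          /* one word of message storage per thread */
    bool      full;         /* is the slot occupied?                  */
} mailbox_t;

/* Sender never blocks; if the slot is already full the new message is
   simply dropped, so the first message sent is the one retained. */
bool msg_send(mailbox_t *mb, uintptr_t msg)
{
    if (mb->full)
        return false;
    mb->msg  = msg;
    mb->full = true;
    /* a real O/S would also move a waiting receiver to the ready queue */
    return true;
}

bool msg_receive(mailbox_t *mb, uintptr_t *out)
{
    if (!mb->full)
        return false;       /* or sleep until a message arrives */
    *out     = mb->msg;
    mb->full = false;
    return true;
}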

The O/S queues may be configured as either singly linked lists or doubly linked lists. Enabling the doubly linked lists provides a faster response from the operating system when moving threads from one queue to another. But it does so at the expense of an extra link in RAM for each queue node. It is a space versus speed option. Normally, doubly linked lists are desired. But if RAM is very tight, disabling this option could help.

Reply to
Jonathan Kirwan

Ermmm...

strcpy? strcmp? strlen? memcpy? memset? memcmp? size_t? offsetof? ato(i|l)? strto(l|ul)? NULL? assert? ctype.h? limits.h? stdint.h?

sprintf is usually too large for the applications I work on.

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

But it is worth noting that there is a significant lowering of reliability as soon as you move to a pre-emptive system.

Ian

--
Ian Bell
Reply to
Ian Bell

The above are extremely application dependent.

There is never enough memory in a small embedded system for these to be necessary.

Again extremely app dependent

This is a LIBRARY FUNCTION???

Most definitely not.

Nope.

Damn right it is.

Not to mention that A) most of the C library functions are so generalised they contribute only bloat, and B) it's bad enough finding bugs in your own code without having to sort them out in the vendor's C library.

Ian

--
Ian Bell
Reply to
Ian Bell

Well, the moment you do 32-bit arithmetic on an 8-bitter, most likely a library function is called...
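
For example, neither operation below compiles to inline code on a typical 8-bit part; a GCC-style toolchain usually emits calls to libgcc helpers instead (such as __mulsi3 for the multiply and __udivmodsi4 for the divide -- names vary by toolchain):

#include <stdint.h>

uint32_t scale(uint32_t raw)
{
    /* both the multiply and the divide typically become helper calls */
    return (raw * 1000UL) / 1024UL;
}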

Meindert

Reply to
Meindert Sprang

"Ian Bell" wrote

Why?

-- Nicholas O. Lindan, Cleveland, Ohio Consulting Engineer: Electronics; Informatics; Photonics. Remove spaces etc. to reply: n o lindan at net com dot com psst.. want to buy an f-stop timer? nolindan.com/da/fstop/

Reply to
Nicholas O. Lindan

I don't like them. They are great for systems where you will be running code that someone else wrote (example: desktop PC), but if you control all of the software a cooperative scheduler is usually better than a preemptive scheduler for real-time. Then again, perhaps I just haven't run into the kind of problem that they solve; we all have limited experience.

Reply to
Guy Macon

Very rare that I would use any of the above.

If I want to copy memory (for example) two pointers usually do the trick.

It "usually" is for me too - that why I said " very occasionally" but a number of compilers provide cut-down versions.

Mike Harding

Reply to
Mike Harding

Even with a simple foreground/background cooperative monitor, you would have to call the monitor from the background task at frequent intervals to check if the foreground task needs to be run (as a result of external events and interrupts). In some complex low priority background task, there is always the risk of failing to yield often enough.
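
In code terms the background task ends up sprinkled with calls like the sketch below; yield() here is just a placeholder name for whatever the monitor's entry point happens to be.

static void lengthy_calculation_step(void) { /* one slice of the background work */ }
static void yield(void) { /* enter the monitor: run any foreground work that is due */ }

void background_task(void)
{
    for (;;) {
        lengthy_calculation_step();
        yield();     /* forget this inside a deep inner loop and the
                        foreground work is starved */
    }
}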

However, in a pre-emptive system, you can put any complexity into the lowest priority task (just above NULL task or as the NULL task itself), without ever worrying about the execution times. This task could even be written by programmers without multitasking experience.

Of course, you have to be very careful how often and for how long the higher priority tasks execute, but as these are small and simple (or at least they should be, the higher the priority is), this is not a serious problem.

Paul

Reply to
Paul Keinanen

Yes, the maths library is the most useful part.

Reply to
CBarn24050

Distributing the problem over more processors simplifies each controller, but you still have to make the total system work. It seems you're just moving the complexity somewhere else (into the interface specification) and not really making the total system any less complex.

Reply to
joep
