one task per priority

Has anyone ever tried to implement a message-based embedded SW system, of large or medium complexity, with one execution thread per priority?

Reply to
Walt

On Tue, 13 Jan 2009 13:13:34 -0800 (PST), Walt wrote in comp.arch.embedded:

Not just tried, but succeeded.

With full preemptive priority based task switching. Restricting to no more than one thread per priority is in fact the fastest and smallest way to do it. Everything is bit mapped.
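
A minimal sketch of what "everything is bit mapped" can mean in practice (an illustration, not the poster's actual code, and a run-to-completion dispatcher rather than a full preemptive switcher): with at most one task per priority, the ready set fits in a single word, and picking the next task is one find-first-set operation.

#include <stdint.h>

typedef void (*task_fn)(void);

/* One task slot per priority; filled in at system init (not shown). */
static task_fn task_table[32];

/* Bit n set means the task at priority n is ready to run. */
static volatile uint32_t ready_bits;

void task_set_ready(unsigned prio)   { ready_bits |=  (1u << prio); }
void task_clear_ready(unsigned prio) { ready_bits &= ~(1u << prio); }

/* Dispatch the highest-priority ready task, if any. Priority 0 is the
 * most urgent, so the lowest set bit wins; __builtin_ctz (GCC/Clang)
 * finds it in a single instruction on most 32-bit targets. */
void schedule_once(void)
{
    uint32_t ready = ready_bits;
    if (ready == 0u)
        return;                              /* nothing ready: idle */
    unsigned prio = (unsigned)__builtin_ctz(ready);
    task_table[prio]();                      /* run to completion   */
}
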

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein

It's the best way to implement an RTOS when the size of the OS, the memory, or the processor is constrained in one way or another. The uC/OS RTOS (which people jokingly read as "mucos") is just one example. Technically it's termed bitmap execution.

Mohiuddin Khan Inamdar


Reply to
mohnkhan

I don't know what is meant by "large or medium complexity", but the whole question doesn't make sense. Fixed priorities are a bad concept; their only virtue is the minimal overhead. Messaging is good as a concept, but it comes with huge overhead. Is this a homework question?

Vladimir Vassilevsky DSP and Mixed Signal Consultant


Reply to
Vladimir Vassilevsky

No precise meaning, I just wanted to discard systems that are so simple there would be no reason to have two or more tasks running at the same priority.

And yet most tasks that I have seen in actual implementations never change their priorities (except perhaps implicitly due to priority inheritance). How/why are fixed priorities to be avoided?

I meant messaging in a general sense that could include event flags. Or am I misunderstanding your point?

No.

Reply to
Walt

Can you list or describe examples? Any open source examples?

I don't understand what you mean by this.

Reply to
Walt

On Tue, 13 Jan 2009 23:13:19 -0600, "Vladimir Vassilevsky" wrote in comp.arch.embedded:


Why, yes, fixed priorities might be a bad concept in the hands of the unskilled. On the other hand, they may be well implemented and extremely efficient.

I'm sure messaging has a huge overhead when incompetently programmed. And, of course, not so much when well programmed.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein

I don't get this either, unless Vladimir is thinking of a system where the priority is forever fixed. I take it as a given that priority inheritance is allowed so as to avoid priority inversion.

Events are degenerate, 2-state messages. They are also asynchronous by nature. The term "messaging" generally refers to a more expressive communication channel which may be either synchronous (rendezvous) or asynchronous (buffered).

However, messaging does not necessarily imply a lot of software overhead - it could be implemented in hardware.
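
As a rough sketch of the distinction being drawn here (illustrative only, not tied to any particular RTOS API): an event flag can only say "it happened", and repeated posts collapse into one, whereas a buffered message channel queues each post along with a payload. A real implementation would also need interrupt-safe access.

#include <stdbool.h>
#include <stdint.h>

/* An event flag: a degenerate, two-state "message". */
typedef volatile bool event_flag_t;

void event_post(event_flag_t *e) { *e = true; }

bool event_take(event_flag_t *e)
{
    bool was_set = *e;
    *e = false;
    return was_set;
}

/* An asynchronous (buffered) message channel: each message carries a
 * payload, and posts are queued rather than collapsed. */
typedef struct {
    uint8_t  kind;        /* message identifier    */
    uint32_t payload;     /* message-specific data */
} msg_t;

#define QUEUE_LEN 8

typedef struct {
    msg_t    buf[QUEUE_LEN];
    unsigned head, tail;  /* head == tail means empty */
} msg_queue_t;

bool msg_send(msg_queue_t *q, const msg_t *m)
{
    unsigned next = (q->head + 1u) % QUEUE_LEN;
    if (next == q->tail)
        return false;     /* queue full: caller decides what to do */
    q->buf[q->head] = *m;
    q->head = next;
    return true;
}

bool msg_recv(msg_queue_t *q, msg_t *out)
{
    if (q->tail == q->head)
        return false;     /* empty */
    *out = q->buf[q->tail];
    q->tail = (q->tail + 1u) % QUEUE_LEN;
    return true;
}
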

George

Reply to
George Neuner

Hmm.

Do you really have an application in mind in which you can assign a distinct priority to every execution thread? Does it really matter if two of X execution threads have the same priority? Or does the OS you use restrict you to assigning unique priorities? (It must have a lot of priority levels then, right?)

Ed

Reply to
Ed Prochak

If you have multiple threads/processes/tasks running at the same priority, you have to either run co-operative multitasking between them or use some kind of round robin with time slices between the threads at that priority level.
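
As a sketch of that round-robin arrangement (illustrative, with invented names, not Paul's code): the ready threads at one priority level can sit in a circular list, and either the timer tick or a voluntary yield rotates the list so each thread gets its slice.

#include <stddef.h>

typedef struct thread {
    struct thread *next;   /* circular list of ready threads at this level */
    void (*resume)(void);  /* placeholder for "switch to this thread"      */
} thread_t;

/* The thread currently owning the time slice at this priority level. */
static thread_t *ready_head;

/* Pre-emptive round robin: called from the periodic timer interrupt
 * to end the current slice. */
void tick_rotate(void)
{
    if (ready_head != NULL)
        ready_head = ready_head->next;
}

/* Co-operative alternative: the running thread gives up the rest of
 * its slice itself. */
void yield(void)
{
    tick_rotate();
}

/* Whoever owns the slice now gets the CPU. */
void dispatch_current(void)
{
    if (ready_head != NULL)
        ready_head->resume();
}
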

In my opinion, things are a lot simpler when each thread is at a separate fixed priority. If you need priority boosts to avoid priority inversion problems, there is usually something wrong with the system design.

Paul

Reply to
Paul Keinanen

What I was getting at (maybe I should have elaborated from the start) is this: if you have N (non-ISR) execution threads (in the general sense, not the thread-in-task sense) that run at priority X, should all N threads be combined into a single message/event-driven thread? One clear benefit is to potentially reduce the need for mutexes, since there are fewer threads. Another effect would be to force more extensive use of the non-blocking, message-driven paradigm. I'm guessing a system that used one thread per priority would be a good test of the desirability of ubiquitous use of the message-driven paradigm.

Having fewer message queues can also reduce complexity. If we know that message B is always sent after message A, then the processing of message B can depend on the effects of the processing of A, if A and B arrive at the same queue. There would be no need to introduce additional messages just to ensure correct ordering.
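
A minimal sketch of that structure (all names and types here are invented for illustration): the N logical activities that would have shared priority X become N run-to-completion handlers dispatched from a single queue-draining thread, so data shared between them needs no mutex, and FIFO order does the sequencing.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t  dest;      /* which logical activity the message is for */
    uint8_t  kind;      /* message type within that activity         */
    uint32_t payload;
} msg_t;

enum { DEST_MOTOR, DEST_COMMS, DEST_UI };

/* Blocking receive from the thread's single queue; the underlying
 * RTOS primitive is omitted here. */
extern void queue_get(msg_t *out);

/* The former "tasks", now plain non-blocking handlers. */
static void motor_handler(const msg_t *m) { (void)m; /* ... */ }
static void comms_handler(const msg_t *m) { (void)m; /* ... */ }
static void ui_handler(const msg_t *m)    { (void)m; /* ... */ }

/* The one thread that runs at priority X. Because the queue is FIFO,
 * a message enqueued after A is always handled after A's handler has
 * finished. */
void priority_x_thread(void)
{
    msg_t m;
    for (;;) {
        queue_get(&m);
        switch (m.dest) {
        case DEST_MOTOR: motor_handler(&m); break;
        case DEST_COMMS: comms_handler(&m); break;
        case DEST_UI:    ui_handler(&m);    break;
        default:         /* unknown destination: drop or log */ break;
        }
    }
}
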

Reply to
Walt

That's nonsense, unless all of the threads are running polling loops (yuk). Usually, threads should be blocked waiting on I/O or on communications from other threads. If you have two threads at the same priority, then it doesn't matter which one runs first should they both become ready at the same time.

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"So often times it happens, that we live our lives in chains
  and we never even know we have the key."
"Already Gone" by Jack Tempchin (recorded by The Eagles)

The Beatles were wrong: 1 & 1 & 1 is 1
Reply to
Michael N. Moran

I agree. Round-robinning is most applicable to multi-user operating systems, for sharing a CPU by multiple, independent applications. It's rarely if ever useful for an embedded CPU with a single fixed application.

Reply to
Walt


Okay, I think I see your point here. Interesting.

Wait, either "B is always sent after message A" or it's not. There should be no need to "introduce additional messages just to ensure correct ordering" if the applications are coded correctly.

IOW, the message protocol either:
* requires the sequence A then B, otherwise error, or
* allows A and B to arrive in any order.

Perhaps you were thinking of the error case where the threads must exchange a series of messages to resync?

Ed

Reply to
Ed Prochak



Suppose it's the Battleship embedded system. The "enemy seen" event occurs, and all the shells are down below in the magazine. So processing the "enemy seen" event generates the "bring up shell" and the "fire shell" events. If "bring up shell" and "fire shell" are queued in the same queue, you don't have to worry about trying to fire a shell that's still in the magazine. And you don't have to make it the job of the "bring up shell" processing to decide whether the shell should be immediately fired or not. While there are many obvious nitpicks with this example, I hope it communicates the general idea.
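
Purely as an illustration of that idea (all names invented, not a real fire-control design): the "enemy seen" handler posts both follow-up events to the same FIFO queue, so "fire shell" can never be processed before "bring up shell" has completed, and no extra "shell is up" message is needed just to sequence them.

#include <stdint.h>

typedef enum { EV_ENEMY_SEEN, EV_BRING_UP_SHELL, EV_FIRE_SHELL } event_t;

/* FIFO post to the single event queue; implementation omitted. */
extern void queue_post(event_t e);

static void on_enemy_seen(void)
{
    /* Both follow-ups go into the same FIFO queue, so ordering is
     * guaranteed by the queue itself. */
    queue_post(EV_BRING_UP_SHELL);
    queue_post(EV_FIRE_SHELL);
}

static void on_bring_up_shell(void) { /* hoist a shell from the magazine */ }
static void on_fire_shell(void)     { /* shell is known to be up: fire   */ }

void dispatch(event_t e)
{
    switch (e) {
    case EV_ENEMY_SEEN:     on_enemy_seen();     break;
    case EV_BRING_UP_SHELL: on_bring_up_shell(); break;
    case EV_FIRE_SHELL:     on_fire_shell();     break;
    }
}
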

Reply to
Walt

Unfortunately, there are a lot of operating systems with a very limited number of available (or usable) priority levels. If you only have 7, 16, or even 32 suitable priority levels to assign to a few dozen threads, you can end up in a situation in which you have to assign multiple threads to the same priority level, even if they logically would require separate priority levels.

In such situations, those threads that should have a lower priority (but must use the same priority due to the lack of usable levels), and which usually also run longer at a time, must use some co-operative tactics, e.g. yielding at regular intervals to allow a more urgent thread on the same priority level to run.

An operating system should have a sufficient number of priorities that each thread can be assigned a separate priority. I have never had problems with 256 priority levels, even if some levels are preassigned to other uses.

A very simple RT kernel can support an unlimited number of priorities (or, in effect, one priority per thread) by using a simple static task list, which is scanned linearly each time the scheduler is activated.
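
One way to read that description (a sketch, assuming a run-to-completion task model, not Paul's kernel): the tasks live in a static array ordered from highest to lowest priority, and the scheduler simply takes the first ready entry on each pass.

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    bool (*ready)(void);   /* returns true when the task has work */
    void (*run)(void);     /* run-to-completion body              */
} task_t;

/* The array order *is* the priority order: index 0 is most urgent.
 * Defined elsewhere, e.g. in an application-specific table. */
extern const task_t task_list[];
extern const size_t task_count;

void scheduler(void)
{
    for (;;) {
        size_t i;
        for (i = 0; i < task_count; i++) {
            if (task_list[i].ready()) {
                task_list[i].run();
                break;          /* restart the scan so higher-priority
                                   work is always considered first */
            }
        }
        /* If nothing was ready, this is where an idle/sleep hook goes. */
    }
}
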

Paul

Reply to
Paul Keinanen
