Task priorities in non-strictly-real-time systems

In some problem domains, preemptive scheduling probably causes more problems than it solves. AFAIK, cooperative multitasking can be very close to fully deterministic, with interrupts being the only part that's not quite deterministic.

--
Les Cargill

I could show anyone how in an afternoon. So I take your statement as being "only 1% of 1% have been forced to take that afternoon to learn it."

I should qualify that - I could show anyone working on a classic architecture how. With multilevel caches and certain sorts of MMUs, there may be more to it.

I'm thinking the WindRiver drivers course was about one week, which should about cover everything conceptually.

I seriously doubt Rust represents some quantum leap here.

And that's about an afternoon, really. Not so much the barriers and bothering with lock-free. That may take a little more.

--
Les Cargill

Mailboxes are just semaphores with extra steps :) When I've written mailboxes for use in user space, they usually use a semaphore.

In kernel space, you're already under a "semaphore" ( but still subject to asynchronous interrupts ). This of course varies by O/S....
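A minimal sketch of the "semaphore with extra steps" idea: a user-space mailbox built from a ring buffer, a mutex for the indices, and a counting semaphore tracking pending messages. POSIX primitives assumed; the `mbox_*` names and fixed depth are illustrative, not any particular O/S API.

```c
#include <pthread.h>
#include <semaphore.h>

#define MBOX_DEPTH 8

typedef struct {
    int             slots[MBOX_DEPTH];
    unsigned        head, tail;    /* head = next read, tail = next write */
    pthread_mutex_t lock;          /* protects the ring indices           */
    sem_t           pending;       /* counts messages available to read   */
} mbox_t;

void mbox_init(mbox_t *m)
{
    m->head = m->tail = 0;
    pthread_mutex_init(&m->lock, NULL);
    sem_init(&m->pending, 0, 0);   /* no messages yet */
}

void mbox_post(mbox_t *m, int msg) /* sketch: assumes the box is not full */
{
    pthread_mutex_lock(&m->lock);
    m->slots[m->tail] = msg;
    m->tail = (m->tail + 1) % MBOX_DEPTH;
    pthread_mutex_unlock(&m->lock);
    sem_post(&m->pending);         /* wake one waiting reader */
}

int mbox_pend(mbox_t *m)           /* blocks until a message arrives */
{
    int msg;
    sem_wait(&m->pending);
    pthread_mutex_lock(&m->lock);
    msg = m->slots[m->head];
    m->head = (m->head + 1) % MBOX_DEPTH;
    pthread_mutex_unlock(&m->lock);
    return msg;
}
```

The semaphore does the blocking; the "extra steps" are just the buffer bookkeeping around it.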

Erlang is a fine system. The (arguably) best thing it provides is the "actor pattern", which applies independent of language choice.

--
Les Cargill

The question becomes - how important is determinism in your system? IMO, this can be expressed in economic terms - more determinism means fewer times the phone rings.

--
Les Cargill

Behold a 0.99%-er!

:)

Clifford Heath

I think that makes it sound harder than it is. Avoid directly shared variables where you can, using whatever message-passing tools the language or RTOS provides; where you can't, wrap every operation that accesses the shared data in a mutex. That solves most problems.

What remains is mainly the skill and experience to design the shared data structures so that the mutex-protected operations are short and snappy without introducing polling, race conditions, deadlocks or starvation. But that polish is needed only for systems where processing resources are tight, which is not often the case today.

I've been implementing pre-emptive, priority-driven real-time systems, off and on, since the mid-80's and never has the phone rung because of the non-determinism.

Yes, many customers write requirements saying that they want "simple, deterministic scheduling", and then write other requirements for which the only clean solution is a pre-emptive system.

--
Niklas Holsti 
Tidorum Ltd 
niklas.holsti@tidorum.fi

Preemptive scheduling solves a lot of serious issues when there are significant real-time requirements: without it, every task needs to at least check for a possible task switch often enough to allow the tight real-time operations to complete on time. Yes, if an operation can be done COMPLETELY in the hardware ISR, then other operations don't need to worry about it, as it is just an interrupt. But it isn't that uncommon for these sorts of operations to need resources that mean they can't just complete in an ISR.

Which is better for a given set of tasks is very dependent on those tasks, and on the skill set of the programmer(s). I tend to find that for the problems I personally run into, preemption works well.

Richard Damon

We can larf, but I think there's less to serialization than is made of it.

--
Les Cargill

I don't know how to square the two things being said in that sentence fragment. Preemptive is inherently less deterministic than cooperative.

Yes. You need to conform to some granularity in time.

Not so much...

This isn't about interrupts; it's about chunking the mainline processing into phrases. After each phrase the thread can block.
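A sketch of what "chunking the mainline into phrases" can look like: the task is a small state machine that does one phase of work per call and then returns, giving the main loop (or a cooperative scheduler) a chance to run something else. The phase names are invented for illustration.

```c
typedef enum { PH_READ, PH_PROCESS, PH_WRITE, PH_DONE } phase_t;

typedef struct {
    phase_t phase;
    int     work_done;   /* stands in for real per-phase work */
} task_t;

/* Run one phrase, then return to the caller; called repeatedly from
 * the main loop.  Returns 1 while there is more to do, 0 when done.
 * The points between phases are where the task may block or yield. */
int task_step(task_t *t)
{
    switch (t->phase) {
    case PH_READ:    t->work_done++; t->phase = PH_PROCESS; return 1;
    case PH_PROCESS: t->work_done++; t->phase = PH_WRITE;   return 1;
    case PH_WRITE:   t->work_done++; t->phase = PH_DONE;    return 1;
    case PH_DONE:    return 0;
    }
    return 0;
}
```

The granularity-in-time constraint mentioned above falls out of this structure: the longest single phase bounds how long anything else has to wait.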

Preemption is often the default state now, and people simply get used to it.

--
Les Cargill

Serialization using one giant lock isn't too hard - that's how people write code using interrupts. But it's a very simplified case, and the problem is to use such simple locking to construct correct algorithms at a higher level.

Many apparently straightforward algorithms have dire pathologies when the atomic operations can get interspersed at will. In many cases the bugs will not ever show up during testing. And *that* is the real problem. Such code must be *correct by construction*, but people generally don't work that way.
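The classic example of such a pathology is the lost update: a read-modify-write that looks atomic in source code is really several steps, and preemption can interleave them. In this sketch the interleaving is forced by hand to make the bug deterministic; in a real system it would strike only occasionally, which is exactly why it evades testing.

```c
int shared = 0;

/* Each "task" intends to do: tmp = shared; tmp++; shared = tmp;
 * Here the two tasks' steps are interleaved the way a preemption
 * at the wrong moment would interleave them. */
int interleaved_increments(void)
{
    int tmp_a, tmp_b;
    tmp_a = shared;      /* task A reads 0               */
    tmp_b = shared;      /* task B preempts, reads 0     */
    tmp_b = tmp_b + 1;
    shared = tmp_b;      /* B writes 1                   */
    tmp_a = tmp_a + 1;
    shared = tmp_a;      /* A writes 1: B's update lost  */
    return shared;       /* 1, not the expected 2        */
}
```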

Clifford Heath.


Well said, Clifford, by the way.

I really feel it's just a matter of people stretching metaphors beyond the breaking point. Most of the example cases I have seen to illustrate these pathologies are rather contrived.

Most of these are due to ... reversing dependencies within a logic structure, not abuse of constructs. And yes, I have trouble reading that sentence too.

I also don't want to appear too blasé about things like cache lines, memory fences and all that. I just mean basic semaphores.

Because it can't very well be "at will". Aren't semaphores like the escapement mechanism in a regulator clock more than anything else?

They say "Not now."

I will not disagree; it doesn't mean I have to like that state of affairs. To an extent the very word "correct" seems to inspire helplessness.

--
Les Cargill

Any book on operating systems will have a section on synchronization. The discussions typically revolve around the "dining philosophers" problem and/or the "producer-consumer / bounded buffer" problem:

formatting link
formatting link
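For reference, the textbook producer-consumer / bounded-buffer solution uses two counting semaphores: one counting free slots, one counting filled slots, plus a mutex for the indices. A POSIX sketch (names and buffer size are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 4

typedef struct {
    int             buf[BUF_SIZE];
    unsigned        in, out;
    sem_t           empty;      /* free slots remaining   */
    sem_t           full;       /* items ready to consume */
    pthread_mutex_t lock;       /* protects in/out        */
} bounded_buf_t;

void bb_init(bounded_buf_t *b)
{
    b->in = b->out = 0;
    sem_init(&b->empty, 0, BUF_SIZE);
    sem_init(&b->full, 0, 0);
    pthread_mutex_init(&b->lock, NULL);
}

void bb_produce(bounded_buf_t *b, int item)
{
    sem_wait(&b->empty);        /* block while buffer is full */
    pthread_mutex_lock(&b->lock);
    b->buf[b->in] = item;
    b->in = (b->in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&b->lock);
    sem_post(&b->full);
}

int bb_consume(bounded_buf_t *b)
{
    int item;
    sem_wait(&b->full);         /* block while buffer is empty */
    pthread_mutex_lock(&b->lock);
    item = b->buf[b->out];
    b->out = (b->out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&b->lock);
    sem_post(&b->empty);
    return item;
}
```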

Understanding when to use synchronization generally is the easy part. The hard part is that operating systems vary considerably in the synchronization primitives they provide, what those primitives are called, and how those primitives operate.

The term "lock" is non-specific: some systems may have something that actually is called a "lock", but generally the term can refer to any non-message based synchronization mechanism.

The weight and semantics of the primitive may be important: is it a user-space or a kernel primitive? Does the task wait (sleep) if the lock is not available? When the lock does become available, does it wake all waiting tasks or only one? Is the wait queue strictly FIFO or is it by task priority?

Then there is the question of reentrancy: can you take a "lock" multiple times? And if so, do you have to release it an equal number of times? To both questions, the answer is implementation dependent.

A "mutex" and a "semaphore" conceptually are different in that a semaphore is able to count and therefore it can enable some number of concurrent accesses (perhaps to a replicated service). However, a mutex is a binary yes/no primitive. But a mutex can be emulated by a semaphore with its counter initialized to 1, so some systems provide only semaphores. And some systems provide what really are semaphores but call them mutexes.
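The emulation is trivial, which is why some systems bother with only one primitive. A sketch using POSIX counting semaphores (the guarded counter is just for illustration):

```c
#include <semaphore.h>

sem_t bin_lock;
int   protected_value;

void bin_lock_init(void)
{
    sem_init(&bin_lock, 0, 1);   /* count 1 = at most one holder */
}

int guarded_increment(void)
{
    sem_wait(&bin_lock);         /* count 1 -> 0: "take the mutex" */
    protected_value++;
    sem_post(&bin_lock);         /* count 0 -> 1: "release"        */
    return protected_value;
}
```

Note the usual caveat: unlike a true mutex, nothing stops a different task from doing the `sem_post`, and there is no ownership to base priority inheritance on.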

Basically you can't rely on the name of anything to understand its semantics - you really have to study the system(s) you are writing code for. Particularly if you switch between different operating systems.

YMMV, George


Tony Hoare's book "Communicating Sequential Processes" is online at:

formatting link

Note: the 2015 date on the book refers to the online PDF - the material in the book was published in 1985.

It will teach you everything you need to know about how to use message passing to solve synchronization and coordination problems. It won't teach you about your specific operating system's messaging facilities.

George

George Neuner

George

It's gems like this that keep me browsing the newsgroups...

Thanks for the reference to Hoare's book. I've just skimmed it now, and will do some more detailed reading in the coming weeks (and months)...

Regards,

Mark

gtwrek
