Task priorities in non strictly real-time systems

I have always worked on non-real-time systems, i.e. systems where the
reaction to an event should occur within a reasonable time. For example,
when the user presses a button, a motor should start rotating. Whether the
motor starts after 100us or after 100ms is not so important.

I have never used an RTOS, so the architecture of my firmware is based on
the "superloop" technique (background tasks running in a loop, plus interrupts).

while(1) {
   task1();
   task2();
   ...
}

None of the tasks ever blocks. As a rule of thumb, I accept a blocking
time of at most 100us to 1ms. When a task would need to block for longer,
I implement it as a state machine to avoid blocking.
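
For example, a motor-control task might be written roughly like this
(a sketch; all the helper names are made up):

   void task2(void) {
      static enum { IDLE, STARTING, RUNNING } state = IDLE;

      switch (state) {
      case IDLE:
         if (button_pressed()) {    /* nothing to do until the button */
            motor_start();
            state = STARTING;
         }
         break;
      case STARTING:
         if (motor_at_speed())      /* poll; never wait here */
            state = RUNNING;
         break;
      case RUNNING:
         if (button_released()) {
            motor_stop();
            state = IDLE;
         }
         break;
      }
   }

Each call returns quickly, so the superloop keeps cycling through the
other tasks.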

The ISRs are very lightweight: they only set/get some flags or push/pop
a byte to/from FIFO queues.
With 32-bit MCUs (modern Cortex-M MCUs), I can update 32-bit variables
(mainly the system tick) in ISRs without worrying about the race
conditions that could occur on 8-bit MCUs when the background tasks
access the same variables.
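
For example (a sketch; the names are mine):

   static volatile uint32_t ticks;

   void SysTick_Handler(void) {   /* ISR side */
      ticks++;
   }

   uint32_t get_ticks(void) {     /* background side */
      return ticks;   /* one aligned 32-bit load: atomic on Cortex-M */
   }

On an 8-bit MCU the same read takes several instructions, so the ISR
could fire halfway through and the background task would see a torn value.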

In the past I have used this architecture successfully even in moderately
complex systems featuring Ethernet (lwIP), mbedTLS, USB and a touchscreen (emWin).

Honestly, I think this architecture is good enough for all non-real-time
systems, so I don't understand why one would use an RTOS in those cases.
However, I have to use an RTOS (FreeRTOS) for the next project, because it
is one of the requirements. It isn't a real-time system, but the RTOS is
required.

I think I can convert my architecture to the RTOS by creating a task for
each function I call in the superloop and then starting the OS
scheduler. However, now the task function must never return, so I can
write it in the following way:

void task1_main(void *pvParameters) {   /* FreeRTOS tasks take a void* */
   (void)pvParameters;
   while(1) {
     task1();
   }
}

task1() can be the *same* function as in the superloop architecture.

I can assign each task the same priority: in this case, FreeRTOS will  
use round-robin scheduling, giving all the tasks the same opportunity to  
run.
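
The startup code would then be something like this (a sketch; the stack
sizes and the priority value are arbitrary):

   int main(void) {
      xTaskCreate(task1_main, "task1", configMINIMAL_STACK_SIZE,
                  NULL, tskIDLE_PRIORITY + 1, NULL);
      xTaskCreate(task2_main, "task2", configMINIMAL_STACK_SIZE,
                  NULL, tskIDLE_PRIORITY + 1, NULL);   /* same priority */
      vTaskStartScheduler();   /* does not return if startup succeeds */
      for (;;);
   }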

Is it correct?

Re: Task priorities in non strictly real-time systems
On 03/01/2020 14:41, pozz wrote:

RTOSs have their advantages and disadvantages.  They can make it easier
to guarantee particular timing requirements for high priority tasks, but
harder for low priority tasks.  They can make it easier to write
individual tasks, but harder to write efficient inter-task data sharing.
They can make it easier to modularise and separate the code, but harder
to debug.

An RTOS is /not/ necessary for real-time coding.  Conversely, an RTOS
can be useful even when you don't need real-time guarantees.


You might be better off using cooperative scheduling and:

void task1_main(void *pvParameters) {
  (void)pvParameters;
  while(1) {
    task1();
    taskYIELD();
  }
}

With cooperative scheduling, you know exactly when the current task can
be changed - it can happen when /you/ want it to, due to a yield or a
blocking OS call.  With pre-emptive scheduling, you will have to go
through your existing code and make very sure that you have locks or
synchronisation in place for any shared resources or data.
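
(For reference, cooperative scheduling in FreeRTOS is a compile-time
choice in FreeRTOSConfig.h:

   #define configUSE_PREEMPTION   0   /* switch only on yield or block */

so the task code itself stays the same.)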

Re: Task priorities in non strictly real-time systems
Il 03/01/2020 15:19, David Brown ha scritto:

Yes, I agree upon everything.



You're right, cooperative scheduling is better if I want to reuse the
functions from the superloop architecture (which is itself a cooperative scheduler).


Re: Task priorities in non strictly real-time systems
pozz wrote:
<snip>

Preemptive scheduling probably causes more problems than it solves in
some problem domains. SFAIK, cooperative multitasking can be very close
to fully deterministic, with interrupts being the part that's not quite
deterministic.

--  
Les Cargill


Re: Task priorities in non strictly real-time systems
On 1/5/20 2:32 PM, Les Cargill wrote:

Preemptive scheduling solves a lot of serious issues when there are
significant Real-Time requirements, as without it, every task needs to
at least check for a possible task switch often enough to allow the
tight real-time operations to complete on time. Yes, if an operation
can be done COMPLETELY in the hardware ISR, then other operations don't
need to worry about it, as it's just an interrupt. But it isn't that
uncommon for these sorts of operations to need resources that mean they
can't just complete in an ISR.

Which is better for a given set of tasks is very dependent on those
tasks, and on the skill set of the programmer(s). I tend to find that for
the problems I personally run into, preemption works well.

Re: Task priorities in non strictly real-time systems
Richard Damon wrote:

I don't know how to square the two things being said in that sentence  
fragment. Preemptive is inherently less deterministic than cooperative.


Yes. You need to conform to some granularity in time.


Not so much...


This isn't about interrupts; it's about chunking the mainline processing  
into phrases. After each phrase the thread can block.


Preemption is often the default state now, and people simply get used to  
it.

--  
Les Cargill

Re: Task priorities in non strictly real-time systems
On 1/5/2020 12:32 PM, Les Cargill wrote:

Preemptive frameworks can be implemented in a variety of ways.
It need NOT mean that the processor can be pulled out from under
your feet at any "random" time.

Preemption happens whenever the scheduler is invoked.  In a system
with a time-driven scheduler, the possibility exists of the processor
being rescheduled at any time -- whenever the jiffy dictates.

However, you can also design preemptive frameworks where the scheduler
is NOT tied to the jiffy.  In those cases, preemption can only occur
when "something" that changes the state of the run queue transpires.
So, barring "events" signalled by an ISR, you can conceivably
execute code inside a single task for DAYS and never lose control of
the processor.

OTOH, you could end up losing control of the processor some epsilon
after acquiring it -- if you happen to do something that causes
the scheduler to run.  E.g., raising an event, sending a message,
changing the priority of some task, etc.  In each of these instances,
a preemptive framework will reexamine the candidates in the run queue
and possibly transfer control to some OTHER "task" that it deems
more deserving of the processor than yourself.

     process();   // something that takes a REALLY LONG time
     raise_event(PROCESSING_DONE);

In the above, process() can proceed undisturbed (subject to the
ISR caveat mentioned above), monopolizing the processor for as long
as it takes.  There will be no need for synchronization primitives
within process() -- because nothing else can access the resources
that it is using!

*If* a task "of higher priority" (ick) is ready and waiting for
the PROCESSING_DONE event, then the raise_event() call will result
in THAT task gaining control of the processor.  To the task
that had done this process()ing, the raise_event() call will just
seem to take a longer time than usual!

It's easy to see how a time-driven mechanism is added to such
a system:  you just treat the jiffy as a significant event
and let the scheduler reevaluate the run queue when it is
signaled.  I.e., every task in the run queue is effectively
waiting on the JIFFY_OCCURRED event.

(i.e., the jiffy becomes "just another source of events" that
can cause the run queue to be reexamined)

It's easy to see how you can get the same benefits of cooperative
multitasking with this preemptive approach without having to
litter the code with "yield()" invocations.  This leads to more
readable code AND avoids the race/synchronization issues that
time-driven preemption brings about.  The developer does have to
be aware that any OS call can result in a reschedule(), though!
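
In pseudo-C, the idea is simply this (all names illustrative, not any
particular O/S):

     void raise_event(event_t event) {
        wake_waiters(event);   /* may add tasks to the run queue */
        schedule();            /* reexamine the queue -- the caller
                                  may lose the processor right HERE! */
     }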

Re: Task priorities in non strictly real-time systems
Don Y wrote:


That seems to me to be incorrect. "Preemptive" means "the scheduler runs  
on the timer tick." I'd say "inherently".


I agree in that I think we have to hold the run queue discipline  
separate from whether the jiffy tick runs the run queue. But in my mind
at least, your "something" is inherently cooperative - a thread has
blocked - unless that "something" is the timer tick.

At this point, I should probably say that my opinion on this is
probably derived from the Peterson-Silberchatz book.


So how is "raise_event" not blocking? Er, rather, what value would that  
serve? It's nominally the same as a relinquish() operation on a  
semaphore. I see this as three cases:

- current task priority < another ready task - then the intention
   of priority is thwarted.

- priorities are equal. Meh - flip a coin, possibly biased away from
   the current task.

- Greater than - well, we're not gonna switch, are we?

So I think this all sums to "it's blocking."


Right; so it's "fractionally" blocking. Like dude, that's
totally cooperative multitasking :) The running thread did something to
run the ready queue.

Historically, this all goes back to the transition from "batch" to
"multiuser" in the Dark Ages - it was necessary to synthesize an event
to switch on. A timer was as good as anything.


Yep. However, I'm not prepared to call the absence of the jiffy
event preemptive.

In truth, it's kinda like a middle place between "cooperative big loop  
stuff" and a fully preemptive situation.


I really think the goal here is to chunk an operation into meaningful
... chunks, each of which is more or less "atomic", yielding between
them. I've never had that not work out for me. This goes back to things
on DOS with state machines, headending things like multiple serial ports.



Mmmmm.... I think you still have those when it comes to shared state.
That's more something to prove out locally. I will say - when I can, I
prefer to think in terms of "regulation", like a regulator clock
releasing things, rather than "grab(); process(); relinquish();".



It's been my experience that any O/S call should be assumed
to cause a running of the ready queue :)

--  
Les Cargill

Re: Task priorities in non strictly real-time systems
Hi Les,

On 1/5/2020 8:26 PM, Les Cargill wrote:

Then, by your definition, any system without a scheduler tied to
the jiffy is "cooperative"?  And, any system without a jiffy (at
all!) would also have to be considered cooperative?

How do you further refine systems wherein OS calls run the
scheduler if the run queue has been altered as opposed to
those that only run the scheduler when "yield()" is invoked?

Do you expect the current task to EVER lose control of the
processor in the following code fragment (in a cooperative
system)?

      process(...);
      raise_event(...);
      send_message(...);      // consider case of message queue "full"!
      receive_message(...);   // consider case of no message is pending!

If I disable the timer interrupt (e.g., assume timer fails!)
in a preemptive system, do you expect the processor to ever
transfer control to any other task (same code fragment)?


The "something" is not "yield()".  Should the developer be surprised by
the fact that the scheduler ran (in a cooperative system)?

If the developer tries to transmit a character via a UART and the
UART Tx register is presently full, should he expect his task to
lock up the processor while waiting for the UART ISR to (eventually)
make the Tx register "ready"?  (in a cooperative system?  in a
preemptive system with the timer IRQ disabled?)

[What does the putchar() code look like in each of these cases?
Do you *spin* waiting on the UART? ]

A more natural expectation (re: preemption) is that calling on the
"system" should be fair game for the system to preempt the current
task whenever the situation changes enough to alter the choice of
"most runnable" task.

A more natural expectation (re: cooperative) is that the absence
of an explicit "yield()" means the task retains control of the
processor.


In a cooperative environment (using my above definition), I
could raise dozens of events, sequentially, without fear of
any of them being acted upon /until I explicitly yield()-ed/.
I wouldn't worry about adding some sort of mechanism to
treat the series of raise()'s as an atomic operation -- because
my postponing of the yield() effectively guarantees this atomicity.

Effectively, this becomes:
      priority = get_priority();
      set_priority(MAX+1);
      raise_event(...);
      ...
      raise_event(...);
      set_priority(priority);
(i.e., running the scheduler in each raise() will still not
steal the processor away from the current task)


I'd wager that most folks using cooperative multitasking DON'T expect
a task switch to occur at any time other than yield().  And, have
probably not coded to guard against these possibilities.


So you'll agree that the scheduler can run at times OTHER
than the jiffy?  And, the user shouldn't be surprised that his
task has lost control of the processor between two lines of code,
neither of which was a "yield()"?


Another misnomer is "big loop == cooperative".

The "big loop" serves the function of (crude, naive, simplistic)
scheduler.  It's presence is not required for a cooperative system.
E.g., yield() could call a function that saves the current PC/state
on the current stack, examines a list of TCBs on the run queue,
selects the most runnable, restores the stack from that TCB,
restores the state for that task and "returns" to the PC resident
on that stack.

No timer.  No loop.  100% cooperative.

The earliest designs I worked on saved NO task state (besides PC),
supported different "task priorities" and didn't require any
special mechanisms to ensure the statements in a task() were
processed sequentially.  So:

     while(FOREVER) {
          task1();
          task2();
          task3();
     }

     task1() {
          foo = 0;
          yield();
          foo++;
          yield();
          foo++;
          yield();

          if (2 != foo) {
             panic();
          }

          <blah>
      }

would behave as you'd expect (i.e., no panic).  You can do this
on all sorts of super-tiny processors that don't seem like they'd
lend themselves to multitasking, otherwise.


If you look at many practical applications, you can see how
this almost naturally follows.  E.g., you update a FIFO
(head, tail, contents) and THEN do something that exposes
the scheduler -- instead of having to remember to lock the
scheduler out while you do SOME of the FIFO update, tinker
around with something unrelated, finish the balance of the
FIFO update and THEN unlock the scheduler.

Paraphrasing the above:
     task1() {
     not_done:
          if (data != available)
               return;
          grab_available_data();
          yield();

          process_grabbed_data();
          yield();

          if (data == complete) {
                // act on the processed data
          } else {
                goto not_done;
          }

          <blah>
      }

I.e., grab_available_data() likely involves talking to
a shared FIFO (the producer being some other process).
Note that there is no need for synchronization primitives
because grab_available_data() is written NOT to contain
any explicit yield()'s -- and the developer operates on the
assumption that he can do anything he wants without losing
control of the processor as long as he avoids yield()!

Likewise, while this task moves on (during its next
activation) to process_grabbed_data(), the producer
can have been working on making more data available
(in the aforementioned FIFO).

It's not hard to look at a "task" (job?) as consisting of
these incremental steps towards the desired solution.
The only contrivance is the presence of the "yield()"
statements; i.e., you could see how the code would
make sense "as is" with them elided... the series of
steps would likely remain the same!


Only if you allow operations on parts of that state that SHOULD
be updated together to be broken by the placement of a
yield() -- or other system call (which could effectively bring
about a preemption, in the preemptive case).


Isn't my above example exactly this?  With yield()'s interposed between
steps?


Should reading a character from a UART cause the scheduler to run?
(what if the UART doesn't have data pending?  Should the call cause
the system to spin -- effectively blocking ALL tasks?)

Should posting a datagram cause the system to spin until the
network is ready to XMIT it?  Should the network stack be implemented
in an ISR?

Where are the routines that one would consider "O/S calls" enumerated
(so I know if it's safe to call them without risk of a reschedule())?
Do all the debug() mechanisms exist within the O/S framework?  Or,
outside it?

This is why it's "safer" to assume that cooperative frees the
developer from thinking about losing the processor in all cases
other than an explicit yield().

And, that the scheduler in the cooperative case need not be a
"super loop"; you may not be able to predict the successor task
to yours when you yield!  (but, you can be assured that nothing
will steal the processor away from you until you relinquish it!)

And, why it's safer for preemptive to require the developer to
behave as if any O/S call can cause preemption.

And, that the timer may or may not be tied to the scheduler.
Your system can work in the absence of a timer -- and, the
absence of "yield()"!

Treat the domain of O/S characteristics as a multiaxis space
where each choice/implementation-option gives you another degree
of freedom.  It allows for a greater variety of approaches
without pigeonholing solutions into just a few naive stereotypes.

Exercise:  Try writing an application where the timer just provides
timing services but doesn't run the scheduler.  Or, where the timer
doesn't exist, at all!  The run-time dynamics of your solution will
change; but, the basic solution will likely remain the same.

Likewise, try writing cooperative solutions where the processor can't
be stolen away *without* a yield().  And, solutions where ANY operation
that might affect another task's readiness could potentially result
in a reschedule.  And, solutions where there's no "superloop" but,
rather, some other sort of scheduler.

Try writing with a single/shared stack.  Try writing with *no* saved
state.

You'll be amused at how LITTLE you need to actually solve many
problems.  Then, start to wonder why you're "expecting so much"!  :>

Re: Task priorities in non strictly real-time systems
Don Y wrote:

Hey Don. :)



There may be a jiffy because we always need a global free-running timer
from which other timers can be derived.


Absolutely.


Yep. There are a lot of options, but I'd expect each case there to
potentially reshuffle the queue.

"Cooperative" just means the thread must explicitly yield at some point.
That means you have to know which entry points in the O/S call stack
can yield and when.



Never. Perhaps it's incorrect, but by long habit, I have always assumed
this in my own work.


That is about context. There are three expected outcomes ( all IMO ):

- Increment an "oops, dropped send to UART" counter.
- "spinlock" and wait
- Go around and try again later.


Yep!


By what nature? :) I think that's mildly naive more than natural.


First, any mucking around with priority at all is at best risky.

Second, I still don't see what value is preserved by doing this.
Just yield and go on.


I suspect we've all seen purely round-robin systems, which conform to  
that expectation. That basically means "no priority".

But you have to yield some time. You are around the campfire, you Get  
the Stick, you can talk, you yield the stick.


Indeed.



Exactly.



but as you say - an explicit "yield()" sort of seems hacky. We've
all done it, but it's a sign that improvement is possible.


I think so.


What I am more familiar with is that the interrupt from a UART
causes an ISR to run, which then changes state in a buffer and signals
( possibly thru a semaphore ) that the buffer-consumer task may now be
eligible to run.

I think of three primary paradigms:
    - the VxWorks-ish three layer ( ISR, middle and "userspace" ) --
      see the sketch below
    - a Linux ioctl(..READ...)
    - fake multitasking where the ISR updates the buffer with
      interrupts off
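
In FreeRTOS-flavored C, the first paradigm looks roughly like this
(a sketch; the UART register and FIFO helpers are invented):

   static SemaphoreHandle_t rx_sem;   /* e.g. xSemaphoreCreateBinary() */

   void UART_IRQHandler(void) {
      BaseType_t woken = pdFALSE;
      fifo_push(&rx_fifo, UART_RXDATA);       /* minimal work in the ISR */
      xSemaphoreGiveFromISR(rx_sem, &woken);  /* signal the consumer */
      portYIELD_FROM_ISR(woken);              /* reschedule if it outranks us */
   }

   void consumer_task(void *pvParameters) {
      for (;;) {
         xSemaphoreTake(rx_sem, portMAX_DELAY);   /* block until signaled */
         while (!fifo_empty(&rx_fifo))
            handle_byte(fifo_pop(&rx_fifo));      /* process outside the ISR */
      }
   }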
    



No and no.


Those are open questions. It depends.


Possibly. That depends on the perception of the balance between the  
explicit and the implicit. No, really - it's more like that.


Again, I think it's more complex than that. Because of experiences, I  
still think of "preemptive/cooperative" as being more like "does the  
jiffy ISR run the ready queue?"

That's more concise to me. Whether ... nominally device drivers run the  
ready queue is a more nuanced decision, with a common answer being "oh  
yes."


Exactly.





Yep.



Isn't that funny? We seem to be perfectly driven to get in our own way  
in so many cases.

--  
Les Cargill

Re: Task priorities in non strictly real-time systems
Hi Les,

[much elided as this thread is consuming more time than I'm
prepared to spend on it...]

On 1/6/2020 6:37 AM, Les Cargill wrote:

What if you DON'T need any notion of time?

Build a box that has some flavor of communication port (with protocols
that do not rely on time -- serial, bytewide, etc.).  Accept messages
on those ports intended to query a datastore hosted in the device.
Parse incoming messages.  Act as commanded.  Issue replies.  Await
next message/request.

The content of the replies doesn't change based on any notion of time
(presumably, the client will wait until the reply is issued; OR, will
initiate another request as an indication that it is no longer willing
to continue waiting for the previous request's resolution).


I suspect many would be surprised if the processor "went away"
in the absence of any yield()s.


The whole point of NONpreemptive is that the developer is freed from
contemplating the "what ifs" of losing control of the processor -- because
those are exactly the situations where intermittent (and hard to test)
bugs creep into the system.
     "Hmmm, lets's see if the code works properly if I yank the
     processor away *here*!  OK, how about HERE?!  And, here?!!"

With a cooperative system, you are forced to consider how every
action that could, by its nature, "block" (e.g., character not
available at UART when you go to look for it) is handled.

E.g., instead of "getchar()", you'd be more inclined to write
a testchar() that returned a status and a grabchar() that ALWAYS
returned (something, even if no character available).
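
E.g. (a sketch; the FIFO helpers are invented):

     int testchar(void) {    /* status only; never blocks */
        return !fifo_empty(&rx_fifo);
     }

     int grabchar(void) {    /* ALWAYS returns; -1 if nothing waiting */
        return fifo_empty(&rx_fifo) ? -1 : fifo_pop(&rx_fifo);
     }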

Or, you would adopt the discipline of KNOWING that anything that could
potentially block WOULD explicitly yield.

E.g., I have a FSM implementation that I keep reusing.  I *know*
that it yield's after each iteration (i.e., the yield is built
into the FSM interpreter).  This is highly intuitive, once you've
coded under it -- you wouldn't expect the FSM to hog the CPU
hoping that something would come along to be processed!


Ans:  the developer, cognizant of HIS OBLIGATION not to hog the CPU,
crafts the putchar() implementation to satisfy this obligation in
a manner that will "fit" his coding style and the needs of the
application.
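
E.g., a well-behaved cooperative putchar() might look like this
(the UART register names are invented):

     void putchar(char c) {
        while (!(UART_STATUS & TX_EMPTY))
           yield();          /* relinquish instead of spinning */
        UART_TX = c;
     }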

E.g., when I use timers in cooperative systems, I rarely implement a
"wait_on_timer()" -- which would obviously have to contain a yield()
to be well-behaved.  Instead, I *test* the timer and decide what to do
in light of the result:  do I want to spend time working on something
else WHILE waiting for it to expire (e.g., maybe it is a timeout
that tells me when I should abort the current operation); or, do I want
to stop further progress on my algorithm UNTIL the timer expires
(in which case, I insert a yield() in the "timer not expired" branch).
This makes it very clear to me where my code is losing control of
the processor -- and where it's NOT!


Which operations should tolerate "buried" yield()'s?  If I call
on the math library to multiply two values, should I expect to lose
control of the processor?  (imagine it's handling 80 bit floats!
would you want it to monopolize the processor for all that time?
would the developer EXPECT the multiply to have a buried/concealed
yield??)

It's safer (IMO) to let the user decide when he's monopolized the
processor too much.  It might be more prudent for him to write
    yield();
    fmul();
    yield();
    fadd();
    yield();
    fmul();
than to bury the yield()'s in those floating point operations beyond
the control of the developer.

[How would the developer implement an atomic SET of FPU operations
if the operations internally relinquish the CPU?  Or, do you propose
ADDING a "ignore_yields()" to your preemptive system??  :>  ]


In the above, I've ensured that the highest priority task waiting
for ANY of these events runs next -- not necessarily the task
waiting on the first event raised!

And, why do I want to yield if I've got other things that I could be doing?


But, *you* decide when to pass the stick; imagine if you went to
blow your nose and someone ASSUMED control of the stick!  "Hey!!!  I
wasn't DONE YET!!!"


But, it need not!  All yield() says is you want to give up your
control of the processor.

      ...
      if (timer != expired)
          yield();

      do_something();

is a common coding technique that I use.  Everything up to the conditional
will be re-executed the next time I get control of the processor.  And,
this pattern will continue until the timer expires -- at which time, I
will advance to do_something ELSE.

Imagine the ... was:

wait_for_motor_to_home()
{
      if (limit == detected) {
         motor(OFF);
         return;
      }

      if (timer != expired)
          yield();

      motor(OFF);
      signal(ALARM);
      display("Limit switch failure in motor 3.  Shutting down!");
      shutdown();
}


No!  It's THE sign of cooperative multitasking.  YOU have control
over the processor once it's been given to you.  Seeing a yield()
in a *preemptive* implementation should have you asking "why did
the developer decide he didn't have anything else to do, here?
Is this task, possibly, incorrectly engineered (perhaps it should
be DONE at this point and the balance of its actions relegated to
some OTHER task that might be started on its completion)?"


See my FPU example.  Also, keep in mind that cooperative approaches TEND to
be favored in SMALL/resource-starved environments -- that can't afford
the resources of more supportive O/Ss.


I recall reading a <mumble> text that talked about this phenomenon.
You have to continually question your assumptions as you progress
towards a solution.

This is why I favor creating RIGID definitions for terms -- despite
what may be common practice (which is often wrong or subject to revision;
look at how "real time" has evolved from "non batch" or "real world").

One of my favorite amusements at parties is to set 3 beer bottles
(preferably long-neck) on the table -- along with 3 "butter/steak
knives" (or, chop sticks or any other similar item).  Then, task
folks with "balancing the three knives atop the three bottles".

Invariably, the solver arranges the bottles in an equilateral
triangle (depending on their level of sobriety) and then
arranges the knives to connect the mouths of the three bottles
(again, this can be amusing depending on sobriety!  :> )

After they congratulate themselves on this feat, I take away
one of the bottles:  "Balance the three knives atop the TWO
bottles".

[there are obvious solutions]

Remove another bottle!

Then, "balance the three knives atop the ONE bottle"!

[again, more solutions]

Then, the kicker:  the last solution ALSO satisfies the criteria that
I stated in the beginning -- as well as the first revision of that
criteria.  So, why didn't they just come up with THAT solution in
the first (and second and third!) place??  Why are they imposing
additional constraints on their solution that weren't part of the
problem specification?

[How much hardware and software gets included in designs that doesn't
NEED to be there?  If you need FADD, do you *really* need FSUB?  Or,
FDIV?  Do you even need FADD, at all??]

Re: Task priorities in non strictly real-time systems
Don Y wrote:
Fair enough - thanks for your thoughts.

<snip>

--  
Les Cargill

Re: Task priorities in non strictly real-time systems
Just a note that the generally accepted definition of a "Cooperative"
scheduling system, vs a "Preemptive" one is that a Cooperative system  
will only invoke the scheduler at defined points in the program, that  
generally includes most 'system calls', as opposed to preemptive, where  
the scheduler can be invoked at almost any point (except for limited  
critical sections).

A system that only runs the scheduler at explicit calls to it isn't the  
normal definition of a cooperative system, I would call that more of a  
manually scheduled system.

The advantage of a cooperative system is that, since scheduling
happens at discrete points, most of the asynchronous interaction issues
(races) go away (as it is very unlikely that the code will call the
system functions in the middle of such an operation).

The advantage of a preemptive system is that, while you need to be more  
careful of race conditions, low priority code doesn't need to worry  
about the scheduling requirements of the high priority code.

Re: Task priorities in non strictly real-time systems
Richard Damon wrote:

Right. And for path-dependent reasons, that *usually* means the timer  
tick ... thing. It, of course, doesn't have to.


It's all fun and games until you have interrupt service... :)


--  
Les Cargill

Re: Task priorities in non strictly real-time systems
On 1/7/20 2:38 AM, Les Cargill wrote:

It can be ANY of the various interrupts that the machine has, it could  
be the timer, or it could be a serial port, or any other device.

I find most of my scheduler invocations are a result of a device driver  
interrupt, and only a lesser number from the system timer.

If you REALLY are doing most of your rescheduling on timer ticks, then in
my experience you likely don't really need real-time performance.


ISRs should have a very limited focus in what they manipulate, so most
of the code shouldn't be touching anything that the ISR is going to
touch. In my opinion, if you are trying to 'peek' at the progress of an
interrupt-based operation, you're probably doing it wrong.


Re: Task priorities in non strictly real-time systems
On Sun, 5 Jan 2020 21:26:13 -0600, Les Cargill


Not exactly.  "Preemptive" really means only that the task is not
(entirely) in control of when a context switch can occur.  

Preemption does not have to be based on time, and Don suggested a
scenario where it is based on changes to the run queue.  

Since only the OS (via interrupt, I/O completion, etc.) or the running
program can do something that changes the run queue, the program does,
in effect, exert *some* control over when a context switch occurs.  If
no event happens that causes a change to the run queue - or events
that do happen leave the same task in control - that task could run
indefinitely, just as if it were highest priority in a time based
system.

YMMV,
George

Re: Task priorities in non strictly real-time systems
On 06/01/2020 04:26, Les Cargill wrote:

I agree that Don is wrong here - but you are wrong too!

"Pre-emptive" means that tasks can, in general, be pre-empted.  The
processor /can/ be pulled out from under them at any time.  Thus your
threads must be written in a way that the code works correctly even if
something else steals the processor time.

But pre-emptive does not require a timer tick, or any other time-based
scheduling.  The pre-emption can be triggered by other means, such as
non-timer interrupts.  (To be a "real time operating system", you need a
timing mechanism in control.)


Re: Task priorities in non strictly real-time systems
On 1/7/20 4:11 AM, David Brown wrote:

Real-Time does NOT need a timing mechanism in control. Real-Time means
that operations have a reasonably strong definition of a deadline for
when they need to get done, but many systems can be designed to meet that
without needing a clock/timer.

For example, a given device needs to have its request serviced within a  
specified time. I can design the system so that requirement is met by  
the known workload and priority given to the various tasks. Often the  
timer is only needed to detect that something is wrong, and I need to  
sacrifice some deadline to meet another, or abort a failed operation. A  
system with static priorities and run-the-highest-priority-ready-task
scheduling can be simpler in design (IF it can meet the
requirements).

Re: Task priorities in non strictly real-time systems
David Brown wrote:

:)


I'd really think that in practice, the timer tick would be
"first among equals" . I do have to admit that I have never really seen  
a case where there was preemption and no timer tick.

--  
Les Cargill


Re: Task priorities in non strictly real-time systems
wrote:


What is a jiffy? Is it the time between two clock interrupts, as in
Linux? What is so special about timer interrupts vs. other interrupts,
e.g. UART interrupts?


Here is an example of how a very simple pre-emptive RT kernel worked:

The original version was for a processor with only a few registers,
where the hardware saved all registers on the stack before the
interrupt service routine (ISR) was executed and restored them when the
return-from-interrupt instruction was executed.

When the system was started, the main program executed a subroutine
call for each task to be created. This subroutine allocated a local
stack for the task as well as a 3-byte task control block consisting of
a 1-byte task state and a 2-byte saved stack pointer. The task was then
started, and it performed a software interrupt, storing the original
registers for that task on its local stack.

After all tasks had been created in priority sequence, the main program
entered an eternal loop (the null task), possibly containing a
wait-for-interrupt instruction to reduce power consumption.

The 3-byte task control blocks were in fixed priority order (as
created) and at adjacent locations in memory.
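
In C terms, the control block was just (a sketch; field names are mine):

   struct tcb {
      uint8_t  state;      /* runnable, waiting, ... */
      uint16_t saved_sp;   /* stack pointer stored at suspension */
   };          /* packed to 3 bytes on the original machine */

   /* the scheduler scan: first runnable TCB in (priority) order wins */
   for (t = tcb_table; t < tcb_table + NUM_TASKS; t++)
      if (t->state == RUNNABLE)
         break;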

When a task was running on its local stack and some kind of hardware
interrupt occurred, the registers were saved on the local stack by the
hardware. The active stack pointer was then stored into the
task-specific saved SP in the task control block. The ISR was then
executed using the local stack, potentially altering the runnability of
some other dormant task.

Before performing a return from interrupt, the scheduler was executed.
The scheduler checked the task state byte of each created task
(requiring just 3 instructions per task). If no higher priority task
had become runnable, the scheduler performed a simple return from
interrupt, restoring the registers from the local stack, and the
suspended task was resumed.

However, if the ISR had made a higher priority task runnable, the
scheduler would load the stack pointer saved in that task's control
block and perform a return from interrupt, restoring the saved
registers from the newly activated local stack; the activated task
continued from where it had originally been suspended.

If a running task had no more work to do but wanted to wait for an
interrupt or a message from another task, it could execute a software
interrupt, saving its registers on the local stack and activating the
scheduler.

If no tasks were runnable, execution fell through into the null task
at the end of the main program.

On processors with more registers, where all registers weren't saved
automatically upon each interrupt, the scheduler had to check whether a
switch to a new task needed to be performed; if so, the additional
registers were pushed onto the old stack, and the additional registers
were loaded from the new stack before returning from interrupt. If no
task switch was required, no additional register saves/restores were
needed -- just the automatic hardware save/restore.

Thus, an RT kernel can be really simple.
  
