RTOS popularity

That's a pretty vague question.

What criteria are you looking at/for? Or, are you just fishing?

Are you planning on coding in an HLL? Or, ASM? You can achieve relatively more in an ASM environment than in an HLL, as you are freed of many "requirements" imposed by the HLL.

Do you really need an RTOS? Or, will an MTOS suffice? IME, the biggest benefit comes from the multitasking capability (which can be relatively inexpensive). Most folks don't truly appreciate what real-time constraints impose on an RTOS (and simply claim the MTOS is an RTOS -- when it often isn't).

Which deterministic guarantees are important? What sort of resources can you afford to expend on/in the OS? There are no free lunches so you have to decide what you are willing to pay/trade away for the benefits you want to accrue.

Will your task set be static? Or, will they "come and go", over time, in response to particular needs in your application?

Do you need a preemptive MTOS/RTOS? Or, will a cooperative scheduler work for you? How much do you trust your (present/future) developers to be good citizens? How much attention do you want them to have to pay to what they are doing in their code?

Will you be handling all of your memory needs statically? At design time? Or, do you expect the OS to provide some assistance in that regard?

What sort of communication mechanisms do you need? Shared memory? Monitors? FIFOs? Message passing? Or, are you just planning on developing ad hoc mechanisms, "as required"?

Likewise, what sort of synchronization primitives do you expect? Or, are all of your "tasks" completely independent of each other?

Do you need support for hardware devices ("drivers")? Or, will you develop that yourself and integrate it with the operating environment?

How much support do you require for the OS in your development environment? Do you expect the debugger to react to asynchronous changes in the execution environment? (How do you expect to address this for a preemptive/timesliced implementation?)

How well do you understand your application and the likely design of the constituent tasks therein? Can you already see the concurrency in your (planned) design? Or, are you just hoping to put mechanisms in place that you may (or may NOT!) use later?

How much effort are you willing to invest in this (assuming you aren't keen on investing monies)?

Just some issues to get you thinking about what you really *need* vs. what you would *like*...

Good luck!

Reply to
Don Y

The thing that generalizes is that "if system A depends on a larger/more complex suite of task priorities than system B, it is by design inferior to system B, which depends on a smaller set of priorities." This assumes both A and B more-or-less work.

It is a heuristic which defines an ordinal relationship that's quite useful in evaluating systems.

I have the conceit that we are clockmakers. When an interrupt fires or an event is gleaned from other input, this is like the escapement mechanism of a pendulum clock, which allows the system to deterministically move forward one notch.

If you don't build things in that manner, you will suffer.

*In this case*, it should be possible to build things such that no dependence on priority is needed at all. Exactly one task per system plane is eligible to run at a time.

This should provide as close to purely deterministic behavior as is possible.

Now, between the planes of a system, there may need to be a priority established.

I have the advantage(?) of having worked with ObjecTime, in which all "threads" could be run on bare metal with no O/S and only a minimal amount of fiddling with interfacing to the hardware to support the system - you might need a timer or so.

Filters should be completely deterministic for any given bank of filters for a stream of fixed bandwidth.

If we mean the same VNC, then it's not very good software; it's just available. But video is like that; the formats are insane and, on the other side, the video cards and drivers are even crazier.

So I am sympathetic to that particular plight.

Indeed; it's networking that drives much of what I am saying. You have to be able to drop things in a smooth and linear fashion.

And that's fine; I have no quarrel with user I/O device handling being at high priority.

Yeah, could be. I'm more familiar with the scheduler in Linux and various RTOS offerings anyway.

It's a sort of "trimpot" thing. Any trimpot you can get rid of is a good thing :) And I value determinism highly; multiple task priorities indicate low levels of determinism.

Right. I know video is always challenging.

--
Les Cargill
Reply to
Les Cargill

In the middle of the 1980's I was running a small department with about half a dozen programmers writing programs for the PDP-11/RSX-11 platform.

When getting a call for tenders, I first divided the problem so that each program would fit into the 64 KiB PDP-11 address space (possibly designing the overlay loading). After that, I split the functions into individual tasks, assigned a priority to each task, and sketched a general view of the messages between tasks.

After that, I made some assumptions about how long it would take to program each task, and wrote the offer. For some reason, the working hours in my offer were quite close to the actual hours after the project was shipped :-).

The only times that I have had to alter process priorities was when I had to split functionality into two or more processes.

Reply to
upsidedown

Hmmm, I think this is a bit oversimplified. Again, I understand you completely and probably share the same feelings about it. But what if system A manages to do its job with, say, half the silicon system B takes... :). If we add "all other things equal" to your rule, I accept it unreservedly, of course.

Well, yes, but things get messier when two or more interrupts arrive at the same time... IRQ latency is a key parameter in any design, really; I have gone to great lengths to ensure it does not vary no matter what the OS does (this includes page fixing; in DPS this is done with IRQs unmasked; basically DPS is a large OS with IRQ latencies close, if not equal, to bare metal).

But "deterministic" means a fixed delay - i.e. no latency really; this would be overkill to a point of being outright impractical in the vast majority of real life situations.

If I ever knew what "this case" was I have long forgotten. I entered the discussion when it became generic enough to be interesting... :-)

Filters are; overall conversion calculations are not, as they depend on the rate of incoming events (each to be converted). So this can vary a lot, from say 5-6% (no events, just filtering for event detection) to well above 50% (maximum rate).

The VNC server (like any other bit of software running on my machines) is written by me, so I'd say it is pretty good :). And it certainly was not available before I wrote it.

On the other side there are various VNC viewers, and most of them fit your description; the best is RealVNC for Windows, which works well for me. I mean really well, no complaints whatsoever.

RFB is a pretty sane format, I'd say, as long as one picks the right compression, which the viewer can do. In my case it is pretty much RLE; it works fine, and the compression does not cost a lot (up to 15% system time per VNC instance at 400 MHz; over a 100 Mbps cable one just does not notice this is over VNC).

Oh, and it only gets "better" digging down... :). The Ethernet controller on this SoC expects a smart DMA to do the buffering - which I had to implement as well: multiple circular receive areas, etc. Then this SDMA does plenty of other things ("tasks"...), like disk (ATA) to RAM, RAM to RAM (off-screen buffers to the "display" framebuffer), incoming 16-bit samples at several MSPS into a circular buffer, probably more. These "tasks" have fixed (8-bit, IIRC) priorities; setting these right was crucial. A pretty advanced thing, this SDMA; it is a shame Freescale stopped making it (programming it turned out too challenging for the typical customer today, I suppose; no other sane reason to do that).

Actually the video was the least challenging, IIRC (but then I am pretty good at it; I have long since lost count of the display controllers I have designed).

My general point is to agree with you that simplification is always good as long as it does not turn into oversimplification.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI
------------------------------------------------------

Reply to
Dimiter_Popoff

You should set the task priorities during the _design_phase_, not during the testing phase!

There might be situations in which it looks like task X requires higher priority than task Y, and vice versa. In that case, you redesign tasks X and Y so that the priority order becomes clear. Sometimes introducing a new task Z with high priority and very short execution time will help; then the priority of X and Y doesn't matter, or they have a natural priority order.

Of course, the priority system should have a sufficient number of available levels; things get hairy with only two levels (foreground/background) or only a few (7 in the early WinNT real-time priority class), but with 256 or a potentially infinite number of priorities I have never had problems assigning priorities.

BTW, the simplest way of having an unlimited number of priority levels is to create a task list in the same order as the tasks are declared at startup. At any significant system event, such as an interrupt or a system request, the task list is scanned and the first task in a runnable state is selected as the next task to execute.

Of course, this becomes costly with a few dozen or a few hundred tasks to be scanned after each interrupt, but using bit masks you can check runnability for 8/16/32/64 tasks at a time, depending on the processor instruction set.
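A minimal sketch of that bit-mask scan, assuming 32 tasks and a GCC/Clang-style count-trailing-zeros builtin (all names made up):

#include <stdint.h>

#define MAX_TASKS 32                     /* one bit per task */

typedef void (*task_fn)(void);

static task_fn  task_table[MAX_TASKS];   /* filled in declaration order */
static volatile uint32_t runnable;       /* bit n set => task n runnable */

/* Declaration order *is* the priority order: the lowest-numbered
   runnable task wins.  With a 32-bit mask the whole scan is one
   count-trailing-zeros instruction on most processors. */
static int pick_next_task(void)
{
    uint32_t r = runnable;
    if (r == 0)
        return -1;                       /* nothing to run: idle */
    return __builtin_ctz(r);             /* index of lowest set bit */
}

static void run_tasks(void)
{
    for (;;) {
        int t = pick_next_task();
        if (t >= 0)
            task_table[t]();             /* run to completion */
        /* else: wait for an interrupt to set a runnable bit */
    }
}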

Reply to
upsidedown

Yep - I never get everything typed right all the time :)

I don't sympathize with that point of view much. I think things are inherently event-driven and that everything is, ultimately, a state machine.

Hardware guys agree with me and their stuff works.

For two interrupts, I need a timeline with events drawn on it to understand the worst case and best case.

It's unusual for low latency to be of vital importance. If it is of vital importance, then it's worth making sure the hardware is up to it.

Ummm.... no, I think deterministic means "reflecting the delay (and delay variation) of the underlying physical processes of the hardware." It means additional software delay/delay variation is of low order of complexity - constant or linear.

:)

Ah; so you don't mean, say, Butterworth filters then. Sure; you have to be able to calculate and measure the worst case for event filters.

I seem to have latched onto the VNC video *player* there; apologies.

Ah; good. That's gotta be a tough nut to crack.

That sounds good, but I don't know enough to truly understand it :)

Huh. Yeah, he's a busy little guy then.

Yeah, this happens a lot - you get a half-finished hardware feature.

Always.

--
Les Cargill
Reply to
Les Cargill

"Except in a different sub-thread, you state that the phone call was about your possession of a confidential document that STM didn't think"

Indeed, STMicroelectronics did not think.

"you should have. That's a bit different than threatening to sue you for reporting a documentation error."

I reported an error which inspired STMicroelectronics to falsely claim that it supposedly could sue us, therefore STMicroelectronics wanted to sue us for reporting an error.

"I'm not saying that the phone call wasn't in error, but is it possible they were calling because they thought"

STMicroelectronics did not think.

"you didn't have the proper NDA in place to posess the document"

STMicroelectronics falsely claimed this.

"rather than because you reported a mistake?"

This intellectual property was disclosed to us therefore we noticed a mistake which it contained therefore we reported it therefore STMicroelectronics made an unfounded claim against us over this. Ta da.

Regards, Colin Paul de Gloucester

Reply to
Nicholas Collin Paul de Gloucester

On January 5th, 2016, Rickman sent:
|-----------------------------------------------------------------------------|
|"[. . .] So one guy jumped a gun and accused you of violating an             |
|agreement and you think that means the entire STM company should be          |
|shunned?"                                                                     |
|-----------------------------------------------------------------------------|

Paul Carpenter and Rickman drew unwarranted conclusions. There is a big difference between shunning, "I advise against STMicroelectronics", and "If you have a product from STMicroelectronics which works and which you are happy with, then be happy, but I would not blindly trust a datasheet." I do advise against STMicroelectronics - this NDA event was not an isolated example of the lack of quality control at STMicroelectronics which I found, and it is not one of the examples which I cited in a paper.

|----------------------------------------------------------------------------|
|"You still haven't provided all the relevant information and what you say is|
|not clear. If the guy was not a lawyer what was his position?"              |
|----------------------------------------------------------------------------|

Would it be worth my while to answer this? Perhaps it would be cheaper for thee to ask STMicroelectronics.

|--------------------------------------------------------|
|"Surely he                                              |
|identified himself before launching into an accusation?"|
|--------------------------------------------------------|

He was identified.

|---------------------------------------------------------------------------|
|"Did he explain what                                                       |
|he meant by you violated an NDA by "having" the document? If your "having" |
|the document was a problem then it was someone else who violated the NDA by|
|giving it to you.                                                          |
|                                                                           |
|--                                                                         |
|                                                                           |
|Rick"                                                                      |
|---------------------------------------------------------------------------|

There was not an NDA violation. E.g., search the court records: we were not taken to court over this; they could not make coherent enough sense to explain to a jurist what they were on about - they did not have a case.

Regards, Paul Colin Gloster

Reply to
Nicholas Collin Paul de Gloucester

At this point you seem obsessed with debating every nuance of the issue without actually explaining anything that would make us understand it. You had a problem with STM that was quickly resolved, it would seem. So what is your point?

--

Rick
Reply to
rickman

Glad we got to the bottom of this. Case closed, we now return you to your regularly scheduled discussions. :)

--

Rick
Reply to
rickman

That sounds bogus. Whether 100 lines of "preemptive" or 100 lines of "run to completion," you can have a LOT of bugs.

I would much, much, MUCH (did I say much?) rather use a preemptive OS. You can always make it a "run to completion" by placing all tasks at the same priority and ensuring nothing blocks in the body of each task (except at the end, e.g., a yield()). So a preemptive OS just gives you more flexibility. Of course you can always shoot yourself in the foot with more flexibility. So be it - I'd rather have the choice.
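A sketch of that pattern, with placeholder RTOS calls (rtos_create_task() etc. are made-up names, not any particular kernel's API):

/* Hypothetical RTOS calls -- placeholder names only. */
extern void rtos_create_task(void (*fn)(void), int priority);
extern void rtos_yield(void);
extern void rtos_start(void);

#define SHARED_PRIO 5                    /* every task gets the same one */

static void sensor_task(void)
{
    for (;;) {
        /* ... one complete unit of work, no blocking calls ... */
        rtos_yield();                    /* the only scheduling point */
    }
}

static void logger_task(void)
{
    for (;;) {
        /* ... */
        rtos_yield();
    }
}

int main(void)
{
    rtos_create_task(sensor_task, SHARED_PRIO);
    rtos_create_task(logger_task, SHARED_PRIO);
    /* With equal priorities and yield()-only blocking, the preemptive
       kernel degenerates to cooperative round-robin. */
    rtos_start();
}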

I agree that this is not really related to managing multiple programmers, however.

--
Randy Yates, DSP/Embedded Firmware Developer 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

Of course you can. But with preemption you get Heisenbugs of a certain sort, thread-safety issues, reentrancy problems, and race conditions of another certain sort.

Of course, you gain habits in how to avoid these, but still...

I just really prefer deterministic operation of systems. And, frankly, I don't understand other people not preferring that. I fully realize you can't always do that, but I bet you can get closer than you think you can.

I do, generally, or I make it message passing, or interlocking state machines with message passing...

Here's the key - it's now possible to reason about these systems in a proof-like manner. You can enumerate all the states crossed with events and check lists of invariants at each transition.

So what if it's big? Sure, you'll make mistakes, but if you keep this as a regression check, I bet it's worth it. You can get a long way with < 100 states and < 12 events - well, that's 1200 things. Not impossible at all.
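For concreteness, a sketch of that sweep (the transition table is assumed to exist elsewhere; the invariant shown is a stand-in):

#include <assert.h>

enum { N_STATES = 100, N_EVENTS = 12 };

/* The transition table of the machine under test -- supplied by the
   real design; declared extern here as an assumption. */
extern int next_state[N_STATES][N_EVENTS];

/* One invariant: a predicate that must hold after every transition. */
static int invariant_ok(int state)
{
    return state >= 0 && state < N_STATES;   /* stand-in check */
}

/* Sweep all 100 x 12 = 1200 state/event pairs, checking the
   invariants after each transition.  Cheap enough to keep as a
   regression test. */
void check_all_transitions(void)
{
    for (int s = 0; s < N_STATES; s++)
        for (int e = 0; e < N_EVENTS; e++)
            assert(invariant_ok(next_state[s][e]));
}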

I find that when I do things this way, people don't have to reset the machine quite so much. If at all.

I don't wanna put words in your mouth nor make other presumptions, but it almost sounds like you're arguing against rigor.

Sure. I get that. But so long as you block once per iteration of a loop, it works out the same anyway.

--
Les Cargill
Reply to
Les Cargill

That's a very sound, scalable, and fault-resilient way of thinking, which has the benefit of being implementable by different companies in different countries using different implementation technologies.

Existence proof: the largest and most complex machines that the human race has ever developed - the telecoms system.

Well, /proof/ in the mathematical sense rapidly becomes untenable with real FSMs due to state-space explosion.

But your other points are spot-on.

I'd add that it is trivial to add instrumentation that allows comprehensive performance measurement in live systems, and the ability to prove that your system is correct and the other company is at fault. Been there, done that :)

Yes, it can be. And managing multiple, ahem, "cooperating" companies.

Reply to
Tom Gardner

Well, we may be in violent agreement after all. I've based a few of my threaded projects on the paradigm of doing everything through messages and blocking (for the most part) on a message being available. Works beautifully.
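Something like this skeleton, with a made-up queue API:

/* Hypothetical queue API -- placeholder names, not a specific RTOS. */
typedef struct { int type; int payload; } msg_t;
extern void queue_receive(msg_t *m);     /* blocks until a message arrives */

static void worker_task(void)
{
    for (;;) {
        msg_t m;
        queue_receive(&m);               /* the single blocking point */
        switch (m.type) {                /* everything else is plain,  */
        case 1: /* handle type 1 */      /* deterministic computation  */
            break;
        case 2: /* handle type 2 */
            break;
        default: /* count/log the unexpected message */
            break;
        }
    }
}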

What do you mean by "invariants?" This paragraph is Greek to me.

Don't get me wrong, I believe any testing is good; very good. But just testing across states and events doesn't give you a lot of coverage, does it? What about the order and timing of the events and inputs?

--
Randy Yates, DSP/Embedded Firmware Developer 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

I am sure we are.

Badda bing, badda boom.

An invariant is something that is always true.

For state machines, you can check sets of true/false tests based on state and learn an awful lot about whether the thing works or not.
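For example (a sketch; the heater example and all names are invented):

enum state { IDLE, HEATING, AT_TEMP };

struct ctx { enum state st; int heater_on; int temp_c; };

/* Invariants as true/false tests keyed off state. */
static int inv_idle(const struct ctx *c)    { return !c->heater_on; }
static int inv_heating(const struct ctx *c) { return c->heater_on && c->temp_c < 100; }
static int inv_at_temp(const struct ctx *c) { return c->temp_c >= 100; }

/* Run after every transition; a 0 return is a design bug caught. */
static int invariants_hold(const struct ctx *c)
{
    switch (c->st) {
    case IDLE:    return inv_idle(c);
    case HEATING: return inv_heating(c);
    case AT_TEMP: return inv_at_temp(c);
    }
    return 0;   /* an unknown state is itself a violation */
}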

Sure it does. Done properly, it can reduce error rates dramatically. But this is somehow easier if things are, more or less, a state machine.

Again, I think of these things as a fancy regulator clock, with complicated escapement mechanisms.

If it's permutation-oriented, we're programmers. We know how to automate generating permutations. If it's uncomfortable running this on the target, hoist it out to a PC program and run it against the test vectors that way.
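A sketch of such a PC-side driver (fsm_step() and state_ok() stand in for the real code under test):

/* Host-side driver: exhaustively feed the FSM every event sequence
   of length K, compiled for the PC alongside the target code. */
enum { N_EVENTS = 12, K = 3 };           /* 12^3 = 1728 sequences */

extern int fsm_step(int state, int event);   /* code under test */
extern int state_ok(int state);              /* invariant check */

void drive_all_sequences(void)
{
    long total = 1;
    for (int i = 0; i < K; i++)
        total *= N_EVENTS;

    for (long seq = 0; seq < total; seq++) {
        long n = seq;
        int s = 0;                       /* reset to the initial state */
        for (int i = 0; i < K; i++) {
            s = fsm_step(s, (int)(n % N_EVENTS));
            n /= N_EVENTS;
            if (!state_ok(s)) {
                /* report the offending sequence number and bail */
                return;
            }
        }
    }
}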

You still have to do things like worst-case, stress and illegal input tests. And you'll still miss stuff.

I'd rather slip a week doing this than spend months going through wreckage. And frankly, that approach feels more productive to me.

You (nearly) always have to tolerate out-of-order events.

Timing should be manageable by buffering. And you do what you can to count lost events.

If you lose events, it might be worth considering polling in some cases (especially on small, high-speed micros).
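A sketch of the buffer-and-count idea, as a single-producer/single-consumer ring between ISR and task (memory-ordering/atomicity details glossed over; names made up):

#include <stdint.h>

#define BUF_SIZE 64                      /* power of two: cheap wrap */

static volatile uint8_t  buf[BUF_SIZE];
static volatile uint32_t head;           /* written by the ISR  */
static volatile uint32_t tail;           /* read by the task    */
static volatile uint32_t lost;           /* events dropped when full */

/* Called from the interrupt handler: single producer. */
void event_put(uint8_t ev)
{
    if ((uint32_t)(head - tail) == BUF_SIZE) {
        lost++;                          /* full: drop, but COUNT it */
        return;
    }
    buf[head % BUF_SIZE] = ev;
    head++;
}

/* Called from task context: single consumer.  Returns -1 when empty. */
int event_get(void)
{
    if (tail == head)
        return -1;
    int ev = buf[tail % BUF_SIZE];
    tail++;
    return ev;
}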

--
Les Cargill
Reply to
Les Cargill

There is a large difference between "proof" and "increased confidence".

While explicit FSMs are almost /necessary/ for analysis and *proof*, they aren't /sufficient/ for many real systems.

The state space explosion rapidly becomes intractable when all possible sequences of events are considered.

Nonetheless FSMs are highly beneficial and the best known technique for the reasons you and I have previously agreed.

Those arguments are missing the point.

Reply to
Tom Gardner

The point is to constrain that space. The actual problem is the driver for any such explosion; we're just managing it.

As a practical matter, I have not seen too many problems where state space explosion was a practical limitation.

The "all possible sequences of events" thing is slightly red-herring; I've not seen too many cases where it was impossible to control this.

Events tend to be pretty orthogonal. If they're not, make 'em orthogonal.

I just reject the general ... nihilism of most of the discourse on the subject.

--
Les Cargill
Reply to
Les Cargill

When the events and states are defined by standards bodies you don't have that option. Doubly so if the standards are rationalisations of what's found in existing manufacturers' equipment.

That's the case for telecom and networking standards, and I'm sure many other examples.

I reject Panglossian optimism in favour of realistic objectives.

Reply to
Tom Gardner

You are 100% correct in that - although in the limited case I'm aware of, the number of events & states is low.

I've done this very thing, so I'm hesitant to call it Panglossian...

Much depends on properly generating the suite of permutations.

--
Les Cargill
Reply to
Les Cargill

Let me rephrase, Tom. ( how critical is that comma? )

I think (still), after 30 years, that we have to try. You won't get 'em all. Doesn't matter.

The critical thing is that we continually and habitually overestimate the size of The Beast Within. If you can hold all the interruptions off for... half a day, half a week, half a month, you can get well into the belly of it.

It may matter; it may not. Here's to those times when it does.

--
Les Cargill
Reply to
Les Cargill
