Suggestion for a free RTOS

I need to know if a preemptive RTOS is available anywhere with no royalties.

I can consider porting it myself to the selected microprocessor platform if a port is not available.

What I need is the following:

- open source kernel (C or C++)

- something working.

- basic services: priority threads, binary semaphores, heap, partitions, message services (e.g. mailboxes, queues), task delay

- interrupt services: signal a semaphore, send a message

- time slicing is not important

- timers are not important

I do not know Echos. Is it a good solution?

Regards Daniele

Reply to
daniele fiorini

I was wrong: I meant eCos.

Sorry

Daniele

"Michael N. Moran" wrote in message news: snipped-for-privacy@bellsouth.net...


Reply to
daniele fiorini

I used SMX on a project about 6 years ago. It was quite effective for an embedded 80188 application, and they provided usable workarounds for the two bugs I tripped over. But they were expensive up front, with 20% of list per year for support, and a per-project fee of 20% after the first project. I looked again a couple of years later and the price for the initial SDK had gone from US$2500 to US$3500, with the other fees increasing apace. That turned out to be a bit more than I was able to afford over the long run.

I then picked up a copy of CTASK from SimTel and have used that several times since. It came with full source code, a decent manual and absolutely no support. I was unable to contact the author at all. It is pre-emptive, prioritized and extendable. Look for version 2.2d, and get a good hardware-based logic analyzer if you need to rewrite any ISRs. I made extensive use of an Arium ML4400 in order to minimize the latencies between interrupts. (Let's see, there were four UARTs, a LonTalk NIC, Ethernet NIC, watchdog and three timers. We finally dropped the effort to make it work on a 9.5MHz V53, and used a 25MHz 386sx as the minimum CPU. We couldn't move TCP packets on and off the NIC fast enough through an 8-bit I/O port.)

There was a CPPTASK rewrite in C++ available for a while, if you like objects. It was not complete the last time I saw it, with some printer and timer functions still missing.

I have also run across a couple of devices that use XINU as the kernel in their SDK. Unfortunately, most of the Comer texts on that subject appear to be out of print. However, the source code for several processors is still available.

Bob McConnell N2SPP

Reply to
Bob McConnell

Not necessarily. We recently did an 8 UART version on a 33MHz 68EN360, but without the LonTalk. Lots of legacy equipment out there with serial ports.

I hope so. I haven't figured out why OOP is even desirable. The promises sound too much like the old Top Down Programming scam, and the code I have seen looked like a classroom exercise in obfuscation.

Bob McConnell N2SPP

Reply to
Bob McConnell

Hi,

you may want to have a look at:

formatting link

uC/OS ports are available for many MCUs and the RTOS is extremely well documented.

For private, noncommercial use it is almost free (you just have to buy the book; the source code comes with it).

regards /jan

daniele fiorini wrote in message: XKbSa.55264$ snipped-for-privacy@news1.tin.it...

Reply to
Jan Homuth

I confess to never having used an RTOS, despite having done a large number of real-time projects (most of them are) and having considered an RTOS in many cases. Most of the time my projects are too cost-sensitive, and I've become very wary of third-party code: been bitten too many times by poor commercial library code. I can ensure bug-free deliverables only when I take ownership of every single line of code.

Besides, I've become adept at writing my own schedulers. And cooperative multitasking is just so much more efficient...

Steve

formatting link
formatting link

Reply to
steve at fivetrees

Interesting. For me it's worked the other way - writing my own scheduler is quicker. Basically just a round-robin that looks for and acts on semaphores. Trivial, really. (Of course it assumes that the cooperating tasks don't hog the CPU - and are written to a set of rules. Not hard.)

But with high-volume products I generally have severe (down to the last penny) unit cost constraints, meaning that memory, cheap as it is, is at a premium. I've never had enough space for e.g. context-switching and the overheads involved.

There again, I'm a dedicated follower of the KISS maxim ;).

Steve

formatting link
formatting link

Reply to
steve at fivetrees

Nice story, which I can fully relate to.

It's been my experience that, for whatever reason, project managers seem to have more confidence in a commercial RTOS than in a home-grown cooperative scheduler. I find this puzzling, since my experience as a designer is entirely the opposite. I've had to argue this point many times, and have usually won the argument by example rather than words.

But this "let's buy it in and save money" mindset isn't limited to RTOSs; I once argued against the use of COTS hardware in favour of a bare-chip design (again, the KISS principle). I lost that argument, and the project overran due to various hardware bugs in all of the implementations of the COTS we could find. As a postscript to the project, and out of bloody-mindedness, I then ported the application over to an H8 eval board - it took me a week and worked perfectly.

Going back to RTOSs: usually the argument goes "but we have all kinds of processes running in parallel, and they're all complex, and they need to intercommunicate". My answer is simple: state machines. ("What are they?")

There *are* times when buying in third-party code makes sense (e.g. TCP/IP stacks), but usually a better solution is to keep things simple.

Steve

formatting link
formatting link

Reply to
steve at fivetrees

I use state machines as well, with and without processes in place. They can provide a very nice way to handle complex situations in bite-sized pieces. Several years ago, I used them for an application (no process stacks here) which supported background updating of analog DACs, rather complex serial communications with a DSP (with special conditions to monitor and rules to apply), and background updates to an external serial EEPROM. The result was simple mainline code organization; contained modules for each of the state machines with simple interfaces and no exposed state; nice, precise waveforms; and robust, predictable behavior.

Interrupts did expose a bug (feature?) in the compiler I was using, though. It used static compiler-generated temporaries and live-variable analysis to tell it which had to be saved across function calls. The live-variable analysis, though, was worthless in the face of any interrupt. So *if* an interrupt called a C function, any C function at all, you took your life in your hands.

Jon

Reply to
Jonathan Kirwan


Reading this thread I agree totally but...

I also wrote my own cooperative OS with TCP, PPP etc., but lately I have been working with an open source one that fits my needs nicely. You have every line of source, and you have a community of people contributing to it. So it seems like the best of both worlds. This is especially useful in a one- or two-man shop.

I would never want to ship a product with (or use) an embedded OS I didn't have the full source for. I would never sleep.

Ralph

Reply to
Ralph Mason

Our earlier project used uC/OS II, which has full source, but not a lot of functionality. That was OK, since we added what we needed. On the next project we were pressured into getting an RTOS, even though we were reusing code from the earlier project. This would be an OS in common use across the various projects and groups.

We eventually decided on OSE, and part of the reason came from the add-on packages. Ironically, after committing to that course, we couldn't afford the add-on packages we wanted :-) Now that the project is over, I've seen no advantage to it over uC/OS for what we were doing.

--
Darin Johnson
    "You used to be big."
Reply to
Darin Johnson

Very much agreed. I've done the same (albeit with a slightly canny round-robin approach) so many times I just re-use old code, which amounts to a couple of dozen lines or so and a set of rules - which *has* to be simpler than a third-party product with a dense API.

Exactly.

Steve

formatting link
formatting link

Reply to
steve at fivetrees

I'm jumping in rather late, but there's a nice discussion of when and whether to employ an RTOS (as well as some examples of when not to) at

formatting link

--
Rich Webb   Norfolk, VA
Reply to
Rich Webb

Look, I've written a couple of co-operative taskers myself, so I understand the complexity involved. One of my previous posts indicated that it wasn't the implementation that was time consuming, but the design and verification. If you are supplying a product, worse still if it's a medical device as in my case, you will have to provide a test plan and test protocol (possibly in the form of test vectors) to attain a level of confidence that the bit of code you've written does what you set out to do. It has to:

  1. Conform to your design
  2. Have structural integrity by testing all if/else and switch/case statements.
  3. Meet performance criteria under load - stress testing
  4. Always perform within design parameters such as stack allocation.
  5. Have all defined interfaces working as specified in the design

Now I see this as more than just one afternoon's worth of effort. Also, this is not an issue of whether I'm savvy enough to produce "good" software, but one of producing evidence to prove to the customer or regulatory bodies that I do produce good software. Sure, if it's for self-education purposes, I could knock up a multitasker pretty quickly and run some rudimentary tests. I could write some application around it and it'll probably run reliably for years. If it does fail -- what the hell, it hasn't killed anyone, so who's worried?

Sorry for being such a pessimist, but when I hear statements like "can code it up in one afternoon" or "it's good software because it's run the whole month without a problem", I begin to question the real "quality" of the software. I begin to wonder whether people that make such statements really understand what "Software Quality" is all about. Writing succinct, efficient code is only a part of it -- it is not the full story.

All I can say is this, and this is the way I "see" it. A multitasker is a central piece of software. Its "goodness", or correctness of operation, is key to the success of the final deliverable. If one bases the criticality of a software component on its role in the system, then the multitasker would rate fairly high. The criticality criteria should determine, or provide guidance on, how much effort one should expend on a software component to ensure its correct operation. On this basis, the multitasker would warrant a high level of verification. The level of attention would be reduced if the component is inherited (from a previous product) or has a certain maturity -- this increases the confidence level. However, a new piece of code is an unknown quantity that has no discernible metrics to judge it by. If one is only allocating "an afternoon's" worth of effort, then this seems insufficient to me for such a critical component. In fact, how could anyone justify to their boss such an inadequate level of testing on so critical a software component as a homebrew multitasker? One could argue, I guess, on the pressures of the schedule, but then this just reverts back to the original premise of this forum thread.

Please don't take me too seriously :-) I recognize that we all have our own personal modus operandi that we follow to produce good software, one that doesn't fall into the "standard" Software Process.

Ken.

+====================================+ I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee

Why you'd imagine that validating a simple scheduler you fully understand would be more work than validating a commercial O/S, one with far more code and complexity than is needed, and which cannot possibly have been validated for a specific, new medical product, is beyond me.

Meaningful validation of medical software is more than just following a specific process (which is a vital part, no argument) -- just how meaningful the results are has a lot to do with the detailed and comprehensive understanding (breadth and depth) that the team members can apply in developing and applying that process.

- How do you handle _statement coverage_ in a medical system, with sufficient test cases for each program statement to be executed at least once? (This is a specific requirement for most validation procedures I've followed.)
- Or _decision/branch coverage_, with sufficient test cases so that each possible outcome occurs at least once?
- Or _condition coverage_, with enough test cases for each condition to take on all possible outcomes at least once?

Just to name a very few of what should usually be covered in a medical system. Your comments, as I read them, are more an argument AGAINST a commercial O/S. Not for one. In medical systems, in particular.

By the way, I have medical-certified products which use my schedulers [along with other code of mine] (and they didn't take long to write and they definitely took less time to validate than an external O/S would have taken.) And products operating in critical city-wide power delivery and IC wafer FABs, to name two more.

So I'm very, very interested in your comments on this.

Please realize that I'm just commenting on your "I can't see how writing your own scheduler could be quicker than buying one off-the-shelf."

Jon

Reply to
Jonathan Kirwan

The FDA has a guideline for 3rd Party off-the-shelf products. The scheduler may be "simple" (and this may figure into the adopted test strategy), but the real issue is the level of criticality that the software component possesses. Your assertion about the relevance of the unused features of a commercial O/S in a specific application is correct, but the FDA guidance only requires that you show evidence that the OTS software is appropriate for your use and that appropriate fault mitigations have been incorporated into the system. Now, if we are saying that your homebrew O/S has the same functionality (in the scope of its intended use) as the commercial one, then the same level of verification (and mitigation) is required. Thus my comment, "I can't see how writing your own scheduler could be quicker than buying one off-the-shelf."

I've no argument here, but typically you're NOT trying to prove to yourself or your team members that you're producing quality software -- it's really to your customers or, in my case, regulatory bodies. The Software Process (good or bad) is just the vehicle by which you do this.

Our goal at the onset of a project is to attain 100% coverage of all branches and conditions. We achieve this by extensive unit testing, automating as much of it as possible. This may cover 95% or better; the code that can't be tested this way is subjected to code inspection. Also, I'm talking about complex code that involves at least 4 active software engineers.

No, this is a misunderstanding. My view on commercial O/Ses is that they can be used where appropriate. I'm neutral on their use. I'm just arguing on the point that it's "faster" (or not) to roll your own O/S and I was fired up by the statement from a previous poster who said that he thought it was faster (in the context of delivery time). At which point I said, "I can't see how writing your own scheduler could be quicker than buying one off-the-shelf".

I've no problems with homebrew O/Ses. However I still maintain that the commercial O/S should not have taken any longer to "validate" than your homebrew one -- in the context of how you were going to use it. Provided that one has purchased a reputable O/S with a reasonable user base, then one could argue that the extent of the acceptance testing could be reduced. This also applies to a homebrew O/S that has been used in other products. The predicate of usage is a strong argument to save time. However if it is a new, untried homebrew O/S, then all bets are off & full verification would have to be done.

All things being equal, I fail to see the rationale as to why an O/S (homebrew or commercial) would not have the same level of verification. Surely just because one writes his or her own O/S shouldn't mean that it circumvents all of the typical unit testing or code inspection.

Yes, that has been the context of my posts. Hopefully I've presented my points in a logical manner.

Ken.

Reply to
Ken Lee

That's *an* important issue. But that's a judgement call, when you are discussing operating system software you cannot possibly know the details of. Yes? Or... How do *you* assess the level of criticality of all the components of the operating system you imagine you are using only parts of? Keeping in mind that you've no real idea what is being called. I'd be interested to hear.

Glad we can agree on this point.

It's one thing to "show evidence." You can certainly bull your way through any investigation by people less technical, I'm certain. I can, too. What's important to me is that I assure myself and those who work with me on coding it. Design reviews, etc. Perhaps we just don't accept guidance about what we might be able to get away with, insisting instead that we can defend ourselves well in a hostile design review, sufficiently well to convince critics who might get paid at our expense when they find errors we don't find.

I'm not satisfied with "only requires," I guess.

Find a case where *that* is true. Never have I seen that once in 30 years. Or do you somehow believe that commercial O/S's are often an exact fit? Not in my universe, anyway. Maybe yours.

Yes. But that never happens in my world.

A conclusion clearly based on an assumption not universally true, at a minimum.

_Sound_ logic depends on two things -- _true_ premises and _valid_ reasoning. I may be convinced of the validity of your logic, but not yet of the truth of your premises.

Jon

Reply to
Jonathan Kirwan

Criticality of a software component (in the context of medical equipment) relates to the probability and severity of the harm that may befall a patient or user if that component fails. For instance, if an LCD driver fails it may (in most cases) not cause any injury to the patient. It may be an inconvenience, but in all probability it is unlikely to cause death. This is an assessment that may be arrived at through the collaboration of the software and system engineers. At the moment the assessment of criticality is determined by treating the component as a black box. So being a good little software engineer, and in accordance with good engineering practice, the software engineer would assess the criticality of all components that make up the entire software system. The O/S (whether homebrew or commercial) may be assessed to have a high-ish level of criticality, as its failure would lead to mass system chaos. One could reduce the criticality by, say, employing a watchdog or maybe running parallel systems (examples only, please).

Now the FDA does understand the complex nature of software and allows one to demonstrate the suitability of use of 3rd Party software. This means that even though a commercial O/S may have 10 times more functionality than you are likely to use, within the restricted scope of usage for your application you can show evidence that the 3rd Party software is appropriate. Now this would be the identical scope of usage for your homebrew O/S, but in the case of 3rd Party software one is (normally) restricted to black box testing.

So what's so important about criticality? From the textbook on Software Engineering, criticality is an indicator that may provide guidance to a software engineer as to how much effort should be spent on a component. So a component of high criticality should have an equally high level of effort expended on it. If a severe mismatch of criticality to effort exists, this doesn't necessarily mean that there's a problem, but it should certainly set off the software engineer's radar for further investigation.

From the above, in the first instance a homebrew O/S and a commercial O/S should have the same black box testing. Why would it be any different in the context of use? However, extending the concept of the good little software engineer: the homebrew O/S comes from the software engineer himself, he has the code, and so white box unit testing may be desirable -- especially when the software engineer makes the bold statement that all branches and conditionals will be 100% tested in the software that is produced. I know, I actually made that claim, but I'm sure that other software engineers share the same mission statement.

I'm sure one could bullsh*t one's way through any investigation, particularly if you're a software engineer. After all, if worst comes to worst, it will be the company executives who will be prosecuted. Our view here is to comply with regulatory authority requirements and "do it right" the first time. My company has an FDA audit about every 2 years (and we're in Australia) -- all medical companies are subjected to this. Continual failure to comply (at least they give a warning) will lead to an immediate embargo of your products. Also, if they find any malicious falsification of a submission then company executives could be prosecuted. Basically, the FDA has the power to send companies bankrupt. This is a powerful incentive to produce quality software.

Every company producing software has its own Quality System that it follows. The Quality System is governed by economics and may take years to evolve to the correct level. Depending on the type of product and market that you're in, it may be perfectly acceptable for the product to have a certain level of known bugs. Sadly, Microsoft maintains a profitable business based on this premise. Although it may be encouraging to demonstrate quality software to yourself and your peers, this is only part of the story. One gets a warm, fuzzy feeling inside when one feels one has done good work, however it doesn't really help you on the business end. Producing "quality" software shouldn't mean that you need to defend yourself in a design review. All that is required is the evidence that the Software Quality Process has been adhered to -- that should basically shut up anyone who questions it.

I'm sorry, but I've been producing medical devices for 22 years now & so producing the required documentation for regulatory bodies seems like second nature to me.

Hmm - it depends what you mean by an exact fit. In the scope of intended use, the above statement MUST be true, otherwise WindRiver, ATI, etc. would never sell an O/S. I'm sure those that have purchased commercial OSes have weighed the pros and cons of writing their own -- we certainly have.

A multitasker has:

- A scheduler
- Some kind of Task Control Block which maintains the system context
- Various interfaces, maybe for delays or forced context switching
- Maybe services such as queues, events, semaphores

I would have thought that this was textbook territory. My last project was to port an application written to run on pSOS to a new platform using ATI Nucleus. The application stayed the same but the micro (68300 --> SH-1) and the operating system changed (and also the drivers).

Also, years back, I installed my own homebrew O/S into a product that we were developing. Eventually it was replaced by a commercially available RTOS called MTOS (don't know whether this RTOS still exists). The application stayed the same but some of the OS interfaces changed slightly.

All I can say is "That's Life" or is it "Horses For Courses" ;-)

My company is competing in a multi-billion dollar industry where "time to market" is God. We have to achieve this under the constraints of regulatory compliance, producing a quality product (and software), and low product manufacturing costs. The Software Process that is deployed has evolved to achieve this.

No problems -- I can live with that. The varying opinions are what make this forum interesting :-)

As you've stated, we live in different worlds - for one, I live in the Southern Hemisphere, where the toilet flushes in the other direction (as demonstrated by Bart Simpson) and where we endure 30°C heat in December.

Reply to
Ken Lee

I'm reading your comments like this:

  1. The OS is treated as a black box component.
  2. Black boxes by definition aren't looked into, you only examine worst-case outputs in terms of end effect on patient.
  3. OSes "A", "B" and "C" were demonstrated good once.
  4. When we want to create a new product, we can skip a testing step, or at least cut-n-paste documentation, if we use one of "A", "B" or "C". If we want to use our own OS, it becomes a new component subject to fresh from-scratch worst-case analysis.
  5. Therefore, using a homebrew OS is always more work than using an off-the-shelf, because of the extra required documentation.

Is that a fair summary?

Only 30? When I lived in Melbourne, it routinely reached 40. The lawyer across the road from my house used to come out and fry eggs on the hood of his black BMW.

Reply to
Lewin A.R.W. Edwards
