I don't use an RTOS because...

Nice post.

As I've said a number of times in other similar threads: what we're dealing with here is tools (and skills) for managing complexity. So long as it works, and you get to sleep nights, I don't care if it's an RTOS or a clockwork tomato.

Steve

formatting link

Reply to
Steve at fivetrees

The software developers I know who've worked on really life-critical stuff (safety of flight or nuclear reactor control) apply a simple test. If what needs to be done requires responding to asynchronous events that contend for resources and that have hard real-time deadlines, they tend to view *not* using one particular RTOS (QNX) with a deep suspicion bordering on paranoia. If what needs to be done can be done with a loop, they tend to view using any RTOS - including QNX - with a deep suspicion bordering on paranoia. The general feeling is that the decision whether to use an RTOS is an important diagnostic of the system engineer; anyone stupid enough to get this part wrong will get many other parts wrong as well.

(This is not to say that QNX is the *only* good RTOS, but rather that QNX is *a* good RTOS, and is the one that the people I have worked with are most familiar with.)
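
For concreteness, "can be done with a loop" means something like the sketch below. The helper names are placeholders for real hardware access, not anyone's actual code:

#include <stdint.h>

/* Placeholder hardware-access helpers - illustrative names only. */
extern uint16_t read_sensor(void);
extern void     update_output(uint16_t value);
extern void     kick_watchdog(void);

int main(void)
{
    for (;;) {                     /* the classic bare-metal superloop */
        uint16_t sample = read_sensor();
        update_output(sample);     /* no preemption, no priorities:    */
        kick_watchdog();           /* worst-case latency is one full   */
    }                              /* pass around the loop             */
}

The moment several asynchronous event sources with hard deadlines have to share that one thread, the loop stops being the simple option - which is exactly where the suspicion flips direction.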

--
Guy Macon
Reply to
Guy Macon

...for the same reasons that Object Oriented Programs are easier to fully understand, easier to test for compliance and easier to maintain. Alas, just like OOP, it is possible for a sufficiently clueless engineer/programmer to make a bad product using good tools. That's no reason not to have good tools, though.

--
Guy Macon
Reply to
Guy Macon

Ah! Another post to comp.arch.embedded! Let's see what this one has to say... (opens post, whistling a happy tune)

********** WHAT THE.....**BAM!!!** ********** (SFX: sound of parts falling off of a recently-crashed automobile.)

...

Honest, officer, I was cruising along at the speed limit when I ran into this giant block of text right in the middle of the newsgroup! No paragraphs, no whitespace, just a dense square block of text...

Yes, I tried to stop, but the information superhighway was slippery. Someone had filled the road with this huge greasy sheet of quoted text full of random ">>>" and ">" and ">>" strings. I think I saw a .sig in there as well.

No, I didn't get the license number of the fellow who spilled the toxic post, but I remember that he was driving this old SUV - maybe a jeep? - that was making an annoying "www...www... www...www...www..." sound, and on the side of it I saw the words "

formatting link
" spray painted over a quite attractive but faded "DejaNews" sign.

Officer, please catch him before he kills another thread!

--
Guy Macon 

 : ) : ) : ) : ) : ) : ) : ) : ) : )
Reply to
Guy Macon

Overall system resilience can be achieved much more easily, despite the larger number of physical connections. I also indicated that the system would have been (at minimum) a dual-processor system in any case: one system for control, the other running a permit-to-operate function. Even so, with the calculations performed on the system integrity, I favoured the many-processor design because it was easier to prove that it met the integrity requirements. This was due to the lack of common-mode issues between the various tasks, issues that would have plagued the development within a single processor (or just two).
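
As a sketch of what the permit-to-operate channel does (the names and checks below are illustrative, not the actual system), it senses the plant independently and gates the actuator enable path, so the control processor's commands only take effect while the safety channel agrees:

#include <stdbool.h>

/* Illustrative I/O helpers for the independent safety channel. */
extern bool plant_within_limits(void);      /* independent sensing  */
extern bool control_cpu_heartbeat_ok(void);
extern void set_permit_relay(bool on);      /* hardware enable path */

void permit_to_operate_loop(void)
{
    for (;;) {
        /* Permit is granted only while every check passes; any
           failure drops the relay and de-energises the actuators. */
        bool permit = plant_within_limits() && control_cpu_heartbeat_ok();
        set_permit_relay(permit);
    }
}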

Not really that much of a problem for the kind of systems I deal with (mostly robotics or automation for nuclear energy and transportation systems). I have one development environment and a code library that I can use on a wide range of processors (we are talking real re-use here). Also note that many of my systems have no need for upgrade over time, as they are specified for specific tasks over long periods of expected operation. I can build the same type of node with several different processors and its functionality will be exactly the same in each case.

I expect that you could also find a considerable difference in costs between the two approaches.

To run the sort of control I deal with, at the integrity levels demanded of my systems, you would probably consider processors with high MIPS ratings, a high dollar cost per chip, masses of fast memory, and some RTOS that you do not have the source code for. With a requirement for 100% coverage testing, you would be tied up for ages proving your system is safe.

I, on the other hand, count how many actuators there are, note what type they are, and can see my way to using simple, cheap microcontrollers that are fully committed to looking after the needs of the actuator in meeting the demands of the system. Occasionally I may use two processors per actuator (one for control and one for comms). All such nodes also perform some limited data-logging (for diagnostic purposes) and have a range of self-checking features that signal the node's health status up to the group controller. I have all the source code for my systems and I can provide full certification for it. Testing the individual nodes is quite simple, and once installed the system will usually operate for its required lifetime (25 years+) without hidden faults. (Actually, to date only one of my systems has been installed long enough to reach decommissioning, early in its 26th year; the others are still going strong, with the longest-lived current system now in its 20th year, no upgrade having been necessary.)
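
As an illustration of that self-checking and health reporting (the message layout and names below are my own invention for this post, not the fielded design):

#include <stdint.h>

/* Illustrative self-test result bits reported by an actuator node. */
enum {
    HEALTH_RAM_OK      = 1u << 0,
    HEALTH_ROM_CRC_OK  = 1u << 1,
    HEALTH_ACTUATOR_OK = 1u << 2,
    HEALTH_COMMS_OK    = 1u << 3,
};
#define HEALTH_ALL_OK (HEALTH_RAM_OK | HEALTH_ROM_CRC_OK | \
                       HEALTH_ACTUATOR_OK | HEALTH_COMMS_OK)

extern uint8_t run_self_tests(void);   /* returns a bitmask of the above */
extern void    send_to_group_controller(uint8_t node_id, uint8_t status);
extern void    enter_safe_state(void);

void report_health(uint8_t node_id)
{
    uint8_t status = run_self_tests();
    send_to_group_controller(node_id, status);  /* periodic heartbeat   */
    if (status != HEALTH_ALL_OK)
        enter_safe_state();                     /* fail to a safe state */
}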

--
********************************************************************
Paul E. Bennett ....................
Reply to
Paul E. Bennett

Absolutely!!

--
********************************************************************
Paul E. Bennett ....................
Reply to
Paul E. Bennett

It is *that* "simple."

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Reply to
Michael N. Moran

(Concerning multiple processor designs)

Like everything else, it depends. :) When I was working on toys that shipped 100,000 units per day, a penny per unit was huge. When I worked on a multi-million-dollar DVD-RAM production line, the cost of the computers was two orders of magnitude lower than the cost of the programming.

Reply to
Guy Macon

What's your point after this totally useless post?

Reply to
Elder Costa

I think the right solution (if one can say there is a "right" one at all) is highly system-dependent. I used to favour one-processor designs because of (hardware) simplicity. It was a "let the software guys solve it" attitude, even though I also did software. After some (several :-) ) years undergoing the pains of such an approach, and after attending some of Jack Ganssle's lectures, I was struck by how foolish I had been in choosing it (because it's quite obvious when one gives it some thought). To cite an example: I have a design in which a rotary encoder is handled by the main processor. Putting a small microcontroller there would make things much easier and more reliable. The main processor also handles (proprietary) serial communication with some acquisition modules; another low-end microcontroller would do the magic and release the main processor for more appropriate duties, not to mention make the software design much easier. These are simple examples.
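
To make the encoder example concrete, the job being offloaded is roughly the sketch below - a classic table-driven quadrature decode, with read_encoder_pins() standing in for whatever port access the real part would use:

#include <stdint.h>

extern uint8_t read_encoder_pins(void);   /* returns B:A in bits 1:0 */

static volatile int32_t position;         /* read by the main loop   */

/* Index = (previous_state << 2) | current_state; value = step.
   Invalid (double-step) transitions decode to 0. */
static const int8_t step_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

void encoder_pin_change_isr(void)    /* hook to pin-change interrupt */
{
    static uint8_t prev;
    uint8_t cur = read_encoder_pins() & 0x3u;
    position += step_table[(prev << 2) | cur];
    prev = cur;
}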

I have seen this multiprocessor approach proposal more often lately. I am not sure if that is because I am paying more attention to the subject or just because of a paradigm change though.

I am in the process of designing a new architecture for new products, and that is the reason I posted this question. The posts so far have been very interesting and enlightening. One thing at least is clear to me right now: there is no one-size-fits-all solution to the problem.

Regards.

Elder.

Reply to
Elder Costa

A long time ago my group was developing multitasking applications on the PDP-11 under RSX-11. Most people had good experience in designing and writing batch and interactive time-sharing applications. After teaching them about multiple tasks and some of the most important OS mechanisms, it took several months before they could independently split an application into tasks and design the synchronisation and communication between them. Even after that, you had to check their code for stupid things, such as busy loops (a legacy of their batch and time-sharing days).
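
The busy-loop habit is worth spelling out, since it looks harmless to someone from a batch background. A minimal sketch (the helper names are illustrative, not RSX-11 calls):

#include <stdbool.h>

extern bool data_ready_flag(void);
extern void wait_for_event(void);   /* blocks; the OS runs other tasks */
extern void process_data(void);

/* The batch programmer's habit: spins, starving lower-priority tasks. */
void busy_wait_task(void)
{
    for (;;) {
        while (!data_ready_flag())
            ;                       /* burns CPU doing nothing */
        process_data();
    }
}

/* The multitasking idiom: block on the event and let the OS schedule. */
void blocking_task(void)
{
    for (;;) {
        wait_for_event();           /* task sleeps until data arrives */
        process_data();
    }
}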

Paul

Reply to
Paul Keinanen

I guess you quoted the wrong guy. :-) Strictly speaking you are right, though IMNSHO the definition of either "complex" or "more complex" varies from case to case and from designer to designer. However, judging from the posts I have seen so far, my question, despite lacking a clear definition, was clear enough to generate excellent posts. :-)

Regards.

Elder.

Reply to
Elder Costa

This sub-thread is contrasting rolling your own scheduler and writing your own libc versus using an existing RTOS, and whether or not the RTOS is to be trusted.

My experience is that, at the core, RTOS APIs are much the same, and that the structure of a pre-emptive RTOS kernel is simple enough to be as reliable as (perhaps more reliable than) most large libraries such as libc.
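
To illustrate how similar those cores are, nearly every pre-emptive RTOS supports some spelling of the pattern below. The names are generic stand-ins, not any particular vendor's API:

#include <stdint.h>

/* Generic primitives found in essentially every pre-emptive RTOS. */
typedef struct rtos_sem rtos_sem_t;
extern void task_create(void (*entry)(void *), void *arg, uint8_t priority);
extern void sem_wait(rtos_sem_t *sem);    /* block until signalled */
extern void sem_post(rtos_sem_t *sem);    /* callable from an ISR  */

extern rtos_sem_t uart_rx_sem;
extern uint8_t    uart_read_byte(void);
extern void       handle_byte(uint8_t b);

static void uart_task(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&uart_rx_sem);           /* sleep until the ISR posts */
        handle_byte(uart_read_byte());
    }
}

void app_init(void)
{
    task_create(uart_task, 0, 3);         /* priority value arbitrary  */
}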

As for having a team of inexperienced software engineers working on a project for which they are not qualified ... that's another issue ;-)

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Reply to
Michael N. Moran

I was trying to remember one of these articles just before I posted. Now I found it:

formatting link

Regards.

Reply to
Elder Costa

Hey, people read it! I'm flattered :)

Reply to
larwe

Great posts maybe, but did they answer your question?

Reply to
CBarn24050

I guess that was the attitude back in the early 80's as almost everyone was trying to cram computing abilities into almost everything. Even today I am quite happy to do things in relay logic when that is the simplest solution.

One can only discover the right approach by taking the time to explore the various ways of providing a solution and selecting the best in terms of apparent simplicity, time, cost and quality. Usually, when you get to systems that require one to deal with more than 20 I/O points, you should really start looking for the natural architecture of the problem. You can still look for the architecture of the problem below that level, but it is usually very apparent. As I have indicated, I usually find that this is closely allied to the actuator distribution and groupings.

It is a quite scalable approach too.

As the low-end processing silicon gets less and less expensive to develop for, and communication capabilities between processors improve, it becomes easier and easier to support the strategy. I have been advocating the processor-per-drive strategy for 20+ years now, especially since, for almost any process or machine control actuator-based solution, there are only 28 types of control block. That is not a great many to have to develop strategies for.

As already stated, the problem space needs analysis in order to determine what the structure of the problem's architecture is, what tasks are necessary to accomplish the system goals, and what risks are posed by the solution's activities. Once you have looked at and thought about all of that, you can start forming the simplest and most appropriate strategy for achieving your solution.

It is quite often easy to see where one of the 28 control blocks would likely fit in without expending too much brain power on it.
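
For a flavour of what a reusable control block can look like (this is my sketch for illustration, not the actual library of 28 block types), each block can present one uniform interface to the node's executive:

#include <stdbool.h>
#include <stdint.h>

/* A uniform interface any control block type could implement;
   the field names are illustrative. */
typedef struct control_block {
    void (*init)(struct control_block *self);
    void (*step)(struct control_block *self, uint32_t now_ms);
    bool (*self_check)(const struct control_block *self);
    void  *state;                     /* block-specific private data */
} control_block_t;

/* The node executive simply walks its table of blocks each cycle. */
void run_blocks(control_block_t *blocks[], int count, uint32_t now_ms)
{
    for (int i = 0; i < count; i++)
        blocks[i]->step(blocks[i], now_ms);
}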

--
********************************************************************
Paul E. Bennett ....................
Reply to
Paul E. Bennett

I hadn't paid attention to the author's name until now. :-) My fault, though - a bad habit of not looking at articles' authors' names. I look forward to reading the sequel. It brought the PPC to my attention as a possible candidate for the main processor (though not necessarily for real-time tasks) in a new architecture. It's a pity it lacks a built-in LCD controller.

Regards.

Elder.

Reply to
Elder Costa

To some extent, yes. I will play with the idea of using a state-machine based cooperative executive, as suggested by some (I deployed the concept long ago but didn't design it appropriately and the implementation was awful).
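
For reference, the skeleton of such a state-machine based cooperative executive might look like the sketch below (the task and its states are invented for illustration):

/* Each task runs one short, non-blocking step and returns;
   long operations are split across states. */
typedef void (*task_fn)(void);

static enum { ACQ_IDLE, ACQ_SAMPLING, ACQ_DONE } acq_state = ACQ_IDLE;

extern int  start_conversion(void);
extern int  conversion_ready(void);
extern void store_result(void);

static void acquisition_task(void)
{
    switch (acq_state) {
    case ACQ_IDLE:
        if (start_conversion())
            acq_state = ACQ_SAMPLING;
        break;
    case ACQ_SAMPLING:
        if (conversion_ready()) {
            store_result();
            acq_state = ACQ_DONE;
        }
        break;
    case ACQ_DONE:
        acq_state = ACQ_IDLE;         /* restart the cycle */
        break;
    }
}

static task_fn tasks[] = { acquisition_task /*, more tasks here */ };

void executive(void)
{
    for (;;)                          /* round-robin, run-to-completion */
        for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            tasks[i]();
}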

The thread also provided a lot of food for thought.

Regards.

Elder.

Reply to
Elder Costa

The article series was tentatively set to be ten pieces long. To give you a sneak preview: The next article coming up (early Feb release date) talks about differences between x86 and PPC Linux startup. It also suggests a few different layouts for both the software bundle and the flash.

Article #3 goes into details about the Linux distro shipped with the Kuro Box, and also how to upgrade it (some).

Article #4 talks about building a web-administerable backend from a beginner's perspective (it sounds like a digression from the primary theme, but really it's not).

Looks like they will be publishing them once a month, or thereabouts. I'm trying to keep at least two ahead.

Reply to
larwe
