Self-restarting property of RTOS - how does it work?

You might also want to know whether packet traffic is independent. Many networks consist of a large population of clients that wind up being synchronized by a few servers. For example, if a file server stalls for several seconds, how likely are all the other nodes to fall silent?

--
	mac the naïf
Reply to
Alex Colvin

I think I misconstrued your 'synchronized clock'. You are not talking about time, but about a data clock, i.e. a strobe. In this case I consider the whole design flawed, because once more I don't want to trust to statistics. The transmitter should be emitting a preamble to synchronize the clocks.

--
"If you want to post a followup via groups.google.com, don't use
 the broken "Reply" link at the bottom of the article.  Click on 
Reply to
CBFalconer

It can be better than might appear, even in such a case. But you (and others) are perfectly correct that it is not necessarily a good model - it depends.

Yes. And even more on interactions between the packets, whether in the sources, the sinks or the transport.

That's only because of ignorance of previous work. Most of the serious work in this field was done decades ago by statisticians working in the telecommunications industry - there was a vast body of knowledge when I did my diploma in statistics (c. 1970), with both a great deal of theory and experimental data.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Reply to
del cecchi

I think it's extremely unfair to blame comp-sci for linked lists and buffer overflows; most of that stuff was invented before comp-sci was being taught.

My favourite example: people without comp-sci hardly ever get floating point comparison right.
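For readers who haven't hit this before, here is a minimal Python sketch (my illustration, not from the thread) of the usual pitfall and the standard fix:

```python
import math

# Naive equality fails: 0.1 + 0.2 is not exactly 0.3 in binary floating point.
a = 0.1 + 0.2
print(a == 0.3)  # False

# The usual fix: compare against a combined relative/absolute tolerance.
def nearly_equal(x, y, rel_tol=1e-9, abs_tol=1e-12):
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

print(nearly_equal(a, 0.3))  # True
print(math.isclose(a, 0.3))  # True; the stdlib equivalent (Python 3.5+)
```

The tolerance values are a judgment call that depends on the magnitudes and the arithmetic that produced the operands, which is exactly why people get it wrong.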

Casper

--
Expressed in this posting are my opinions.  They are in no way related
to opinions held by my employer, Sun Microsystems.
Reply to
Casper H.S. Dik


But who was it that put linked lists and fixed size buffers with undefined size inputs in widely used software? Engineers? I can't imagine just reading until end of record with no control or checking on the input. A program that dies due to the ping of death? Who wrote this stuff? Did they test it? And years later we still have it?

I'm not trying to insult folks or start a flame war but some of these things boggle the mind.

del cecchi

Reply to
del cecchi

Firstly, please separate the two. There is nothing wrong with using linked lists appropriately. They should NOT be used when a necessary primitive is to index by entry number, and you need to take care to avoid fragmentation, but that is about all.
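To make the "index by entry number" caveat concrete, a small Python sketch (illustrative only, not from the thread): reaching the n-th entry of a singly linked list means following n links, whereas an array gets there in one address calculation.

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def nth(head, n):
    """Indexing a linked list is O(n): walk n links from the head."""
    node = head
    for _ in range(n):
        node = node.next
    return node.value

# Build the list 0 -> 1 -> 2 -> 3.
head = None
for v in reversed(range(4)):
    head = Node(v, head)

print(nth(head, 2))  # 2, found only after traversing two links

# A contiguous array answers the same query in O(1) via address arithmetic.
arr = [0, 1, 2, 3]
print(arr[2])  # 2, no traversal needed
```

Linked lists remain the right tool when the workload is insertion and deletion at known positions rather than random access by index.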

The latter was widespread in the 1960s in commercial systems written in various assemblers.

The latter is provably insoluble, though it is possible to write code that is resistant to it.

Computer scientists. Students. Employees of software houses. etc. Generally, they didn't test it. And, as with the MVT Linkage Editor, we had it 20 years after it should have been buried with a stake through the heart of the last listing.

They do, indeed. But your viewpoint of what happened and who was responsible is a bit simplistic. It is very messy, and only SOME of the blame should be assigned to computer scientists. Here is a very rough summary of one viewpoint on it:

Back around 1970, most computer scientists damned Fortran for being unreliable (i.e. uncheckable), and supported Pascal, Lisp etc. (to taste). Now, they were unfair to blame Fortran, as it WAS checkable, but had a point about the programming styles.

In parallel, AT&T Bell Labs produced a semi-portable assembler and computer scientists' experimental bench (C and Unix), with the full knowledge and conscious decision that diagnostics, robustness and so on were largely omitted both for clarity and to allow the experimenters maximum flexibility.

In the 1970s, the first generation of people who had been trained as computer scientists became professors etc., and regrettably many of them took the attitude that it was someone else's business to turn their leading-edge ideas and demonstrations into real products. The "someone else" was assumed to be computing service staff, vendors' engineers etc. Those people brought C and Unix into the mainstream.

Round about 1980, mainly in the USA and UK, governments started demolishing central computing services and (in the USA) giving almost unlimited budgets to leading-edge computer science departments for industrial collaboration (culminating in things like Project Athena). The theory was that industry would behave as in the previous paragraph.

What is now called Thatcherism in the UK (though it predated her), and can be described as dogmatic divide-and-conquer monetarism, meant that many of the traditional links and controls (NOT just university computing services) were emasculated or destroyed. In subsequent years, this affected standards organisations, government quality control agencies and so on. And industry often did the same with their internal equivalents - which caused the FDIV bug, at least one of IBM's disk fiascos, and many other failures.

Monetarism in turn gave a major boost to marketing over engineering, which was synergistic with things like the IBM PC, leading to a wide acceptance that it is better to have leading-edge gimmicks than products that actually work. People may forget that a single wayward program (and EDLIN was one that could do it) would not just crash the system but could trash the whole filing system, irrecoverably. But that was OK. That was the point at which I refused to move with the times, and suffered as you might expect.

The whole of this came together, with the result that the traditional enemy camps (Fortran, Cobol, Pascal, Lisp, compiler-generated checking and other aspects of software engineering) got shoved into a ghetto and deprecated as obsolete. Most computer science work on software engineering is on methodologies (often completely unrealistic) and on largely irrelevant tools (i.e. they tackle needs that experienced programmers don't have). But EXACTLY the same is true of vendors' products, because it is the zeitgeist that has changed.

Note that this has now reached even standards organisations, where the misguided but traditional (i.e. precise and consistent) POSIX was taken over by the woolly and inconsistent so-called Single Unix Standard. And it has reached hardware, where many vendors now design their firmware to reduce the visibility of failures rather than make their products more robust.

Who comes out of this with credit? Damn few organisations and people.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

We can only punt an answer to that if the packets are independent. They'd be the first I've come across if they are. The traffic patterns are the required info, plus the allowable latencies.

Neither Poisson nor Gaussian statistics do a good job in most cases, or if they do, the load is so low you need not bother at all.
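For reference, this is what the textbook Poisson/exponential assumptions buy you: the M/M/1 mean-wait formula, sketched below in Python with made-up rates (my illustration, not from the thread). The point is how sharply the wait blows up as utilisation approaches 1 - and that none of it holds once arrivals are bursty or correlated.

```python
def mm1_mean_wait(lam, mu):
    """Mean time waiting in queue for an M/M/1 system: Wq = rho / (mu - lam),
    where rho = lam/mu must be below 1 for the queue to be stable."""
    rho = lam / mu
    assert rho < 1, "unstable: arrival rate must stay below service rate"
    return rho / (mu - lam)

mu = 10.0  # service rate, e.g. 10 packets/sec (made-up figure)
for lam in (5.0, 9.0, 9.9):
    print(lam, mm1_mean_wait(lam, mu))  # 0.1, 0.9, then 9.9 seconds
```

Going from 50% to 99% utilisation multiplies the mean wait by roughly a hundred, which is why "the load is so low you need not bother" is often the only regime where the simple model is safe.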

--
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
Reply to
prep

I don't think it makes him look foolish at all; I read the same meaning into your words. The fact that your reply borders on being a personal attack rather than striving to correct any misunderstanding adds weight to Ed's interpretation.

You exhibited the same behavior towards me when you wrote

Note that this was in a thread posted to comp.arch.embedded with the phrase "RTOS" in the subject line - off-topic as well as being overly confrontational.

I advise you to examine your posting style. It is not conducive to a civil and reasoned technical discussion of the subject at hand.

"A little rudeness and disrespect can elevate a meaningless interaction into a battle of wills and add drama to an otherwise dull day." -Calvin discovers Usenet

Reply to
Guy Macon

That tends to happen to people who set their line wrap too small and thus mess up the number of ">" characters in the replies.

Or, to put it another way...

people

replies.

Reply to
Guy Macon

[snip]

Everyone else here manages to post without screwing up the ">" characters. Please study how the rest of us manage that and learn how to do it in your posts. Thanks!

Reply to
Guy Macon

No, it happens to those who use a newsreader too dumb to not wrap quotations, and to quotations from those readers too dumb to wrap the originals at 65 or so.

--
"If you want to post a followup via groups.google.com, don't use
 the broken "Reply" link at the bottom of the article.  Click on 
Reply to
CBFalconer

I was hoping to get the post-mangler to see that he has a problem before getting into the specifics of the best way to solve it. This being Usenet, there is a large chance that the thread will continue with: "I *like* mangling replies! It's convenient!" :(

Reply to
Guy Macon

Let me explain in VERY, VERY simple words.

One of my hobby horses is the need for a precise computational model before starting any design, and another is the need for precisely defined, logically consistent specifications. I am probably rather a bore on both of them, and anyone following comp.arch for more than a few days would have difficulty not noticing my views. Enough threads where I have banged on about those have been cross-posted to comp.arch.embedded that I am surprised you haven't noticed.

Secondly, the fact that both he and you read the same thing into my words merely shows that you are unaware that design comes at many levels. Even if you were to regard them as ambiguous (which IS reasonable, if you were unaware of my views), it is absolutely clear that there were two interpretations. At least if you have any experience of designing practical, complex systems, that is.

I have pointed out the difference between "broad brush" and detailed designs in other postings, and don't plan to expand them here. But, if you are unaware of the vast number of the former that have been produced by computer scientists and have been quite impractical to turn into working, detailed designs, then I am afraid that I have to say your experience is severely limited.

Regards, Nick Maclaren.

Reply to
Nick Maclaren
[snip]

That sounds more like numerical analysis than compsci.

And just what is this wonderful trade secret that compsci people know about FP comparison that others don't?

Reply to
Everett M. Greene

No. I won't let you do that. I have no further interest in reading anything else after the above, so I hit the delete to end of file key without reading the rest of your post, and I will now hit the killfile key so that I will not see any future posts by you. Bye-bye, flamer.

*plonk*
Reply to
Guy Macon

We will see if the defaults in Outlook Express are better than the defaults in Thunderbird.

Apparently this thread, or this part of it, has been trimmed to follow up only to comp.arch.embedded. So bye bye. Sorry for disturbing you folks.

del

Reply to
del cecchi

I'd put queueing theory into probability theory, i.e., mathematics.

Jan

Reply to
Jan Vorbrüggen

Either there or into statistics, a very closely related branch of mathematics.

Most queuing theory taught in computer science is bad, because it is over-simplistic. There clearly isn't time to go into much detail (there isn't even in full-time statistics courses), but it is really bad to omit the general probabilistic background that shows that some of the standard assumptions are not universally true. And, as probabilists and statisticians have known for centuries, some of the problem cases are common in practice.

For example, it is common for the distribution of file sizes to have effectively no mean - which has more consequences in queuing theory than might appear. Typical computer science concentrates on what is now called 'discrete mathematics', which gives a false sense of simplicity. A related example here is the bogus claim that bucket sorting is O(N) in the number of elements, and that has close analogues in common mistakes in queuing theory.
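One way to see the "no mean" point, using a Pareto tail (a distribution often fitted to file sizes; my choice of example, not Nick's): when the tail index is below 1, the average of the values below any cutoff T keeps growing as T grows, so no finite mean exists. A short Python check of the closed-form truncated mean:

```python
def truncated_mean(a, T):
    """E[X; X <= T] for a Pareto density f(x) = a * x**(-a - 1), x >= 1.
    Integrating x * f(x) from 1 to T gives a/(1 - a) * (T**(1 - a) - 1),
    which diverges as T -> infinity whenever a < 1."""
    return a / (1 - a) * (T ** (1 - a) - 1)

a = 0.9  # tail index below 1: no finite mean
for T in (1e3, 1e6, 1e9):
    print(T, truncated_mean(a, T))  # keeps growing: ~9.0, ~26.8, ~62.5
```

In queuing terms this means sample averages of service times never settle down, so any analysis that plugs in "the mean file size" is answering a different question from the one the real system poses.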

Regards, Nick Maclaren.

Reply to
Nick Maclaren

(snip)

Every time I am in a building with more than one elevator, press the button, wait a long time, and then see more than one arrive at the same time, I wonder why they don't use any theory at all in programming them.

Not that I know much about queuing theory, but at least I know that there is a theoretical basis for it. It seems that people in that business should know something about it.

-- glen

Reply to
glen herrmannsfeldt
