In article , glen herrmannsfeldt writes:
|> Nick Maclaren wrote:
|>
|> > Most queuing theory taught in computer science is bad, because it is
|> > over-simplistic.  There clearly isn't time to go into much detail
|> > (there isn't even in full-time statistics courses), but it is really
|> > bad to omit the general probabilistic background that shows that
|> > some of the standard assumptions are not universally true.  And, as
|> > probabilists and statisticians have known for centuries, some of the
|> > problem cases are common in practice.
|>
|> Every time I am in a building with more than one elevator, press
|> the button, wait a long time, and then see more than one arrive at
|> the same time, I wonder why they don't use any theory at all in
|> programming them.
|>
|> Not that I know much about queuing theory, but at least I know that
|> there is a theoretical basis for it.  It seems that people in that
|> business should know something about it.
Yes, though that is a bad example.  One could use queuing theory in a building with many elevators, but it doesn't really help with that particular problem.  Indeed, it is quite possible that the cause of the bunching behaviour is precisely that people HAVE used naive queuing theory without understanding its limitations.
That was the cause of a lot of the technical and political problems with early dial-up modems, and more recently with domestic Internet access.  The model being used was that of a large number of similar, temporally independent clients - and, as those of us with experience could have told the designers, that model doesn't match the actual usage.
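A toy illustration of why the independence assumption matters (this is a sketch of my own, with purely illustrative parameters, not a model of any real system): feed a single-server FIFO queue at the same average rate, once with independent Poisson arrivals (exponential gaps, as the naive model assumes) and once with bursty arrivals (a hyperexponential mixture of short and long gaps).  Waiting times come from the Lindley recursion W[n+1] = max(0, W[n] + S[n] - A[n+1]).  For the Poisson case, M/M/1 theory predicts a mean queueing wait of rho/(mu - lambda) = 0.8/0.2 = 4 time units; the bursty traffic, at the SAME utilisation, does far worse.

```python
import random

def mean_wait(interarrival, n=200_000, service_rate=1.0, seed=1):
    """Average queueing wait in a single-server FIFO queue,
    via the Lindley recursion over n simulated customers."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(service_rate)   # exponential service time
        a = interarrival(rng)               # gap until next arrival
        w = max(0.0, w + s - a)             # Lindley recursion
        total += w
    return total / n

lam = 0.8   # arrival rate; utilisation rho = lam/mu = 0.8

# (a) Independent Poisson arrivals: exponential gaps, mean 1/lam = 1.25.
poisson = lambda rng: rng.expovariate(lam)

# (b) Bursty arrivals: 90% short gaps (mean 0.25), 10% long gaps
#     (mean 10.25).  Overall mean is 0.9*0.25 + 0.1*10.25 = 1.25,
#     i.e. exactly the same average rate as (a).
def bursty(rng):
    if rng.random() < 0.9:
        return rng.expovariate(1 / 0.25)
    return rng.expovariate(1 / 10.25)

if __name__ == "__main__":
    print("Poisson arrivals, mean wait:", mean_wait(poisson))
    print("Bursty arrivals,  mean wait:", mean_wait(bursty))
```

Run it and the Poisson case comes out near the theoretical value of 4, while the bursty case is several times worse - same load, same server, very different queue.  That is the trap: any capacity planning done with the first model will badly under-provision for the second kind of traffic.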
Regards, Nick Maclaren.