Re: My Vintage Dream PC

He's reinventing what didn't work well at all. Furthermore, it is extremely insecure to insist that the computer system have a single point of failure which includes the entire running monitor.

/BAH

Reply to
jmfbahciv

Times have changed, guys. When CPU chips have 64 cores each running 4 threads, scattering bits of the OS and bits of applications all over the place dynamically, and virtualizing a dozen OS copies on top of that mess... is that going to make things more reliable?

Furthermore,

Fewer points of failure must be better than many points. Few security vulnerabilities must be better than many, many. The "entire running monitor" could be tiny, and could run on a fully protected, conservatively designed and clocked CPU core. Its MTBF, hardware and software, could be a million hours.

Hardware basically doesn't break any more; software does.

The virtualizing trend is a way to have a single, relatively simple kernel manage multiple unreliable OSs, and kill/restart them as they fail. So why not cut out the middlemen?

John

Reply to
John Larkin


IFF the monitor is well designed and written. If MS develops it the corruption will be coming from the monitor.

Reply to
JosephKK

Not really. The relative speeds of the components seem to stay in the same proportions.

First of all, the virtual OSes will merely be apps and be treated that way. Why in the world would you assume 4 threads per core?

You need to think some more. If the single-point failure is the monitor, you have no security at all.

It isn't going to be tiny. You are now thinking only about the size of the code. You have to include its database and interrupt system. The scheduler and memory handler alone will be huge in order to handle I/O.

That doesn't matter at all if the monitor is responsible for the world power grid.

That is a very bad assumption. You need soft failovers. Hardware can't survive flooding, nor falling into a fault caused by an earthquake, or a bomb, or a United Nations quarantine [can't think of the word where a nation or group are declared undesirable].

Honey, you still need an OS to deal with the virtuals. Virtuals are applications.

/BAH

Reply to
jmfbahciv

MS DOESN'T KNOW HOW TO DEVELOP!! That's the point I've been trying to make. It's a distribution business and that is rooted deep in its folklore.

/BAH

Reply to
jmfbahciv

A salesman who thinks he can program is even more dangerous than a programmer with a soldering iron.

--
Roland Hutchinson		

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger  ( http://tinyurl.com/RolandIsNJ )
Reply to
Roland Hutchinson

There is a "proof by existence" that such a monitor is possible: QNX, with a 15k kernel. It doesn't know what a file is, but it knows about file/socket handles, messages, processes, memory, and I/O privileges, and about who gets to see each I/O port.

The file system (well, multiple file systems) is in a suitably privileged user process. So are name servers, databases, TCP/IP, device drivers, etc.

Extending this is dead easy. And each component can run in a separate thread. And they can run on other systems too, with a simple proxy that extends messages over networks. A workgroup of machines can be an integrated cluster.

It has been done in 15k. On Intel hardware. And we have had it available for 27 years.
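The architecture described above can be caricatured in a few lines: the "kernel" knows nothing about files; it only routes messages between processes, and services such as the file system live outside it as ordinary servers. This is a toy Python sketch of that idea, not QNX's actual API (which uses a synchronous MsgSend/MsgReceive/MsgReply rendezvous); all names here are illustrative.

```python
class Kernel:
    """A 'microkernel' that does one thing: route messages to servers."""
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        self.servers[name] = handler

    def send(self, name, msg):
        # Synchronous send/reply; the kernel has no idea what the
        # message means -- only which server should get it.
        if name not in self.servers:
            return ("ENOSYS", None)
        return self.servers[name](msg)

def fs_server(msg, _store={}):
    """A 'file system' living entirely outside the kernel."""
    op, path, data = msg
    if op == "write":
        _store[path] = data
        return ("OK", None)
    if op == "read":
        return ("OK", _store.get(path))
    return ("EINVAL", None)

kernel = Kernel()
kernel.register("fs", fs_server)

kernel.send("fs", ("write", "/etc/motd", "hello"))
status, data = kernel.send("fs", ("read", "/etc/motd", None))
```

The network-transparency point follows from the same shape: a proxy that forwards the same messages over a socket makes a remote server indistinguishable from a local one, without the kernel growing at all.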

QNX has excellent MTBFs. This OS can even have uptimes that exceed the age of the actual hardware it runs on now.

One of the places they use QNX. No kidding.

-- mr

Reply to
Morten Reistat

Nice unsubstantiated, peanut-gallery comment.

WHAT have you heard?

Mine runs fine. Vista has run fine for over three years, and W7 has been running fine for several months now. You naysayer retards are idiots.

I love it how folks that have ZERO actual experience with things expound on them like they actually know what is going on.

You do not.

Reply to
FatBytestard

Core kernel subprocesses can evolve to the point where giving a dedicated core to each sub-process is the prudent way to handle them. Remember when Bill said that 640kB was "enough"? There will come a day when the kernel is so big, and has so many functions to manage, that in a multi-core world the best solution will be a segmented kernel implementation.

Networking, for example, WITH security built into the kernel, would be best handled on a locked, protected core that only the main kernel is able to access. The kernel could become a manager of hardware between other segments of that hardware, each running its own little kernel segment on its own CPU. Not unlike the JBOD paradigm. A JBOCores thing.

The Cell CPU does networking at near wire speed. Usually such numbers are not attainable due to various protocol-overhead problems.

8.5 out of 10 Gb/s is pretty damned good.

Hardware IP encryption and HAIPE and such are in your future, if you have half a brain and can see the bigger picture. The Cell is easily superscalar.

Reply to
FatBytestard

So what?

Without you providing a more detailed description, including what exactly your cosmic ray is supposed to have done, your remark is just shy of meaningless.

Have you seen (now or ever) any modern, embedded systems under operation?

Your mindset appears to be the single point of failure.

Reply to
FatBytestard

But memory caches, buffers, etc. HAVE changed, and your analysis (and training) is about three decades OLD, minimum.

Speed scales over time. The number of transistors that can be integrated into a given die area scales over time.

We all already know that. Your reply is meaningless.

The paradigm by which we utilize the hardware can and has changed, and will continue to change. You claiming it is all the same is a sad hilarity.

Your mindset is what has stagnated.

Do you even know what current mode logic is, for example?

Reply to
TheQuickBrownFox

Have you seen ANY modern systems? Do you know how seldom bit errors occur these days? Do you know what ECC can do? Do you know how seldom the ECC area needs to be referred to on systems where it is utilized? We are talking over periods of years!

When do we get to correct your errors? You were struck by several cosmic rays. Errant data spews forth. When are you going to push your reset button? You ARE the weakest link.

Mindsets like yours have stutter-stepped man's advancement rate for centuries!

Reply to
TheQuickBrownFox

You seem to be deluded by the belief that BAH will listen to reality. Only those systems that were designed in the 60s to run on hardware of the 60s are acceptable to her.

Reply to
Lawrence Statton

Yes. Applications are million-line chunks written by applications programmers who will make mistakes. Their stuff will occasionally crash. And the people who write device drivers and file systems and comm stacks, while presumably better, make mistakes too. So get all that stuff out of the OS space. Hell, get it out of the OS CPU.

How can an OS ever be reliable when twelve zillion Chinese video card manufacturers are hacking device drivers that run in OS space?

The top-level OS should be small, simple, absolutely in charge of the entire system, totally protected, and never crash.

Why not?

John

Reply to
John Larkin

Indeed, Windows 7 (of which you can download the final beta and run it for free for the next year or so) is widely held to be, as advertised, the most secure Microsoft operating system ever.

Just remember that damnation with faint praise is still damnation.

--
Roland Hutchinson		

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger  ( http://tinyurl.com/RolandIsNJ )
Reply to
Roland Hutchinson


I see your point. Get some crap going just well enough to be useful and pretend it is the second coming. Very marketeer. Very MS. Very much not developing (let alone properly). And currently they always go for the eye heroin (far worse than eye candy: kinda sweet but addicting), always leaving you "hungry (addicted)" for more. Gee, why fix the problem? Just look at this "neat" eye heroin.

Reply to
JosephKK

And all aware persons know which large companies attract them intentionally.

Reply to
JosephKK

No problem. If "core" in this context means memory to you, then ECC will fix that problem. Most mission-critical computers use ECC memory. You will just get a report to a log that a bit was flipped and was fixed.

If "core" means one processor core, then the story is more difficult, but usually caches are protected by ECC, and datapaths have different forms of protection. Also, if a cosmic ray hits regular control logic, the probability of something bad happening is quite low (10% derating is quite often used, because not all logic nodes are relevant in each cycle).
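To make the "a bit was flipped and was fixed" point concrete, here is a toy sketch of single-error correction in the Hamming(7,4) style that memory ECC builds on. Real DIMM ECC uses wider SECDED codes over 64-bit words; this 7-bit version, with illustrative function names, just shows how the syndrome pinpoints and repairs the flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    c = [0] * 8                      # index 0 unused; positions are 1-based
    c[3], c[5], c[6], c[7] = d       # data bits go in non-power-of-two slots
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
    return c[1:]

def hamming74_correct(word):
    """Return (data_bits, error_position); position 0 means no error."""
    c = [0] + list(word)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:
        c[syndrome] ^= 1             # the syndrome IS the flipped bit's position
    return [c[3], c[5], c[6], c[7]], syndrome

data = [1, 0, 1, 1]
word = hamming74_encode(data)
flipped = list(word)
flipped[4] ^= 1                      # simulate a cosmic-ray upset at position 5
recovered, err_pos = hamming74_correct(flipped)
```

Any single flipped bit in the 7-bit word is corrected, and `err_pos` is exactly what would go into the log entry Kim describes.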

If I remember correctly, the cosmic neutron/proton effects are also not as bad as the alpha radiation caused by the semiconductor itself and the packages.

--Kim

Reply to
Kim Enkovaara

Most OSes are threaded, but I don't think dedicating a core to a thread would ever be a good idea, no matter what resources you have. The reason the systems are threaded is that most of what they do is waiting.
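The "most of what they do is waiting" point is easy to demonstrate: when tasks spend their time blocked on I/O or timers, the waits overlap, so threads buy throughput without needing a core each. A minimal sketch, using a sleep as a stand-in for a blocking read:

```python
import threading
import time

def wait_for_io():
    time.sleep(0.2)   # stand-in for a blocking disk or network wait

start = time.monotonic()
workers = [threading.Thread(target=wait_for_io) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
elapsed = time.monotonic() - start
# The four 0.2 s waits overlap, so wall time stays near 0.2 s, not 0.8 s,
# even on a single core -- which is why threads per se don't need cores.
```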

Networking is probably the exception.

IBM is moving in this direction, with dedicated special-purpose processors for special functions. Only time will tell, but I'm not sure this is a good idea, except perhaps in marketing terms.

Reply to
Peter Flass

Sign over _this_ door "Alt.Folklore.Computers"

--
greymaus
Reply to
greymausg
