Re: My Vintage Dream PC

Good grief, Andrew, you're the only other person in this thread that's willing to think.

John

Reply to
John Larkin

No, it doesn't. It's multithreaded, and the threads are free to migrate from core to core (there is a cost associated with this, to keep things cached).
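[The migration cost mentioned above is why some latency-sensitive workloads pin a thread to one core so its cache stays warm. A minimal sketch in Python; `os.sched_setaffinity` is Linux-only, so this helper (my own name for it, not from any library discussed in the thread) degrades to a no-op elsewhere:

```python
import os

def pin_to_cpu(cpu: int) -> bool:
    """Pin the calling process to a single CPU so the scheduler cannot
    migrate it, preserving cache warmth.  Returns True on success,
    False where affinity calls are unavailable (e.g. macOS/Windows)."""
    if not hasattr(os, "sched_setaffinity"):
        return False
    try:
        os.sched_setaffinity(0, {cpu})  # 0 = the calling process
    except OSError:
        return False
    return os.sched_getaffinity(0) == {cpu}
```

Pinning trades away the scheduler's freedom to balance load across cores, which is exactly the cost/benefit the post refers to.]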

Though some of the low-level IO required is migrating back into the kernel.

Some of the filesystems. Actually, the only FUSE filesystem I've seen that wasn't built on top of an existing in-kernel filesystem was written by a student of mine to read CDs...

Reply to
Joe Pfeiffer

Slower than dog S%%t.

Reply to
Peter Flass

They haven't changed all that much in the last 40. Sure, things are bigger and faster now, but people still keep making the same stupid mistakes.

Reply to
Peter Flass

As did OS/2 years ago. I think the file systems run in ring 2.

Reply to
Peter Flass

Early versions of Windows NT (...which begat Win2K, then XP, then Vista) had the GUI in user space -- it was moved back into the Kernel in something like NT 4 as a means to increase performance.

The file systems have always been in Kernel space though, AIUI.

Reply to
Joel Koltner

Mine is paisley, how about yours?

Reply to
JosephKK

That's one heck of a Linux box.

--
Roland Hutchinson		

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger  ( http://tinyurl.com/RolandIsNJ )
Reply to
Roland Hutchinson

20+ years ago we were doing real time NTSC DVE with 768 KB of 12 bit RAM, on a Z80B. It was a 'Vital Industries Squeeze Zoom'. It filled a full relay rack, and had a 1000 amp 5 volt power supply, along with multiple supplies for the analog circuits. It was an amazing design for the mid '80s. It could also take non-broadcast-quality video and time base correct it for broadcast.
--
You can't have a sense of humor, if you have no sense!
Reply to
Michael A. Terrell

Are you certain about that? Today we have the possibility to add millions of gates just to protect the chips from unexpected events, radiation for example. Although the chips contain more and more transistors, the FIT rates have not exploded. This has been achieved by designing redundancy into the chips and also by fine-tuning the silicon process, and by better materials control (less alpha radiation).
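[One common form of the on-chip redundancy Kim describes is triple modular redundancy (TMR): three copies of a logic block feed a majority voter, so a single-event upset corrupts only one replica and the vote masks it. A toy bitwise sketch of the voter (the function name is mine, not from any particular design):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit agrees with at least
    two of the three replicated inputs, masking a single-bit upset."""
    return (a & b) | (a & c) | (b & c)

# A single-event upset flips one bit in one replica...
word = 0b1011_0010
upset = word ^ 0b0000_1000      # replica b corrupted by radiation
# ...but the majority vote still recovers the original value.
assert tmr_vote(word, upset, word) == word
```

In silicon the same vote is three gate copies plus a few voting gates per bit, which is where some of those "millions of protective gates" go.]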

The only catastrophic event from cosmic radiation is a latchup in the cells, but that is a very rare event, or nonexistent, depending on the process, wafer type and chip design.

And of course you could run two kernels in parallel, and switch from active to passive if the active one notices problems in its environment. This has been done for decades in telecoms equipment. Often it is easier to notice the problem early enough to recover from it by doing active/passive switchover than to fix the problem in HW for the active kernel.
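[The active/passive switchover described above can be sketched as a pair of units supervised by a health check: serve from the active unit, and promote the standby the moment the active unit's check fails. This is a minimal illustrative model of the idea, not any real telecom framework:

```python
class RedundantPair:
    """Active/passive pair: serve from the active unit, promote the
    standby as soon as the active unit's health check fails."""

    def __init__(self, active, standby):
        self.active, self.standby = active, standby

    def serve(self, request):
        if not self.active["healthy"]:
            # Switchover: recover by promotion rather than by
            # repairing the failed unit in place.
            self.active, self.standby = self.standby, self.active
        return f'{self.active["name"]} handled {request}'

pair = RedundantPair({"name": "A", "healthy": True},
                     {"name": "B", "healthy": True})
pair.serve("req1")            # served by A
pair.active["healthy"] = False
pair.serve("req2")            # A fails its check; B takes over
```

The point Kim makes is captured in `serve`: detecting "unhealthy" early is a much simpler predicate than diagnosing and fixing the fault while the active unit keeps running.]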

--Kim

Reply to
Kim Enkovaara

Nothing so good, more like TSO.

Reply to
JosephKK

I will not ever let you design any life- and safety-critical stuff that I have anything to do with.

Reply to
JosephKK

When enough people have enough compute power, the software they use will begin to use that extra CPU resource.

/BAH

Reply to
jmfbahciv

Nope. The extra pluses are to fool you into thinking it's faster... than molasses in January in the Northern Hemisphere.

/BAH

Reply to
jmfbahciv

Nope. You are still assuming I'm longing for the old gear which I am not.

/BAH

Reply to
jmfbahciv

But not OSes which need to supply computing resources for general timesharing.

Lots of them.

No.

My observation is that the computing biz has cycles. What I'm seeing now is what we saw in the 70s. The hardware is too slow for customer needs so the software has to compensate. Since hardware speeds were increased over the last two decades to supply adequate computing power, nobody had to write software well. Now that the hardware is hitting a silicon ceiling, the focus is slowly, IMO, going back to using software solutions to squeeze out the extra performance.

/BAH

/BAH

Reply to
jmfbahciv

I don't think they will be rare events. Think about just-in-time executables.

The problem I'm seeing here (with the people who are posting) is that they are limited to thinking in terms of single-user, single-owner systems, where single-user implies a single job or task and not multiple tasks per human being.

/BAH

Reply to
jmfbahciv

Nope, but I was thinking about all the scanning that gets done for security.

Will this gear survive a vacuum running near it?

/BAH

Reply to
jmfbahciv

ObNewEnglandFolklore:

Ah, that would explain the similarity between C++ and the Great Molasses Flood.

--
Roland Hutchinson		

Reply to
Roland Hutchinson

If I'm doing OS development and I need a UUO which gives me access to the nether regions of all customer systems so I can distribute a file FOO.EXE, my proposal would be turned down by TOPS-10 but accepted by NT. Assume that FOO.EXE is an executable that is required to run on all systems.

NT has to make the tradeoff decision because their business is to distribute FOO.EXE while TOPS-10's is to provide general timesharing to users without allowing disasters to happen. Allowing someone outside the customer site to copy any file at any time to any place was considered a security risk.

/BAH

Reply to
jmfbahciv
