Re: My Vintage Dream PC

Interrupts are not required, since you can do that communication by round-robin polling. You just need a fixed shared-memory area with a separate word/block/register for each core. Some of the requests could be implemented by the cores setting a request-to-talk bit in a hardware register. An alternative is serial links between the cores. This has to be defined in the high-level design of the hardware.
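A minimal sketch of that round-robin scheme, assuming an invented mailbox layout (NCORES, the per-core slot fields, and the request-to-talk flag are illustrative, not any real chip's register map):

```c
#include <stdint.h>

#define NCORES 4

struct mailbox {
    volatile uint32_t request;  /* core sets this to ask for service */
    volatile uint32_t payload;  /* one word of data per core */
};

/* Fixed shared-memory area: one slot per core (on-chip SRAM, ideally). */
struct mailbox shared_area[NCORES];

/* The servicing core sweeps every slot in fixed order.  No interrupts:
   worst-case latency is one full sweep.  Returns the number of
   requests serviced this pass. */
int round_robin_poll(uint32_t handled[NCORES])
{
    int serviced = 0;
    for (int core = 0; core < NCORES; core++) {
        if (shared_area[core].request) {
            handled[core] = shared_area[core].payload; /* consume */
            shared_area[core].request = 0;             /* acknowledge */
            serviced++;
        }
    }
    return serviced;
}
```

Clearing `request` last-thing acts as the acknowledge; a real chip would want the appropriate memory barriers around it.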

There is a second priority interrupt on the OS core: the watchdog timer.

Probably via that core's shared memory area.

On-chip shared RAM is needed. It could be off-chip, but that is slow.

Multi-core systems are very complex.

One of the reasons the chip manufacturers can sell new chips with faster clock speeds.

Andrew Swallow

Reply to
Andrew Swallow

Unfortunately, most Datamation stuff isn't online. A few articles have been scanned and/or re-keyed by individuals, but Google hasn't yet picked up the magazine. Sometimes the ads are more interesting than the articles.

Reply to
Peter Flass

I think you've just re-invented CICS.

Reply to
Peter Flass

No, the part he's right about is that it really is possible to distribute a lot of the functionality of a conventional kernel (even if he can't spell it) among several services, so the microkernel doesn't have to be involved with the details of the IO. The slave does have to ask the boss for the resources as you say, but only once at system startup and never again.

The part he's missing is in thinking this is new, and thinking it'll somehow work better for general computing now on a dozen cores or in ten years on 1000 cores than it did in 1990 on one core.

Reply to
Joe Pfeiffer

Not so. It can be used to design, analyze, clarify and document hardware logic design, VHDL, even relay logic and analog design.

Proper pseudocode looks a lot like BASIC.

John

Reply to
John Larkin

That sounds good, except that the hardware trend is lots of processors on one piece of silicon, with central cache and one front-side bus out to the world. In that case, you may as well have one of the processors be the manager.

I bet they'll sell chips with some number of failed processors, whether that's visible or not. And some special-purpose stuff, like Ethernet MACs, graphics controllers, multibank DRAM controllers, maybe disc controllers. A few big silicon companies (hopefully more than one!) will gobble up all the system functions.

John

Reply to
John Larkin


Say "structured BASIC" and I would certainly agree. Often there is a thing that looks like a "switch" or "case" statement with more words:

The table entry can have the following values:

Null pointer: return "not found"

Dead entry: scan down the table

Obsolete: look in the new table

Good pointer: this is the one we want
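Rendered in C rather than structured BASIC, that four-way dispatch has the same switch shape. The entry kinds, field names, and `lookup()` helper here are invented for illustration:

```c
#include <stddef.h>

enum entry_kind { NULL_PTR, DEAD, OBSOLETE, GOOD };

struct entry {
    enum entry_kind kind;
    struct entry *new_table;  /* OBSOLETE: where to look instead */
    size_t new_len;           /* length of that new table */
    int value;                /* GOOD: the payload we want */
};

/* Returns the value of the GOOD entry, or -1 for "not found". */
int lookup(struct entry *table, size_t len, size_t i)
{
    while (i < len) {
        switch (table[i].kind) {
        case NULL_PTR:                 /* return "not found" */
            return -1;
        case DEAD:                     /* scan down the table */
            i++;
            break;
        case OBSOLETE:                 /* look in the new table */
            return lookup(table[i].new_table, table[i].new_len, 0);
        case GOOD:                     /* this is the one we want */
            return table[i].value;
        }
    }
    return -1;
}
```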

Reply to
MooseFET

The boss loads code into a device driver CPU (or tells a file manager CPU to do it), sets up permissions, and kicks it off. After that, app CPUs can make requests to the driver. The Boss just occasionally looks over the details from a dignified height. If anybody messes up, that will trigger a system call to El Bosso, who will come in and fix things up. Heads will roll. It's the difference between intelligent management and annoying micromanagement.

Memory management, at the highest level, yes. I/O and scheduling, no. At Boss level, there's not a lot of scheduling to do. If a worker-bee CPU wants to run its own subscheduling thing, that's fine; some will, some won't.

That's nonsense from any number of standpoints. Just a few are...

  1. Uniprocessor systems manage to have a single CPU do *everything* now, including running all the drivers, stacks, GUIs, fine-grain memory management, context switching, scheduling, ***and all the applications***. So why couldn't one CPU do a small fraction of this?
  2. The Boss, like any sensible Boss, doesn't have a lot to do. She has delegated all the real day-to-day work, and her only job is to make sure the minions are doing what they were told. It's that way because she designed it to be that way.
  3. If The Boss were to drive off a cliff (the allegorical equivalent to halting the Boss CPU) the organization would keep running just fine... for a while at least. Smart Bosses set up the system that way, so they can go to management conferences in Tuscany.

Think big!

BAH yourself!

John

Reply to
John Larkin

It's the disc/disk thing. It's done both ways. Actually, it makes sense to distinguish between the Olympic thing you throw, and a computer storage gadget; ditto a bit of a seed versus some silly code.

The tightly-coupled hundreds-of-cores-on-a-chip things are coming; silicon and interconnect issues make it so. Nobody here seems much interested in how they might best be used. Been there, done that, nothing is ever new, move on, nothing to see here; is the entire computer community this dull?

John

Reply to
John Larkin

Current uniprocessor systems with big sloppy OSs and weak protections are a major cost to individuals and to industry, and a very serious threat to national security. Scattering more copies of the same insecure OSs across more cores will only make it worse.

I have no doubt that the Russians or the Chinese or the French could shut down the US for a week any time they think that would be useful; Bill Gates made that possible for them.

John

Reply to
John Larkin

True. I had the head of a university CS department as a houseguest, and at dinner I happened to ask about what sort of programming techniques were being taught these days. She was highly offended, along the lines of "we don't program."

Oh. Sorrrreee. Want some more salmon?

John

Reply to
John Larkin

I'm a simple circuit designer. And I think I see the way computer chips are headed. I'm interested in how that might change OS design. Apparently nobody else is. So whoever does design the next gen of operating systems, they aren't here.

John

Reply to
John Larkin

The Boss program could be as simple as a state machine that runs every millisecond. It would check to see if any of the worker CPUs has crashed or violated any memory-management rules, or if, for example, a file manager CPU wants to blow the whistle on an application that's asking for something that's not allowed. It would also service requests from one process to launch or kill another. Some of this could be interrupt-based, but certainly need not be.
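A hedged sketch of that millisecond Boss tick, assuming an invented per-worker status slot, heartbeat convention, and `boss_tick()` entry point (none of these are from a real system):

```c
#include <stdint.h>

#define NWORKERS 8

struct worker_slot {
    volatile uint32_t heartbeat;  /* worker bumps this as it runs */
    volatile uint32_t mmu_fault;  /* set on a memory-rule violation */
    volatile uint32_t request;    /* nonzero: launch/kill request */
};

struct worker_slot slots[NWORKERS];
static uint32_t last_seen[NWORKERS];

/* Called once per millisecond from a timer.  Returns the number of
   problems found (stalled workers plus rule violations); a real Boss
   would restart or discipline the offenders here. */
int boss_tick(void)
{
    int problems = 0;
    for (int w = 0; w < NWORKERS; w++) {
        if (slots[w].heartbeat == last_seen[w])
            problems++;               /* no progress: presumed crashed */
        last_seen[w] = slots[w].heartbeat;

        if (slots[w].mmu_fault) {
            problems++;               /* memory-management violation */
            slots[w].mmu_fault = 0;
        }
        if (slots[w].request) {
            /* validate, then launch or kill the named process */
            slots[w].request = 0;
        }
    }
    return problems;
}
```

The heartbeat check is the watchdog in disguise: a worker that stops bumping its counter looks crashed on the next tick, with no interrupt plumbing at all.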

Most of this could be done through a simple shared-memory region with suitable hardware protections. Just don't do stupid stuff like have the Boss accept unchecked pointers from other CPUs; in other words, write it all in Ada.

I'd prefer the Boss to have its own separate RAM and cache, but that's just me.

John

Reply to
John Larkin

Right. The Boss may as well finish what it's doing - which never takes long - before picking up any pending requests. Service interrupts imply suspension/reentry, always messy in situations like this and seldom necessary.

My RTOSs had reentrant schedulers (and damned near reentrant everything) but, looking back, that was more for intellectual fun than from necessity, and it did make the code riskier.

John

Reply to
John Larkin

Nice stuff... I was at Pyramid where they were a major competitor... one of my friends from my pre-Pyramid jobs was at Sequent.

Never got the time to get together and swap tech info on the similarities/differences. A lot of the Pyramid stuff seems to have moved to SunOS/Solaris, like the disk suite stuff and cluster stuff.

Bill

-- 
Digital had it then.  Don't you wish you could buy it now!
              pechter-at-pechter.dyndns.org
Reply to
Bill Pechter

The systems have way more horsepower than the average (office apps & web & email) single user needs. If you only drive short hops in town and have a 400HP car, adding a second engine doesn't actually gain you anything.

I maximize my CPU usage by running World Community Grid and file sharing, but neither of those offers a direct benefit to me (and CPU usage is still fairly low). A single core 1.8GHz CPU is way overpowered for 95% of what I do.

You're personally running an OS that gives you the ability to do more than one thing at a time, but I'll wager that your CPU is maxed out less than 0.1% of the time.

Since the power is there, unused, there's an incentive for the developer to max out on eye candy. How else do they justify their existence and get you to buy a newer version?

Dave

Reply to
Dave Garland

That reminds me of a story:

Once upon a time, scientists decided they would create a computer that would think like a human. So they gathered together thousands of CPUs with local memory, and networked them into one giant machine. When all of this was assembled and turned on, something immediately started printing out on the printer. The operator ripped off the printout and read:

"That reminds me of a story."

;=)

--
+----------------------------------------------------------------+
|   Charles and Francis Richmond     richmond at plano dot net   |
+----------------------------------------------------------------+
Reply to
Charles Richmond


Cue the story of the assembly line foreman who was sitting around with his feet up daydreaming. When his manager asked him why he wasn't doing any work, he threw a wrench into the machinery, saying "Now I'm busy. Are you happy?"

-- Patrick

Reply to
Patrick Scheible

Proper pseudocode looks a lot like Algol, to which it is, in a sense, a successor language.

--
Roland Hutchinson		

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger  ( http://tinyurl.com/RolandIsNJ )
Reply to
Roland Hutchinson
