a dozen cpu's on a chip

formatting link ?articleID=207600531

I bet we'll see 256 one of these days.

John

Reply to
John Larkin

What is the current status of Microsquish regarding supporting more than two CPUs?

Reply to
miso

I love the "nearly monolithic levels of performance" quote.

Next we'll have CPU performance of biblical proportions!

Dave.

Reply to
David L. Jones

When you get to large numbers of CPUs it seems to make sense to stop making them identical. For servers this would be doubly so. Many of the CPUs won't need to do floating point operations.

It also would make sense to do things like memory moves in the "Memory Mismanagement Unit" since the values don't need to be modified on the way through.

This will make it a lot harder to say how many CPUs are in a chip. If there is only as much hardware as 200 full CPUs but 500 threads can be running at the same time, do you call it 200 or 500 CPUs?
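You can already see the core-vs-thread split on today's hardware. A quick Python sketch (it uses the third-party psutil package, so treat it as illustrative):

# Physical cores vs. schedulable hardware threads.
# Requires the third-party psutil package (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # actual cores
logical = psutil.cpu_count(logical=True)    # hardware threads the OS sees

print(physical, "physical cores,", logical, "hardware threads")
print("SMT ratio:", logical / physical, "threads per core")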

Reply to
MooseFET

It depends a lot on what you call supporting. I think that NT could tie up two cores.

Reply to
MooseFET

Maybe. The problem is heat. The current E8500 Intel Wolfdale Core2Duo chips are about 110 sq mm for 2 cores and burn about 65 watts of heat running at 3GHz. Let's call it 50 sq mm and 30 watts per CPU. Extrapolate that to 256 CPU's, and we have 12,800 sq mm and 7.7 kW of heat. If the chip were square, it would be 113mm on a side, or roughly the size of a CDROM disk. 7.7 kW of heat would make it the equivalent of about 15 coffee warmer hot plates running full blast. Using air cooling is out, as it would require enough air flow to launch the PC into the air. Liquid metal cooling might work. At $0.15/kWh, this machine will cost $1.15/hr in electricity to operate (without energy management or shutting down unused CPU's).
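The arithmetic, as a Python back-of-envelope (same assumed per-CPU numbers as above):

# Extrapolating the E8500 figures to 256 cores.
N = 256
area_mm2 = 50      # assumed: ~half of a 110 sq mm dual-core die
power_w = 30       # assumed: ~half of a 65 W TDP, rounded down

total_area = N * area_mm2                     # 12,800 sq mm
total_power = N * power_w                     # 7,680 W, about 7.7 kW
side_mm = total_area ** 0.5                   # ~113 mm on a side
dollars_per_hr = total_power / 1000 * 0.15    # at $0.15/kWh -> ~$1.15/hr

print(total_area, total_power, round(side_mm), round(dollars_per_hr, 2))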

Of course there will be improvements in technology, but using the existing available processes is a dubious proposition.

--
Jeff Liebermann     jeffl@cruzio.com
150 Felker St #D    http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Right. Maybe a few cpu's would have serious floating point power, or a few separate fp engines could be assigned to cpu's as needed. Lots of cpu's, doing stuff like file i/o or serial stuff, could be less powerful. I suppose we'll always need special graphics hardware, but just a few of those per chip.

Next step is to get rid of task swapping and threads altogether. One CPU is the OS, and one cpu gets assigned per process.
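You can fake that per-process assignment today by pinning. A minimal sketch, Linux-only (os.sched_setaffinity isn't available on Windows or macOS, and it assumes CPU 3 exists on the machine):

# Pin the current process to a single CPU, in the spirit of
# "one cpu per process".
import os

os.sched_setaffinity(0, {3})     # 0 = this process; {3} = CPU 3 only
print(os.sched_getaffinity(0))   # -> {3}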

John

Reply to
John Larkin

The real advantage of multiple cpu's isn't performance; it's management. There's no reason to run all the cores flat-out all the time.

John

Reply to
John Larkin

Their policy is that for all improvements in hardware, there is an equal and opposite deterioration in software, resulting in zero net gain in overall performance. In 1983, my 4.77 MHz PC took about 90 seconds to boot PC-DOS 1.1 from a 160KB floppy. 25 years later, my 2.4 GHz machine takes about 180 seconds to boot XP.

My guess is it will follow the classic time management axiom. If a job takes one person 1 hour to accomplish, two people will take 2 hours, 3 will take 3 hours, 4 will take 4 hours, etc. Same with CPU's.

--
Jeff Liebermann     jeffl@cruzio.com
150 Felker St #D    http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

It's all about licensing... Windows NT/2000/XP/Vista have always supported up to 32 CPUs at the OS level (it's 32 because somewhere there's a DWORD that has one bit per CPU for flags), and I believe that through NT 4 it actually would use however many CPUs it found. Starting with Win2K they started having different licenses depending on the number of CPUs present -- the basic Win2K package would support 2. With XP it was clarified that the licensing would apply to the number of physical CPU *sockets* and not the number of *cores*, so while Win XP Professional with regular licensing still supports only two CPU sockets, if you have a dual-socket motherboard with quad-core CPUs, you can get 8 CPU cores all running at once.
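That one-bit-per-CPU DWORD is easy to picture. An illustrative Python sketch (not actual NT source, just the idea of a 32-bit affinity mask):

# Each CPU gets one bit in a 32-bit mask, which is why the
# count tops out at 32.
def cpu_mask(cpus):
    mask = 0
    for cpu in cpus:
        assert 0 <= cpu < 32, "a DWORD only has bits 0..31"
        mask |= 1 << cpu
    return mask

print(hex(cpu_mask(range(8))))    # 8 cores  -> 0xff
print(hex(cpu_mask(range(32))))   # 32 cores -> 0xffffffff, the ceiling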

For the right price, Microsoft will enable more CPU sockets if you have them (they were targeting high-end servers), although even when four-socket motherboards were more common than they are today, the problem was often one of scalability: If you used the "regular" north bridge chipsets, performance didn't scale very well because the CPUs would get starved for memory bandwidth. Some companies built their own north bridge chipsets to get around this problem, but realistically the payback wasn't that great... which I think is now why it's atypical to see more than dual-socket motherboards for contemporary CPUs: People instead just get a couple of 1U-high servers and spend far less money for the same overall level of performance.

---Joel

Reply to
Joel Koltner

Sure, if you have a general purpose application. However, if you're trying to solve a problem that requires massive simultaneous computation (examples previously listed in another thread), the chances are really high that all or most of the CPU's are going to be running flat out. Energy management is nice for reducing average consumption, but if you're grinding a weather system model, predicting the stock market, or running a brute-force chess solution, you want all that horsepower, with all the CPU's running at once.

The leader in CPU-burning applications is currently the PC games industry. I've watched the "task manager" under Windoze to see how well Vista balances applications on a quad core HP machine. Actually, it's Process Explorer.

Most apps use perhaps 10-20% of the available horsepower per CPU while waiting (yawnnnn) for the hard disk to finish doing whatever. Not so with the games. They use every CPU cycle they can grab to do predictive imaging (to paint the screen in RAM in anticipation of the next frame or move), variable image quality depending on available cycles, and such. 80% usage and up all the time is the norm. Little wonder game machines run hot.
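You can watch the same per-core loading yourself. A small Python sketch (again using the third-party psutil package):

# Sample per-core utilization five times, one second apart --
# roughly what Task Manager / Process Explorer display.
import psutil

for _ in range(5):
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join("%5.1f%%" % p for p in per_cpu))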

I got a better analogy. Suppose your automobile has 8 cylinders, but only enough radiator cooling capacity to keep 4 of them cool at a time. Think small water jacket and very small radiator. Around town, it will work as the engine energy management system shuts down cylinders that are allegedly not needed. For bursts of speed, you get all 8 cylinders. For idling, you only get perhaps 2 cylinders. For cruising, maybe 4 cylinders. That's fine until you decide to tow a trailer. Now, your average load is well over what the cooling system can handle and the engine melts. (There are engines that work on exactly this principle). Kinda like your energy managed CPU. The power is there when you need it, but only for a short time. (Comcast also calls it "burstable" download speeds). It works, but leaves something to be desired.

Whether anyone will buy a machine that has the ability to rapidly run 256 simultaneous solutions, but only 1/256th of the time, is open to debate. Kinda reminds me of the early multitasking operating systems (Desqview, Topview, Windoze 2.0, etc). I would take a simple sieve benchmark and run it without the multitasker, and compare it with running multiple copies under the multitasker. I don't recall the exact numbers, but the first Windoze, with cooperative multitasking, would take about 5 times longer to run as compared to running under DOS. Run multiple copies, and they would take far more than twice as long per copy (due to multitasking overhead). At the time, I was wondering if all this multitasking was really worth the huge performance hit. I guess it was.
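The same experiment is easy to repeat today. A minimal sieve benchmark sketch in Python; the timings will of course vary with the machine:

# Time one sieve run standalone, then four copies at once, and
# compare per-copy wall time.
import time
from multiprocessing import Pool

def sieve(limit=2_000_000):
    flags = bytearray([1]) * (limit + 1)   # 1 = "maybe prime"
    flags[0:2] = b"\x00\x00"               # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i*i::i] = bytearray(len(range(i*i, limit + 1, i)))
    return sum(flags)                      # count of primes <= limit

if __name__ == "__main__":
    t0 = time.perf_counter()
    sieve()
    print("one copy: %.2f s" % (time.perf_counter() - t0))

    t0 = time.perf_counter()
    with Pool(4) as pool:                  # four copies at once
        pool.map(sieve, [2_000_000] * 4)
    print("four copies: %.2f s total" % (time.perf_counter() - t0))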
--
Jeff Liebermann     jeffl@cruzio.com
150 Felker St #D    http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Apologies for all the typos in there... I haven't had any caffeine this morning yet!

Reply to
Joel Koltner

That might be the minimum supported h/w configuration for their next OS version. :-/

--
Paul Hovnanian	paul@hovnanian.com
-----------------------------------------------------------------------
Procrastinators: The leaders for tomorrow.
Reply to
Paul Hovnanian P.E.

The processors would use finer geometry to some degree, which reduces heat since there is less charge to shuffle around. You can't make the chips arbitrarily big due to defect density issues. [Defect density will kill you way before heat is the problem. Just ask Gene Amdahl, or better yet the investors in Trilogy. Granted, bipolar, with vertical current flow, is much more sensitive to defect density than CMOS, i.e. lateral current flow.]

The first dual CPU box I owned that actually worked as advertised is my AMD64 X2. Everything else had memory bandwidth issues. Incidentally, while the Intel part is an MCM, the AMD is monolithic.

Reply to
miso

I doubt it. A mailman won't deliver mail faster when he is using a Ferrari instead of a bicycle. I think we are going to see slower, cheaper, more energy-efficient computers with back-to-basics software for everyday / office use.

For example, an embedded ARM or MIPS system running at 300MHz has enough power to run most common applications. It consumes at least 10 times less energy than a standard PC.
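As rough arithmetic (assumed wattages, illustrative only -- say 150 W for the PC, 15 W for the embedded box):

# Energy cost over one office work-year, at Jeff's $0.15/kWh.
hours = 8 * 250                      # ~2000 hours/year
kwh_pc = 150 / 1000 * hours          # 300 kWh
kwh_arm = 15 / 1000 * hours          # 30 kWh
print("PC:  %.0f kWh ($%.2f)" % (kwh_pc, kwh_pc * 0.15))
print("ARM: %.0f kWh ($%.2f)" % (kwh_arm, kwh_arm * 0.15))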

--
Programming in Almere?
E-mail to nico@nctdevpuntnl (punt = .)
Reply to
Nico Coesel

Maybe. But there's certainly no trend in that direction yet. Vista is even more bloated than XP. My conjecture is that multiple cpu's on one chip will make computers simpler and more efficient, and certainly more reliable.

Oh, things are coming along nicely:

formatting link

"Sun foresees the need for extending thread count beyond 64 separate instances of a computer, which is why the chip that will follow UltraSPARC T2 (currently code-named Victoria Falls) is being designed to have 128 threads. These threads, however, will be fully extendible. This will make it possible to link two instances of Victoria Falls ? both sharing common memory through a hub chip ? for a grand total of

256 threads."

Just threads, not full cpu's [1], but it looks good.

Here's one PPC and eight smaller processors on one chip:

formatting link

John

[1] I kept the apostrophe so that people whose expertise stops at 20 kHz can have something clever to say. But to me, cpus looks awkward; you'd pronounce it "seepuss."
Reply to
John Larkin

Are you sure? Service Pack 3 was just released, and I would bet that XP is a lot more bloated now than it previously was.

Funny, my Vista installation is still working flawlessly, and any error it ever did have was repaired either in session or on the next boot.

I had one blue screen on boot, and it worked immediately afterward; the issue turned out to be an Nvidia driver, which Nvidia replaced with a newer, proper driver.

Reply to
MassiveProng

The Cell will be in our future. The POWER6 looks pretty good too.

A POWER6 Cell would be neat.

"CPUs": it's an acronym, so it should be capitalized. The rules on pluralizing an acronym are what need to be defined.

Reply to
FatBytestard

formatting link ?articleID=207600531

I'll bet we won't, except maybe as an educational design exercise. Multiple CPUs are hard to manage efficiently for general purpose computation when N>4.

Yes there is, and they are all around you in consumer items like mobile phones and PDAs. ARM announced their multicore CPUs at the end of last year, designed to deliver performance on demand for streaming video whilst having extremely low power consumption when doing ordinary GUI stuff, e.g.

formatting link

No denying that, but it is from Mickeysoft, so what do you expect? XL2007 is barely functional, but business users still flock to buy it like sheep.

It will do none of the above. Systems with multiple CPUs have already been tried using separate chips. I doubt if common consumer kit will ever go beyond 4 cores; the law of diminishing returns sets in about there.

Multiple CPUs are good for certain types of problem requiring the same calculation on lots of data, or a calculation that can be split across multiple threads, but they are not well suited to general computing.
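The "same calculation on lots of data" case looks like this in practice. A minimal Python sketch (the squaring is just a stand-in workload):

# Split one data-parallel calculation across four worker processes.
from multiprocessing import Pool

def work(chunk):
    return sum(x * x for x in chunk)   # stand-in calculation

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # deal the data 4 ways
    with Pool(4) as pool:
        total = sum(pool.map(work, chunks))
    print(total)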

Hardware support for threads and context switching makes good sense, but it isn't all that exciting unless you have a thing about lots of CPUs.

Regards, Martin Brown

Reply to
Martin Brown


Think again. AMD is planning 12 cores for their next wunderchip. That's right, not 8 but 12. Probably due to the fact that their cores now don't perform on par with Intel's, so this is how they plan to catch up/prevail. If they don't fold before they get it out, with their massive debts and losses. We'll probably see 5, 6, 7, 8, 9, 10, 11 cores if they attempt to sell their broken chips as they are doing now with their quads sold as 3-core processors.

More than 4 will definitely happen (and soon), regardless of whether it makes sense, simply because numbers sell. Marketing people want it. It will be interesting to see how far this game goes.

Mark

Reply to
TheM
