The Future of Computing

Assume power-on self-tests that disable non-functional blocks.

After the wafer has been produced, test only the disconnection logic (not the internal functionality of each block); if some block's disconnect logic fails, burn off that block's data, clock and power connections with a laser.

This should not take long compared with cutting the wafer into chips, testing each chip individually, discarding the bad ones and packaging the good ones.

Reply to
upsidedown

This sounds much like disk bad-block replacement algorithms. Just address the processor by a logical processor number and let some (redundant) hardware do the logical-to-physical processor mapping.
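As a sketch (my own illustration, not anything from the thread), the bad-block analogy amounts to a small remap table: list the physical cores that passed test, and let the logical core number index into it.

```python
# Logical-to-physical core remapping, analogous to disk bad-block
# replacement: logical IDs simply skip over cores marked bad at test.
class CoreMap:
    def __init__(self, n_physical, bad):
        # Physical cores that passed test, in ascending order.
        self.good = [p for p in range(n_physical) if p not in set(bad)]

    def physical(self, logical):
        # Map a logical core number to a working physical core.
        return self.good[logical]

cores = CoreMap(8, bad={2, 5})
print(cores.physical(0))  # 0
print(cores.physical(2))  # 3 -- physical core 2 is bad, so logical 2 lands on 3
```

In real hardware the table would live in fuses or redundant latches rather than software, but the mapping idea is the same.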

Reply to
upsidedown

Three-dimensional printing?

Reply to
upsidedown

When we get semiconductor materials that allow high-density ICs to run reliably above +100 C, vapor cooling will help carry away a lot of heat.

Reply to
upsidedown

The power consumption doesn't go down anywhere near zero at zero clock speed, even in a fully static design. Sub-32nm FETs are horribly leaky.

And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.

And the thermal expansion problem strangles the I/O bandwidth of super-size chips, as I noted upthread.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

There's nothing special about 100C--heat pipes work fine at normal operating temperatures. I have several boxes that use them, and you probably do too.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

This is true for mobile and simple desktop applications, but for servers (such as database and web) you really want to throw in as many processors as are available. This has been the case for at least 40-60 years; eventually the physical I/O performance limits kick in.
Reply to
upsidedown

In some cases the program/algorithm/problem is dominated by Amdahl's law.

But there are other important "embarrassingly parallel" programs and problems - telecom systems are an obvious example.

In all cases the modern bottlenecks are IO/memory bandwidth and memory latency. "DRAM is the new disk" :(
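For readers who haven't run the numbers: Amdahl's law makes the ceiling concrete. A quick calculation (my own worked example, not from the post) shows that even a 95%-parallel program tops out near 20x, however many cores you add.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    # Amdahl's law: the serial fraction limits overall speedup,
    # no matter how many cores run the parallel part.
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_cores)

# 95%-parallel code on 100 cores: nowhere near 100x.
print(round(amdahl_speedup(0.95, 100), 1))  # 16.8
# The asymptote as n_cores -> infinity is 1 / 0.05 = 20x.
```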

Reply to
Tom Gardner

The heat pipe helps remove the heat from the chip/wafer, but it doesn't help reject that heat to the environment. Big heatsinks are still required.

I have a 7.5 kW heat source in my sauna. Throwing water on the sauna stones will quite effectively transfer the heat into the sauna room, or into the outside world if the sauna window is open.

Thus, I really do not think that getting rid of 10 kW computer dissipation would be a problem, provided that the electronics can comfortably handle +100 C.
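A back-of-envelope check supports this (my own numbers, not from the post): using the latent heat of vaporization of water at 100 C, carrying away 10 kW purely by evaporation needs only modest amounts of water.

```python
# How much water per hour must evaporate to carry away 10 kW?
LATENT_HEAT = 2.26e6   # J/kg, latent heat of vaporization of water at ~100 C
power = 10e3           # W, the hypothetical computer's dissipation

kg_per_s = power / LATENT_HEAT
kg_per_h = kg_per_s * 3600
print(round(kg_per_h, 1))  # ~15.9 kg of water per hour
```

About 16 litres an hour: easily plumbed, which is the sauna poster's point, though the scale-deposit objection raised downthread still applies.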

Reply to
upsidedown

If the workload is dominated by unconnected small tasks, as in a web server, sure. But not things like ERP or other big database tasks, which don't parallelize at all well.

(Plus the connection between "writing correct and efficient programs" and web dev is, *ahem*, oblique. ;)

I've written a big-iron (100-core-ish) 3D electromagnetic simulator, and getting it to scale well was a man-sized problem.

Database locking breaks parallelism very badly.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Open cycle cooling has its own problems, such as scale deposits and the requirement for a water tank that never runs dry.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

> heat pipes work fine at normal operating temperatures. I have several
> boxes that use them, and you probably do too.

You do have to check the performance of heat pipes working below 100C. It doesn't take much air-leakage to make them work really badly.

My 1996 thermostat went over to heat-pipes after I'd left, and what was needed to make them work reliably was a low-temperature test bed. Initially about a quarter of the parts supplied failed the test, but eventually the supplier got the message and tested them before he shipped them.

--
Bill Sloman, Sydney
Reply to
bill.sloman

You want to turn off power to any bad, or unused, blocks to save power. Modern processors do this. It's well-known art.

There is no guarantee that a chip which tests good at time=0 will stay good down the road. If its neighbor had a defect, there is a good chance it does too (probability depending on the defect). The chip manufacturers are well aware of the costs.
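The cost pressure is easy to illustrate with the standard Poisson yield model (a textbook formula, not something from this post): yield falls exponentially with die area, which is why large dies, and the neighbours of any defect, are statistically suspect.

```python
import math

def die_yield(area_cm2, defects_per_cm2):
    # Poisson yield model: probability a die of the given area
    # contains zero randomly scattered defects.
    return math.exp(-area_cm2 * defects_per_cm2)

# At 0.2 defects/cm^2, quadrupling the die area roughly halves the yield.
print(round(die_yield(1.0, 0.2), 2))  # 0.82
print(round(die_yield(4.0, 0.2), 2))  # 0.45
```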

Reply to
krw

IBM was using vapor cooling at 85C, forty years ago. The problem is that boiling = distillation. Any gunk in your coolant tends to get deposited on the chips. That's the main reason IBM went away from it, in favor of the Thermal Conduction Module.

Reply to
krw

s/and efficient/ / :(

Reply to
Tom Gardner

And of course the Cray-2 had a continuous flow of Fluorinert over the circuit boards of the entire CPU. There may have been local boiling, but not of the entire contents. It takes measures like that to get rid of 100 kW of heat from a space the size of a large fridge.

Reply to
Clifford Heath

> The power consumption doesn't go down anywhere near zero at zero clock
> speed, even in a fully static design. Sub-32nm FETs are horribly leaky.

A pile of silicon simply has to be energy efficient, and static leakage must be minimised. Hence I mentioned sticking with a larger process.

> And the utilization percentage of highly multicore processors is nearly
> always poor, because it makes it much harder to write correct, efficient
> programs.

It's a core computing problem, pardon the pun. But a future computer which I propose will be running many times more apps & background tasks than today (a topic for another day perhaps) can at least make good use of many more cores/CPUs. And it can choose whichever CPUs deliver the wanted result in the wanted time with the least energy use.

> And the thermal expansion problem strangles the I/O bandwidth of
> super-size chips, as I noted upthread.

How does that prevent one, for example, mounting an IC upside down on a heatsink and soldering 20mm wide strips of Parlux to its connections?

NT

Reply to
tabbypurr

series of logic gates that connect or disconnect, and power rails too for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic gate failures one can use multiple busses.

Best for each CPU or whatever unit to have its own test & disconnect system, followed by a local neighbourhood system that tests & disconnects several CPUs etc. Putting all eggs in one basket is not a good plan - unless it's simple enough that the yield is high enough for that bit of silicon.
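The two-level scheme described above can be sketched as follows (the names, data layout and group size are my own invention, purely to make the idea concrete): each CPU runs its own self-test, and a neighbourhood controller drops any whole group whose local disconnect logic has failed.

```python
# Two-level test & disconnect: per-CPU self-test, plus a neighbourhood
# check that abandons a whole group if any member's disconnect logic is bad.
def isolate(cpus, group_size):
    # cpus: dict of id -> (self_test_ok, disconnect_ok)
    enabled = set()
    ids = sorted(cpus)
    for start in range(0, len(ids), group_size):
        group = ids[start:start + group_size]
        # A CPU with broken disconnect logic can't be fenced off
        # individually, so the neighbourhood drops the whole group.
        if all(cpus[i][1] for i in group):
            enabled |= {i for i in group if cpus[i][0]}
    return enabled

cpus = {0: (True, True), 1: (False, True),   # CPU 1 fails its self-test
        2: (True, True), 3: (True, False)}   # CPU 3's disconnect logic is bad
print(sorted(isolate(cpus, group_size=2)))   # [0] -- group {2, 3} dropped whole
```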

NT

Reply to
tabbypurr

TCMs were 1.2kW (~100W/chip) and about 4"x4"x3". The liquid encapsulated modules, about 1kW (also around 100W/per). There's more than one way to skin a skunk.

Reply to
krw

I can't see how that would produce monocrystalline silicon. Unless it was laid down atom layer by atom layer, and any atoms that settle wrongly were then stripped off and relaid. As was pointed out, that would simply be far too slow.

NT

Reply to
tabbypurr
