The Future of Computing

... which limits us to a 65 nm+ process. Not much of a problem when you can slip a smartphone-sized piece of silicon in your pocket.

NT

Reply to
tabbypurr

Make it out of GaN, SiC or diamond, so it can handle high temperatures. Add TECs (or nano-engines of whatever sort), and boom, you've got computronium!

All that's left, at that point, is to build a swarm* of the things in the orbits between, say, Earth and Mercury, and take it from there.

*You build a swarm because it's trivially scalable and it doesn't have the obvious structural instability of a Dyson sphere or ring.

(Don't worry about Venus or Mercury disturbing the orbits; we'll have to disassemble them to obtain enough material. No, the view won't be so great to future sky-watchers, but no one will want to spend their time doing anything so boring. The simulations of the "old days" will be more pure, anyway -- not being subject to biological limitations, like poor eyesight, or chills from standing outside in winter.)

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Contract Design 
Website: http://seventransistorlabs.com
Reply to
Tim Williams

Scanning beams are extraordinarily slow -- ICs are unimaginably detailed today. (They're tolerable for prototypes, AFAIK, but still very expensive.)

Printing layer upon layer, nanometers at a time, would take so long, just printing one chip would take as long as *developing and perfecting* a whole new alternative technique!

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Contract Design 
Website: http://seventransistorlabs.com
Reply to
Tim Williams

I vaguely recall a wafer-scale system that intended to do just that, which sank without trace. Some of them tried quite hard!

formatting link

It seems easier to cut the parts up and run each at whatever speed it is stable at than to have to run everything at the speed of the worst good one.

The interconnects are typically N-D, where N can be up to 8 or so.

--
Regards, 
Martin Brown
Reply to
Martin Brown

The last IBM foundry process I knew at all well had 11 layers of metal even in 2007.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

The resolution is also too low for modern chips. SEMs ran out of resolution about 8 or 9 years ago--you have to use TEMs nowadays even for failure analysis, which is almost unimaginably labour-intensive, involving focused ion beams to chop out a tiny thin sample that is then transferred to the grid of the TEM.

Probably so!

Cheers

Phil "former chip guy" Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

There's no need to run all blocks at the same speed. With buffers everywhere, each block can run at its own maximum error-free speed, less a margin. But since the whole block has to be very low power density, sections will run far below their speed limits.
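The per-block clocking idea can be sketched as a simple binning calculation. All numbers here are illustrative assumptions (the margin fraction and power-density cap especially), not real silicon data:

```python
# Sketch of per-block clock assignment on a wafer-scale part.
# Each block runs at its own measured error-free maximum, derated
# by a safety margin, and may additionally be capped to stay under
# a power-density budget.

def assign_clock(fmax_mhz, margin=0.15, power_cap_mhz=None):
    """Derate a block's measured fmax by `margin`; optionally cap it."""
    f = fmax_mhz * (1.0 - margin)
    if power_cap_mhz is not None:
        f = min(f, power_cap_mhz)
    return f

measured = [880.0, 910.0, 640.0, 990.0]   # per-block fmax in MHz (made up)
clocks = [assign_clock(f, power_cap_mhz=700.0) for f in measured]
print(clocks)   # fast blocks hit the power cap; the slow block keeps its margin
```

The power cap dominates for most blocks, which is the point of the paragraph above: with a low power-density budget, nearly everything runs far below its own speed limit anyway.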

For a large scale computer it makes sense in principle. No point chopping, packaging and reassembling it all on PCBs if you can let it self-test, lose bad sections and run as is.

I wonder where they ran into problems. Imperfect yield shouldn't be a problem for large computers; maybe they weren't able to successfully disconnect dud sections from the buses, or maybe those sections leaked badly anyway.

I don't know what you mean there.

NT

Reply to
tabbypurr

One of the processors I worked on in 2000-2006 had 10 layers of copper interconnects.

Reply to
krw


Probably not true.

> ...a tiny thin sample that is then transferred to the grid of the TEM.

The problem is more that integrated circuits are now layered structures, and working out what is going on deep in 10 or 11 layers of interconnecting metallisation requires penetration as well as resolution. Even a high voltage TEM can't drive an electron all that deep into an integrated circuit.

The electron beam tester that I was working on from 1988 to 1991 was going to be sold with an ion beam source to cut holes down to the lower layers of metallisation (and to deposit tungsten interconnects to replace stuff which had been deposited in the wrong place - we had to worry about getting rid of unused tungsten carbonyl in the vacuum exhaust). Happily the project got canned before we'd got our hands on the ion-beam column.

--
Bill Sloman, Sydney
Reply to
bill.sloman

I did some work on this one, a 3D tomographic atom probe.

formatting link

It constructs a 3D image of the sample, atom by atom, and identifies the species and isotope of each atom. But the sample prep is horrible; they have to ion mill a tiny, nm-radius tip out of the semiconductor.

(I got a bunch of stock which, of course, turned out to be worthless.)

John "former microscopy guy" Larkin

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

One thing that would radically change things would be if there were no longer a system clock and everything was allowed to free-run asynchronously. But then you get all sorts of potentially nasty side effects and race conditions - just like in software.

ISTR that was the reasoning at the time, but it proved way too difficult to implement and get anything like acceptable yield. Power dissipation and obtaining a good enough silicon process were, I think, the killers in terms of commercial success. Anamartic was one player I knew in Cambridge - making a (for the time) large 40MB RAM from two 6" wafers.

formatting link

The infamous relativity denier Ivor Catt had some patents on it.

You would have to research the history. It is quite a long time ago now. I am pretty sure there were others trying to do the same sort of thing with transputer clusters too. Here is a paper from the early days when the technology was full of promise - enjoy:

formatting link

CPU clusters have fast interconnect data paths, typically configured like a hypercube of dimension up to 8. A square gives 2 links per node, a cube 3, and so on. In 3D space that is about where fitting the cables in and getting enough cooling to the internal works starts to bite!
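The hypercube wiring described above has a neat addressing property: a node's neighbours are exactly the nodes whose binary address differs from its own in one bit. A quick sketch (the dimension and node numbering are just illustrative):

```python
def hypercube_neighbours(node, dim):
    """In a dim-dimensional hypercube, each node connects to the dim
    nodes whose binary address differs from its own in exactly one bit."""
    return [node ^ (1 << b) for b in range(dim)]

# dim=3 (an ordinary cube): node 0 links to nodes 1, 2 and 4
print(hypercube_neighbours(0, 3))        # [1, 2, 4]
# dim=8: 256 nodes, 8 links per node, worst-case route of 8 hops
print(len(hypercube_neighbours(0, 8)))   # 8
```

Routing is equally simple: flip the differing address bits one at a time, so the hop count between two nodes is the Hamming distance between their addresses.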

--
Regards, 
Martin Brown
Reply to
Martin Brown

Trilogy was the big dog, but died for several reasons. A few:

  1. You can't get signals in and out of a chip larger than about 22 mm square, because even with underfill to relieve the stress, differential thermal expansion rips the solder balls off the base level metal. That cripples the I/O bandwidth of anything larger.
  2. You can turn off cores and memory lines with some types of defects, but not other types. Bleeding edge processor yields are low enough that you never get a working wafer.
  3. Logic and DRAM processes are really different.
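Point 1 can be made quantitative with a back-of-envelope CTE mismatch calculation. The material figures below are typical textbook values for silicon on an organic substrate, not the specific process being discussed:

```python
# Rough shear displacement at the corner of a flip-chip die due to
# CTE mismatch between silicon and an organic substrate.
# CTE values and temperature swing are textbook-style assumptions.

CTE_SI  = 2.6e-6   # /K, silicon
CTE_SUB = 17e-6    # /K, FR-4-class organic substrate
DT      = 100.0    # K, temperature swing

def corner_shear_um(die_mm):
    """Displacement of the die corner relative to the substrate, in microns.
    Distance from the neutral point is half the die diagonal."""
    dnp_mm = die_mm * (2 ** 0.5) / 2
    return (CTE_SUB - CTE_SI) * DT * dnp_mm * 1000   # mm -> um

for die in (10, 22, 40):
    print(f"{die} mm die: ~{corner_shear_um(die):.1f} um corner shear")
```

The shear grows linearly with die size, which is the mechanism behind the ~22 mm practical limit: past that, even underfill can't keep the cyclic strain on the outermost solder balls survivable.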

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

It has been done; maybe 10-15 years ago.

Reply to
Robert Baer


So solder on LEDs and do it optically. Or maybe solder it to that flexy plastic Parlux.

That's the major problem I suspected. One might improve the odds by adding one or two switch banks between bus and core etc., and switches in the power lines for those cores, making it much more likely that a bad core can be taken out of circuit electrically.

Use a high-yield CPU. It may not be what's wanted for today's supercomputers, but if that's the one way to make this work, the imagined future can live with it.

Or split the CPU into sections, use a fair bit of redundancy, and the yield goes up. With so much silicon one can waste a lot of it, at least in the future when it's relatively cheap.
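The redundancy argument can be sketched with the standard Poisson yield model, Y = exp(-A*D0). Splitting one big CPU into small sections plus spares raises the odds of getting enough good ones. The defect density and areas below are made-up numbers for illustration:

```python
import math

def section_yield(area_cm2, defects_per_cm2):
    """Poisson model: probability a section has zero killer defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

def chip_yield(needed, total, sect_area_cm2, d0):
    """Probability that at least `needed` of `total` independent
    sections are defect-free (binomial over sections)."""
    y = section_yield(sect_area_cm2, d0)
    return sum(math.comb(total, k) * y**k * (1 - y)**(total - k)
               for k in range(needed, total + 1))

D0 = 0.5   # killer defects per cm^2 (assumption)
# one monolithic 4 cm^2 CPU vs. 20 sections of 0.2 cm^2 with 4 spares
print(section_yield(4.0, D0))        # monolithic: ~0.14
print(chip_yield(16, 20, 0.2, D0))   # sectioned + spares: well over 0.9
```

The monolithic part yields around one wafer site in seven, while the sectioned-with-spares version works nearly every time, which is the "waste a lot of silicon" trade in numbers.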

I'm hoping there is some type of RAM that can be made on the same process, even if it's dino RAM.

The goals are quite different. The single-wafer attempts were trying to produce supercomputers, where peak performance per core is a requirement. I'm trying to have as much computing power in a future user's pocket as possible, in a future where silicon has become extremely cheap.

NT

Reply to
tabbypurr

There's a good reason that almost all the computation in a human brain is done in the outer 2mm (that's why it's called "cortex"), and why it's so deeply folded to increase the surface area. The rest is mostly interconnect. Thermal management is *the* issue.

Clifford Heath.

Reply to
Clifford Heath

Liquid cooling doesn't hurt.

Reply to
krw

Birds seem to do better. They are more committed to minimising weight and maximising performance.

--
Bill Sloman, Sydney
Reply to
bill.sloman

> ...transitioning to a solid 3d block silicon IC structure?

Well, it's certainly one of them, not the only one. While we're familiar with computing hardware running near to flat out, producing lots of heat, there are also cold-running CPUs of lesser but still useful performance. Run that 3.6W PowerPC 750FX 900MHz CPU at 90MHz and it only eats around 0.36W. Run an XScale 80321 600MHz 0.5 watt CPU at 60MHz and it eats about 0.05W. Run a Pentium M ULV 773 1.3GHz 5W CPU at 130MHz and it's down to 0.5W.

Even today a lot can be done with 200MHz if you give every task, subtask and subsubtask its own CPU or set of parallel CPUs. Now assess each task to see what CPU performance it really needs, and you find that most of the CPUs can be run at much lower speed than 200MHz, and power consumption dwindles dramatically. Also most of those CPUs can be simpler, again cutting power use.
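The scaling behind those figures is the usual CMOS dynamic-power approximation, P ~ C*V^2*f: at fixed supply voltage, power falls roughly linearly with clock, and quadratically better if the supply drops too. A small sketch using the CPU numbers quoted above (leakage is ignored, an assumption that flatters slow clocks):

```python
# Dynamic CMOS power approximation: P = C * V^2 * f.
# At fixed voltage, a 10x clock reduction gives ~10x less power,
# matching the 750FX and Pentium M figures quoted in the post.

def scaled_power(p_watts, f_ratio, v_ratio=1.0):
    """Power after scaling clock by f_ratio and supply by v_ratio."""
    return p_watts * f_ratio * v_ratio**2

print(scaled_power(3.6, 90 / 900))             # PowerPC 750FX: ~0.36 W
print(scaled_power(5.0, 130 / 1300))           # Pentium M ULV: ~0.5 W
print(scaled_power(3.6, 0.1, v_ratio=0.7))     # with a supply drop as well
```

The last line shows why voltage scaling matters even more than clock scaling: dropping the supply to 70% cuts the already-reduced power roughly in half again.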

Since this is set in the future I can well believe enough progress is likely to occur to permit running CPUs at far below their speed limits. The motivation is enough computing power in one handheld block to run a huge number of tasks in parallel. (It can do a percentage of that by farming the processing out elsewhere if necessary.)

NT

Reply to
tabbypurr

I hadn't heard of that. I recall Ball Semiconductor, which developed equipment for processing spherical silicon integrated circuits. Not sure what they are doing these days, though. They appear to be around, but not making many waves. Still, that isn't really 3-D silicon, as they only use the surface.

--

Rick C
Reply to
rickman

> ...supercomputers, but if that's the one way to make this work, the imagined future can live with it.

> ...and the yield goes up. With so much silicon one can waste a lot of it, at least in the future when it's relatively cheap.

The smaller the CPU, the less silicon area is lost to each defect. Maybe a lookup-table RISC CPU.

> ...even if it's dino RAM.

on-die cache is fast

Yield is the problem. What's wrong with connecting CPUs to buses via a series of logic gates that connect or disconnect, and power rails too, for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic-gate failures one can use multiple buses.
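The multiple-bus idea lends itself to a quick probability sketch: a bad CPU stays stuck on the system only if its isolation switch fails closed on every bus independently. The per-switch failure rate below is a made-up illustrative number:

```python
# Probability that a dud CPU cannot be electrically isolated, given
# independent isolation switches on each of n_buses buses.  The CPU
# is only unisolatable if the switch fails closed on *every* bus.
# p_switch_fail is an illustrative assumption, not a measured figure.

def p_stuck(p_switch_fail, n_buses):
    return p_switch_fail ** n_buses

for n in (1, 2, 3):
    print(f"{n} bus(es): P(unisolatable) = {p_stuck(0.01, n):.0e}")
```

Each extra bus multiplies the escape probability by the per-switch failure rate, so even modest redundancy makes an undisconnectable dud very unlikely, assuming the switch failures really are independent.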

NT

Reply to
tabbypurr
