I'm familiar with custom hybrids with parallel processing dies and must confess to not having kept up with the industry over the past decade -- now that both Intel and AMD are mass-producing dual cores in a single package, are the factories bumping up against Moore's Law, or are these devices simply taking advantage of available packaging advances? I've no doubt their accountants weighed in on this one... :-)
Nothing personally imperative but interesting announcements nonetheless.
As I understand it, there are some interesting gotchas which are currently affecting the applicability of Moore's Law to processor design.
As you increase the number of semiconductor junctions per unit area, you end up having to use ever-finer-pitch deep-submicron fabrication processes, thinner gate insulators, and lower voltages. Parasitic effects (inductance and resistance of the metal layers, capacitance between nearby traces and gates) begin to have a significant effect on circuit performance.
Leakage current across the gates and through the transistors starts to climb, so that even a circuit which isn't being clocked is drawing power and dissipating heat. This can necessitate the use of variable-voltage circuitry (parts of the chips which can run at lower speed are fed lower voltages), or smart-powerdown logic, etc.
Total power supply currents become quite large (many tens of amperes at 1-2 volts), and the high currents lead to higher I²R losses. Power densities on the chip increase, leading to a need to remove a great deal of heat, very efficiently, from a very small area. I've seen pictures of what happens (very quickly) if an Athlon CPU loses its heatsink... smoke and flames within a few seconds. There was one famous comment made not all that long ago that if power densities continued to climb at their then-current rate, it wouldn't be many years before a high-end CPU had a power dissipation rate equivalent to a similar-sized patch of the sun's surface!
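The I²R point is easy to put numbers on. Here's a back-of-the-envelope sketch; the figures (a 90 W part at 1.2 V, 1 milliohm of total resistance in the power-delivery path) are illustrative assumptions, not specs for any real CPU:

```python
# Back-of-the-envelope I²R arithmetic for CPU power delivery.
# All figures below are assumed for illustration.

def supply_current(power_w, voltage_v):
    """Current the chip draws at a given core voltage: I = P / V."""
    return power_w / voltage_v

def i2r_loss(current_a, resistance_ohm):
    """Power dissipated in the delivery path itself: P = I² * R."""
    return current_a ** 2 * resistance_ohm

i = supply_current(90.0, 1.2)    # 90 W at 1.2 V -> 75 A
loss = i2r_loss(i, 0.001)        # 75² * 0.001 ohm = 5.625 W

print(f"{i:.0f} A drawn; {loss:.3f} W lost in the power path")
# -> 75 A drawn; 5.625 W lost in the power path
```

Note that halving the supply voltage at the same power would double the current and quadruple the path loss, which is why the low-voltage/high-current combination is so unforgiving.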
What it boils down to is that in some situations you can get a certain amount of total CPU performance less expensively by using two or more CPUs or cores, clocked at a lower speed, in place of a single CPU/core clocked at a higher speed. As long as the operating system is capable of taking good advantage of multiple CPUs, money can be saved.
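The "two slower cores" economics follow from the usual first-order rule that dynamic CMOS power scales roughly as C·V²·f, and that lower clock speeds permit lower voltages. A quick sketch, with made-up voltages and frequencies purely for illustration:

```python
# Why two slower cores can beat one fast core on power, to first order.
# Dynamic CMOS power: P ~ C * V^2 * f. The voltage/frequency pairs
# below are assumptions for illustration, not real part specs.

def dynamic_power(c, v, f):
    """First-order dynamic power estimate for a CMOS core."""
    return c * v ** 2 * f

C = 1.0  # arbitrary switched-capacitance units

one_fast  = dynamic_power(C, 1.4, 3.0e9)      # one core: 3 GHz at 1.4 V
two_slow  = 2 * dynamic_power(C, 1.1, 1.5e9)  # two cores: 1.5 GHz at 1.1 V

print(f"one fast core : {one_fast:.3e}")
print(f"two slow cores: {two_slow:.3e}")
# Same aggregate clock cycles per second, noticeably less power.
```

The catch, as noted above, is that the workload and OS have to be able to keep both cores busy; the savings evaporate on a single serial task.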
I saw one embedded-system board described this week in a trade rag, which uses a pair of 1-gig VIA processors rather than a single higher-speed CPU. It's capable of running with passive/convective cooling only - no CPU fans.
So, I think the answer to your question as to whether it's improved packaging / expertise, or problems in taking advantage of Moore's Observation, is that it's both.
Dave Platt AE6EO
Hosting the Jade Warrior home page: http://www.radagast.org/jade-warrior
Smoke & flames?!?! I reckon Legal had a say in this too in addition to the tech folks and finance departments...
Being infatuated with hybrids, we daydreamed up a couple of (humongous at the time) 6" wafers as the circuit board, with traces printed on them, built off diode or transistor fab lines that needed something to do. These would be populated with flip-chip components sandwiched a couple of layers high and topped off with a 6" solar cell as icing. Never decided on how to pot the rascal ;-)
The I/O seemed sort of unwieldy but wireless wasn't an ingredient back then. Hmmm...
With more and more gates available, chip designers face the problem of what to use them for. More execution units for the same instruction stream won't do much good when they have to wait for branches to decide which instruction stream will be the one to execute. Executing multiple independent instruction streams puts the extra silicon to some use, provided of course that the OS and/or the application can provide an extra instruction stream.
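From the software side, "providing an extra instruction stream" just means handing the OS two independent tasks to schedule. A minimal sketch (the workloads here are invented for illustration; note also that CPython's GIL limits true parallelism for pure-Python arithmetic, so it's the structure that matters, not the speedup):

```python
# Two independent instruction streams handed to the OS scheduler.
# On a dual-core machine each worker can land on its own core;
# on a single core they simply interleave.
from concurrent.futures import ThreadPoolExecutor

def stream_a():
    # Independent workload #1 (made up for illustration).
    return sum(i * i for i in range(100_000))

def stream_b():
    # Independent workload #2 -- shares no data with stream_a,
    # so neither ever has to wait on the other's branches.
    return sum(3 * i for i in range(100_000))

with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(stream_a)
    fb = pool.submit(stream_b)
    print(fa.result(), fb.result())
```

The hardware analogue (SMT or a second core) wins for the same reason: when one stream stalls on a mispredicted branch or a cache miss, the other stream keeps the execution units fed.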