Time to ditch CPUs and concentrate on GPUs?
-------------------------------------------
Taking apart an Asus Revo, I couldn't help noticing that the CPU chip is half the size of the graphics controller!
It was always destined to happen.
So now we have a situation where the CPU is slowed down because it has to make a huge number of connections to the graphics chip to make that work. The same amount of silicon is duplicated on the graphics chip. In the process, a large amount of electrical power and silicon is wasted getting the two devices to talk.
AMD and ARM are doing the right thing by building the graphics controller into the CPU chip.
But it still 'feels' all wrong because the emphasis is on the CPU and treating the graphics controller as a peripheral.
The 'correct' way to do this is to treat the graphics controller as the 'central processing unit' and put three or more CPUs around it as 'peripherals' to do the menial work.
Starting with the lowest-spec CPU: it controls all the peripherals such as UART, SPI, DMA and the like.
The next one up in power is the traditional CPU that runs your user programs.
The next one up in power is the graphics CPU, dedicated to driving the graphics engine with enormous programmable bandwidth. This is the device that will drink power if power is available and, critically, scale back its speed and the number of connections into the graphics engine if power is low.
So if power is available, it could use, say, a 128-bit bus from the graphics CPU into the graphics engine; but if power is low, software could power down the big bus (and possibly the graphics CPU itself) and use a 1-bit serial bus to squirt the data into the graphics engine. Any of the lesser CPUs could do the same, shutting down the higher-spec CPUs and transferring execution of the main programs between the three processors.
Such an architecture does rely on ditching the idea of the CPU as the most important thing in a computer, focusing all effort on making graphics the number-one function inside a consumer computer chip, and littering CPUs around the GPU as peripherals that service the GPU's functions.
Currently all the GPUs are encumbered. This creates documentation access problems for anyone trying to build an ecosystem around a CPU, as Raspberry Pi have found out the hard way.
So it would be good to start with an open, unencumbered design. The development of a graphics supercomputer is made easier if one notes some critical advances made at OpenCores.
Their OpenRISC CPU, which is similar to an ARM CPU, is now operational on an FPGA; it has gcc support and runs Linux, so it's entirely feasible to add the graphics CPU to the FPGA through 'armchair' design effort and get it operational in next to no time.
Linux and the gcc toolchain take care of converting existing graphics libraries into executable OpenRISC assembly, and whatever else is needed to feed a custom graphics engine. The custom graphics engine can be designed and modified to one's heart's content until it works, because it's just FPGA real estate. So the graphics-centric GPU architecture could be developed in accelerated time and then rolled out to customers.
Who benefits? Mobile makers, desktop computer makers, and gadget makers whose products have colour graphics displays. In short, the beneficiaries are the vast majority of consumers who buy products with a display.