> At every computer conference I attend, I see numerous papers that show
> how to incrementally increase the capabilities of present products,
> plus a paper or two about some aspect of distant future processors.
> There is a sort of consistency among these papers that, taken
> together, creates an image of the manifest destiny of processors that
> are VERY different from present-day processors and networks. I am
> interested in that image, and I suspect that others here may also be
> interested.
I am reading this in comp.arch.fpga, but comp.arch readers may have different ideas.
> Here is the sort of image that I see emerging. Perhaps you have your
> own very different vision?
> 1. Processors would be able to automatically reconfigure around their
> defects with such great facility that reject components will be nearly
> eliminated. This would make it possible to build processors without
> any practical limits to complexity. Several papers have been presented
> explaining how this could be done with Genetic Algorithm (GA)
> approaches. Initial reconfiguring would be done at manufacture, but
> power-on reconfiguring would adapt to on-shelf and in-service
> failures. Processors with large numbers of defects would be sold as
> lesser performing processors.
Reminds me of stories about Russian processors that came with a 'bad instruction' list the way disk drives (used to) come with a bad blocks list.
If you follow such conferences, you necessarily get far-out ideas. But if you look at the actual processors in use today, they are not so different from 40 years ago. Bigger and faster, yes, but otherwise not that different.
> 2. An operating system would distribute the work as tasks, with each
> task having input and output vectors. Any task that fails to
> successfully complete would be re-executed on other sections of the
> processor while diagnostics identify the problem in the failed
> section, which would then be reconfigured around the new defect. This
> would allow systems to keep running and continue producing correct
> results, despite run-time failures.
I suppose there are some problems that could work that way. A web browser updating multiple windows on a page could farm out each to a different task. But many computational problems don't divide up that way.
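For what it's worth, the retry half of point 2 is easy to sketch in software. Here is a minimal Python mock-up of that input-vector/output-vector task model (all names hypothetical, with a deliberately defective section 0):

```python
def flaky_section(section_id, bad=frozenset({0})):
    """Hypothetical processor section; sections listed in 'bad' malfunction."""
    def execute(task, input_vector):
        if section_id in bad:
            raise RuntimeError("malfunction in section %d" % section_id)
        return task(input_vector)
    return execute

def run_task(task, input_vector, sections):
    """Re-execute a failed task on other sections until one produces an
    output vector; the failed section would meanwhile be diagnosed and
    reconfigured around its new defect."""
    for section in sections:
        try:
            return section(task, input_vector)
        except RuntimeError:
            continue  # retire this section and try the next one
    raise RuntimeError("task failed on every section")

sections = [flaky_section(i) for i in range(3)]
print(run_task(sum, [1, 2, 3], sections))  # section 0 fails, section 1 answers 6
```

The hard part, as noted, isn't the retry loop; it's that most computations aren't naturally expressed as independent tasks with clean input and output vectors.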
> 3. Memory would be integral to the CPU, and would be in the form of
> thousands (or millions) of small memory banks that would eliminate the
> memory bus bottleneck. Switched memory buses could quickly move blocks
> of data around.
> 4. The processor would be organized as a small (2-4) number of CPUs,
> each having a large number of sub-processors capable of dynamic
> reconfiguration to specialize in the computation at hand. That
> reconfiguration would be capable of the extensive data-chaining needed
> to execute complex loops as single instructions, and do so in just a
> few machine cycles, after suitable setup. Sub-processors would
> probably be reconfigurable for either SIMD or MIMD operation.
Very few problems divide up that way. For those that do, static reconfiguration is usually the best choice. Dynamic reconfiguration is fun, but most often doesn't seem to work well with real problems.
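For the record, the SIMD/MIMD distinction in point 4 amounts to this (a hypothetical software analogy, not a claim about any real hardware):

```python
def simd(op, lanes):
    """SIMD: every sub-processor applies the same operation to its own data lane."""
    return [op(x) for x in lanes]

def mimd(programs):
    """MIMD: each sub-processor runs its own operation on its own data."""
    return [op(x) for op, x in programs]

square = lambda x: x * x
print(simd(square, [1, 2, 3]))         # [1, 4, 9]
print(mimd([(square, 4), (abs, -5)]))  # [16, 5]
```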
> 5. The system would probably use asynchronous logic extensively, not
> only for its asynchronous capabilities, but also for its inherent
> ability to automatically recognize its own malfunctions and trigger
> reconfiguration.
> 6. A new language with APL-like semantics would allow programmers to
> state their wishes at a high enough level for compilers to determine
> the low-level method of execution that best matches the particular
> hardware that is available to execute it.
APL could have done most of this for a long time, yet it has never caught on. On the other hand, you might look at the ZPL language: not as high-level, but maybe more practical.
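To illustrate what "state the wish, let the compiler pick the method" means, compare a high-level, APL-flavoured dot product with the explicit loop a compiler might lower it to (plain Python standing in for both levels):

```python
# High level: say *what* -- one array expression; the runtime is free
# to vectorize, chain, or parallelize it however the hardware allows.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Low level: say *how* -- the indexed loop the hardware actually runs.
def dot_loop(a, b):
    acc = 0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

print(dot([1, 2, 3], [4, 5, 6]))  # 32, same as dot_loop
```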
> 7. There are other items on this list, but they aren't as easy to
> explain, and they may not be essential to achieve the manifest destiny
> of processors.
> Note that the billions of dollars now spent on developing GPU-based
> and large network-based processors, along with the software to run on
> them will have been WASTED as soon as Manifest Destiny processors
> become available. Further, the personnel who fail to quickly make the
> transition to Manifest Destiny processors will probably become
> permanently unemployed, as has happened at various past points of
> major architectural inflection.
Consider that direct descendants of the 35-year-old Z80 are still very popular, among other places in many calculators and controllers. New developments might be used for certain problems, but the old problems can be handled just fine by older processors.
For many years now, the economy of scale of people buying faster processors to browse the web or run spreadsheets has supplied computational sciences (computational physics, computational chemistry, and computational biology) with cheap, fast machines. Machines that wouldn't have had sufficient economy of scale without those other uses. The whole idea behind GPU processors is that the economy of scale of building graphics engines for gamers can also be used for computational science.
> Apparently the only conference around with a sufficiently broad
> interest and attendance to host discussions at this level is
> WORLDCOMP. This would provide a peer reviewed avenue of legitimation
> for Manifest Destiny research. I have talked with Hamid, the General
> Chairman, about hosting these discussions, and he is OK with it,
> providing that I can drum up enough interest. So, I need to determine
> the level of interest out there in a more distant future of computing
> that lies beyond just the next product.
Consider the latest deviation from traditional processor design, the VLIW Itanium. VLIW has been around for years and never did very well. Some thought its time had come, but it is sinking just like the similarly named boat.
> Conferences aside, please email me or post your level of interest, and
> please pass this on to any others you know who might be interested.
-- glen