microprocessor that improves as it progresses

I'm not sure I would include HotSpot and other software-based JITs among technologies that allow a processor to improve at run-time. Transmeta's chips are included, though, since they improve, at runtime, the performance of the only architecturally visible instruction set (x86).

Indeed. Transmeta decided not to make the native ISA visible, so you cannot do offline compilation from x86 to it, nor can you replace the architecturally visible ISA. IIRC, Transmeta made a prototype that had the JVM as the visible ISA (translating at runtime to the same native ISA as the x86 chip), but that chip was never sold.

As others have mentioned, caches, branch prediction and similar technologies found in almost all modern CPUs will also make the processor improve performance over time, though not as dramatically as the Transmeta chips.
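
To make the cache effect concrete, here is a minimal C sketch: the first timed pass runs after the data has been pushed out of the caches, the second finds it resident and is usually faster. Sizes and timings are only illustrative and depend on the machine.

/* Minimal sketch of the "improves over time" effect of caches: after the
 * working array has been evicted by sweeping a large scratch buffer, the
 * first timed pass misses in the cache and the second typically hits. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WORK    (256 * 1024)        /* 1 MB of ints, fits in most L2/L3 caches */
#define SCRATCH (16 * 1024 * 1024)  /* 64 MB, larger than most caches          */

static long sum(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    int *work    = malloc(WORK * sizeof *work);
    int *scratch = malloc((size_t)SCRATCH * sizeof *scratch);
    if (!work || !scratch)
        return 1;

    for (size_t i = 0; i < WORK; i++)    work[i] = (int)i;
    for (size_t i = 0; i < SCRATCH; i++) scratch[i] = (int)i;  /* evicts 'work' from the caches */

    clock_t t0 = clock();
    volatile long s1 = sum(work, WORK);   /* cold: data was just evicted        */
    clock_t t1 = clock();
    volatile long s2 = sum(work, WORK);   /* warm: data now cached, usually faster */
    clock_t t2 = clock();

    printf("cold pass: %ld ticks, warm pass: %ld ticks (sums %ld %ld)\n",
           (long)(t1 - t0), (long)(t2 - t1), (long)s1, (long)s2);
    free(work);
    free(scratch);
    return 0;
}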

Torben

Reply to
Torben Ægidius Mogensen

Unless you're talking about overclocking, there's no after-market, gross speed boost available.

What is possible is to improve the performance by changing the software (and in some circumstances the microcode), but this generally requires outside help - it's not an autonomous process.

Another possibility is some sort of learning algorithm, but if efficiency is important it's usually best to get it right beforehand...

--
Bye.
   Jasen
Reply to
Jasen Betts

Has anyone worked with one of these clockless devices? I believe the ARM AMULET is one such device. I think one would be interesting from a tinkering point of view, but I'd hate to have to make one work in a system - particularly one with timing constraints.

Reply to
Tom Lucas

Yes, it's possible, but the genius who will implement it practically hasn't appeared yet. We're still waiting on our Einstein.

Most people know that software can be optimized and made to run in parallel, so enough about that. It hasn't been done well yet, but eventually programs will self-modify and approach some sort of theoretical limit. With ever-increasing chip speeds the pressure has never been there to make it happen, so nobody has bothered to do the math, etc.

What most people do NOT know about are the strides being made with FPGAs. Here, the algorithms leave the realm of software-only optimization and enter the world of hardware: the chip itself can change. With an FPGA you can actually change the arrangement of logic gates on the chip to create new processors, or modify existing designs, immediately, through software. So, in theory, a self-modifying program could discover that it has a number of tasks that could run in parallel and modify its own hardware to create exactly as many miniature processors as it needs to do the job most efficiently.
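
Purely as an illustration of that control flow, here is a C sketch. The fpga_load_soft_cores() and soft_core_run() calls are hypothetical stand-ins (stubbed out below), not part of any real vendor API.

#include <stdio.h>

#define MAX_CORES 8

/* Stub for a hypothetical partial-reconfiguration call that loads a
 * bitstream containing n identical soft processors. */
static int fpga_load_soft_cores(int n)
{
    printf("(stub) loading bitstream with %d soft cores\n", n);
    return 0;   /* 0 = success */
}

/* Stub for a hypothetical call that starts task 'task_id' on soft core 'core'. */
static void soft_core_run(int core, int task_id)
{
    printf("(stub) task %d dispatched to soft core %d\n", task_id, core);
}

static void run_tasks(int n_tasks)
{
    int cores = n_tasks < MAX_CORES ? n_tasks : MAX_CORES;

    if (fpga_load_soft_cores(cores) != 0) {
        /* Reconfiguration failed: fall back to the host CPU, serially. */
        for (int t = 0; t < n_tasks; t++)
            printf("task %d run on the host CPU\n", t);
        return;
    }
    for (int t = 0; t < n_tasks; t++)
        soft_core_run(t % cores, t);   /* round-robin the tasks over the cores */
}

int main(void)
{
    run_tasks(5);   /* e.g. the program has discovered 5 independent tasks */
    return 0;
}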

Like I said, the capability is there, but the genius who brings it all together hasn't arrived yet. At the moment it is an untapped resource.

Reply to
purple_stars

Of course it's possible (as already answered). If you aren't talking about AI and processors improving themselves automatically (no learning), maybe reconfigurable computing is what you're looking for. You put a RAM-based FPGA next to your processor and configure it to do some special tasks on demand (e.g. if you need DES, you configure it to act like dedicated encryption hardware). If the task is recurring (say, you encrypt a huge file), you can ignore the impact of reconfiguring the chip on overall performance, since the gain is far greater.
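
A minimal C sketch of that amortisation decision: reconfigure and offload only when the job is big enough to pay back the reconfiguration time. All of the timing constants are made-up placeholders; real figures would come from the particular FPGA, bitstream size and bus.

#include <stdio.h>

#define RECONFIG_COST_US  20000.0   /* assumed: time to load the DES bitstream      */
#define SW_US_PER_BLOCK       2.0   /* assumed: software DES, per 8-byte block       */
#define HW_US_PER_BLOCK       0.1   /* assumed: FPGA DES, per 8-byte block           */

/* Offload if reconfiguration plus hardware encryption beats pure software. */
static int worth_offloading(long n_blocks)
{
    double sw = SW_US_PER_BLOCK * n_blocks;
    double hw = RECONFIG_COST_US + HW_US_PER_BLOCK * n_blocks;
    return hw < sw;
}

int main(void)
{
    long sizes[] = { 100, 10000, 1000000 };   /* blocks, i.e. roughly 800 B .. 8 MB */
    for (int i = 0; i < 3; i++)
        printf("%8ld blocks: %s\n", sizes[i],
               worth_offloading(sizes[i]) ? "reconfigure and offload"
                                          : "encrypt in software");
    return 0;
}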

There are also ideas for processors that are reconfigurable in a slightly different sense, so that their architecture is optimised for one specific C program (e.g. MP3 decoding), and these could be dynamically reconfigured at runtime (in an FPGA). But it's ultimately left to the system designer to perform SW/HW matching optimisations. This could be taken even further to multiprocessor designs, C-program-to-system synthesis, etc.

Check out NISC (No Instruction Set Computer) and D. Gajski's other ideas on the future of digital design.

- R.

Reply to
Roland

I do not think it is just waiting for a genius. It is also waiting for the right economics.

I have some familiarity with FPGA implementations of general purpose microprocessors.

Overall, it looks like you give up at least 10X, and more like 100X, in performance, 10X in power, and 10X in area, for an FPGA implementation.

So your FPGA optimization of the existing microprocessor has to be 10-100X better - more performance, or better power - than an existing full custom design of a general purpose microprocessor, to be worthwhile. That's a big hurdle.

While I, you, or a recent college graduate can probably create an FPGA design that is 2-4X more efficient in gate delays - even 100X for special problems - doing so across the board is hard, and it's harder still to overcome that initial hurdle.

And then... say you have such an FPGA logic design that is 100X better than the best full custom general purpose microprocessor. Will it still be better than a full custom implementation of the RTL you put into the FPGA?

FPGAs only make sense if
a) you can gain performance by putting the task into hardware,
b) that gain is enough to overcome the performance lost by implementing the hardware in an FPGA,
c) the market is big enough to justify developing the FPGA design, and
d) the market is too small to justify full custom.
There are a lot of such markets for FPGAs, especially if development time is also an issue. But not everything fits this bill.
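
As a back-of-envelope illustration of that hurdle (all numbers made up): if the FPGA fabric costs you, say, 20X against full custom for the same RTL, the specialised design only wins once its algorithmic speedup exceeds that penalty.

#include <stdio.h>

int main(void)
{
    double fpga_penalty = 20.0;   /* assumed: FPGA is ~20x slower than full custom for the same RTL */
    double speedups[]   = { 4.0, 20.0, 100.0 };   /* algorithmic gain of the specialised design */

    for (int i = 0; i < 3; i++) {
        double net = speedups[i] / fpga_penalty;
        printf("specialised speedup %6.1fx -> net vs. full-custom CPU: %5.2fx (%s)\n",
               speedups[i], net, net > 1.0 ? "wins" : "loses");
    }
    return 0;
}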

Reply to
Andy Glew

That doesn't mean that it can't be done reasonably.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra
