Microprocessor that improves as it runs

Just wanted to know: is it possible for a microprocessor (or its software) to improve its performance over a period of time? Can a microprocessor improve its performance as it continues to work? If not, what would we need to do to achieve this? Could it be implemented in silicon? Is it worth doing? Is there any research in progress?

Reply to
v4vijayakumar

Write a better compiler. About ten years ago, several of the SPEC benchmarks were boosted by quite a significant amount by clever compiler tricks. So my Pentium became 20% faster overnight - at least if you looked at the SPEC results.

You can also make application programs faster over time, running on the same processor.

No, the "aging" of silicon will make your processor slower over time, not faster. You can do that only in software.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
Reply to
Bernd Paysan

What you are referring to is known as either AI or heuristics - software that learns over time by interacting with its environment.

Luhan

Reply to
Luhan

Sure, by adding a cache.

Jon

Reply to
jon

Google "machine learning"

-- Joe Legris

Reply to
jalegris

Luhan wrote:

No, it is not about AI/heuristics/machine learning.

> Sure, by adding a cache.

It is not about cache memory either.

Anyhow, it takes a microprocessor one machine cycle to execute "move r1 r2". My question is: is it possible for a microprocessor to make more than one move in a machine cycle, or to make one move in less than one machine cycle?

Reply to
v4vijayakumar

Perhaps a more relevant question is "Why has my code suddenly developed timing problems?"

Reply to
Tom Lucas

Some virtual machines already improve program execution over time. I don't see why a programmable microprocessor can't do the same.

Yes. Many architectures provide a 'swap' instruction that does just that.

Generally speaking, simple instructions can complete long before the cycle ends, but then have to wait for the next clock transition. Unless you have a clockless architecture, of course.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

Vijay,

It seems you've asked two questions between your two posts. This more recent question is really just a matter of architecture and parallelism.

However, I do think that AI, or possibly reconfigurable computing, is an appropriate answer to your original post. Aside from caching and the inherent parallelism in the architectures themselves, effective improvement of hardware can be accomplished by enhancing algorithms and data sets (AI), or by literally reconfiguring the hardware to provide the instruction-execution architecture most relevant to the current data (RC).

The line between hardware and software optimization is inherently blurry here, as it well should be.

Julian Kain



Reply to
Julian Kain

Sure. Many use zero cycles to do a swap. All it is is a pointer change. The data goes nowhere.

Certainly. The one I work on can complete five instructions per clock cycle.

Some processors take zero cycles for some ops (like your swap).

--
  Keith
Reply to
Keith

Because it would make the microprocessor excessively complex (if it isn't already), and bugs in hardware are considerably more inconvenient and costly to fix than bugs in software.

There are already enough bugs in current microprocessors without making it worse.

--
David Hopwood
Reply to
David Hopwood

Or maybe even branch prediction...

Jon

Reply to
jon

There are people who are using genetic algorithms to invent improved circuit designs for FPGAs. So in theory a processor might be able to use this technique to optimize its own design when it is not doing anything else. It could optimize both the software it was running and the software that defines the hardware that defines it and that runs the software that optimizes itself.

But it probably would not be able to keep up with Moore's law and optimize itself very far or very fast compared to the rest of the industry. Although some circuit designers have reported more optimization than they expected this way.

Reply to
fox


That is the premise of the field known as "dynamic recompilation" in general. I believe that IBM and HP both have results in that area, but the only two uses that have got much notice at a public level are Transmeta's Crusoe and Efficeon processors and (perhaps) Sun's HotSpot JVM. The degree to which performance actually improves, as a function of time, is probably less than what could be achieved with a profile-directed native compiler, but various deployment issues can prevent that from being an option.

Cheers,

--
Andrew
Reply to
Andrew Reilly

I'm not sure I would include HotSpot and other software-based JITs among technologies that allow a processor to improve at run time. Transmeta's chips are included, though, since at run time they improve the performance of the only architecturally visible instruction set (x86).

Indeed. Transmeta decided not to make the native ISA visible, so you cannot do offline compilation from x86 to it, nor can you replace the architecturally visible ISA. IIRC, Transmeta made a prototype that had the JVM as the visible ISA (translating at run time to the same native ISA as the x86 chip), but that chip was never sold.

As others have mentioned, caches, branch prediction and similar technologies found in almost all modern CPUs will also make the processor improve performance over time, though not as dramatically as the Transmeta chips.

Torben

Reply to
Torben Ægidius Mogensen

Unless you're talking about overclocking, there's no after-market gross speed boost available.

What is possible is to improve performance by changing the software (and in some circumstances the microcode), but this generally requires outside help - it's not an autonomous process.

Another possibility is some sort of learning algorithm, but if efficiency is important it's usually best to get it right beforehand...

--

Bye.
   Jasen
Reply to
Jasen Betts

Has anyone worked with one of these clockless devices? I believe the ARM Amulet is one such device. I think one would be interesting from a tinkering point of view, but I'd hate to have to make one work in a system - particularly one with time constraints.

Reply to
Tom Lucas

Yes, it's possible, but the genius who would implement it practically hasn't appeared yet. We're still waiting on our Einstein.

Most people know that software can be optimized and made to run in parallel, so enough about that. It hasn't been done well, but eventually programs will self-modify and approach some sort of theoretical limit. With ever-increasing chip speeds the pressure has never been there to make it happen, so nobody has bothered to do the math, etc.

What most people do NOT know about are the strides being made with FPGAs. Here, the algorithms leave the realm of software-only optimization and enter the world of hardware: the chip itself can change. With an FPGA you can actually change the arrangement of logic gates on the chip to create new processors, or modify existing designs, immediately, through software. So, in theory, a self-modifying program could discover that it has a number of tasks that could run in parallel and modify its own hardware to create exactly as many miniature processors as it needs to do the job most efficiently.

Like I said, the capability is there, but the genius who brings it all together hasn't arrived yet. At the moment it is an untapped resource.

Reply to
purple_stars
