Async FPGA ~2GHz

And (in some cases) the use of multiple carriers, rather than a single carrier, which simplifies everything else.

Not really. They are still completely applicable.

ADSL DMT uses OFDM modulation (Orthogonal Frequency Division Multiplexing, a multicarrier modulation).

Not really. 10GBase-T uses a modified 16-level pulse amplitude modulation, which is more closely related to 1000Base-T than to what telephone and DSL modems use.

Reply to
Eric Smith

.... come on now ... splitting semantic hairs, are we? Shannon's work is described by some as a law, even though most of us understand it's really a theory that has never been proven mathematically. Ditto with "Amdahl's Law" and a host of similar predictions. Common-knowledge proofs based on the state of the art and general acceptance are generally considered, informally, to be laws ... with Amdahl's speculations being just one of many widely held beliefs that serve as a folk "law".

Hardly, except maybe for some precise mathematical niches.

When I was taking engineering classes in the early 1970s, one rather lengthy lecture was on modems, along with a detailed "proof" of why modems would not get faster than 600 baud, based on the Nyquist sampling theorem, Shannon's work, and a few others. The rather lengthy presentation described splitting a 3 kHz bandwidth in half for full duplex, choosing carrier frequencies that were not harmonics, and the minimum number of carrier cycles necessary to decode a symbol.

In retrospect, the proof contained some assumptions that have since been invalidated, mostly because of improvements in phone-line noise. With lower noise (crosstalk) came the ability to design for a higher channel capacity (Shannon's theorem).
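For reference, the bound under discussion is Shannon's channel capacity, C = B log2(1 + S/N). A quick sketch of how capacity grows with line quality (the SNR figures below are illustrative assumptions, not measured line data):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Classic ~3 kHz voiceband channel.
# At ~30 dB SNR the capacity is roughly 30 kbit/s; a quieter
# line (~38 dB, assumed) pushes it toward V.34/V.90-era rates.
print(shannon_capacity_bps(3000, 30))  # ~29,900 bit/s
print(shannon_capacity_bps(3000, 38))  # ~37,900 bit/s
```

The point being made above falls out directly: the 600-baud "proof" baked in a noise figure, and once crosstalk dropped, the same 3 kHz of bandwidth supported far more bits per second.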

Reply to
fpga_toys

I thought we had been discussing Shannon's Channel Capacity *Theorem*. But I see you must be referring to something else. What exactly?

Allan

Reply to
Allan Herriman

John Lofton Holt: We got 1.93-GHz equivalent performance at 21 degrees Celsius and 1.2-volts Vdd. At minus 196 degrees Celsius we got 2.3-GHz performance at 1.2-V. Performance does taper off with increased temperature. At 130 degrees Celsius we got 1.4-GHz at 1.0-V. But even more important for us is voltage scaling.

As you linearly decrease the voltage you get a cubic improvement in power consumption. So at 0.6-V we got 400-MHz performance on our prototype but with an 87 percent reduction in power consumption.

eetimes: But surely CMOS doesn't work at 0.2-V and at 3.9-V a device in a 90-nm process would burn up?

John Lofton Holt: We've told you what we found. The chip did work down to 0.2-V and up to 3.9-V, but we did not test the chip for an extended period. Also it's true that the leakage current had increased by 50 percent when we brought the voltage back down from 3.9-V. It's also true that foundry SRAM cells would not work below about 0.8-V, but we've used proprietary SRAM cells.

Cheers, Jon

Reply to
Jon Beniston

Reply to
Peter Alfke

This was "Marketing at its Finest Hour".

Peter

Reply to
Peter Alfke

Jon,

A small enough area of 90nm will run for a while on 3.9 volts. But it is very likely that those gates are all overstressed, and the fact that their leakage popped up means that the devices are at "end of life."

I liken this press release to one we made: (in 1999!)

formatting link

A demonstration of capability.

What we are all waiting for is the PRODUCT with a usable set of design tools, and some cool CORES and IP so that it does something USEFUL at a reasonable COST.

Right now this is a (proposed) solution looking for a problem (more likely looking for more funding).

And, I am completely sincere when I said "good luck."

Austin

Reply to
Austin Lesea

This came over the wire 5/1 from the CEO of Achronix:

"The fundamental hypothesis of the company is that nobody needs to know that internally the architecture is asynchronous - and that means in terms of EDA, in terms of design, and at the foundry. In many ways making the software appear synchronous has been a bigger challenge than the hardware. "

Let me paraphrase: "We are running 2GHz, but you don't need to know anything about the FPGA to use it. Oh, and by the way, the internal structure is asynchronous. The SW handles everything and it is the hardest part."

John - Yes, I do need to know what you are doing internally. And yes, normally in the FPGA world, the SW is the hardest part. And the fact that you are 'asynchronous' makes me, umm, more than a little skeptical.

Mike_la_jolla (mdini at dinigroup dot bomb)

Reply to
mike_la_jolla

These all sound plausible.

That's quite a good parameter point (even if the numbers do not track his 'cubic' claim).

Be nice to know what 'work' means at 0.2V. I can't see that being anything other than data retention?

:) - It might have been prudent to keep that number 'in the lab', as clearly it was a stress point. Hopefully, it was the last test, or done on a 'spare' sample!

thanks - jg

Reply to
Jim Granville

In my book it is f x C x V^2. Where does the third power come from? Physics is physics, even when it runs asynchronously.

Peter Alfke

Reply to
Peter Alfke

I'm guessing the third is due to the reduction in frequency.

Cheers, Jon

Reply to
Jon Beniston

Let's see:

1.93-GHz @ 1.2V; 400-MHz @ 0.6V

(0.4/1.93) * (0.6/1.2) * (0.6/1.2) = 0.052, or 5.2%.

He mentions 13%, but that value will include static Icc, so to a rough first iteration it looks like:

5.2% Dynamic (calc) + 7.8% Static (inferred) = 13% (measured).

Assumed: Thermal equilibrium in their measurements.

However, because the clock frequency has changed, a better performance metric is energy ("cubic improvement in power consumption" is not the whole story, as it does come at a cost).

i.e. at 400MHz it will take 4.825x as long to complete a multi-cycle calculation (assuming a burst/sleep scheme), and their static Icc content limits the energy ratio to 0.62725, i.e. only a ~37% energy saving.

[An ideal device with no Static Icc, and following the Physics, would have an energy ratio of 0.2509]
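The arithmetic above can be reproduced directly; a small sketch using only the figures quoted in the interview (the 13% total power is just the claimed 87% reduction restated):

```python
# Back-of-envelope check of the figures above.
# Dynamic power scales as f * V^2, taken relative to the 1.93 GHz / 1.2 V point.
f_hi, v_hi = 1.93e9, 1.2   # quoted full-speed operating point
f_lo, v_lo = 0.4e9, 0.6    # quoted low-voltage operating point

dyn_ratio = (f_lo / f_hi) * (v_lo / v_hi) ** 2
print(f"dynamic power ratio:   {dyn_ratio:.3f}")      # ~0.052

# The interview claims an 87% power reduction, i.e. 13% remaining;
# the excess over the dynamic-only figure is attributed to static Icc.
total_ratio = 0.13
static_ratio = total_ratio - dyn_ratio
print(f"inferred static share: {static_ratio:.3f}")   # ~0.078

# For a fixed multi-cycle job, the slow point runs 4.825x longer,
# so energy = power ratio * slowdown.
slowdown = f_hi / f_lo
energy_ratio = total_ratio * slowdown
print(f"energy ratio:          {energy_ratio:.4f}")   # ~0.627
```

The thermal-equilibrium assumption noted above still applies: these ratios treat static Icc as fixed, while in practice it moves with die temperature.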

If you transfer that energy back to a battery, then the regulator choice becomes important.

Duty cycles and idle-to-max ratios also matter, because they set the average die temperature; static Icc suffers with temperature, and f_max drops as well.

-jg

Reply to
Jim Granville

Peter Alfke wrote:

LOL. f_max is a linear function of V. So you really get a cubic improvement in thermal design power. But of course, you also get less performance.

Actually, who cares about the amount of time the energy is spent over, for dynamic power? What really matters is Joules per operation, which is C x V x V. The number of operations depends on your algorithm. Once you know how quickly you need a result, you can select f to choose the amount of time you want to spend that energy in.
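A sketch of that separation (the switched-capacitance value is an arbitrary assumption for illustration): energy per operation depends only on C and V, while power additionally scales with f:

```python
# Energy per operation is C * V^2 and does not depend on f;
# power is energy per op times operations per second (f).
C = 1e-9  # effective switched capacitance in farads (assumed, for illustration)

def energy_per_op(v):
    return C * v * v

def power(v, f):
    return energy_per_op(v) * f

# Halving the voltage quarters the energy per operation:
print(energy_per_op(0.6) / energy_per_op(1.2))   # ~0.25

# If f_max also scales linearly with V, power drops cubically:
print(power(0.6, 0.5e9) / power(1.2, 1.0e9))     # ~0.125
```

This is the "cubic" claim in a nutshell: the cube comes from V^2 times an f that itself tracks V, but the per-operation energy saving is only quadratic.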

Kolja Sulimma

Reply to
Kolja Sulimma
