Power Consumption near Timing Failure Point

I know that an FPGA's power consumption is basically linear with clock frequency, but does anybody know what happens when the maximum clock frequency for the design is approached? Does power consumption remain linear in this region where failures and setup violations begin to occur, or would it then be expected to go up or down? Or is it totally design-dependent?

-Kevin

Reply to
Kevin Neilson

Kevin,

It remains linear with increased frequency until the frequency gets so high that the transitions no longer go 'rail to rail'.

Generally, the point at which signals stop going rail to rail is well beyond the point where timing has already failed and constraints are no longer being met.

At that point, power begins to decrease, because the voltage the signals charge and discharge to is reduced, and dynamic power scales with that voltage squared (so it drops quickly -- almost like you hit a wall).

It is also design dependent, as each resource has its own frequency at which rail-to-rail swing is no longer achieved.
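A back-of-the-envelope way to see this: dynamic power goes roughly as alpha * C * Vswing^2 * f, so it tracks frequency linearly until the achievable swing collapses. The little Python sketch below uses invented numbers and a deliberately crude swing model, purely to show the shape of the curve, not any real device:

    VDD     = 1.2      # core rail, volts (illustrative)
    C_TOTAL = 2e-9     # total switched capacitance, farads (illustrative)
    ALPHA   = 0.15     # average toggle (activity) rate (illustrative)
    F_WALL  = 800e6    # frequency where swing starts to collapse (illustrative)

    def swing(f_hz):
        """Crude model: full rail-to-rail swing below F_WALL, shrinking above it."""
        return VDD if f_hz <= F_WALL else VDD * F_WALL / f_hz

    def dynamic_power(f_hz):
        # Classic switching-power estimate: P = alpha * C * Vswing^2 * f
        return ALPHA * C_TOTAL * swing(f_hz) ** 2 * f_hz

    for f in (100e6, 400e6, 800e6, 1200e6, 1600e6):
        print(f"{f/1e6:6.0f} MHz  swing={swing(f):.2f} V  P={dynamic_power(f)*1e3:6.1f} mW")

Below F_WALL the printed power climbs linearly; above it, power falls with frequency because the squared swing term shrinks faster than f grows.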


Reply to
austin

It would depend. If (e.g.) counters start skipping clocks, then power consumption would be expected to decrease, but only for the cells that were hitting their thresholds (so a small % of total Icc). However, if something like a state engine starts short-cycling, then power consumption could increase, again slightly.
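To put a rough number on the "small % of total Icc" point, here is a trivial Python sketch with invented figures; it just weights each group of nodes by its share of the dynamic current and by how much of its toggling survives the failure:

    node_groups = {
        # name: (fraction of total dynamic Icc, fraction still toggling after failure)
        "failing counter bits": (0.03, 0.5),   # skips every other clock, say
        "rest of the design":   (0.97, 1.0),   # unaffected
    }

    icc_before = sum(share for share, _ in node_groups.values())
    icc_after  = sum(share * surviving for share, surviving in node_groups.values())

    print(f"relative dynamic Icc after failure: {icc_after / icc_before:.3f}")
    # -> about 0.985, i.e. only ~1.5 % lower, consistent with "a small % of total Icc"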

Why the question -- are you thinking of using changes in Icc to flag timing failures?

-jg

Reply to
Jim Granville

It might well be design and device dependent, especially if your design speed is near the max operating frequency of the device.

With CMOS processes, logic speed falls off significantly at the upper end of the temperature range. Combined with clock jitter, significant setup violations might create metastability problems whose contribution to power is unlikely to be linear. In fact, the extra dynamic power from metastability events might well be self-reinforcing: die heating slows the circuit response further, increasing both the number of metastable events and their duration.

Certainly an interesting question :)

Reply to
fpga_toys

Totally design dependent. Generally, we see non-linear behaviour when you start crossing into the domain of timing failures -- but the degree of the change and where it occurs vs. the theoretical Fmax of the design varies considerably. It will also depend on your input vectors.

Regards,

Paul Leventis Altera Corp.

Reply to
Paul Leventis

We had a design for which power consumption was linear up to near (but less than) the max rated frequency, and then the power consumption went down. I don't know if it's because of a timing failure--the design being tested has no self-checking mechanism--but I wondered if that were a possibility. -Kevin

Reply to
Kevin Neilson

How much less than?

Yes, I would be alert -- certainly there is no process explanation for a decrease; it has to be caused by a change in which nodes are toggling. You could confirm it is a timing failure by introducing another variable, such as temperature, and seeing whether the change-point moves. Simple items like counters will fail at lower frequencies, and often with some 'noise'. It is not a bad idea to overclock a design in test, to make sure you really DO have a timing margin.
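A sketch of that experiment in Python, assuming you already have frequency/Icc sweeps from your own bench setup (find_knee and its tolerance figure are hypothetical, not any vendor tool): fit a line through the safely-slow points and report where the measured current stops tracking it. If that knee moves when you heat the part, a timing failure is the likely cause.

    def find_knee(freqs_mhz, icc_ma, fit_points=5, tolerance=0.02):
        """Return the first frequency where Icc stops tracking the linear trend."""
        # Least-squares line through the first few (safely slow) points.
        xs, ys = freqs_mhz[:fit_points], icc_ma[:fit_points]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx

        for f, i in zip(freqs_mhz, icc_ma):
            expected = slope * f + intercept
            if abs(i - expected) / expected > tolerance:
                return f  # knee: Icc no longer follows the linear fit
        return None

    # knee_cold = find_knee(freqs, icc_cold)
    # knee_hot  = find_knee(freqs, icc_hot)   # expect the knee at a lower frequency when hot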

-jg

Reply to
Jim Granville

Hi Kevin,

If you are observing non-linearity in dynamic power at speeds close to or below your max operating frequency, I would be worried.

The frequency of operation given to you by the timing analyzer is quite conservative relative to what you should observe on a single device in nominal conditions. For example, in Stratix II the timing analysis assumes the transistor junction temperature is 85C, that the voltage at the pins of the device is 1.15V (and lower than that at the transistors), and that the device you are using is at the very edge of timing for the speed bin it is in. Unless you are using a temperature forcer, have intentionally dropped the voltage, and have somehow got your hands on a chip that is right at the binning limit, you should see faster operation in the lab.

How did you measure power? And how non-linear is the power you are seeing -- can you share some data?

One place where power becomes non-linear at relatively low frequencies is in the I/Os. Depending on your I/O loading, the capacitances can be big enough that you will be switching the I/Os before you've fully charged the external loads. As you increase frequency, the signal swing is reduced, dropping your dynamic power. In the core of the device, you'd need to be in the GHz range before signal swing started to limit things.
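A crude first-order model of that I/O effect, with invented component values: the output charges the external load through the driver's effective resistance, so once the half-period approaches the RC time constant the swing (and the power, relative to the full-swing line) falls away.

    import math

    VDD   = 3.3      # I/O rail, volts (illustrative)
    C_PAD = 30e-12   # external load capacitance, farads (illustrative)
    R_DRV = 50.0     # effective driver output resistance, ohms (illustrative)

    def io_power(f_hz):
        t_half = 1.0 / (2.0 * f_hz)                   # charging time per half cycle
        swing  = VDD * (1.0 - math.exp(-t_half / (R_DRV * C_PAD)))
        return C_PAD * swing * VDD * f_hz             # charge per cycle * VDD * f

    for f in (10e6, 50e6, 100e6, 200e6, 400e6):
        full_swing = C_PAD * VDD * VDD * f            # what full rail-to-rail swing would draw
        print(f"{f/1e6:5.0f} MHz  actual={io_power(f)*1e3:6.2f} mW  "
              f"full-swing={full_swing*1e3:6.2f} mW")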

When it comes to measuring power, if you are using a standard lab DC voltage supply and measuring the current out of it, you *will* see non-linear power unless you compensate the supply. Some supplies have a sense lead that you can connect to the board to measure the voltage as seen at the device. Without this, you will get IR drop through the cables connecting the supply to the board, and that IR drop will increase with clock frequency. The result is lower voltage at the pins of the device, which in turn reduces the device's power consumption and also lowers the maximum stable operating frequency.
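For the uncompensated-supply case, a toy model (all values invented) shows the same bending of the curve even when the silicon itself is perfectly linear, because the current drawn at higher frequencies drops the voltage actually seen at the pins:

    V_SUPPLY = 1.20      # programmed supply voltage, volts (illustrative)
    R_CABLE  = 0.05      # cable + connector resistance, ohms (illustrative)
    K        = 1.0e-9    # device constant: I = K * V_device * f  (from P = K*V^2*f)

    def device_point(f_hz):
        # With I = K * V * f and V = V_SUPPLY - I * R_CABLE, solve for V directly:
        v = V_SUPPLY / (1.0 + K * f_hz * R_CABLE)
        i = K * v * f_hz
        return v, v * i      # voltage at the pins, power dissipated in the device

    for f in (50e6, 100e6, 200e6, 400e6):
        v, p = device_point(f)
        print(f"{f/1e6:5.0f} MHz  V_pins={v:.3f} V  P_device={p*1e3:6.1f} mW")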

Regards,

Paul Leventis Altera Corp.

Reply to
Paul Leventis

There is an extremely narrow window of time in which metastability occurs in a flip-flop, and the window in which prolonged metastability occurs (something close to the clock period, for example) is infinitesimally small. The chance of having a large number of FFs in a metastable state for any appreciable time is essentially nil.
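The standard back-of-the-envelope MTBF estimate makes the point concrete. The constants below are generic textbook-style values, not figures for any particular FPGA family:

    import math

    TAU    = 50e-12    # metastability resolution time constant, s (illustrative)
    T0     = 1e-9      # effective metastability aperture, s (illustrative)
    F_CLK  = 200e6     # sampling clock, Hz
    F_DATA = 50e6      # rate of asynchronous input transitions, Hz

    def mtbf_seconds(resolve_time_s):
        # MTBF = exp(t_resolve / tau) / (T0 * f_clk * f_data)
        return math.exp(resolve_time_s / TAU) / (T0 * F_CLK * F_DATA)

    # Even a couple of nanoseconds of slack makes prolonged metastability
    # astronomically unlikely for a single flip-flop:
    for slack in (0.5e-9, 1e-9, 2e-9, 3e-9):
        print(f"slack {slack*1e9:3.1f} ns -> MTBF {mtbf_seconds(slack):.3e} s")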

As for "self-reinforcing from die heating", if the die is at

Reply to
Paul Leventis

Since the OP specifically asked about a window where setup violations occur, we are looking at metastability from a different viewpoint than the usual question of the probability of a metastable event from a random asynchronous input. First, metastability can be induced reliably in the lab for demonstration to students ... we did this in the 1980's, and here is a current online reference

formatting link
on how to demonstrate it. Note the key variable is not time or probability, but a precise window of input voltage.

Metastability can occur anywhere we have a feedback path in logic, but it's best understood in terms of FFs. It's generally defined as an event in which a FF continues to oscillate for longer than a clock period. What's particularly wicked about this is that any amount of oscillation results in additional dynamic switching power, including oscillations shorter than a full clock period, which would not even count as a metastable event by that definition.

Now, if we consider just the switching interval of a CMOS logic gate output, we have a predictable voltage/time ramp between the two rails. The voltage of this ramp is predictable in time relative to its clock (or driving input), and by varying the timing we can predictably produce inputs near/at the metastable input voltage of the FF.
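Some quick arithmetic (illustrative numbers only) on how narrow that voltage window is in time, given a roughly constant slew rate on the driving node:

    VDD       = 1.2       # volts (illustrative)
    RAMP_TIME = 0.5e-9    # time for the driving node to ramp rail to rail, s
    V_BAND    = 0.002     # width of the dangerous input-voltage band, V (illustrative)

    slew_rate   = VDD / RAMP_TIME        # volts per second
    time_window = V_BAND / slew_rate     # seconds of input-timing window

    print(f"slew rate   : {slew_rate/1e9:.1f} V/ns")
    print(f"time window : {time_window*1e12:.2f} ps")
    # Under a picosecond wide -- tiny, but repeatable if you can step the
    # input delay finely enough relative to the clock.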

This is the state we are describing here.

> There is an extremely narrow window of time where metastability occurs

Given the above, please explain why?

Reply to
fpga_toys

fpga_toys wrote: (snip)

The way I think of it is that most people want to put some logic between FFs to get something done. They then want to clock the circuit reasonably fast. The result is that the margin available for metastability to resolve might be 1/5 of a clock cycle. I don't think I would design closer than that. Adding one extra FF gives one complete cycle for it to resolve, or five times as long.
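Putting a number on "five times as long": the probability of still being metastable falls off roughly as exp(-t/tau), so the extra resolution time from one more FF multiplies the MTBF enormously. The tau and clock period below are illustrative, not device data:

    import math

    TAU    = 50e-12    # resolution time constant, s (illustrative)
    PERIOD = 5e-9      # 200 MHz clock period, s (illustrative)

    # Going from 1/5 of a cycle of slack to a full cycle adds 4/5 of a period:
    improvement = math.exp((PERIOD - PERIOD / 5) / TAU)
    print(f"adding one synchronizer FF multiplies MTBF by ~{improvement:.1e}")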

(snip)

If you are close enough to worry about setup times, then you have no margin left for temperature or voltage tolerance. Even without metastability the system could easily fail.

(snip)

-- glen

Reply to
glen herrmannsfeldt

Reply to
Peter Alfke

Thanks Peter ... nice to learn some nightmares are gone :)

Reply to
fpga_toys
