accumulator (again)

(snip)

A favorite statistical physics problem is calculating the probability that all the air molecules will move to one half of a room. There are many other problems with a very small, but non-zero, probability.
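Back of the envelope: for N non-interacting molecules, the chance that all of them sit in one particular half at a given instant is

  P = (1/2)^N = 10^(-0.30*N)

and with N on the order of 10^27 for a room-sized volume of air, that works out to roughly 10^(-3*10^26). Non-zero, but you will be waiting a while.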

-- glen


Dear All,

I would like to thank you all for your contributions. I finally solved the problem. It was not in the code, as I had immediately assumed since I'm not very experienced in VHDL, but rather in my misinterpretation of the AD9058's datasheet. I feel very stupid!

It was thanks to all your comments that I decided to finally rethink the project as a whole and spotted the problem.

"God saves the internet and the good people that lives there"

jmariano


The drivers of metastability are probabilistic, yes. But given enough information you could certainly simulate the positive feedback loop that is a flip-flop.

I suspect that unless the ball that is the flip-flop state is poised right on the top of the mountain between the Valley of Zero and the Valley of One, that the problem is mostly deterministic. It's only when the after-strobe balance is perfect and the gain is so low that the FF voltage is affected more by noise than by actual circuit forces that the problem would remain probabilistic _after_ the strobe happened.

"Enough information", in this case, would involve a whole lot of deep knowledge of the inner workings of the FPGA, and the simulation would be an analog circuits problem. So I suspect that you couldn't do it for any specific part unless you worked at the company in question.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com


You keep talking about the critical path delay as if the metastable input is driving the critical path. There is only one critical path in a design normally. All other paths are faster. Are you assuming that all paths have the same amount of delay?

Regardless, all I am saying is that you don't need to use a path that has no logic to obtain *enough* slack to give the metastable input enough settling time. But in all cases you need to verify this. As mentioned in another post, Peter Alfke's numbers show that you only need about 2 ns of slack to get a 100 million year MTBF. Of course, whether this is good enough depends on just how reliable your systems have to be and how many there are: it is 100 million years for one unit, but for 10 million units it will only be 10 years MTBF for the group.
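Spelling out the arithmetic: failures in independent units add as rates, so

  MTBF_group = MTBF_unit / N = 1e8 years / 1e7 units = 10 years.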

Rick



Don't think of it as a stupid mistake, think of it as a "good catch"!

Rick



That's what probability is all about, dealing with the lack of knowledge. You don't know the exact voltage of the input when the clock edge changed and you don't know how fast either signal was changing... etc. But you do know how often you expect all of these events to line up to produce metastability and you know the distribution of delay is a logarithmic taper.
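For reference, the usual model behind characterization numbers like Peter's is, roughly,

  P(still unresolved after time t) = exp(-t / tau)
  MTBF = exp(t_slack / tau) / (T0 * f_clock * f_data)

where tau and T0 are constants measured for the particular device and f_clock, f_data are the clock and data event rates; each extra tau of settling time buys a factor of e in MTBF.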

I won't try to argue about how many angels can dance on the head of a pin, but I have no information to show me that the formula that Peter used is not accurate, even for extreme cases.

Rick


(snip, I wrote)

(snip)

No, but it might be that many have about the same delay. Well, my favorite things to design are systolic arrays, where there is the same logic (though with different routing, different delay) between a large number of FFs.

For any pipelined processor, the most efficient logic has about the same delay between successive registers.

Yes. One would hope that no logic would have the shortest delay, though in the case of FPGAs, you might not be able to count on that.

I have done designs with at most two LUTs between registers, and might even be able to do one.

-- glen


Well, first, I wasn't trying to contradict you -- I just picked the wrong place in the thread to answer Hal's question.

And second, before you can know the necessary inputs to your statistical calculations, you need to do some simulating to see how long it takes for the state to come down from various places on the mountaintop.

The difference between a circuit that has a narrow, sharp potential peak and one that has a wide, flat one is significant.

(One that had a true stable spot at 1/2 voltage would be mucho worse, but that's not too likely in this day and age).

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com

(snip, someone wrote)

(snip)

Story I heard some years ago, the sharper and narrower the peak, the harder it is to get into the metastable state, but the longer it stays when it actually gets there.

-- glen


Wow. That's counter-intuitive. I would think that the sharper the peak the less likely that the device would be stuck without knowing which way to fall.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com

(snip, I wrote)

First, remember that it is conditional on actually getting it to the metastable state.

I don't know if it is convincing or not, but consider balancing a knife on its edge on a table. You have a sharp and dull knife. Once you get the sharp knife balanced, it will make a deeper impression into the table and so stay up longer.

For the actual physics, there are some symmetries that require some correlations in the probability of getting into, and getting out of, a certain state. If you get it wrong, then energy conservation fails.

There is an old favorite of putting a dark and a light colored object in a mirrored room. (Consider an ellipsoidal mirror with two spheres at the foci.) Now consider the effect of black body radiation with a black and a white sphere. The black sphere absorbs most radiation (mostly IR light), but the white one doesn't absorb as much. Conservation of energy requires that the black one also emit more black body radiation (that is where the name comes from). If not, the black one would get warmer, and you could extract energy from the temperature difference.

Note that this is why heat sinks are (usually) black. (To get a connection to DSP.)

Warm objects have more electrons in higher (metastable) states.

-- glen


Well, one thing that I learned on this group is that metastability is not the most likely problem; it is time skew. If an unsynchronized input is fed to a number of LUTs scattered around the chip, it can have several ns of skew between them. The clocks have tightly controlled skew, so the unsynched input can be sensed differently at two locations. I ran into this on a state machine, and it caused the state logic to go to undefined states. This was finally explained, I think by one of the guys at Xilinx: it can have a thousand times higher probability than true metastability of a single FF.
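For what it's worth, the usual guard against both effects (a sketch only, not the fix Jon used) is to sample the raw input in exactly one flip-flop and fan out only the registered copy, with a second register giving the first one a full clock to settle:

library ieee;
use ieee.std_logic_1164.all;

entity sync2 is
  port (
    clk      : in  std_logic;
    async_in : in  std_logic;  -- raw, unsynchronized external input
    sync_out : out std_logic   -- registered copy, safe to fan out
  );
end entity sync2;

architecture rtl of sync2 is
  signal meta, stable : std_logic := '0';
  -- Keep both registers and place them close together; the attribute name
  -- varies by vendor (ASYNC_REG is the Xilinx one).
  attribute async_reg : string;
  attribute async_reg of meta, stable : signal is "TRUE";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta   <= async_in;  -- the only FF that samples the raw input; may go metastable
      stable <= meta;      -- one full clock of settling time before anything uses it
    end if;
  end process;
  sync_out <= stable;
end architecture rtl;

The point for the skew problem is that only one FF ever sees the raw input, so the rest of the logic is handed a single, consistent value each cycle; the point for metastability is the extra clock of settling time before that value is used.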

Jon



Sorry if my tone sounded like I was offended at all, I'm not. I was just trying to make the point that you don't know the shape of the "mountain" the ball is balanced on and I doubt a simulation could model it very well. But that is outside my expertise so if I am wrong...

But I still fail to see how that shape would affect anything significantly. Unless it has flat spots or even indentations that were local minima, what would the shape change? It would most likely only change the speed at which the ball falls off the "mountain", which is part of what is measured when they characterize a device the way Peter Alfke did.

Still, even if there are some abnormalities in the shape of the "mountain", is that really important? The goal is to get the probability so far out that you just don't have to think about it. If the shape changes the probability by a factor of 10 either way, it shouldn't be a problem. Just add another 200 ps to the slack and get another order of magnitude in the MTBF. Or was it 100 ps?
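In the exponential model, the extra slack needed per decade of MTBF is

  delta_t = tau * ln(10), about 2.3 * tau

so whether a decade costs 100 ps or 200 ps just comes down to whether tau is around 45 ps or 90 ps for the part in question.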

Rick



We are still not talking about the same thing. The *max* delay in each stage will be roughly even, but within a stage there will be all sorts of delays. If you are balancing all paths to achieve even delays, you are working on a very intense design, akin to the original Cray computers with hand-designed ECL logic.

Yes, that is the point: you need to verify the required slack time no matter what is in the path.


That would be good, but I don't know if it is very practical. To make that useful you also have to optimize the placement to minimize routing delays. I haven't seen that done since some of Ray Andraka's designs which are actually fairly small by today's standards. I can't conceive of trying that with many current designs.

Rick

