Original (5V) Xilinx Spartan?

Hi Peter,

Amazing!

What a pity!

I just said that, if you want, we can recapitulate the metastability stuff. During your absence we found the cure for metastability (Rick designed the circuit), we built, tested and patented a perpetual motion machine (Ray worked a lot here) and finally, we almost finished a prototype of a Time Machine. We are preparing a Dinosaur Hunt, do you want to join us?

Now, coming back to Earth. You said (@ Thinking out loud about metastability): "I have never seen strange levels or oscillations ( well, 25 years ago we had TTL oscillations). Metastability just affects the delay on the Q output." But Philip Freidin showed (@ Mitigating metastability) some pictures of the FF output during metastability that disagree. Do the Xilinx FFs have a different behavior?

One more question. You also said (@Thinking out loud about metastability): "Remember: Metastability causes an extra 3 ns of unpredictable delay once in a billion years... Seems to be an affordable risk.". What kind of input? What clock frequency?

Luiz Carlos

Reply to
Luiz Carlos

Peter,

Forget my last question, I saw your post at "opinions are OK".

Luiz Carlos

Reply to
Luiz Carlos

Hi Austin,

I was just kidding with Peter. No offense at all. I just said that, if he wants, we can recapitulate all the metastability stuff, so he does not need to be sad about not being here! See my other post.

Since he had just come back from Portugal, I wrote in Portuguese!

Luiz Carlos

Reply to
Luiz Carlos

I have a lot of respect for Phil, we are personal friends and have worked together for over 20 years. I think he used old TTL pictures.

300 MHz clock, ~50 MHz data, Virtex-IIPro. See TechXclusive on the Xilinx web.

Peter

Reply to
Peter Alfke


Is this correct? Wouldn't the 3.3 ns to 10 ns increase in clock time buy you (10 - 3.3)/0.5 lots of 'a million times' scalings?

Or do you mean the time to trigger an event, not to fail due to one?

What about this issue: with a CLK/Data stream, the CLK pulses that are not adjacent to the DATA edges cannot have metastable events, so they should not enter the scaling?

The best model would seem to be a Data.Aperture AND a Clock.Aperture (both very small, but I don't think they HAVE to be equal), and when they overlap or come closer than a critical time threshold, the metastable dice rolls. What happens after the roll depends on how far away the next clock is (call this the settling tail).

Prediction stats would be on an area-overlap basis, and assuming async signals (non-zero phase velocity), the area product would be proportional to (Data.Aperture/Data.EdgeT) x (Clock.Aperture/Clock.EdgeT)

Typically, Data.EdgeT = Data H or L time, and Clock.EdgeT = ClkPeriod.

This is average trigger/dice roll prediction, but the actual 'metastable dice roll profile' will depend on the phase velocity, and will have peaks much higher than the average.
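As a sketch only, the average-rate part of this proposed overlap model can be written down directly. The aperture and edge-time numbers below are assumed, purely for illustration, not measured values:

```python
def overlap_fraction(data_aperture, data_edge_t, clock_aperture, clock_edge_t):
    # Jim's proposed area-product estimate: for truly asynchronous
    # signals, the average chance that the two apertures coincide is
    # proportional to the product of the two duty fractions.
    return (data_aperture / data_edge_t) * (clock_aperture / clock_edge_t)

# Assumed illustrative numbers: ~0.07 fs apertures,
# 50 MHz data (10 ns H or L time), 300 MHz clock (~3.33 ns period).
p = overlap_fraction(0.07e-15, 10e-9, 0.07e-15, 3.33e-9)

# Halving Clock.EdgeT (i.e. doubling the clock rate) doubles the
# estimate, matching the usual linear frequency scaling.
p2 = overlap_fraction(0.07e-15, 10e-9, 0.07e-15, 3.33e-9 / 2)
```

Note this reproduces the same linear-in-frequency behavior as the standard MTBF equation; the difference is only in how the peak (non-average) behavior would be handled.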

What if your system hits/moves very slowly over this 'phase jackpot' ?

Here, area-mitigation stats are not much use, and you have to rely mainly on the settling tail to next clock ( and maybe a small amount on the natural system jitter ) IIRC Peter quoted 0.05fs virtual aperture time, and natural jitter is likely to be some few ps - certainly large relative to the aperture ?

An experimental setup designed to focus on this phase jackpot, would give interesting results, and allow peak estimates, as well as a higher occurance for more usefull Tail stats gathering. Summary : Best predictor model would have Data.Aperture, Clock.Aperture and a Settling Tail. Exact nature of the settling tail is system measurable over a range of a few decades, but extrapolation is dangerous.

Agreed. I still think from an 'average user' perspective, that a specific 'design cell' approach would help.

Also, from a technical detail viewpoint, implementing a 'regenerative latch triplet' [Pre-Latch + Flip Flop] or [Dual Flip Flop] in a single local space, removes routing delays from one metastable tail.

It does NOT 'fix' metastable behaviour, but it does encapsulate it, and move it to the best the silicon can provide, and eliminates the potentially variable routing delays. It also allows for future technical research and improvements to reduce the apertures, and the settling tail.

- jg

Reply to
Jim Granville

I checked with Spartan marketing ( I will never quote specific prices without their blessing ), and they confirmed my numbers:

The price per LUT/flip-flop is not constant. At the small end, the package cost drives it up, and at the high end, it is the yield loss that drives it up.

The sweet spot is around the 3S1000 with 15 360 LUTs/FFs. It will sell in late 2004, slowest speed grade, in large quantities for $20. That's 0.13 cents per LUT/FF.

The number for the 2S400 is $10 for 3840 LUTs/FFs = 0.26 cents per LUT. For the top-end 3S5000 it is $150 for 66 560 LUTs/FFs = 0.23 cents per LUT.

That means I was pretty close with my 0.2 cents.

Peter Alfke


Reply to
Peter Alfke

I'm assuming XC3S1000-4FG676C will cost $85.65 in 5000 quantities.

(This was a recent Xilinx quote.)

I also assume that the $20 figure is for seriously large quantities, but I'm still surprised there's that much of a difference.

Reply to
Pete Fraser


Jim, I have seen your name here before, but I don't know what your level of understanding of metastability is. So forgive me if I sound like I am talking down to you. I don't know if you are trying to discuss fine details of this topic or if you are new to the issues of metastability.

If you look up references about metastability you will find that the MTBF time scales linearly with clock and data rate, but exponentially with settling time. There is a constant for each part of the equation. These two constants are what characterize a particular FF design and process used to build it. Peter's comment is saying that if you allow just 3 ns settling time with his rates and parts, you will have an MTBF of a billion years. Certainly you can go longer and get MTBF times longer than the age of the universe. So yes, 10 ns would be way more than enough.
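For concreteness, the standard equation rickman describes can be sketched as follows. The constants TAU and T0 here are invented, illustrative values only, not measured Xilinx parameters:

```python
import math

def mtbf_seconds(f_clk, f_data, t_settle, tau, t0):
    # Classic metastability model:
    #   MTBF = exp(t_settle / tau) / (f_clk * f_data * T0)
    # tau and T0 are the two constants that characterize a particular
    # flip-flop design and the process used to build it.
    return math.exp(t_settle / tau) / (f_clk * f_data * t0)

# Assumed constants, purely for illustration:
TAU = 50e-12   # settling time constant (~50 ps)
T0  = 1e-9     # aperture/window constant (~1 ns)

base   = mtbf_seconds(300e6, 50e6, 3e-9, TAU, T0)
faster = mtbf_seconds(600e6, 50e6, 3e-9, TAU, T0)        # double the clock
longer = mtbf_seconds(300e6, 50e6, 3e-9 + TAU, TAU, T0)  # one extra tau

# Linear in the rates: doubling the clock halves the MTBF.
# Exponential in settling time: each extra tau multiplies the MTBF by e.
```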

No, a metastable event will happen at a much higher rate, based only on the rates of the clock and data. But it will have no impact on your circuit if you don't use the output until after the metastability has settled out. Given a time period, this calculation determines how often a metastable event will persist that long and cause an error.

This is already considered in the calculation. That is why the frequencies are multiplied. The assumption is that the two rates are truly asynchronous and are not correlated in any way. Then the chance of them happening in just the right timing relation is a function of how often each of them is occurring.

This may sound good, but it is no different from the current model and would be much harder to measure. It is best not to think too hard about this, but rather to be a bit on the empirical side. That seems to be one way that Peter is very smart. His measurements seem very good to me and many others. It is no good to rationalize about things you really can't measure.

I think "phase velocity" is *way* over the top. Before improving on the current formula, it would be good to find something wrong with it. Is there anything about it that falls short?

All of this is really just a way to relate what is happening. Since the noise in the circuit is relatively large, I would expect tons more jitter in the "window" than the actual width. So really the fs window is just a concept, not a very real event.

Can you explain how this would be better than the current model?

Or you can just use the double FF approach and require a routing time for this path that is at least 3 ns less than the clock period. Again, simple, empirical and effective.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman

I recommend that you find an alternative and go back to your supplier for a better price. I have not gotten a quote for this part, but based on my experience you should be able to do better than $40 at your volume and package. You might be able to get below $30.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman


Sorry, was I that unclear ?

I think we are saying the same thing.

I was asking Peter for a clarification; only he can know what he meant.

I beg to differ. The best understanding comes from finding models that are easy to explain, can be used in the widest manner, and also help guide (new) measurements and understanding. Being a designer, I am all for 'hard numbers'.

Above you state "The assumption is that the two rates are truly asynchronous and are not correlated in any way."

If the model cannot cope with other than this hypothetical ideal, that's rather 'falling short' ?

The concept of phase velocity is not way over the top, as it introduces the important point of what to do when 'truly async' does not apply, and also what to do if your design needs PEAK rather than simple average tolerance. In some designs, that aspect will be important:

A system can have an 'average MTBF' number of some years, but still fail a number of times in one hour.

IIRC Austin L. gave a good real-world example ?

Agreed, that's why I called it a virtual aperture. The idea of Clock and data apertures also gives the correct dimensions to the answer.

See above. I don't see it as radically different than the current thinking, ( the tail model is the same, only I'd be more cautious about far-extrapolation ) but it does allow better handling of peak/average predictions, and it leads to real measurements to define these two.

But you have to know enough to take those steps, and it is still exposed, more than encapsulated. I'm thinking of the newest breed of graduate, and they represent more the average user than you or I :)

-jg

Reply to
Jim Granville

I meant to write a lengthy rebuttal and explanation, but rickman said it all. Thanks!

Peter Alfke

Reply to
Peter Alfke

I have a new idea for how to simplify the metastable explanation and calculation, following Albert Einstein's advice that everything should be made as simple as possible, but not any simpler:

We all agree that the extra metastable delay occurs when the data input changes in a tiny timing window relative to the clock edge. We also agree that the metastable delay is a strong function of how exactly the data transition hits the center of that window. That means we can define the width of the window as a function of the expected metastable delay.

Measurements on Virtex-IIPro flip-flops showed that the metastable window is:

- 0.07 femtoseconds for a delay of 1.5 ns.
- The window gets a million times smaller for every additional 0.5 ns of delay.

Every CMOS flip-flop will behave similarly. The manufacturer just has to give you the two parameters (x femtoseconds at a specified delay, and y times smaller per ns of additional delay).

The rest is simple math, and it even applies to Jim's question of non-asynchronous data inputs. I like this simple formula because it directly describes the actual physical behavior of the flip-flop, and gives the user all the information for any specific systems-oriented statistical calculations.
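As a sanity check, here is a sketch of "the rest is simple math" using the two Virtex-IIPro parameters quoted above, applied to the 300 MHz clock / ~50 MHz data scenario Peter mentioned earlier in the thread:

```python
import math

def window_seconds(delay_ns):
    # Peter's two quoted Virtex-IIPro parameters: 0.07 fs of capture
    # window at 1.5 ns of delay, and a million times smaller for
    # every additional 0.5 ns of delay.
    steps = (delay_ns - 1.5) / 0.5
    return 0.07e-15 * (1e-6) ** steps

def mtbf_seconds(f_clk, f_data, delay_ns):
    # For truly asynchronous inputs, events persisting longer than
    # delay_ns occur at an average rate of f_clk * f_data * window.
    return 1.0 / (f_clk * f_data * window_seconds(delay_ns))

SECONDS_PER_YEAR = 365.25 * 24 * 3600
years = mtbf_seconds(300e6, 50e6, 3.0) / SECONDS_PER_YEAR
# With these numbers, a total delay budget of 3 ns already puts the
# MTBF well beyond a billion years, consistent with Peter's earlier
# "billion years" remark (the exact figure depends on rounding).
```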

Peter Alfke, Xilinx Applications

Reply to
Peter Alfke

Quite agree.

eg: Take a system that is not randomly async but, by some quirk of nature, actually has two crystal sources, one for the clock and another for the data. These crystals are quite stable, but have a slow relative phase drift due to their 0.5 ppm mismatch.

Now let's say I want to know not just the statistical average, but to get some idea of the peak: the real failure mode is not 'white noise', but has distinct failure peaks near 'phase lock', and nulls clear of this. It seems management wants to know how bad it can get, and for how long, not just 'how good it is, on average', so we'll humour them :)

That's a "specific systems-oriented statistical calculation". Please demonstrate how to apply the above x & y, to give me all the information I seek.

-jg

Reply to
Jim Granville

Interesting. Let's say we have two frequencies, 100 MHz even, and 100.000 050 MHz, which is 50 Hz higher. These two frequencies will beat or wander over each other 50 times per second. Assuming no noise and no jitter, each step will be 10 ns divided by 2 million = 5 femtoseconds. That is 80 times wider than the capture window for a 1.5 ns delay. Therefore we can treat this case the same way as my original case with totally asynchronous frequencies. I think even jitter has no bearing on this, because it also would be far, far wider than the capture window. That means this slowly drifting case is not special at all, except that metastable events would be spaced multiples of 20 ms (1/50 Hz) apart. But that's irrelevant for events that occur on average once per year or millennium.
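The arithmetic here is easy to check directly (the 0.07 fs window is the Virtex-IIPro figure quoted earlier in the thread; with that exact value the step comes out around 70 times the window width, which Peter rounds to 80):

```python
f_clk  = 100.000000e6   # 100 MHz even
f_data = 100.000050e6   # 50 Hz higher

beat_hz = f_data - f_clk                # they wander over each other 50x/s
cycles_per_beat = f_clk / beat_hz       # 2 million clock cycles per beat
step = (1.0 / f_clk) / cycles_per_beat  # phase slip per cycle: 10 ns / 2e6

window = 0.07e-15                       # capture window at 1.5 ns delay
ratio = step / window                   # each 5 fs step vs the 0.07 fs window
```

Because each step is many times wider than the window, the drifting case steps clear across the window rather than dwelling in it, which is why it can be treated like the fully asynchronous case.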

Now, you will never ever, under any circumstances, get a guarantee not to exceed a long delay, since by accident the flip-flop might go perfectly metastable and stay for a long time. It's just an extremely small probability, expressed as a very, very long MTBF. That is the fundamental nature of metastability.

To repeat, I like the capture window model.


Reply to
Peter Alfke

I don't want to beat a dead horse, but I do want to make clear that the capture window model does not eliminate the frequencies of the clock and data from the failure rate calculation. The basic probability of a failure from any single event is clearly explained by the window model, but to get a failure rate you need to know the clock and data rates, to know how often the possible event is tested, so to speak. If you double either the clock or the data rate, you double the failure rate.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman

I'm collecting empirical results. Do you have any URLs, especially covering the 'double either' aspect?

-jg

Reply to
Jim Granville


The asynchronous system produces an even distribution across the sampling clock cycle. The synchronous system with arbitrary phase gives you a lumped distribution at the phase offset. The critical point to realize, and the reason you won't get a system consistently going metastable, is that there is *significant* jitter in the sampling and data clocks relative to the metastability window.

Determine the distribution of the data relative to the sample point. The peak of this (gaussian?) distribution will be the worst-case error point. What percentage of that statistical distribution is within the 0.07 femtosecond window? This provides for the "worst case" for management or for engineers.
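A sketch of the calculation John describes, assuming (purely for illustration) a Gaussian edge-timing distribution with a few picoseconds of jitter and the 0.07 fs window quoted earlier:

```python
import math

def fraction_in_window(window, sigma, offset=0.0):
    # Fraction of a Gaussian edge-timing distribution (std dev sigma,
    # centered `offset` away from the metastability window) that lands
    # inside a window of the given width. offset = 0 puts the peak of
    # the distribution right on the window: the worst case above.
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    return cdf(offset + window / 2.0) - cdf(offset - window / 2.0)

# Assumed numbers: 0.07 fs window, 3 ps of combined clock/data jitter.
worst = fraction_in_window(0.07e-15, 3e-12)          # peak on the window
off   = fraction_in_window(0.07e-15, 3e-12, 10e-12)  # peak ~3 sigma away
# Even in the worst case only a tiny fraction of samples fall inside
# the window, because the jitter is vastly wider than the window.
```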

It may not have been as easy when the metastability window was much larger than the system jitter.

Reply to
John_H

Peter,

My understand> I have a lot of respect for Phil, we are personal friends and have

--

--Ray Andraka, P.E. President, the Andraka Consulting Group, Inc.

401/884-7930 Fax 401/884-7950 email snipped-for-privacy@andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Reply to
Ray Andraka

Can Philip Freidin clarify these photos? i.e. the devices (date codes & process?) under test, the clock/data conditions used to stimulate the failures, and the approximate failure rates observed?

-jg

From Philip's excellent FAQ:

Reply to
Jim Granville

[Apologies if everybody was hoping this horse was dead.]

Suppose you have an analog detector. Its output is 0, 1, or maybe something in between when it's in transition. How do you turn that into a clean digital signal? Doing that is the same as solving the metastability problem.

A digital logic gate operating near threshold is roughly linear: it's trying to decide if the input is above or below the threshold. A gate is really an analog circuit if you look hard enough, just saturated most of the time. But the times we are interested in are when it's not saturated.

--
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
Reply to
Hal Murray
