d-flop critical timing

The classic situation is that the output looks like a diverging exponential: if it sits within 0.1% of the halfway point in the first nanosecond, it will be 1% away after two nanoseconds, 10% away after three nanoseconds, and at one rail or the other after four.
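
A minimal numeric sketch of that divergence, assuming a single regenerative time constant (the 0.1%/1%/10% progression above implies tau of roughly 1 ns / ln(10), about 0.43 ns; the numbers are illustrative, not from any datasheet):

    import math

    # Assumed single-pole regenerative model: the deviation from the
    # metastable (halfway) point grows as d(t) = d0 * exp(t / tau).
    # Going from 0.1% to 1% of the swing in one nanosecond implies
    # tau = 1 ns / ln(10) ~ 0.434 ns.
    tau_ns = 1.0 / math.log(10)
    d0 = 0.001  # 0.1% of the swing at t = 1 ns

    for t_ns in (1, 2, 3, 4):
        d = d0 * math.exp((t_ns - 1) / tau_ns)
        print(f"t = {t_ns} ns: {min(d, 0.5):.3%} of the swing from halfway")
    # Prints 0.1%, 1%, 10%, then the output is pinned at a rail (50% cap).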

Oscillate or ripple?

Not at the instant when the sampled input gets to the output, but metastable states tend to resolve relatively quickly, so it will mostly get to high or low pretty quickly.

For ECL D-type bistables the time window for metastability is pretty narrow, and the chance of hitting it correspondingly low.

The actual delay from clock edge to the aperture instant isn't going to be totally stable, so the plot of average Q is going to have a slope.
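
A sketch of what that slope looks like, assuming (purely for illustration) that the signal-edge jitter and the aperture-instant wander are independent and roughly Gaussian: the averaged Q traced out against the programmed delay is a cumulative Gaussian whose width is the root-sum-square of the two.

    import math

    # Assumed figures, not from any datasheet.
    sigma_signal_ps = 2.0    # jitter of the edge under test
    sigma_aperture_ps = 1.0  # wander of the flop's aperture instant
    sigma_total_ps = math.hypot(sigma_signal_ps, sigma_aperture_ps)

    def avg_q(delay_ps):
        """Averaged Q (normalised to the logic swing) vs. the delay from
        the data edge to the nominal aperture instant."""
        return 0.5 * (1.0 + math.erf(delay_ps / (sigma_total_ps * math.sqrt(2))))

    for d_ps in (-4, -2, 0, 2, 4):
        print(f"delay {d_ps:+d} ps -> average Q ~ {avg_q(d_ps):.3f}")

The slope at the midpoint is 1/(sigma_total * sqrt(2*pi)) per picosecond, so any aperture wander shows up as a broadening of the measured transition.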

--
Bill Sloman, Sydney
Reply to
bill.sloman

Yes, in all these discussions a kind of idealized point on the clock edge is being used as the reference point. My suggestion is that this point on the clock edge will be different for D=1, D=0 and past output states. So one single aperture instant may be too simplistic a model.

piglet

Reply to
Piglet

To the extent that recent history is going to change the temperature distribution within the device, and perhaps some kind of "stored charge", the position of the aperture instant is never going to be exactly defined.

It may be worth finding out where it is in John's application - it may smear itself out less than the jitter he is trying to measure ...

--
Bill Sloman, Sydney
Reply to
bill.sloman

On the few occasions I've encountered metastability, I've never seen the output of a logic part hover between 0 and 1, although some internal circuit node may indeed do so. The usual symptom is that the part seems to 'change its mind'.

I've seen the effect in TTL FFs and in an ECL phase/frequency detector.

Jeroen Belleman

Reply to
Jeroen Belleman

I've seen the output of a flop "hang" (for microseconds, though we were looking for it), though the issue may be one of SSI vs. M/LSI, with the latter having gain stages after the flop, rather than naked flops.

Reply to
krw

Sure, you have a good name for something that doesn't exist.

Reply to
krw

The approximately three things that can happen are

  1. A hang at some level between Vcc and gnd, which persists for some nanoseconds, after which Q goes to one rail.

  2. "Changes its mind": settles to 1 or 0, then switches to the other.

  3. Oscillation, for several to tens of cycles, after which it resolves.

Any of those are fine with me. They contribute to my averaged Q voltage. ECL will resolve in nanoseconds, and my clock period will be microseconds or more, so I'll never notice the rare metastability event.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

I think we'll call it Tsamp, treating the flop as a 1-bit ADC.
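
A minimal Monte-Carlo sketch of the Tsamp / 1-bit-ADC idea (every number below is an assumption for illustration): sweep the nominal delay of the data edge past the sampling instant, let the edge jitter, count how often the flop catches a 1, and the average-Q curve you get back is the cumulative distribution of the jitter.

    import random

    SIGMA_JITTER_PS = 3.0   # assumed rms jitter of the edge under test
    N_SAMPLES = 100_000     # clock cycles averaged per delay setting

    def average_q(nominal_delay_ps):
        """Fraction of samples in which the (jittered) data edge arrives
        before the sampling instant, i.e. the averaged Q."""
        ones = sum(
            1 for _ in range(N_SAMPLES)
            if random.gauss(nominal_delay_ps, SIGMA_JITTER_PS) > 0.0
        )
        return ones / N_SAMPLES

    # Sweep the programmed delay; fitting an error function to this curve
    # recovers the rms jitter.
    for delay_ps in range(-9, 10, 3):
        print(f"delay {delay_ps:+3d} ps -> average Q = {average_q(delay_ps):.4f}")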

I'm getting some great creative responses here:

  1. It's impossible to measure this time so don't do it.
  2. You are violating the setup/hold time specs and that's evil, so don't do it.
  3. The universe has temperature drift and noise and stuff, so there is no point measuring anything ever.
  4. There's no name for this time and it's not on the data sheets and not in academic papers so it doesn't exist so we don't want to hear about it.

Funny.

If I get actual data next week, I'll post it with a trigger warning.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

No one said that. It's not *one* time. It's a range, just like (and for the same reason) Tsu and Th.

I wouldn't do it if I wanted the output to be meaningful across a million (or even a thousand) samples. You have a good reason to do it (you actually want metastability) and have a small number of samples.

No, just saying that there is no _one_ point.

There is a perfectly good name for it (actually, two of them). Why invent another name that no one will understand?

Good grief!

Reply to
krw

You're looking at probabilities ;) Larkin's Cat?

Cheers

Reply to
Martin Riddle

Larkin's pussy?

Reply to
krw

You should ask this question in the FPGA group. There are some people there who are very knowledgeable on this topic.

I believe there was a discussion where there were two schools of thought. One was that there was a time region where the behavior was chaotic. A small change in time would give an unpredictable result. In other words, there was no knife edge that would result in all earlier times giving one data value (after a potentially infinite delay) and all later times giving the other output value. Rather there were points in between where the output would oscillate and so the result could not be predicted.

I seem to recall someone saying that with "modern" FPGA design (meaning in the last 5 to 10 years) the FFs in FPGAs did not have such chaotic behavior, but I don't recall the reason.

It will be interesting to see the result. If nothing else, it should show how important the jitter of your inputs is compared to the noise in your system. It's hard to get all switching power supplies out of the system these days, and follow-on linear supplies don't do much to eliminate higher-frequency noise. I'd be willing to bet the floor of the measurement is set by the power supply noise.

--

Rick C
Reply to
rickman

As usual JL wants to simplify things to the point where they work for him. There are some FFs which simply do not exhibit the behavior he is expecting. Rather than there being a clear demarcation between one output and the other, some devices show chaotic behavior. Even the probability of a given output does not transition smoothly between 1 and 0. It can have many bumps and wiggles, hills and valleys, which may or may not be averaged out with repeated sampling.

However, my understanding is that some parts are well behaved. So once he takes his measurements and calibrates his system with edges with known jitter we will see which type of FF he is working with.

--

Rick C
Reply to
rickman

Impossible. Metastability is in the nature of the beast.

Reply to
krw

Last time I checked, jitter was a statistical measurement.

So, we can add,

  5. There is no such thing as a statistical measurement.

  6. Jitter can't be measured.

This gets better and better.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

interesting statistical behavior.

But you don't believe in statistics, so I'm sorry that I have distressed you.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

No one has said that. What's been said is that your measurements tomorrow (at a different supply voltage, ambient temperature, etc.) *or* with a different *component* can vary -- probably significantly.

Not only can the width of the window vary, but its relationship to the clock edge can shift, in time.

But, hey, you'll be able to put numbers on ALL of these factors for us, right? How long are you expecting to let the DUT sit in that temperature controlled environment to come to a steady state temperature? How are you expecting the *die* temperature to change based on duty cycle, operating frequency, etc? How tight are you going to be able to control the ripple on the supply pins to the device? (i.e., picoseconds represent speed-of-light delays across the die!)

Right. I(dd) is spec'd at 2A -- should be no problem running 40A through it! (We'll just keep "selecting at test" until we find ONE that works -- and HOPE it keeps working!)

No one has said that. What HAS been said is that a measurement TODAY won't necessarily mean *squat*, tomorrow. So, unless you're an ACADEMIC wanting to publish a "This-is-what-I-saw-TODAY;-your-results-may-vary" paper...

If you'd gone looking, you'd see it *has* been named and published and quantified (for various logic families). I did this decades ago; hard to believe all of those papers no longer exist in the virtual tree archive!

I think we'll be far more interested in the REPEAT experiments you perform over the next few *weeks* -- esp as you're hoping to use this data to evaluate OTHER devices/bits of kit.

[The 6 digit DMM claims my power supply is -5.19934 Volts. No need to check it again *tomorrow*!]
Reply to
Don Y

One school is _flat_wrong_. Metastability is here and it's here to stay. The faster the logic, the faster metastability _tends_ to resolve itself, but it still exists and it's a probabilistic thing (as is noise).

Nonsense. That's the whole purpose of synchronizers - to give flops a chance to stabilize. It's all a probabilities game. Without a synchronizer, you may get one failure per day. With one, with some synchronizer clock rate, you may get one failure in a lifetime but you can _never_ eliminate errors caused by metastability.
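
The usual way that probabilities game gets quantified is the synchronizer MTBF estimate, MTBF = exp(t_resolve / tau) / (f_clk * f_data * T_w). A sketch with made-up constants (tau and T_w vary wildly between logic families, so treat the numbers as illustrative only):

    import math

    TAU_S = 0.2e-9     # regeneration time constant (assumed)
    T_W_S = 100e-12    # metastability capture window (assumed)
    F_CLK = 100e6      # synchronizer clock rate
    F_DATA = 10e6      # asynchronous event rate

    def mtbf_seconds(t_resolve_s):
        return math.exp(t_resolve_s / TAU_S) / (F_CLK * F_DATA * T_W_S)

    # Half a clock period vs. a full 10 ns period of settling time:
    for t_resolve_s in (5e-9, 10e-9):
        years = mtbf_seconds(t_resolve_s) / (365.0 * 24 * 3600)
        print(f"t_resolve = {t_resolve_s * 1e9:.0f} ns -> MTBF ~ {years:.3g} years")

That's the "one failure a day versus one in a lifetime" trade: extra settling time buys MTBF exponentially, but never takes it to infinity.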

Reply to
krw

You read about as well as Slowman, John.

Reply to
krw

Metastability will not bother my measurement. If anything, an occasional metastable event helps.

But if I run the experiment at 1 MHz, and the flop goes metastable one time in a thousand, and resolves in an average of 1 ns, the Q average measurement is distorted by at most 1 PPM. I can live with that.
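
The arithmetic behind that bound, spelled out with the same assumed figures:

    # Worst-case distortion of the averaged Q from rare metastable events.
    clock_period_s = 1e-6    # 1 MHz experiment
    p_metastable = 1e-3      # one event per thousand samples (assumed)
    resolve_time_s = 1e-9    # average resolution time (assumed)

    # Bound it by pretending Q spends the whole resolution time a full
    # logic swing away from its final value.
    distortion = p_metastable * (resolve_time_s / clock_period_s)
    print(f"worst-case distortion of average Q ~ {distortion:.0e}")  # 1e-06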

It's interesting how many people aggressively struggle against thinking about anything new.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin
