pulse jitter due to clock

Hi everyone, I'm developing some electronics to make a time measurement with a resolution of 25 ps. I'm using a dedicated ASIC to do so, but I'm feeding the signals to the ASIC through an FPGA. The setup is very simple: some signals come into my FPGA, where I mask them with some combinational logic and a configurable register so that I can enable some measurements and disable others. The output of this "masking" then goes to the ASIC.

They assert (and here is the question) that a clocked device such as an FPGA may add jitter to the signals due to substrate current overload (from the presence of the clock), leading to some 15 ps of jitter on the signals. I don't know how they arrived at this value, but I'm assuming the numbers are honest (even though I have some doubts about the explanation behind them). Can anyone say something about this? Does it sound reasonable?
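To make the "masking" concrete, here is a minimal sketch (in Python rather than HDL, with invented signal and register names) of the kind of combinational masking I mean:

    # Minimal sketch (Python rather than HDL) of the masking described above;
    # the signal word and mask register are invented for illustration.
    def masked_outputs(inputs: int, mask_reg: int) -> int:
        """AND each input bit with its mask bit: a '1' in mask_reg lets that
        channel through to the ASIC, a '0' blocks it."""
        return inputs & mask_reg

    # Example: 8 input channels, only channels 0, 2 and 5 enabled by the register.
    print(f"{masked_outputs(0b10100101, 0b00100101):08b}")   # -> 00100101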

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

Al,

Passing a signal through an FPGA, and then expecting to resolve 25 ps is not a good idea.

The FPGA may add as much as thousands (yes, thousands) of picoseconds of jitter: it all depends on the number of clocks running, their frequencies, whether they are asynchronous or not, the number of CLB flip-flops toggling (internal simultaneous switching), and the number of external I/Os switching (SSO noise).

Additionally, since jitter is caused by anything being less than perfect, this also includes the power distribution network, and the signal integrity of all the traces (rise times, fall times, reflections, etc.).

The jitter floor for an FPGA that is doing nothing at all (signal in, signal out) is probably around 35 picoseconds peak to peak. A completely synchronous design with everything done perfectly will probably come in at around 150 picoseconds of peak-to-peak jitter.
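As a back-of-the-envelope illustration only (the contributor values below are invented placeholders, not characterization data), independent random jitter sources combine roughly as a root-sum-of-squares, while bounded deterministic sources are usually added peak-to-peak:

    # Rough jitter-budget sketch; all contributor values are invented placeholders.
    import math

    random_rms_ps = {                    # independent random sources, RMS
        "clock source": 3.0,
        "PDN noise": 8.0,
        "internal switching (SSO)": 10.0,
        "I/O buffer": 5.0,
    }
    deterministic_pp_ps = {              # bounded deterministic sources, peak-to-peak
        "crosstalk": 10.0,
        "reflections": 15.0,
    }

    random_rms = math.sqrt(sum(v ** 2 for v in random_rms_ps.values()))
    # ~14 sigma spans the peak-to-peak window at a 1e-12 BER (a common convention)
    total_pp = 14 * random_rms + sum(deterministic_pp_ps.values())
    print(f"random ~ {random_rms:.1f} ps RMS, estimated total ~ {total_pp:.0f} ps p-p")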

An ASIC is probably the last thing I would choose for doing a jitter measurement. As I have said, do anything wrong (at all) and you will fail. Jitter is the result of converting amplitude variations into phase variations. AM to PM is the bane of our existence: it cannot be prevented, only minimized. Miss one contributor, and you fail to meet your specification (and delay your project by many months).

Resolving the time you desire requires very high-speed design (PECL), virtually perfect power distribution, and virtually perfect signal integrity.

I hope others here on the newsgroup will provide you with some better guidance, as all I have done is explain the problems.

Austin

Al wrote:

Reply to
Austin Lesea

Hi Austin, I guess using differential signals is a good way to reduce AM to PM modulation. Is it true that the Virtex4 BUFIO regional clock is a truly differential signal from the BUFIO to the IOB clock pins?

I read

formatting link
QUOTE:- Each of these input pins or input pin pairs can connect to a BUFIO that drives a high-speed differential I/O clock network, which is dedicated to the I/O circuits and is ideally suited for source-synchronous data capture using the built-in serializer/deserializer (SerDes). END QUOTE

So, that's a cool thing. Did you guys do any measurements on the jitter performance of this? I.e., how much jitter is added to a differential data signal coming out of an IOB clocked by a BUFIO driven from a differential clock coming into the FPGA 'Clock Capable' pins?

Cheers, Syms.

p.s. I think 1000 ps is a lot of jitter, even for an FPGA. Low hundreds of ps is probably nearer the mark.

Reply to
Symon

Let me guess, an Acam part?

If you are trying to measure signals with 25ps resolution you have to be extremely careful with *everything* those signals pass through.

Passing them through as little as possible would be a good starting approach.

Reply to
nospam

Symon,

See below,

Austin

-snip-

V4 has an LVDS input buffer and an LVDS output buffer. The signals are single-ended inside the IOB and the IOB logic. Where they interface to the global buffers, they go differential again.

V4 improves the AM to PM over V2 and V2P, but it is still not perfect (there is that little bit of single ended still there to be influenced, and the differential balance is also never perfect).

-snip-

Yes, we have performed a great deal of characterization. The clock-capable pins, or even a plain IOB, show no real difference in jitter performance.

-snip-

I agree. It's just that we have seen cases where the customer did a number of things that conspired to ruin their day. And we have seen cases where, even with a great deal of jitter, all timing margins were still met and the design still worked perfectly. For example, if you provide a forwarded clock (source-synchronous system) with your data, the clock is likely to jitter at exactly the same time and in the same direction as the data, and the receiving chip has no trouble at all, even with completely awful peak-to-peak jitter!

I have posted before that, in the past, if you connected every single CLB DFF to a global clock bus and clocked all the DFFs at the same time (and they all changed state, as in a 0101... pattern), the device would shut down because switching everything all at once collapsed the power rails.

Virtex 4 SparseChevron(tm) packages were the first family where you could do that, and the rails didn't collapse. Now, this is a pathological case (IMO), but it still makes my point: you can do things that will not work. We are here to help you with techniques that will work.

Reply to
Austin Lesea

For time measurements of this order, you'll need a *very* stable, low-jitter clock source. I would suggest a differential PECL source, or perhaps even a thermally stabilised oscillator.

There's an old saw that the measuring equipment should be at least 5 times better than the measurement, so you're looking at 5 ps of jitter in the _entire_ measurement system. If, however, you can live with 25 ps of total jitter in the measurement system, then it may be doable, but you are aiming for a resolution on the order of 20 GHz (assuming the edge to be captured is half of a cycle).

Keep in mind that you will have jitter introduced by:

  1. Clock / signal input indeterminacy. At some point you have to capture your signal, and the FF will switch somewhere in the indeterminate region. The slower your input edge, the worse this is. You can't expect a FF to switch at the same level on two consecutive occasions either, although usually adjacent clocks will switch at a close level. Estimating that jitter depends on the technology you are using (see the sketch after this list).

  2. Oscillator jitter. There will always be jitter on an oscillator. It might be low, but you can't get rid of it.

  3. Possible metastability. This afflicts all FFs, and although it can be worked around, you should be aware of it. (It's not a high-level issue, but it does exist - there was a thread on it recently.)

  4. PCB routing. All PCBs will exhibit deterministic jitter (which can be calculated). This will be made worse if you have vias on high-speed nets (which can be alleviated somewhat with differential techniques I won't go into here). Unless you are using waveguide or optical techniques (and even they suffer from jitter too), you'll have a low-pass filter introducing jitter. Then there's track adjacency, the impulse response of the power distribution, etc. Last, but not least, is impedance mismatch. There's always some, however small you may be able to get it.
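To put item 1 into numbers, here is a small Monte-Carlo sketch (the edge rates and noise level are generic assumptions, not tied to any particular logic family) of how the same RMS voltage noise turns into more timing jitter on a slow edge than on a fast one:

    # Monte-Carlo sketch: Gaussian voltage noise at the threshold crossing of a
    # linear ramp.  Edge rates and noise level are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    v_swing = 1.0                      # volts, 0 V -> 1 V edge
    v_noise_rms = 0.010                # 10 mV RMS additive noise at the receiver

    for rise_time_ps in (2000.0, 200.0):            # slow edge vs. fast edge
        slew = v_swing / (rise_time_ps * 1e-12)     # V/s
        noise_v = rng.normal(0.0, v_noise_rms, 100_000)
        t_err_ps = noise_v / slew * 1e12            # crossing-time error, in ps
        print(f"rise time {rise_time_ps:6.0f} ps -> {t_err_ps.std():5.2f} ps RMS jitter")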

I have seen a single via add 50 ps of deterministic jitter on fast signals (edge rate about 10^4 V/us) on FR4-13. I have no idea what PCB material you are using or intend to use, but keep this in mind.

Another important question is which form of jitter is your biggest issue: cycle-to-cycle? RMS? Peak-to-peak? Long-term (sometimes known as frequency drift)?

Some food for thought.

Cheers

PeteS

Reply to
PeteS

Hi Austin, Thanks for getting back! Your reply surprised me; I now wonder just what the diff clock routing brings to the party, if not better jitter performance? BTW, are the regular global clock networks differential? Thanks, Syms.

Reply to
Symon

It could also be one from MSC in Darmstadt, but as he has a CERN email address I am sure he is using the HPTDC developed at CERN. The HPTDC homepage has vanished, but we use it in one of our TDC boards:

formatting link

Slow input slopes create crosstalk in the HPTDC. Therefore it makes sense to have extremely fast LVDS input buffers in front of the chip anyway. If you use buffers with an enable (or an AND gate), you can control the masking from the FPGA. There is no need to route the signals through the FPGA.

You can contact us directly if you have more detailed questions regarding the HPTDC and FPGAs.

Kolja Sulimma

Reply to
Kolja Sulimma

PeteS wrote:

This applies to serial data streams, where reflections from previous edges add jitter to the following edges. In time-measurement applications the edges are extremely rare, and any reflections will long have settled before the next edge. Therefore you do not care about any pulse-shape modification, as long as it is deterministic.
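As a rough sanity check (the trace length and event rate below are assumed, not taken from this thread): reflections on a board-level trace die out within nanoseconds of an edge, while the next PMT hit is typically microseconds away.

    # Illustrative numbers only: reflection settling time vs. event spacing.
    trace_len_m = 0.30                  # 30 cm trace
    v_prop_m_per_s = 0.15e9             # ~15 cm/ns in FR4
    round_trip_ns = 2 * trace_len_m / v_prop_m_per_s * 1e9
    settle_ns = 5 * round_trip_ns       # a few round trips and the ringing is gone
    event_spacing_ns = 10_000           # e.g. a 100 kHz hit rate
    print(f"reflections settle in ~{settle_ns:.0f} ns; "
          f"next edge arrives ~{event_spacing_ns} ns later")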

Kolja Sulimma

Reply to
Kolja Sulimma

Symon,

Well, yes they are differential across the chip.

And, what they accomplish is less jitter than if they had been single ended.

It is quite a battle: voltage goes down, distances get longer (for smaller wires), more stuff is switching, etc. The gains made may not appear substantial, yet without them the result would have been far worse (not a small gain forgone, but a huge loss of performance).

Austin

Symon wrote:

Reply to
Austin Lesea

Al,

My experience is that FPGAs definitely add jitter. The amount added depends on the device loading. I spent some time using test equipment to measure induced jitter, and my observed numbers were nowhere near the 15 ps you have quoted; the best I could ever discern was around 100 ps. This was done about 1.5 years ago, so the technology has changed since.

I read an approach on this board where someone suggested using a Virtex-4 with multiple inputs compared simultaneously, plus a calibration procedure, to lower the signal uncertainty.
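The statistical idea behind that approach (sketched below with invented numbers; this is not the actual calibration procedure referenced) is simply that averaging N independent measurements of the same edge shrinks the random part of the uncertainty by roughly sqrt(N):

    # Sketch: averaging N independent channels reduces the random sigma ~ 1/sqrt(N).
    # The single-channel sigma below is an assumed number, not measured data.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma_single_ps = 100.0
    for n_channels in (1, 4, 16):
        # each row = one event measured on n_channels inputs, then averaged
        meas = rng.normal(0.0, sigma_single_ps, size=(50_000, n_channels))
        sigma_avg_ps = meas.mean(axis=1).std()
        print(f"{n_channels:2d} channels -> sigma ~ {sigma_avg_ps:5.1f} ps")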

Trevor

Reply to
Trevor Coolidge

I thought I just read something in one of Xilinx's "techXclusives" saying that one of the improvements of the Virtex-4 over the Virtex-2 series was that the global clock routing went from single-ended to differential.

---Matthew Hicks

Reply to
Matthew Hicks

We did have some experience with Acam.

Exactly!

The big problem is that, as in all time measurements in physics, there will be a "trigger" configuration which enables the time conversion. This needs to be implemented in an FPGA, because different "trigger" configurations will be needed; because of that, all the signals will come from an FPGA, or from combinational logic, anyway. After that we can use all the drivers we want to minimize any later increase in the sigma of the measurement, but the source will still jitter. My initial question was about the jitter increase due to the presence of a clock signal running inside the FPGA, not about that clock source being logically related in any way to the output signals to be measured.

We are getting signals from PMTs (photomultiplier tubes), so they are single-ended signals, and there is no real gain in converting them to LVDS and then converting them again to TTL inside the HPTDC. All these intermediate stages will dramatically add their sigma, worsening the overall measurement.

I saw your PCI board with the HPTDC installed; which type of LVDS drivers did you insert between the NIM inputs and the HPTDC?

We are using a configuration that uses the TTL port of the HPTDC, together with a fast comparator with a configurable threshold and an amplitude-time correction algorithm to correct the time-walk errors on signals of different amplitudes.
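For readers unfamiliar with time-walk correction, here is a minimal sketch of the general idea (the walk function and its coefficients are hypothetical, not the algorithm actually used): the measured time is shifted by an amplitude-dependent term fitted during a calibration run.

    # Sketch of amplitude-dependent time-walk correction.
    # walk(A) = a / sqrt(A) + b is one commonly used functional form; the
    # coefficients would come from a calibration run and are invented here.
    import math

    A_COEFF = 500.0     # ps * sqrt(mV), hypothetical
    B_OFFSET = -20.0    # ps, hypothetical

    def walk_ps(amplitude_mv: float) -> float:
        return A_COEFF / math.sqrt(amplitude_mv) + B_OFFSET

    def corrected_time_ps(t_measured_ps: float, amplitude_mv: float) -> float:
        # small pulses cross a fixed threshold late; subtract the fitted walk
        return t_measured_ps - walk_ps(amplitude_mv)

    for amp_mv in (50.0, 200.0, 800.0):
        print(f"{amp_mv:5.0f} mV pulse -> walk correction {walk_ps(amp_mv):6.1f} ps")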

Regards

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

I'm sorry, Austin, I didn't get your point at all. I'm not talking about delay (and I think you got this), so how can a signal-in, signal-out path add 35 picoseconds of jitter? You said peak to peak, but maybe I didn't explain what jitter means to me:

Given a fixed source that we know is stable in time (no matter how), and a signal produced from this source through some combinational logic and delay (like cables and I/O delay), the output distribution will be Gaussian if we are in a white-noise environment. The jitter I'm talking about is (basically) the sigma of this distribution.
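In that spirit, a minimal sketch of how such a sigma would be extracted from repeated measurements against a stable reference (the data below are synthetic):

    # Sketch: jitter as the sigma of repeated time measurements of the same
    # stable edge (synthetic data standing in for real TDC output).
    import numpy as np

    rng = np.random.default_rng(2)
    true_delay_ps = 12_345.0
    measured_ps = true_delay_ps + rng.normal(0.0, 35.0, 10_000)   # 35 ps RMS noise

    print(f"mean delay = {measured_ps.mean():.1f} ps, "
          f"jitter (sigma) = {measured_ps.std(ddof=1):.1f} ps RMS")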

Does that mean that all these ASIC TDCs you find around are just junk? They are ASICs, nothing more: dedicated devices to measure time. There are basically two types of TDC, AFAIK:

1) Time-expansion based: an amplitude measurement that is proportional to a time measurement.
2) Calibrated standard-cell delays used to shift in the value.

In the latter the measurement is rather more precise, because the time-expansion circuitry is typically an analogue circuit, which has much worse stability than an integrated standard-cell delay.
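A toy model of the second type (a tapped delay line latched into a thermometer code; the cell delay and cell count below are invented, and real TDCs interpolate far more carefully) looks like this:

    # Toy model of a type-2 TDC: the hit propagates along a chain of calibrated
    # delay cells and flip-flops latch a thermometer code on the next clock edge.
    # Cell delay and cell count are invented; real TDCs interpolate more carefully.
    import itertools

    CELL_DELAY_PS = 25.0          # nominal calibrated delay of one cell
    N_CELLS = 40                  # covers a 1 ns clock period

    def thermometer_code(hit_to_clock_ps: float) -> list:
        # a cell reads '1' if the hit edge reached it before the clock latched
        return [1 if i * CELL_DELAY_PS <= hit_to_clock_ps else 0
                for i in range(N_CELLS)]

    def decode_ps(code: list) -> float:
        ones = sum(itertools.takewhile(lambda b: b == 1, code))
        return (ones - 0.5) * CELL_DELAY_PS       # mid-bin estimate

    code = thermometer_code(hit_to_clock_ps=387.0)
    print(decode_ps(code))        # -> 387.5 ps, i.e. quantized to the 375-400 ps bin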

Sorry, I didn't understand this either; what do you mean by

..

I do agree that power distribution has a major effect on the time measurement, but that's why "calibration procedures" were invented. You basically subtract the environment noise from the measurements (it is a deconvolution operation, to be precise, even though many physicists deny it). This operation is quite complicated, because you need to ensure that the power consumption doesn't vary in a way that affects the measurement.
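One standard TDC calibration technique along these lines (sketched generically below; not necessarily what is done with the HPTDC) is code-density calibration: feed in hits that are uniformly distributed with respect to the clock, and the relative occupancy of each code gives the real width of each bin.

    # Sketch of code-density calibration: hits uniform in time reveal the true
    # bin widths, which can then be used to linearize later measurements.
    # Bin count and width spread are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins = 32
    true_widths = rng.uniform(0.8, 1.2, n_bins)           # unknown, non-uniform bins
    edges = np.concatenate(([0.0], np.cumsum(true_widths)))

    hits = rng.uniform(0.0, edges[-1], 1_000_000)         # uniformly distributed hits
    counts, _ = np.histogram(hits, bins=edges)

    measured_widths = counts / counts.sum() * edges[-1]   # occupancy -> bin width
    print("worst bin-width error:", np.abs(measured_widths - true_widths).max())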

Explaining the problems is great guidance, but I think I misused the word jitter and maybe failed to explain my problem.

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

I have some experience with Acam products, but in the end we are using the HPTDC developed at CERN.

I know, but unfortunately you need to pass through some logic to be able to mask some signals and to recognize the correct logic combination you want to measure (i.e. the trigger configuration). To do that you either use discrete components (a lot of them if you are talking about hundreds of signals) or you use an FPGA to select the correct combination. After all that, depending on how your system is built, you need to distribute these signals to different boards, placed in different places, so you will need a TTL-to-LVDS and an LVDS-to-TTL conversion (maybe you can get rid of the second conversion if you use the LVDS inputs on your TDC). These pass-throughs are necessary for the logic of the measurement.

Of course all this logic is combinational and has nothing to do with a clock, but is it true that these logic-cell delays will be changed by the presence of a clock inside the chip, even if that clock is not connected to the combinational cells?

Thanks for your warning

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

For any measurement of this order I would definitely not use any clock source, for several reasons:

1) Power consumption: it is not reasonable to use a 20 GHz clock to measure with a 25 ps bin resolution; there are several other solutions that are far less expensive in terms of power dissipation.
2) Low-jitter clock sources are very expensive and very difficult to verify. How would you trust your low-jitter clock? From the datasheet? And what if your clock has bigger jitter because it has a defect?

BTW, thermal problems are not that much of an issue, as long as you take care to measure the temperature along with your time measurement. With this information you can correct your values through a proper calibration procedure.
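A minimal sketch of that kind of first-order temperature correction (the drift coefficient and reference temperature are hypothetical and would come from a calibration run):

    # Sketch: first-order temperature correction of a time measurement.
    # The drift coefficient and reference temperature are hypothetical values
    # that would come out of a calibration run.
    DRIFT_PS_PER_K = 1.5
    T_REF_C = 25.0

    def corrected_ps(t_measured_ps: float, temperature_c: float) -> float:
        return t_measured_ps - DRIFT_PS_PER_K * (temperature_c - T_REF_C)

    print(corrected_ps(12_400.0, temperature_c=35.0))     # -> 12385.0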

All these warnings relate to a synchronous-logic approach, which is not what I will use, and which to me is not desirable at all in any time measurement.

PCB routing is not a major issue, because with a good calibration procedure you can live with it.

Could you explain in more detail what you really mean by deterministic jitter in this case?

The more food we have the better results we get!

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

C'mon Al, CERN invented the WWW! :-) The first hit from Google explains it all!

I think the problem with FPGAs is that the leakage and thermal noise, together with the internal single-ended signals, all conspire to add jitter to your signal. It doesn't help that the output rise time is, I guess, at best 500 ps, depending on what you're driving. You might like to consider a crosspoint-switch solution; I think Mindspeed and Vitesse might have something you could use.

e.g.

formatting link

These parts are used in SONET (SDH where you are!) systems which have very tight jitter specs. Right up your street!

Good luck, Syms.

Reply to
Symon

So what? I didn't get it.

Rise time doesn't imply jitter, as long as it is fixed. Maybe we should agree on what jitter means.

Maybe this is a good alternative which might have been used. Unfortunately the power consumption is something like 2.5 W for a 500 MHz-bandwidth chip. Consider that our DSP boards, which hold 2 FPGAs, a DSP, a RAM chip and a flash chip, and run 4 simultaneous links at 100 Mbit/s each, only reach up to 1.2 W. Unfortunately the power budget is mandatory in our case; moreover we need to have 5 channels per board and 20 boards, so this would soon reach a power budget that is not really acceptable.

Another main problem with this kind of part is establishing its reliability in a radiation environment: how will they behave? How long will they last? Maybe someone has some experience and has done some rad-hard tests with these types of chips.

Thanks for the suggestion anyway!

Cheers

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al

Well, actually you might not need to understand it; as Kolja posted, if your application is one-shot, there's no ISI to produce deterministic jitter in your system. Seriously, there's plenty of stuff online that explains the differences between, and the sources of, the different types of jitter.

OK, think about the receiving end, as the signal rises through the switching threshold. Voltage noise adds more time jitter to a slowly rising signal than to a fast-rising one. That's my point, and that is why rise time is important.

Google gives:- The deviation from the ideal timing of an event.

HTH, Syms.

Reply to
Symon

Thinking about the receiving end, which is easier to have: a more stable voltage or a faster signal? How much power will you need to get a faster signal? And if you are worried about voltage noise, then I assure you that the induced timing jitter is easily evaluated with the following formula:

T-noise = V-noise / slew-rate ~ V-noise * rise-time / V-swing

so reducing the rise time gives the same gain as reducing V-noise.

Moreover, a faster rise time can easily come with a variable propagation delay, simply because you have moved the critical point onto another component, which still has a voltage reference to be crossed and an input stage to be loaded.

I don't think that is much of an explanation. What does "ideal" mean? Does it mean with "ideal" components? There will still be a "jitter" depending on how you produce the signal and on how you "sample" it; ever heard of digitization noise? What does "deviation" mean? I hope this deviation has a distribution, otherwise it is not a deviation and it is better to call it a delay. Even if all the components in the measurement chain have a "deviation from the ideal timing of an event", you can still have the "ideal timing of the event" if all the "deviations" turn out to be delays.

The reference you picked this definition from is most likely the same one that says: "Jitter is composed of both deterministic and Gaussian"

Well, this is much more confusing than the other definition. Does deterministic jitter have a distribution? No? So why not just call it delay? Still, even if something _has_ a distribution it _can_ be deterministic all the same; have you ever heard of notch filters? Don't they cut off a deterministic range of a noise spectrum?

From my point of view, I still believe we are not talking about the same thing.

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer
Reply to
Al
