System-synchronous interface clocking between FPGAs

This may seem like an elementary question/application, but I'll bring it up nonetheless in hopes of getting a thorough understanding...

In our design, there are 80 MHz system-synchronous interfaces between two FPGAs. A common clock source on the board drives each FPGA through matched trace lengths. The clocks then go into DCMs, and each DCM's 1x output clock is used both to clock the IOB registers and as internal feedback to the DCM. Can we [almost] guarantee that the clocks coming out of the DCMs on the separate FPGAs are near phase-aligned, assuming matched trace lengths coming in? These are V4-LX160 parts; I was looking over the V4 user guide and couldn't find a fitting clocking application example. It seems alignment can never be fully guaranteed, since the DCM's deskew compensation on each of the FPGAs will certainly differ, not to mention small process variations. Since we have a 12.5 ns period, I think we should have room in our timing budget to absorb these small phase differences. I will ensure that all the inputs and outputs utilize the IOB registers.
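To see whether the 12.5 ns period really absorbs the phase differences, the budget arithmetic can be sketched as below. All the delay values are illustrative assumptions for the sake of the example, not V4-LX160 datasheet numbers:

```python
# Illustrative setup-timing budget for an 80 MHz system-synchronous link.
# Every delay value here is an assumed placeholder, not a datasheet figure.
PERIOD_NS = 1e3 / 80.0            # 80 MHz -> 12.5 ns clock period

tco_max_ns = 4.0                  # assumed max clock-to-out of the sending IOB register
trace_ns = 1.0                    # assumed board flight time between the parts
dcm_phase_err_ns = 0.3            # assumed DCM-to-DCM phase difference between parts
tsu_ns = 1.5                      # assumed setup time of the receiving IOB register

slack_ns = PERIOD_NS - (tco_max_ns + trace_ns + dcm_phase_err_ns + tsu_ns)
print(f"setup slack: {slack_ns:.2f} ns")  # positive slack means the transfer closes timing
```

With these assumed numbers the DCM-to-DCM phase error is a small fraction of the remaining slack, which matches the intuition that 12.5 ns leaves plenty of margin.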

If anyone could reassure me that this design is relatively common and safe, and provide me with some information regarding the DCM output clock relationships on the separate devices, I will feel much better. I've definitely worked with these types of designs in the past, but never fully understood why things just work.

Reply to
bwilson79

Matched lengths are good. You might consider a zero-delay clock buffer at the source to allow independent line-termination resistors.

Close enough for most purposes.

Don't know, but I've done very similar boards with 3 or 4 matched clocks that worked fine.

-- Mike Treseler

Reply to
Mike Treseler

Do you *really* need the internals of the chips to be synchronized to the system clock? The specifications that are usually important are the setup/hold times of input data and the clock-to-out times relative to the system-synchronous clock. The DCM usually improves these timing margins, eliminating hold time and reducing the clock-to-out. The only reasons I could think of to force the internals onto the same clock would be 1) to build up a power-plane resonance, or 2) to get analog-like correlation between digital signals from multiple chips - generally not a good idea.

Perhaps your problem isn't a problem.

- John_H

Reply to
John_H

bwilson,

To ensure the system remains synchronous, and aligned with the clocks as delivered to the devices, a DDR output should be used which is then routed right back to the DCM CLKFB input.

So: DCM CLK0 output, to a BUFG, to a DDR output IO, which is connected to an input IO, to an IBUFG, and back to the CLKFB input pin of the DCM.

In this way, regardless of any variations in the devices, the difference between the DDR output and the system reference clock input is kept to under 100 ps at all times.

Leaving the feedback off the IO pins (doing it internally instead) will remove variation in the device's BUFG, but will not compensate for variation in the IO and IBUFGs.

At 80 MHz you may have a great deal of slack, but I provide this information in case you need better matching (which can be done as described above).

Austin

Reply to
austin

Just make sure the sending and receiving flip-flops are inside the IOBs and you will be fine. You'll have 12.5 ns minus some IOB delays and jitter uncertainty (say 500 ps) to transport the signal over the PCB. You can probably get away with using a slow slew-rate setting.
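Turning that around, the same numbers give the flight-time budget available for the PCB trace. The IOB delays below are assumed values for illustration, and the ~170 ps/inch propagation figure is a common FR-4 rule of thumb, not a measured number for this board:

```python
# How much of the 12.5 ns period is left for PCB propagation?
# IOB delay values are assumptions, not datasheet figures.
PERIOD_NS = 12.5
tco_ns = 4.0        # assumed IOB register clock-to-out
tsu_ns = 1.5        # assumed IOB register setup time
jitter_ns = 0.5     # jitter/uncertainty allowance from the post

flight_budget_ns = PERIOD_NS - tco_ns - tsu_ns - jitter_ns

# Rough FR-4 rule of thumb: ~170 ps per inch of stripline.
max_trace_in = flight_budget_ns / 0.17
print(f"flight-time budget: {flight_budget_ns:.1f} ns (~{max_trace_in:.0f} in of trace)")
```

Even with conservative assumed delays, the remaining budget dwarfs any realistic trace length on a single board, which is why a slow slew-rate setting is affordable here.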

--
Reply to
Nico Coesel

At 80 MHz, setup timing is really easy to meet with up-to-date FPGAs. If there is any issue on this interface, it will be hold-time related. To meet hold-time requirements, the receiving FPGA should use the flip-flops in the IOB with the delay element enabled. This is the default for input flip-flops unless you specify NODELAY, and it guarantees a 0 ns hold time versus the external clock pin.

When using the DCM as a clock source, your hold time will actually be negative. This means that as long as the input changes after the clock edge, or even at the clock edge, your input flip-flop will correctly sample the input as the value from the previous edge. Any additional delay after the clock is "gravy".

Using a relatively slow I/O standard like LVTTL will guarantee your hold time even with unmatched clock traces and part-to-part DCM variations. That is, your minimum clock-to-out delay on one part will be in the handful-of-nanoseconds range and your input hold-time requirement will be negative, so you can live with a few nanoseconds (that's a lot these days) of skew. As you describe your system, I would guess the clock skew to be well under 1 ns.

If you still don't believe this will work, add timing constraints to your inputs and outputs and take a look at the data sheet report at the end of your post-place-and-route timing report.

I have made similar interfaces without using a DCM or DLL. Normally, using a global clock input pin ensures a minimum of delay from the pin to the global clock net, and minimum delay also translates into minimum part-to-part skew (part-to-part skew is a percentage of total delay). Since the incoming clocks are phase-aligned, and assuming the FPGAs are the only sources of data on these clocks, the internal delay is irrelevant as long as the two FPGAs are relatively matched, or have long enough minimum clock-to-output delays to deal with the part-to-part variance.

HTH, Gabor

Reply to
Gabor
