Xilinx DCMs, DDR, CLK0, and CLK180

I'm working on a design that uses multiple DCMs along with DDR I/O registers. The main input clock is 500 MHz going into a DCM with the CLKIN_DIVIDE_BY_2 flag set, so it's immediately cut down to 250 MHz. The DCMs in the design also have CLKOUT_PHASE_SHIFT=VARIABLE.
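For reference, the configuration described above can be sketched roughly like this (signal names are made up; port and attribute names follow the standard Xilinx DCM primitive, so treat this as an illustration rather than the poster's actual code):

```verilog
wire clk0_unbuf, clk180_unbuf, clk0, clk180;

DCM #(
    .CLKIN_DIVIDE_BY_2 ("TRUE"),      // 500 MHz in -> 250 MHz internally
    .CLKOUT_PHASE_SHIFT("VARIABLE"),
    .CLKIN_PERIOD      (2.0)          // 500 MHz input period, in ns
) dcm_i (
    .CLKIN (clk_in_500),
    .CLKFB (clk0),                    // feedback from the buffered CLK0
    .CLK0  (clk0_unbuf),
    .CLK180(clk180_unbuf),
    .RST   (rst),
    .PSEN  (1'b0), .PSINCDEC(1'b0), .PSCLK(1'b0)
);

BUFG bufg0   (.I(clk0_unbuf),   .O(clk0));
BUFG bufg180 (.I(clk180_unbuf), .O(clk180));
```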

To drive my DDR I/O flops, I'm currently using both the CLK0 and CLK180 outputs (running at 250 MHz).

The original designer flowed the DDR data through the chip using both clock edges, i.e., the incoming "posedge" data went down a "posedge" pipeline to flow back out the DDR output, and the incoming "negedge" data went down a "negedge" pipeline. The original design did NOT use CLK0 and CLK180 for these two data paths; instead, the designer used "posedge" and "negedge" in the Verilog. The major problem comes in the control and data signals that cross the two domains: because of the potential duty cycle degradation, the P&R tools don't achieve timing closure consistently.
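The original style, in a minimal sketch (names are illustrative, not from the real design): both pipelines run from one clock, on opposite edges.

```verilog
reg [7:0] pos_pipe, neg_pipe;

always @(posedge clk)            // rising-edge ("posedge") pipeline
    pos_pipe <= {pos_pipe[6:0], d_rise};

always @(negedge clk)            // falling-edge ("negedge") pipeline
    neg_pipe <= {neg_pipe[6:0], d_fall};
```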

As an experiment, I changed the design to use CLK0 and CLK180 for the internal flops. I was: a) pleased when the P&R timing was better, b) surprised when the design no longer worked. I figured I screwed up the posedge/negedge to CLK0/CLK180 conversion, so I triple-checked the design and didn't find anything. I backed up to an older working version and made the changes again, being VERY careful in my conversion. Again, no luck. The timing numbers look good, but the design doesn't work.
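The conversion being described would look roughly like this (same made-up names as before): every flop becomes rising-edge triggered, with the former negedge pipeline moved onto the CLK180 tree.

```verilog
always @(posedge clk0)           // formerly: posedge clk
    pos_pipe <= {pos_pipe[6:0], d_rise};

always @(posedge clk180)         // formerly: negedge clk
    neg_pipe <= {neg_pipe[6:0], d_fall};
```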

Is there something odd in the phasing between the posedges of CLK0 and CLK180?

Anyone else run into something like this?

Thanks,

John Providenza

Reply to
John Providenza

--

--Ray Andraka, P.E. President, the Andraka Consulting Group, Inc.

401/884-7930 Fax 401/884-7950 email snipped-for-privacy@andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Reply to
Ray Andraka

Howdy John,

I have to admit some ignorance here, but I won't let that stop me from later making some guesses about what is going on :-)

By duty cycle degradation, are you referring to what the DCM introduces, or something with regard to the clock tree?

The datasheet claims that the DCM "duty cycle precision" is +/-150 ps, while the phase offset between DCM outputs is +/-140 ps... and the jitter out of CLK180 is +/-50 ps worse than CLK0 alone. So it seems like you are just going to be trading one problem for another here.

I can't say I'm too surprised at that. Or rather, I can imagine why that might be... if you open it up in FPGA Editor, are there now distinct sections of the device that seem to run exclusively off CLK180 while other areas run exclusively off CLK0? If so, by dividing up the clocks, you may have effectively helped the tools floorplan the design. Previously, everything was on a single clock tree, so it may have mixed the two pipelines willy-nilly, making timing more difficult to meet.

The only idea I have is to repeat Ray's mantra of being careful about going back and forth between domains that are on different outputs of the DCM.
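One hedged illustration of that caution (names are made up): at 250 MHz, a CLK0-to-CLK180 hop only gets half a period, about 2 ns, so it pays to keep any such crossing down to a single dedicated transfer register with no logic in the path.

```verilog
reg xfer_180;
always @(posedge clk180)
    xfer_180 <= data_clk0;   // lone transfer flop; any combinational logic
                             // ahead of it eats into the ~2 ns half-period
```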

Well, I take that back. Another idea is to floorplan the original design at the top level: divide the device in half, one for each pipeline, and see if you get noticeably better timing results.

Good luck,

Marc

Reply to
Marc Randolph

There is probably a difference in the net delay for CLK0 and CLK180 which applies a bias to your pos-to-neg and neg-to-pos domain transitions. If you have a poor quality clock with significant jitter, the transitions between those two domains can be marginalized further.

Using posedge and negedge of one clock gives you consistent net delay for the single clock reducing the skew in the two time periods.

Using the DLL to provide the clock *will* correct for duty cycle unless you tell it otherwise. This reduces the worries of poor duty cycle for the incoming clock.

The Xilinx timing analysis *will* include the estimates for the different clock net delays but - until version 6.1 of the tools - did not allow you to account for input jitter. One of the new features in the just-released v6.1 tools is the ability to provide a jitter spec. At least that's what I've read - I haven't worked with it yet.
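As a sketch only (I'm going from the constraints documentation, and the values here are illustrative), the v6.1-style jitter spec attaches to the PERIOD constraint in the UCF:

```
NET "clk_in_500" TNM_NET = "tnm_clk_in";
TIMESPEC "TS_clk_in" = PERIOD "tnm_clk_in" 2.0 ns HIGH 50% INPUT_JITTER 100 ps;
```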

If your input clock is clean, the tools should have already accounted for everything. Check your clock quality to measure the jitter. What's most important, I would imagine, is the spread after 1-3 clock edges, given the delay-line nature of the DLL, which has no VCO. If the clock isn't great and you can measure the spread, check into the jitter constraint in the new tools; you might get better results.

And then there's always the initialization issues.... Do you have an explicit reset driving all your registers? My coding style tends to rely on power-up states, with explicit INITs used to force the power-up logic where a don't-care would cause problems. When changing the code, registers without the INIT sometimes change power-up polarity, leading me to discover they *weren't* don't-care values.
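A minimal sketch of that style (illustrative names; depending on tool version, the power-up value may need to be an INIT attribute or defparam on the flop primitive rather than a Verilog-2001 initializer):

```verilog
reg state = 1'b0;            // explicit power-up value, not a don't-care
always @(posedge clk0)
    state <= next_state;
```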

Good luck sleuthing your mystery.

Reply to
John_H
