Virtex II DCM & ZBT SRAM

I tried the Xilinx support line (Case # 503586) and haven't gotten a good answer, so I thought I would try here.

I am trying to interface a ZBT SRAM to a Virtex II, following the approach in XAPP136, which uses two DCMs with external DCM feedback to generate an internal FPGA clock and an external board clock that are aligned. In simulation (5.2i SP3) that configuration shows the external clock delayed 0.5 ns from the internal clock. We actually need to use a third DCM's FX output to generate the clock for the SRAM, and when we do that the external clock now leads the internal clock by 1 ns. I don't understand why changing which clock feeds the two DCMs would change their timing relationship, and since our timing is tight I need them to be closely aligned.

The external clock is output using a DDR FF, and I used the DCM wizard, which should have put in all the required BUFG/IBUFG etc. that the V2 user guide says are needed if the DCM is going to compensate for the pad-to-DCM delay.
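(For reference, the forwarded-clock path on Virtex-II is roughly the sketch below; the names are made up and the DCM/BUFG plumbing is omitted, so treat it as an illustration rather than our actual code.)

// Minimal clock-forwarding sketch for Virtex-II (illustrative names only).
// The FDDRRSE output-DDR register in the IOB re-creates the clock at the
// pad without routing the clock net itself through the output buffer.
module fwd_clk (
    input  wire clk0,     // DCM CLK0 (via BUFG)
    input  wire clk180,   // DCM CLK180 (via BUFG)
    output wire sram_clk  // forwarded clock to the ZBT SRAM
);
    wire ddr_q;

    FDDRRSE u_ddr (
        .Q  (ddr_q),
        .C0 (clk0),
        .C1 (clk180),
        .CE (1'b1),
        .D0 (1'b1),   // drive high on the rising edge of clk0
        .D1 (1'b0),   // drive low on the rising edge of clk180
        .R  (1'b0),
        .S  (1'b0)
    );

    OBUF u_obuf (.I(ddr_q), .O(sram_clk));
endmodule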

I also think only two DCMs are needed: one to generate the internal clock using FX and a second to generate the deskewed external clock. That configuration seems to produce the same timing as the 3-DCM version.

Anybody know the correct solution?

Also, does anybody know an easy way to measure on the board the alignment between an internal and external clock? Bringing the internal clock out to a pin would require knowing the actual loaded output buffer delay, which I don't think I could determine very accurately. If I can measure it accurately and it's not a simulation artifact, I should be able to tweak the DCM phase shift to make the clocks align, and I would expect that to be stable across boards.
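(If it does come to tweaking, the knob on the DCM is the fixed phase shift, in units of 1/256 of the input clock period. A hypothetical sketch, with made-up names and a placeholder 10 ns input period:)

// Illustrative only: dialing in a fixed phase shift on the DCM that drives
// the forwarded clock, once the real board skew has been measured.
// PHASE_SHIFT is in units of 1/256 of the CLKIN period.
module sram_clk_dcm (
    input  wire clk_in,    // reference clock (IBUFG output)
    input  wire clk_fb,    // feedback clock
    input  wire rst,
    output wire clk_out,
    output wire locked
);
    DCM #(
        .CLKIN_PERIOD       (10.0),    // assumed 100 MHz reference
        .CLKOUT_PHASE_SHIFT ("FIXED"),
        .PHASE_SHIFT        (13)       // 13/256 of 10 ns, roughly 0.5 ns
    ) u_dcm (
        .CLKIN    (clk_in),
        .CLKFB    (clk_fb),
        .RST      (rst),
        .CLK0     (clk_out),
        .LOCKED   (locked),
        .PSEN     (1'b0),
        .PSINCDEC (1'b0),
        .PSCLK    (1'b0),
        .DSSEN    (1'b0)
    );
endmodule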

A couple of other questions: has anybody seen the hold time of the V2 IOB flip-flops documented anywhere?

For the DCMs, is it OK to feed the LOCKED output of one into the reset of the next, or do we need to hold the reset for at least 3 clocks? Different documentation shows different ways, and the direct connection didn't work under 6.1 simulation.

Thanks, David Gesswein


I don't have any experience with ZBT SRAM, but in general terms I've been using the DDR mechanism to generate outgoing source-synchronous bus clocks and it's worked like a charm for frequencies approaching 200MHz. Of course, careful board layout is a part of this as well.

Use FPGA Editor to bring the internal clock out to a pin as close to the external clock as you can (to reduce board/layout/measurement errors). FPGA Editor will give you routing delay information. I think it's accurate to the pin.


I think it's OK, but I've never done it that way. I'm paranoid and flip-flops are cheap and plentiful.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_"  =  "martineu"

The ZBT interface is intended to use a single system-synchronous clock. The SRAMs require setup and hold time on the data relative to the clock at their pins, the read data is valid quite late in the cycle, and they don't provide a clock back with which to register the data. They also have tri-state requirements for the bus turnaround. For the output side, a DDR-type scheme with inverted clock and data would work, but properly registering the input with that kind of scheme isn't as easy, which is why I was trying the single internal/external aligned clock.

I have only found how to get the max delay from the tool and am not sure how much the actual delay will differ. We will see.

Thanks, David Gesswein


David,

The clock offset you're seeing is most likely due to the default intentional delay inserted into the internal DCM feedback path to give zero hold time at the IOB inputs; this advances the clock as seen directly at the DCM outputs.

Try setting the DESKEW_ADJUST attribute on the DCM to SOURCE_SYNCHRONOUS, which disables the internal delay element (but be wary of I/O timing changes if that DCM drives anything other than the forwarded clock).
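A minimal sketch of one way to apply that, with made-up module and instance names; the same attribute can also be set as a UCF constraint on the DCM instance:

// Illustrative only: DESKEW_ADJUST passed as a parameter on the DCM that
// generates the forwarded (board) clock, disabling the default
// SYSTEM_SYNCHRONOUS delay element.
module fwd_clk_dcm (
    input  wire clk_in,   // reference clock (IBUFG output)
    input  wire clk_fb,   // external feedback from the board (IBUFG output)
    input  wire rst,
    output wire clk_fwd,  // drives the off-chip SRAM clock
    output wire locked
);
    DCM #(
        .CLK_FEEDBACK  ("1X"),
        .DESKEW_ADJUST ("SOURCE_SYNCHRONOUS")  // default is "SYSTEM_SYNCHRONOUS"
    ) u_dcm (
        .CLKIN    (clk_in),
        .CLKFB    (clk_fb),
        .RST      (rst),
        .CLK0     (clk_fwd),
        .LOCKED   (locked),
        .PSEN     (1'b0),
        .PSINCDEC (1'b0),
        .PSCLK    (1'b0),
        .DSSEN    (1'b0)
    );
endmodule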

See pages 4-5 of XAPP259 for a good description of the delay logic.

This is one of those "DANGER WILL ROBINSON!!!" subjects :

- use a reset delay circuit (a rough sketch follows this list), and make sure it's clocked by something other than the output clock of the DCM you're attempting to reset :)

- if you're using external feedback into a DCM to do board deskew, the deskew DCM requires an even longer reset delay, because the feedback clock IOB is disabled until config completes and the DCM can't start up properly until the feedback clock is present

- be sure to monitor the LOCKED, CLK_IN, CLK_FX status bits to decide when to reset the master ( and it may not hurt to independently check that the pre-DCM input clock is present and accounted for with some sort of watchdog logic )

- any slave DCMs need their own startup logic that fires once the master has locked, and make sure you check the status of ALL slave DCMs to decide when it's time to punt and reset the whole shooting match.

- once you think you have it working, disconnect and reconnect the input clock a few times and fix anything that fails to recover properly.
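A rough sketch of the kind of reset-delay logic meant above, with illustrative names and hold length (it is clocked from the raw input clock, never from the output of the DCM being reset, and "start" can be tied to end-of-config or to an upstream DCM's LOCKED for a slave DCM):

// Rough sketch only; not from any Xilinx app note.
module dcm_reset_gen #(
    parameter HOLD_CYCLES = 1024  // make this generous for an external-feedback DCM
) (
    input  wire clk_in_raw,   // IBUFG output, before any DCM
    input  wire start,        // qualifier: config done / upstream LOCKED
    output reg  dcm_rst = 1'b1
);
    reg [10:0] count = 0;

    always @(posedge clk_in_raw) begin
        if (!start) begin
            count   <= 0;
            dcm_rst <= 1'b1;                  // hold the DCM in reset
        end else if (count != HOLD_CYCLES) begin
            count   <= count + 1;
            dcm_rst <= 1'b1;
        end else begin
            dcm_rst <= 1'b0;                  // release after the hold period
        end
    end
endmodule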

Possibly Helpful Answer Records:

14743 What is the DESKEW_ADJUST attribute?
15350 What does the DESKEW_ADJUST constraint do?
10972 DCM - What do the various status pins represent?
14425 Resetting after configuration is strongly recommended for a DCM that is configured with external feedback
11067 ModelSim Simulations: Input and Output clocks of the DCM and CLKDLL models do not appear to be de-skewed
13024 How are Tdllino and Tdcmino calculated for a DLL/DCM?

have fun, Brian


When you define a probe in FPGA Editor, it gives you a routing delay.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_"  =  "martineu"

oops, I left off one with a good description of manual IOB timing calculations required to correct reported DDR setup times:

18079 The input setup/hold requirement reported by TRCE seems excessive for my design; is the value correct?

Brian


Howdy David,

I'm not quite clear on what you are using the FX output for, but I've used ZBT SRAMs on a number of designs over the past couple of years.

Here is how I do it: feed the input/reference clock into two DCMs. The CLKFB of one is just the output of its global buffer. The output of the other DCM goes off chip (using a DDR FF doesn't get you anything... the deskew function takes the delay out). Put two resistors on the output pin (keeping the resistors as close as possible to the FPGA), and route from one resistor to the ZBT. The other resistor routes to a GCLK input pin and feeds the CLKFB pin. As long as you keep the two "long" traces (the outputs of the two resistors) close to the same length, you won't get reflections, and the DCM will remove the skew so that the rising edge of the clock arrives at the ZBT at about the same time as the rising edge occurs inside the FPGA.
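In HDL terms the structure is roughly the sketch below; the signal and instance names are illustrative and the reset/startup logic and DCM attributes are left out:

// Sketch of the two-DCM arrangement (illustrative names only).
// u_dcm_int deskews the internal fabric clock; u_dcm_ext drives the SRAM
// clock off chip and closes its feedback loop through the board.
module zbt_clks (
    input  wire clk_pad,      // reference clock input pin
    input  wire clk_fb_pad,   // GCLK pin fed by the external feedback resistor
    input  wire rst_int,      // reset for the internal DCM
    input  wire rst_ext,      // reset for the external-feedback DCM (hold longer)
    output wire clk_int,      // internal FPGA clock
    output wire sram_clk_pad, // forwarded clock to the ZBT SRAM
    output wire locked_int,
    output wire locked_ext
);
    wire clk_in, clk_fb_ext, clk0_int, clk0_ext;

    IBUFG u_ibufg_in (.I(clk_pad),    .O(clk_in));
    IBUFG u_ibufg_fb (.I(clk_fb_pad), .O(clk_fb_ext));

    // DCM 1: internal clock, feedback from its own BUFG output
    DCM u_dcm_int (
        .CLKIN(clk_in), .CLKFB(clk_int), .RST(rst_int),
        .CLK0(clk0_int), .LOCKED(locked_int),
        .PSEN(1'b0), .PSINCDEC(1'b0), .PSCLK(1'b0), .DSSEN(1'b0)
    );
    BUFG u_bufg_int (.I(clk0_int), .O(clk_int));

    // DCM 2: forwarded clock, feedback taken from the board
    DCM u_dcm_ext (
        .CLKIN(clk_in), .CLKFB(clk_fb_ext), .RST(rst_ext),
        .CLK0(clk0_ext), .LOCKED(locked_ext),
        .PSEN(1'b0), .PSINCDEC(1'b0), .PSCLK(1'b0), .DSSEN(1'b0)
    );
    OBUF u_obuf (.I(clk0_ext), .O(sram_clk_pad));
endmodule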

As another poster mentioned, if you use external feedback, Xilinx recommends (or used to) that you hold the second DCM in reset for a long while so that the clock has time to propagate off chip and back into the feedback pin.

As for how you can compare the output clocks of the two, do you really need to? What you care about is alignment of the ZBT clock and the address or data bus transitions coming from the FPGA (which are a function of the internal clock).

Good luck,

Marc

