I have implemented my own SDRAM controller in a Virtex-II in order to use PC133 SO-DIMM SDRAM modules (133 MHz).
My problem is that this block seems to work very well with MICRON SDRAM modules, but it is not fully stable with SMART modules. Burst reads seem to be what causes the bit errors (not many; at worst 25 bit errors in a 32 Mb file).
I think the FPGA block is OK and the routing timings are correct, so my problem may be with the SDRAM timings. I use the 180° phase output of my DCM to generate the control signals and capture the read data; in effect I work on the falling edge of the SDRAM clock. I have tried working on the rising edge, but then the results are much worse!
So my question is: have you had timing problems when controlling an SDRAM? On which clock edge do you work?
It's quite common to find that top-rank manufacturers test and grade their devices more conservatively.
Obviously you have the SDRAM data sheet, and you use that timing to determine timing required at the FPGA pins. Are you using worst-case values from the data sheet?
Don't forget that every pin into and out of the FPGA, *including the clock*, suffers pad delays - have you checked all the pad timings? Often they are the slowest part of an FPGA design. Have you correctly accounted for the delay between external clock and internal FPGA clock? That delay doesn't matter when deciding how the FPGA operates internally, but of course it will affect any external timing that's relative to the clock.
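To see why worst-case data-sheet values and pad delays matter, here is a back-of-the-envelope setup-margin check. All numbers below are illustrative placeholders, not taken from any particular SDRAM or FPGA data sheet:

```python
# Rough input-timing margin for registering SDRAM read data in the FPGA.
# Substitute worst-case values from your own SDRAM and FPGA data sheets;
# the figures here are made up for illustration.

t_clk       = 7.5   # ns, clock period at 133 MHz
t_ac_max    = 5.4   # ns, SDRAM worst-case clock-to-data-valid (tAC)
t_board     = 0.8   # ns, PCB flight time, clock and data paths combined
t_setup_iob = 1.9   # ns, FPGA IOB register setup requirement at the pad

margin = t_clk - (t_ac_max + t_board + t_setup_iob)
print(f"setup margin: {margin:.2f} ns")
# A negative margin means the data cannot be captured reliably on that edge,
# which is exactly the kind of thing that shows up as occasional bit errors.
```

With these example numbers the margin comes out negative, which would explain a design that "mostly works" but drops bits in bursts.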
I have tried to apply your advice, but I still get the same bit errors on my SDRAM modules.
Jonathan > The SDRAM datasheet specifies data valid from -2.1 ns to +2.7 ns around the rising edge. The FPGA registers the data directly in the IOB, but how can I find out the pad timing? I have nevertheless tried changing the IOB attributes (FAST, 24 mA, DELAY=NONE, LVTTL). The SDRAM clock looks much cleaner with 24 mA drive, but the result is the same :(
Manfred > I have checked the supply: 3.280 V with 0.130 V peak-to-peak ripple when running, and I have tried raising it to 3.400 V. The board has a ground plane, short wires (< 10 cm), and bypass capacitors for both the FPGA and the SDRAM. I have a good LA and I can connect it to the FPGA. I have looked at the SDRAM data with it: I can see my bit errors in the SDRAM read bursts, but not in single-access writes.
"Verilog USER" > I already have a Digital Clock Manager (DCM) on my 2x clock. I think one of its functions is to deskew the clock. Do you mean I must have a feedback from the clock, i.e. another pin dedicated to feeding the SDRAM clock back?
I have only one DCM, which generates the clock for both the internal controller and the external SDRAM. Do you think this may cause a problem? I have checked on the oscilloscope that the SDRAM clock is synchronized with the FPGA oscillator. Does that mean the DCM is working correctly?
I have spent a lot of time on this problem and tried many configurations without success. Could you perhaps show me how you have defined your pads, your clocks, and your timing constraints?
It seems to me that you are not really familiar with the functionality of the DCM: yes, you need to drive CLKFB. The DCM is a "servo" controller. It inserts the right amount of delay into its outputs such that CLKIN and CLKFB coincide in time. (It's kind of like an op-amp, where the amplifier makes sure there is no voltage between its two inputs; the DCM makes sure there is no delay between its two inputs. In either case, this only works when you close the feedback loop.) I think you need to analyze your timing "with pencil and paper" to figure out the best approach. Just trying it out with and without inverters is never going to get you a reliable design. The DCM has terrific capabilities, including phase shifting in 50-picosecond increments, but you must first study its description. One tip: read the Spartan-3 description. Its DCM is practically identical to the one in Virtex-II, but the text is newer and, in my opinion, better.
Peter Alfke, Xilinx Applications =================================
Once you've "pen and papered" your timing as Peter suggests, you might want to look into the "OFFSET OUT" constraint in the constraints guide. This lets you specify maximum "clock to pad" delays; the router will then take clock skew AND pad delays into account. To constrain your inputs, use the "OFFSET IN" constraint. If your delays are already minimal it may not help, but at least you'll know. Of course, you still have to take all board delays into account yourself, which is why it's important to pen-and-paper first.
More knowledgeable people please correct me if I'm wrong, because I've been struggling with this myself, but my understanding of the situation is... The DCM needs a feedback to deskew. What it does is shift the phase of its output clocks until its input clock and feedback clock are in phase. This means that if you feed the SDRAM clock into the feedback and the oscillator clock into the clock input (look into using the "FEEDBACK" constraint for clocks that go off-chip before feeding a DCM CLKFB pin; I haven't seen a difference in my designs, but it seems like a good constraint to have, and it forces you to be aware of the feedback delay), the DCM will shift its CLK0 output (and all other multiples) until, if your board feedback path is properly matched, the clock edge at the SDRAM pins is in phase with the oscillator clock. For this to be true, CLK0's edge has to be generated [feedback path delay] before the oscillator edge. This is fine as long as you stay within this single clock domain, but you have to be careful about reading back from the SDRAM: the data should be generated in phase with the oscillator, but it takes time to reach the chip. This is compounded by the fact that your next CLK0 edge comes less than one period after the SDRAM clock edge (at the memory pins) which generated the data.
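The read-capture budget described above can be written down explicitly. A minimal sketch, with every delay value invented for illustration (substitute your own board and data-sheet numbers):

```python
# Read-capture budget when the DCM deskews the SDRAM clock to the oscillator.
# All delay values are hypothetical examples, not from a real board.

t_clk       = 7.5   # ns, 133 MHz period
t_fb_path   = 1.2   # ns, board feedback path: CLK0 leads the oscillator by this
t_ac        = 5.4   # ns, SDRAM clock-to-data-valid at the memory pins
t_data_path = 0.6   # ns, data flight time back to the FPGA pins

# Data becomes valid at the FPGA this long after the oscillator edge:
t_valid = t_ac + t_data_path
# But the next CLK0 edge inside the FPGA arrives early by the feedback delay:
t_capture = t_clk - t_fb_path

print(f"data valid at {t_valid:.1f} ns, capture edge at {t_capture:.1f} ns")
# If t_valid plus the register setup time exceeds t_capture, the next rising
# edge cannot be used directly - one reason a phase-shifted capture clock
# (or the falling edge, as the original poster uses) can help.
```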
-- to email me directly, remove all _N0SP4M_ from my address --
Offset constraints have been around for a while, not new. Anyway, if you register your I/O at the IOB, then the offset constraint isn't going to do anything for you except tell you when a flip-flop got pushed out of the IOB or that you set the drive strength/slew rate wrong.
SDRAM can be tricky, especially if you don't have external terminations. Higher slew rates and drive strengths can result in some nasty reflections that will sink even the most carefully executed FPGA design. Use the minimum drive strength consistent with your timing analysis. If possible use external terminations on the lines to the SDRAM (you can use DCI, but I've found that in addition to pushing the limits on package power dissipation, it also slows the I/O down too much for SDRAM, especially without doing stuff with the DCM).
PO Laprise wrote:
--Ray Andraka, P.E. President, the Andraka Consulting Group, Inc.
Sounds to me like you are on the edge of your timings, which could spell disaster in production. Not all memory modules from the same manufacturer will have exactly the same chips, especially as time passes.
The last few boards that we have brought up with SDRAM (both SDR and DDR), we have not only done the pen & paper timing analysis, but we have also verified the timings on the board. We do not do this with a logic analyzer, as this tends to affect the signal timing, regardless of how "good" the analyzer is.
The Xilinx DCM will allow you to shift the clock in 50 ps increments, so you can effectively build an analyzer into your SDRAM controller. We make sure that the phase relationship of the outputs is correct by design, from doing basic timing analysis and reading the data sheets. We verify the timing for the capture of the data read back from the SDRAM by doing a memory test, incrementing the DCM phase, and repeating, until all DCM phases (or a reasonable subset) have been exhausted. Basically, there will be a first DCM phase at which the memory test passes, and a subsequent first phase at which it fails. You will see MANY phases where the test passes if your design is okay (power distribution, layout, etc.). Set the DCM phase to the middle of this window for optimal results.
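The phase-sweep procedure Erik describes can be sketched in software. Here `memory_test(phase)` stands in for a hardware memory test run with the DCM phase shift set to `phase`; in this model it is simulated with a made-up pass window:

```python
# Software model of the DCM phase-sweep calibration: sweep all phases,
# find the widest run of passing phases, and pick its middle.

def find_best_phase(memory_test, phases):
    """Return the centre of the widest contiguous run of passing phases.

    Assumes `phases` is an increasing sequence with step 1.
    """
    best_start, best_len = None, 0
    run_start, run_len = None, 0
    for p in phases:
        if memory_test(p):
            if run_start is None:
                run_start, run_len = p, 1
            else:
                run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_start, run_len = None, 0
    if best_start is None:
        raise RuntimeError("no passing phase found - check power/layout")
    return best_start + best_len // 2  # middle of the passing window

# Simulated hardware: phases 40..119 pass, everything else fails.
passes = lambda p: 40 <= p < 120
print(find_best_phase(passes, range(256)))  # -> 80, centre of the window
```

On real hardware, `memory_test` would reprogram the DCM phase shift (e.g. via PSINCDEC) and run the burst write/read comparison over the whole module.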
The best pattern to test for input phase is an increasing address checkerboard pattern. This will make the data pins on the FPGA alternate every other clock cycle during a burst. The test should burst write the entire checkerboard, and then burst read the entire pattern back. This may seem obvious, but I have seen software programmers write all kinds of meaningless patterns that really tested nothing.
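A minimal software model of that checkerboard test, assuming 16-bit words for illustration (the width and pattern value are my choices, not Erik's):

```python
# Increasing-address checkerboard: every other address gets the complemented
# pattern, so each data pin toggles on every beat of a burst read or write.

WIDTH_MASK = 0xFFFF  # 16-bit data bus assumed for this sketch

def checkerboard(addr):
    pattern = 0xAAAA  # alternating bits within a word
    return pattern if addr % 2 == 0 else (~pattern) & WIDTH_MASK

def run_test(memory_size):
    mem = {}
    # Burst-write the entire pattern first...
    for addr in range(memory_size):
        mem[addr] = checkerboard(addr)
    # ...then burst-read everything back and count mismatches.
    return sum(1 for addr in range(memory_size)
               if mem[addr] != checkerboard(addr))

print(run_test(1024))  # 0 errors in this software model
```

The key property is that the expected value is a pure function of the address, so a failing read immediately tells you which address and which bits were wrong.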
On some of our production boards we perform this test at power up to dynamically set the phase of the DCM. This might make sense for your application, given the use of memory modules, and their inherent replaceability.
Regards, Erik Widding.
Birger Engineering, Inc. -------------------------------- 617.695.9233
100 Boylston St #1070; Boston, MA 02116 -------- http://www.birger.com
I think I have found the problem!! My FPGA was generating too many refresh commands: a refresh period of 1.6 µs instead of 15.6 µs. I did this to make sure the data would stay good, but it turns out that is not the right way!
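For reference, the 15.6 µs figure follows from the usual SDRAM retention requirement. A quick check (the 64 ms / 4096-row numbers are typical for PC133 parts; confirm them in your device's data sheet):

```python
# Where the 15.6 us refresh period comes from: typical PC133 SDRAM must
# refresh all 4096 rows within a 64 ms retention window.
retention_ms = 64
rows = 4096
period_us = retention_ms * 1000 / rows
print(f"{period_us:.3f} us per AUTO REFRESH")  # 15.625 us

# Refreshing every 1.6 us is roughly 10x more often than necessary; the
# extra refreshes don't hurt the stored data, but each one forces a
# precharge and steals bus cycles, interrupting long read bursts.
ratio = period_us / 1.6
print(f"about {ratio:.1f}x the required refresh rate")
```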
Peter > I already have a feedback for my DCM block; otherwise I don't think it would work at all. What I was trying to say is that I have already seen loops external to the FPGA, used to deskew the traces going to the SDRAM clock pin. But as Pierre-Olivier said, if we have one period of latency there isn't any delay issue, even at
So thank you all for trying to help me; it is so great to have such support when you are in trouble. Many interesting suggestions have been made here, and FPGAs are very capricious when you begin using them (even if this time it was not the FPGA's fault :)