Thoughts about memory controller problems

Hi,

I tried to boot Linux on my FPGA-prototyped SoC. The Linux image runs from SDRAM driven by a dynamic memory controller, but the boot always stops somewhere around (not at the same point each time):

Mount-cache hash table entries: 512
CPU: Testing write buffer coherency: ok

I suspect this points to some obscure timing issue in the memory controller, because: 1) the Modelsim simulation didn't show any memory controller problems; 2) the memory controller is on a different board from the actual SDRAM; 3) in my limited experience, tools like Synplify and ISE are not fully reliable; 4) inferred latches might also cause problems.

I'm asking for thoughts on what can go wrong when prototyping a memory controller in an FPGA. Please share your insights on memory controllers, especially if you have run into similar problems before.

Thanks a lot,

Reply to
jack.harvard

Set your SDRAM clock as low as possible while still refreshing the SDRAM correctly. If it then works, you have a timing problem. If it still fails, you have a different problem.
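A quick sanity check on how low you can actually go: the refresh requirement puts a floor under the clock rate. A minimal sketch, with assumed datasheet numbers (4096 rows, 64 ms refresh window, and a hypothetical refresh interval in controller clock cycles) — check your own part's datasheet:

```python
# Rough refresh-rate floor for a typical SDR SDRAM part.
# ROWS and T_REF_MS are assumptions -- take them from your datasheet.
ROWS = 4096        # rows that must each get an AUTO REFRESH
T_REF_MS = 64.0    # all rows must be refreshed within this window, ms

def min_clock_hz(refresh_interval_cycles):
    """Lowest clock that still meets refresh, if the controller
    issues one AUTO REFRESH every refresh_interval_cycles cycles."""
    refreshes_per_second = ROWS / (T_REF_MS / 1000.0)  # 64000/s here
    return refreshes_per_second * refresh_interval_cycles

# Example: a controller that refreshes every 100 cycles needs
# roughly a 6.4 MHz clock at minimum -- 1 MHz would under-refresh
# unless the refresh interval is shortened to ~15 cycles or less.
print(min_clock_hz(100))
print(min_clock_hz(15))
```

So "run it at 1 MHz" may require tweaking the controller's refresh counter, not just the clock.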

I have a Suzaku FPGA module. It has SDRAM, an Ethernet MAC (LAN91C111), and a parallel flash, all sharing the same external bus. The FPGA therefore contains an opb_sdram controller, a controller for the MAC, a controller for the flash, and a MUX that routes the right controller's signals to the external bus depending on what the CPU is accessing at any given time.

This worked, but access to the MAC was too slow, so I added another core of mine to copy packets directly from the MAC's internal buffer to a BRAM (the CPU can then parse the packet from BRAM, which is much faster than re-reading it from SDRAM). The point that will be of interest to you: with the external MUX now having 4 inputs instead of 3, XST had to add another level of muxing, and all hell broke loose. It had worked before, but that was luck, because the timing constraint that should have bounded the time spent going through the MUX against the SDRAM's setup requirement was missing. And I had no way to specify that constraint anyway, since I don't know how long the signals take to propagate along the PCB traces. So I had to hand-place the logic to get correct timing.

If your SDRAM is on another board, I'd say it smells of timing issues. Make it run at 1 MHz or so (but first check that the datasheet allows it) and you'll know. The Xilinx tools can apply timing constraints to whatever happens inside the FPGA, but they have no knowledge of delays on your PCB traces unless you tell them...
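For reference, the kind of constraint being described here would, in ISE-era UCF syntax, look roughly like the fragment below. The net names and delay numbers are made up for illustration; the point is that PERIOD constrains the internal logic while OFFSET IN/OUT constraints are how you tell the tools about the off-chip setup/hold budget, which you must derate yourself by your estimated trace delay:

```
# Hypothetical net names; delay values are placeholders, not recommendations.
NET "clk"        TNM_NET = "clk_grp";
TIMESPEC "TS_clk" = PERIOD "clk_grp" 10 ns;

# Outputs must leave the FPGA early enough that, after the PCB trace
# delay, the SDRAM's setup time is still met.
NET "sdram_d<*>" OFFSET = OUT 6 ns AFTER "clk";
NET "sdram_a<*>" OFFSET = OUT 6 ns AFTER "clk";

# Read data returning from the SDRAM must arrive early enough to be
# captured, despite tAC plus the trace delay.
NET "sdram_d<*>" OFFSET = IN 4 ns BEFORE "clk";
```

Without something like this, the tools are free to insert extra logic (such as another MUX level) on those paths with no error reported.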

How long is the signal path?

Alternatively, you could use clock feedback with the Xilinx SDRAM controller: the controller sends the clock to the SDRAM, the clock comes back on another trace and is used as an input, so the controller can take the propagation delay into account.

Reply to
PFC

The things that can go wrong are legion. The #1 problem is that read data arrives at the FPGA input pins at an unknown time, on an unknown clock edge, and possibly right near a clock edge, and your controller needs to deskew all the data lines accurately. You say the Modelsim simulation works, but did you program the exact PCB trace delays and IOB delays into the simulation? Likely not. And worst-case delays won't help you either, because the worst-case delay might actually work fine whereas the *actual* delay (which, in the case of IOBs, is probably much better than worst-case) might not. Having the FPGA and SDRAM on different boards (?) also sounds like a problem: most controllers will only operate over a narrow range of trace delays.
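To make Kevin's point concrete, a back-of-the-envelope read-capture budget shows how easily the actual board eats the margin even when simulation passes. All numbers below are illustrative assumptions, not from any datasheet:

```python
# Read-timing budget for capturing SDRAM read data on the next clock edge.
# Every value here is an assumed, illustrative number.
T_CLK       = 10.0   # clock period, ns (100 MHz)
T_CLK_SKEW  = 1.5    # skew between controller clock and SDRAM clock, ns
T_AC        = 5.4    # SDRAM clock-to-valid-data (tAC), ns
T_TRACE     = 2.0    # board-to-board trace delay on the data path, ns
T_IOB_SETUP = 2.1    # FPGA input flop setup time, ns

# If margin is negative, read data arrives too late (or too close to
# the edge) to be captured reliably -- exactly the "unknown clock edge"
# failure mode described above.
margin = T_CLK - (T_CLK_SKEW + T_AC + T_TRACE + T_IOB_SETUP)
print(f"capture margin: {margin:.1f} ns")
```

With these (plausible) numbers the margin comes out negative, which is why a design that simulates cleanly with zero board delay can hang intermittently in hardware.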

-Kevin

Reply to
Kevin Neilson
