Changing the DRAM refresh rate while in operation?

Hi,

I'm trying to control an SDR SDRAM (a Micron 64 Mbit chip) using an Altera DE2 board. I've gotten the hardware interface squared away (thanks, everyone, for your help!).

Now for the tricky stuff. Anyone have an idea how I can change the refresh rate while the RAM is in operation?

I built the DRAM interface with the SOPC Builder that comes with Quartus II, as part of a NIOS II system.

I know you can change the refresh rate at build time, but I need a way to change it during operation. The only thing I can think of is maybe changing the clock speed? I have it running off a 50 MHz clock....

Thanks, Eric

Reply to
sendthis

The most obvious question would be 'Why?'

Using the SOPC Builder DRAM controller will limit your options (as probably would most other vendors' IP DRAM controllers).

A simpler approach would be a DRAM controller with an explicit 'refresh request' input that causes the controller to perform a refresh. Then connect that input to any programmable timer or other logic you like. Changing the clock rate would be far down my list of ways to accomplish your goal... but again, that comes back to the original question of why you would want to change the refresh rate dynamically at all.
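Something along these lines, as a very rough Verilog sketch (the module and port names are made up for illustration; this is not the Altera/SOPC Builder controller interface). A down-counter raises a refresh request once per programmed interval, and because a new interval value is picked up at the next reload, software can change the rate on the fly:

// Rough sketch only -- names invented, not Altera's IP.
// A down-counter raises refresh_req once per programmed interval;
// the controller acks when it actually issues AUTO REFRESH.
// If 'interval' is changed at run time, the new value takes effect
// at the next reload.
module refresh_timer (
    input             clk,          // e.g. the 50 MHz system clock
    input             reset_n,
    input      [15:0] interval,     // refresh period in clock cycles
                                    // (15.6 us at 50 MHz is about 780)
    input             refresh_ack,  // from the DRAM controller
    output reg        refresh_req   // to the controller's request input
);
    reg [15:0] count;

    always @(posedge clk or negedge reset_n) begin
        if (!reset_n) begin
            count       <= interval;
            refresh_req <= 1'b0;
        end else if (count == 16'd0) begin
            count       <= interval;   // reload (picks up a new interval)
            refresh_req <= 1'b1;       // ask for one refresh
        end else begin
            count <= count - 16'd1;
            if (refresh_ack)
                refresh_req <= 1'b0;   // request has been serviced
        end
    end
endmodule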

KJ

Reply to
KJ

Why?

Reply to
David Spencer

Assuming he has a good reason to change it, the safest thing would be to call a routine running from flash (not from the SDRAM itself) to make the change, so the code doing the reprogramming isn't executing out of the memory whose refresh it is altering.

Reply to
Jim Stewart

Since the only purpose of the refresh circuitry is to keep the memory from dropping bits, it should already be running at the slowest rate that is safe; reducing it further risks losing data, while increasing it does no good. So this is not a good idea.

What are you trying to do?

--
 Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

Although it's not expressed in DRAM specs and you wouldn't want to rely on it, the effect of reducing the refresh rate is to increase the access time. I'm not up to date with DRAM technology, but my experience with devices 30 years ago was that you could turn off refresh (and all other accesses) for 10 seconds or more without losing the contents, provided you weren't pushing the device to its access-time limits.

So, it's not impossible that reducing refresh rate would have a use (albeit outside the published device spec). But, as you suggest, it would help if he would just tell us what he's trying to do.

Mike

Reply to
MikeShepherd564

Although that may well be the case for asynchronous DRAMs (because the reduced charge in the memory cell capacitor would mean the sense amplifier took longer to register the state), it would not be the case for SDRAM, since an SDRAM registers its outputs a fixed number of clocks (the CAS latency) after the access starts. If the underlying access time increased by too much, the data would simply be wrong.

Reply to
David Spencer

For certain addressing patterns, refresh can be eliminated altogether: when the addressing sequence is such that all (used) memory cells are naturally read, and thus refreshed, within the required time.

Peter Alfke

Reply to
Peter Alfke

Sinclair ZX? At least some old Z80 home computers used refresh by video scan.

Antti

Reply to
Antti

Yes, and it's a completely ridiculous way to do it. The added cost of making frequent additional row accesses is far greater than the cost of the necessary refresh.

A DRAM row is effectively a cache. When you access a row, you read the whole row into the DRAM's row buffer as a free side-effect, and can then make very fast column accesses to any location in the row. It's preposterous to throw away that massive free bandwidth just to save yourself some refresh effort - unless you're trying to design an $80 home computer/toy in the early 1980s.

In those days, the video buffer was a sufficiently large fraction of the overall DRAM that it was reasonable to lay out the video memory so that every row was automatically visited by the video scan, giving a refresh cycle every 20ms (16.7ms in the USA). That was out-of-spec for many DRAMs of the day (8ms refresh cycle) but in practice it worked in almost all cases - and the manufacturers of those computers had a shoddy enough warranty policy that they weren't going to worry about a handful of customers complaining about occasional mysterious memory corruption on a hot day.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

That happens in a couple of common cases...

Running video refresh out of DRAM
Running DSP code
Running memory tests :)

I once worked on a memory board that worked better (at least as measured by memory diagnostics) when the refresh was clip-leaded out. (We had a bug in the arbiter.)

--
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

For SDR SDRAMs, the refresh period depends on the density: the highest-density parts need twice the refresh rate (about 7.8 µs vs. 15.6 µs per row). If you sensed the part size, or used a DIMM or SO-DIMM with a configuration PROM, you might want to set up the refresh rate (once) after the FPGA is running. A full-fledged SDRAM controller could also set other parameters from a configuration PROM. This is not something that needs to be dynamic for any given system; you wouldn't swap out DIMMs with the power on. However, it can be more useful than requiring a different FPGA configuration load depending on the installed memory.
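For illustration only (my sketch, not Gabor's or Altera's code): the interval could be latched once at start-up from a density bit that comes from a strap pin, a sensed device ID, or an SPD read, and then fed to whatever refresh timer the controller uses. At 50 MHz, 15.6 µs is about 780 clocks and 7.8 µs about 390:

// Rough illustration: pick the refresh interval once, at start-up.
module refresh_interval_select (
    input             clk,
    input             reset_n,
    input             high_density,     // 1 = part that needs 7.8 us refresh
    input             config_valid,     // pulses once when the density is known
    output reg [15:0] refresh_interval  // in clock cycles, to the refresh timer
);
    // Assumes a 50 MHz clock: 15.6 us ~ 780 cycles, 7.8 us ~ 390 cycles.
    localparam [15:0] CYCLES_15U6 = 16'd780;
    localparam [15:0] CYCLES_7U8  = 16'd390;

    always @(posedge clk or negedge reset_n) begin
        if (!reset_n)
            refresh_interval <= CYCLES_7U8;   // safe (faster) default until configured
        else if (config_valid)
            refresh_interval <= high_density ? CYCLES_7U8 : CYCLES_15U6;
    end
endmodule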

Reply to
Gabor


If I recall, the Apple II refreshed its RAM this way too.

-Dave Pollum

Reply to
Dave Pollum

Ooh, I can be much more aggressive than that! And it certainly wasn't directed at you.

Nor is it; the absurdity comes from bending the addressing so that only a small part of each row is sequentially accessed, thereby wasting the massive increase in memory bandwidth that can be achieved for sequential-access applications by using the row buffer as a cache. My spleen was being vented at some designers of old computers (as alluded to by Antti, not you) who used video scan to access every row of DRAM on each video field, thereby unnecessarily burning up memory bandwidth (which was in short enough supply on such machines) in order to save the trouble of doing refresh properly...

Reply to
Jonathan Bromley

Indeed so; a fair point. And you could perhaps also argue that the cost of row access, as a fraction of a data access, has increased quite dramatically over that time.

That too is an interesting point. My own experience of that sort of video controller was that they typically caused lots of processor stalling while video data was being fetched, but it may have been different for other designs.

True, but then you are *really* wasting bandwidth by doing more row accesses than necessary.

I guess you could, by juggling the use of address bits sufficiently cunningly, arrange that row accesses by video scan would *just* provide enough refresh to satisfy the data sheet spec.

I've seen many different variants on this: block refresh during frame blanking, for example. They all seemed pretty unpleasant to me at the time, and still seem so now - although, of course, no-one needs to do that sort of dirty trick any more (do they? please?)

Reply to
Jonathan Bromley

Jonathan Bromley wrote: (snip)

Processor speed has increased somewhat faster than DRAM speed.

That was when RAM cycle time was faster than processor cycle time.

Any access to the row will refresh the whole row. If you address it such that sequential characters are in different rows then it is refreshed much faster than the frame rate.

-- glen

Reply to
glen herrmannsfeldt

I disagree (softly). Having designed several memory controllers, I always found it easier to just insert a READ command into the DRAM when a refresh was needed, rather than insert a refresh command. The timing differences between refresh and a loosely coupled string of READs are such that one can refresh ahead with READs more easily, and then be in a position to absorb a longer string of demand requests by not using REFRESH commands. Thus, while running at the slowest overall rate, one can bunch and distribute the refresh mechanics to better interleave them with the demand memory requests and gain something.
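A very loose sketch of that idea (my own naming, purely illustrative, not MitchAlsup's actual controllers): keep a row pointer and a small "refresh-ahead" credit counter; read the next row whenever the bus is idle, spend one credit per elapsed refresh period, and only force a refresh read in front of demand traffic when the credit runs out. Note that, unlike AUTO REFRESH (which refreshes a row in every bank at once), a READ only refreshes one row of one bank, so the pointer has to walk the bank bits as well:

module read_refresh_scheduler #(
    parameter ROW_BITS = 14               // bank + row address bits to walk
) (
    input                     clk,
    input                     reset_n,
    input                     tick,        // one pulse per per-row refresh period
    input                     bus_idle,    // no demand request pending
    input                     read_done,   // controller finished our refresh read
    output reg                read_req,    // ask the controller to read refresh_row
    output reg [ROW_BITS-1:0] refresh_row,
    output                    urgent       // out of credit: must refresh now
);
    reg [3:0] credit;                      // rows refreshed ahead of schedule

    assign urgent = (credit == 4'd0);

    always @(posedge clk or negedge reset_n) begin
        if (!reset_n) begin
            credit      <= 4'd0;
            refresh_row <= {ROW_BITS{1'b0}};
            read_req    <= 1'b0;
        end else begin
            // Credit: +1 when a refresh read completes, -1 per elapsed
            // refresh period (never below zero; no net change if both).
            case ({read_req && read_done, tick && (credit != 4'd0)})
                2'b10:   credit <= credit + 4'd1;
                2'b01:   credit <= credit - 4'd1;
                default: credit <= credit;
            endcase

            if (read_req && read_done) begin
                read_req    <= 1'b0;
                refresh_row <= refresh_row + 1'b1;  // walk every row in turn
            end else if (!read_req && (bus_idle || urgent) && credit != 4'd15)
                read_req <= 1'b1;                   // queue another refresh read
        end
    end
endmodule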

But I will state that the overall performance differences are a fraction of the refresh overhead anyway.

That is the real question.

Reply to
MitchAlsup

The bandwidth is there for the designer to use how they wish. It also only matters if that bandwidth is the bottleneck in the total design.

E.g. I have done designs using interleaved video access, which removes flicker and makes the system appear to be dual-ported. By your yardstick, because the bandwidth is not 100% used, is this a bad design?

-jg

Reply to
Jim Granville

Such as for a video processor. I've done several that used no refresh.

Reply to
Ray Andraka
