Addressing scheme in Block RAM

Hi People,

I have generated a dual-port RAM using the Xilinx CORE Generator. Port A is 32 x 32 and Port B is 8 x 128.

The 32-bit port, Port A, interprets the addresses in row order:

00 01 02 . . . 1F

I had expected the 8-bit port to also interpret the addresses in row order. When I simulated the DPRAM, it responded as expected:

03 02 01 00 07 06 05 04 ......................... 7F 7E 7D 7C
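This row-order expectation can be modeled by viewing both ports over one flat byte array (a quick Python sketch, not the BRAM itself; the test pattern is my assumption, chosen because it reproduces the sequence above, with byte address 0 reading the least-significant byte of word 0):

```python
# Model the dual-port RAM as one flat 128-byte array shared by both ports.
# Assumed test pattern (not stated in the post): 32-bit word w holds bytes
# 4w, 4w+1, 4w+2, 4w+3 ordered from MSB down to LSB.
mem = bytearray(128)

def write32(word_addr, value):
    # Port A: 32 words of 32 bits; the word's LSB lands at the lowest byte.
    mem[4 * word_addr : 4 * word_addr + 4] = value.to_bytes(4, "little")

def read8(byte_addr):
    # Port B: 128 bytes, row-order addressing.
    return mem[byte_addr]

for w in range(32):
    write32(w, int.from_bytes(bytes([4 * w, 4 * w + 1, 4 * w + 2, 4 * w + 3]), "big"))

readout = [read8(a) for a in range(128)]
print(" ".join(f"{b:02X}" for b in readout[:8]))   # 03 02 01 00 07 06 05 04
print(" ".join(f"{b:02X}" for b in readout[-4:]))  # 7F 7E 7D 7C
```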

But now comes the weird part. When I implemented my design on a Xilinx Virtex-II Pro FPGA, the 8-bit port interpreted the addresses in column order. I have observed this both with debug data and with ChipScope Pro, i.e.

60 40 20 00 61 41 21 01 62 42 22 02 63 43 23 03 . 7F 5F 3F 1F

Are there any synthesis constraints that can prevent this from happening? All the documents and application notes say that the 8-bit port should also be addressed in row order. Could anyone suggest why I might be having this problem?

Thank you, Venu

Peter Alfke

Peter's reference may point you to the same conclusion you started with. The only thing I could see causing a problem would show up in simulation as well: bit-reversed address declarations such as "reg [0:10] Addr8;", where bits 0 and 1 end up at the wrong end of the bus. (The sample dimensions are for an 18-kbit BlockRAM.)

If you use [10:0] style addressing, you're correct that writing 32'h12345678 to address 9'h0 on the 32-bit side should give you these read values at the first four addresses on the 8-bit side:

address 11'h0: 8'h78
address 11'h1: 8'h56
address 11'h2: 8'h34
address 11'h3: 8'h12
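John's expected mapping can be checked by modeling both ports over one flat byte array (a Python sketch; the little-endian placement of the word's bytes is an assumption, but it is the one his example implies):

```python
# One flat byte array stands in for the BRAM; Port A (32-bit) writes words,
# Port B (8-bit) reads bytes. Byte address b reads byte b % 4 (LSB first)
# of word b // 4.
mem = bytearray(2048)  # the 11-bit byte-address space John describes

def write32(word_addr, value):
    mem[4 * word_addr : 4 * word_addr + 4] = value.to_bytes(4, "little")

write32(0, 0x12345678)
first_four = [mem[a] for a in range(4)]
print([f"8'h{b:02x}" for b in first_four])  # ["8'h78", "8'h56", "8'h34", "8'h12"]
```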

If your target is BlockRAM, consider instantiating the BlockRAM primitive rather than the Coregen module to see if the problem clears up. The RAMB16_S9_S36 primitive would give you the aspect ratios you need.

If your target is distributed memory, there's a slight chance that the Coregen isn't "doing the right thing" since the asymmetric ports may be rarely used with the CLB SelectRAM.

You could also check the post-route simulation to see if it matches the live silicon.

- John_H


I saw a problem recently where the behavior of the narrower port changed depending on which Coregen module was used. I know one of the IP modules was called "Dual Port something" and the other was more general and included dual-port support. If that's not enough to put you on the trail, I can try to find the exact names.

Ben Jackson AD7GD

Peter Alfke

Check the Xilinx documentation. There are exhaustive descriptions of which addresses map to which locations when you use dual ports with differing aspect ratios.


As John suggested, I replaced the BRAM generated by the Xilinx CORE Generator with the Xilinx primitive RAMB16_S9_S36. I did not change any of the logic surrounding the memory modules. Now my memory is addressed in row order (i.e., as the Xilinx documentation describes).

I am not sure the problem is with the address bit ordering, because then the error should have shown up in both implementations, i.e. with the CORE Generator as well as with the primitive.

The only problem with using the primitive is that I am now using a memory block of 1024 x 8 as opposed to 128 x 8. That eats up a lot of the resources I need in other modules.

Thanks Venu


Sounds like Coregen needs a webcase filed against it.

Memories are pretty easy to infer; if you want dual-port distributed CLB SelectRAM, you should be able to get a 160-LUT solution without using Coregen.
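One plausible way to arrive at a figure around 160 LUTs (my arithmetic, not John's; it assumes Virtex-II RAM16X1D dual-port distributed-RAM primitives at two LUTs each, and the mux allowance is a rough guess):

```python
# Back-of-envelope LUT count for a 128 x 8 dual-port distributed RAM.
depth, width = 128, 8          # the 8-bit view of the 1-kbit memory
bits_per_primitive = 16        # RAM16X1D stores 16 x 1, dual-ported
luts_per_primitive = 2         # one LUT per port in Virtex-II
banks = depth // bits_per_primitive              # 8 banks of 16 deep
ram_luts = banks * luts_per_primitive * width    # storage LUTs
mux_luts = width * 4           # rough allowance for 8:1 read muxing per bit
print(ram_luts, ram_luts + mux_luts)  # 128 160
```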

Do you use XST? Synplify? Other? Synplify has their own memory generator wizard these days, too.

Are you Verilog? VHDL?

It should be quick for someone to give you a module that compiles to LUTs rather than BlockRAM if you are tight on system memory.


BRAMs are atomic 18-kbit resources; any parts of them you do not use, on either of their two ports, are lost/wasted. With 1024 x 8 and 256 x 32 ports over overlapping ranges to produce an 8-kbit 4:1-multiplexing memory, you are still wasting half a BRAM that nothing else in your design will ever be able to access.
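Daniel's "half a BRAM wasted" figure checks out (a quick arithmetic sketch; an 18-kbit block has 16 kbits of data storage, the remaining 2 kbits being parity):

```python
# How much of an 18-kbit BRAM an 8-kbit overlapped-port memory actually uses.
bram_data_bits = 16 * 1024          # 16 kbits data (+ 2 kbits parity, ignored)
used = 1024 * 8                     # 1024 x 8 range == 256 x 32 range, overlapped
wasted = bram_data_bits - used
print(used, wasted)  # 8192 8192 -> exactly half the data bits are stranded
```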

Unless you need to sacrifice address bits to improve timing, using 100% of a BRAM usually requires very little extra effort and resources, although it can sometimes be obscene overkill. It would be nice if V6/S4 introduced 1-kbit BRAMs/FIFOs (1 x 1024 to 32 x 32 aspect ratios), at least in the first and last BRAM columns, where they could conveniently be used as flexible I/O FIFOs. These would neatly complement ISERDES/OSERDES and all.
Daniel S.
