Hi, I am trying to infer a Xilinx dual-port block RAM with different address and data widths on the two ports. I want to infer it in my code and then get Synplify to recognise it. I can do this if the address and data widths are the same on both ports, but I don't know how when they are different.
I'm completely missing what possible use it would be to have different address widths... memory needs to be addressed, and if you give it a partial address, just what do you think should come out? A simple example would be a 3-bit address on one port and a 2-bit address on the other. On the one port you can go from 000 to 111, on the other from 00 to 11... so what do you want coming out of the memory when the two-bit address is set to 00? The data located at address 000? Or the data from 100? I think if you ponder this for a bit you'll realize that different address widths by themselves make no logical sense at all.
In any case, what you need to do is size your dual-port memory to a single address and data size. Next, put a wrapper around it that:
1. Instantiates the dual-port memory.
2. Adds whatever logic you require to define the mapping between the different data bus sizes.
Imagine a 64-bit RAM with two ports. One port has 6 bits of address and 1 bit of data; the other has 1 bit of address and 32 bits of data. On address port A, you put a value 0-63 and you get the single corresponding bit of the contents. On address port B, you put a value 0-1 and you get either bits
0-31 or bits 32-63 of the contents. Generalize to whatever widths you want.
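To make that mapping concrete, here's a minimal VHDL sketch of the 64-bit example (single clock, port B read-only for brevity; entity and signal names are illustrative, and a production wrapper would more likely instantiate the vendor primitive than infer the storage like this):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- 64 bits of storage, seen as 64x1 on port A and 2x32 on port B.
entity asym_ram is
  port (
    clk    : in  std_logic;
    -- Port A: 6-bit address, 1-bit data
    a_addr : in  unsigned(5 downto 0);
    a_we   : in  std_logic;
    a_din  : in  std_logic;
    a_dout : out std_logic;
    -- Port B: 1-bit address, 32-bit data
    b_addr : in  std_logic;
    b_dout : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of asym_ram is
  signal mem : std_logic_vector(63 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if a_we = '1' then
        mem(to_integer(a_addr)) <= a_din;   -- write a single bit
      end if;
      a_dout <= mem(to_integer(a_addr));    -- 1-bit read
      if b_addr = '0' then
        b_dout <= mem(31 downto 0);         -- low half of the contents
      else
        b_dout <= mem(63 downto 32);        -- high half of the contents
      end if;
    end if;
  end process;
end architecture;
```

Generalizing to other width pairs is just a matter of changing how many slice-select bits each port's address contributes.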
I'm not sure why none of the synthesis tools support this, but it's true, they don't. I've always ended up instantiating something to get this behaviour. :-(
For some reason, when reading the original post, I took it that what was needed was independent control of both address and data over the multiple ports, implying a certain number of memory bits accessible from port A and a different number from port B.
> I'm not sure why none of the synthesis tools support this, but it's true, they don't.
Not sure what support you think you're not getting. Memory can be inferred from plain vanilla VHDL with synthesis tools. Data bus sizing (and the implied address bus sizing) is a wrapper around that basic memory structure and gets synthesized just fine...so it is supported.
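For reference, the "plain vanilla VHDL" inference style being referred to looks something like the following simple dual-port RAM (one write port, one read port). This is a sketch; exact recognized templates vary by synthesis tool, and the names and generics here are illustrative:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sdp_ram is
  generic (
    ADDR_W : positive := 9;
    DATA_W : positive := 36
  );
  port (
    clk   : in  std_logic;
    we    : in  std_logic;
    waddr : in  unsigned(ADDR_W-1 downto 0);
    din   : in  std_logic_vector(DATA_W-1 downto 0);
    raddr : in  unsigned(ADDR_W-1 downto 0);
    dout  : out std_logic_vector(DATA_W-1 downto 0)
  );
end entity;

architecture rtl of sdp_ram is
  type ram_t is array (0 to 2**ADDR_W - 1)
    of std_logic_vector(DATA_W-1 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(waddr)) <= din;
      end if;
      -- synchronous read lets the tool map this onto block RAM
      dout <= ram(to_integer(raddr));
    end if;
  end process;
end architecture;
```

The data-bus resizing wrapper then sits around an instance of something like this.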
If what you mean by 'not supported' is that there isn't a pre-defined instance that you can plop down into your code and parameterize, then you're going into the Mr. Wizard territory, which leads you to vendor-specific implementations. Avoiding the vendor-specific stuff is usually a better approach in most cases.

To have vendor-independent useful modules like this, these modules should be standardized. This is exactly the type of thing that LPM attempted to do. LPM languishes as a standard, though, because it didn't get updated to include new and useful modules. Presumably this is because the FPGA vendors would rather go the Mr. Wizard path and try to lock designers in to their parts for irrational reasons, rather than enhance standards like LPM so that designers can remain vendor-neutral at design time and let parts selection be based on rational reasons like cost, function and performance.
Yup, that's what I would assume (since nothing else makes sense :-))
The *functionality* is supported, but the optimal mapping to the technology is not. Or wasn't last time I looked. If I write that plain vanilla VHDL, I have never seen a synthesis tool create an asymmetrically-ported RAM from it; I always got a RAM with a multiplexer on the output (or worse, a bunch of DFFs).
In many cases, I would agree, so long as you don't end up crippling your design's performance as a result, and spending money on silicon features that you're not going to use. After all, they were put there to help you make your design as efficient as possible (which managers usually like).
Certainly making sure that vendor-specific functions are isolated in the code so they can be swapped out at will is a sensible practice. As is making a careful risk assessment whenever you consider using a feature that only one vendor or device family supports.
With all due respect, I think you presume too much. There are many problems with wizards and core generators for things like RAMs and arithmetic elements - mostly, they are the wrong level of abstraction for most designs. Nevertheless, IP cores from FPGA vendors serve two major purposes. Firstly, they help designers get the most out of the silicon in those cases where synthesis tools are not sophisticated enough to produce optimal results. Secondly, they allow designers to buy large blocks of standards-compliant IP - such as error correction cores, DDR controllers, and what have you - instead of designing them in-house.
I'm not denying that there is a risk of vendor lock-in, but I'd dispute that it's the motivating factor for vendors to develop IP. Certainly when members of the IP development team that I belong to here at Xilinx sit down with the marketing department and discuss roadmaps, the questions that come up are always "What are customers asking for? What is good/bad about our current product? What new features do we need?", not "How can we ensnare more hapless design engineers today?". :-)
Perhaps I do, since I don't work for the FPGA vendors I can only speculate or presume.
Maybe. I find their lack of a standard on the 'internal' side to be the bigger issue.
I agree, they are good at that. I don't believe that a unique entity is required in order to produce the optimal silicon. Once the synthesis tool hits a standardized entity name, it would know to stop and pick up the targeted device's implementation.
And just exactly which standard interfaces are we talking about? DDRs have a JEDEC standard but the 'user' side of that DDR controller doesn't have a standard interface. So while you take advantage of the IC guy's standards to perform physical interfaces, you don't apply any muscle to standardizing an internal interface. The ASIC guys have their standard, Wishbone is an open specification, Altera has theirs, Xilinx has theirs.....all the vendors have their own 'standard'.
Tell me what prevents everyone from standardizing on an interface to their components in a manner similar to what LPM attempts to do? The chip guys do it for their parts, the FPGA vendors don't seem to want to do anything similar on the IP core side. This doesn't prevent each company from implementing the function in the best possible way, it simply defines a standardized interface to basically identical functionality (i.e. it turns read and write requests into DDR signal twiddling in the case of a DDR controller).
Can you list any 'standard' function IP where the code can be portable and in fact is portable across FPGA vendors without touching the code? Compression? Image processing? Color space converters? Memory interfaces? Anything? All the vendors have things in each of those categories and each has their own unique interface to that thing.
I was only suggesting that it was an incentive...which you seem to agree with.
The user community "pressures" the FPGA (and other IC) vendors to come up with better and cheaper solutions. That's called progress. We love it!
We respond with new and improved chip families. We get some help from IC processing technology, i.e. "Moore's Law", especially in the form of cost reduction, and a little bit of speed improvement (less with each generation). We also have to fight negative effects, notably higher leakage currents.
Real progress comes from better integration of popular functions. That's why we now include "hard-coded" FIFO and ECC controllers in the BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers, and microprocessors. Clock control with DCMs and PLLs, as well as configurable 75-ps incremental I/O delays are lower-level examples. These features increase the value of our FPGAs, but they definitely are not generic.
If a user wants to treat our FPGAs in a generic way, so that the design can painlessly be migrated to our competitor, all these powerful, cost-saving and performance-enhancing features (from either X or A) must be avoided. That negates 80% of any progress from generation to generation. Most users might not want to pay that price.
And remember, standards are nice and necessary for interfacing between chips, but they always lag the "cutting edge" by several years. Have you ever attended the bickering at a standards meeting?...
Cutting edge FPGAs will become ever less generic. That's a fact of life, and it helps you build better and less costly systems.
Peter Alfke
None of that is precluded, I'm just saying that I haven't heard why it could not be accomplished within a standard framework. Why would the entity (i.e. the interface) for brand X's FIFO with ECC, Ethernet, blah, blah, blah, not use a standard user side interface in addition to the external standards? Besides facilitating movement (which is not the only concern) it promotes ease of use in the first place.
I agree, those are good examples of some of the easiest things that could have a standardized interface....although I don't think you really agree with my reading of what you wrote ;)
I said standardized not 'generic'. I was discussing the interface to that nifty wiz bang item and saying that the interface could be standardized, the implementation is free to take as much advantage of the part as it wishes.
My point was to agree on a standard interface for given functionality not some dumbed down generic vanilla implementation of that function.
To take an example, and using your numbers, are you suggesting that the performance of a Xilinx DDR controller implemented using the Wishbone interface would be 80% slower than the functionally identical DDR controller that Xilinx has? If so, why is that? If not then what point were you trying to make?
I don't think any of the FPGA vendors target only the 'cutting edge' designs. I'm pretty sure that most of their revenue and profit comes from designs that are not 'cutting edge' so that would give you those 'several years' to get the standardized IP in place.
Stop bickering so much. The IC guys cooperate and march to the drumbeat of the IC roadmap whether they think it is possible or not at that time (but also recognizing what the technology hurdles to get there are). There is precedent for cooperation in the industry.
Again, my point was standardization of the entity of the IP, not whether it is 'generic'.
But not supported by anything you've said here. Again, my point was for a given function, why can't the interface to that component be standardized? Provide an example to bolster your point (as I've suggested with the earlier comments regarding the Wishbone/Xilinx DDR controller example).
This is actually a fairly common usage model for the Xilinx dual-port RAMs. It lets you, for example, store two words per clock on one port and read them one word per clock on the opposite port, perhaps at a faster clock rate. The data width and address width vary inversely so that there are always 16K or 18K bits in the memory (18K for the widths that support the parity bit). For example, if you set one port for 36-bit width, that port has a depth of 512 words. If you then set the other port for 18-bit width, it has a 1K depth, and the extra address bit (the extra bits are added at the LSBs) essentially selects the low or high half of the 36-bit word for access through the 18-bit port. Similarly, a 9-bit-wide port is 2K deep and accesses a 9-bit slice of that 36-bit word on each access, with the slice selected by the 2 LSBs of the 9-bit-wide port's address.
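A behavioral VHDL model of the 512x36 / 2Kx9 pairing described above makes the address relationship explicit. This sketch (names illustrative) only models the slicing; real BlockRAM write-mode and collision behavior are not captured:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- One 18Kbit array seen two ways: 512 words of 36 bits, or 2K slices
-- of 9 bits (the 36/9-bit widths include the parity bits).
entity bram_aspect_model is
  port (
    clk    : in  std_logic;
    -- wide port: 9-bit address, 36-bit data
    w_addr : in  unsigned(8 downto 0);
    w_we   : in  std_logic;
    w_din  : in  std_logic_vector(35 downto 0);
    -- narrow port: 11-bit address, 9-bit data (read-only here)
    n_addr : in  unsigned(10 downto 0);
    n_dout : out std_logic_vector(8 downto 0)
  );
end entity;

architecture model of bram_aspect_model is
  type ram_t is array (0 to 511) of std_logic_vector(35 downto 0);
  signal ram : ram_t;
begin
  process (clk)
    variable word  : std_logic_vector(35 downto 0);
    variable slice : natural range 0 to 3;
  begin
    if rising_edge(clk) then
      if w_we = '1' then
        ram(to_integer(w_addr)) <= w_din;
      end if;
      -- upper 9 bits of the narrow address pick the 36-bit word,
      -- the 2 LSBs pick which 9-bit slice of it comes out
      word   := ram(to_integer(n_addr(10 downto 2)));
      slice  := to_integer(n_addr(1 downto 0));
      n_dout <= word(9*slice + 8 downto 9*slice);
    end if;
  end process;
end architecture;
```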
I've found the easiest way to deal with the dual-port memories is to instantiate the primitives. Xilinx has made it far easier with the Virtex-4, which has a common BRAM element for all aspect ratios, with generics on it to define the widths. Previously, you needed to instantiate the specific primitive with the right aspect ratio on each port. I found it easiest to develop a wrapper for the memory that uses the width of the address and data to select the BRAM aspect ratio and instantiate as many primitives as are needed to obtain the data width; that way the hard work is done just once. This is especially true with the older-style primitives.
For the smaller "building block" components, I'd say that's not much of an issue at all. After all, how many different interfaces can you think of for an accumulator or shift register? Most of the differences between vendors seem to be superficial (e.g. naming) at that level. The problem I have with these low-level blocks is that they go against the basic principle of abstraction; instead of hiding complex functions behind simple interfaces, they do the exact opposite. And they hold back designers by perpetuating the "TTL 7400" design mentality.
I think that would be great. Of course, vendors' in-house synthesis tools are unlikely to support that kind of system except for portability between their own device families.
I don't think anything prevents it, other than whatever all-pervading force there is in the universe which prevents people from agreeing about things. :-)
The OpenFPGA initiative have a working group on core interfacing. There's LPM (obsolete IMHO). There's OCP. There's no shortage of people proposing ideas for standard interfaces, but there is a shortage of time, money and energy to do anything about it. I think you'll find most engineers in favour of standardization to some extent, but there's no one single driving force for adoption. Some people also see it as a barrier to innovation. (It isn't, but still, some people see it that way.)
No. But I'm willing to bet that any engineer worth their salt would be able to write the appropriate glue logic to convert one to the other without working up a sweat.
In some cases, for example if a customer is using a processor-based system design environment such as Platform Studio (X) or SOPC builder (A), the "proprietary" interface to (say) a DDR SDRAM controller is hidden away, to a great extent, because the tools provide a system-level abstraction.
One thing that greater standardization would do is make it much easier for third-party IP core developers to create and sell vendor-agnostic IP. If that really is a viable business model nowadays...
Here's another thought - in many industries (e.g. consumer electronics and electricals) the quality of the interface is a big differentiating factor for the purchaser. Why shouldn't this be true for digital interfacing standards too? A customer might have a preference for a CoreConnect-based system over an AMBA-based system, or vice versa, based on which of the interface's features are relevant to their needs.
BTW I'm not sure how much of this I really fervently believe in, just trying to illuminate the issue a bit.
I would only say it's a question of inertia, rather than malice. On a related note, what do engineers hate more - risk of vendor lock-in, or breaking of backwards compatibility?
The main reason I don't instantiate memory primitives is because of the restrictions on address and data types (to SLV). I usually have an integer subtype for the address (the array index when inferring memory), and store anything from integer to enumerated (i.e. state variables) to whole records in them. You can't do that with the primitives, without putting a wrapper around it, and then when you try to examine the contents of the memory during simulation, they're all just bits, and you have to manually convert back to your record, enum, etc. Inferring memory from arrays, especially in applications where the memory is not tied to some sort of pre-existing data and address bus, also allows a more functional descriptive style, rather than an implementation-specific style. I focus on the behavior first, then tweak the implementation to get the performance I need (if any tweaking is needed).
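As a sketch of that style (all type and signal names here are made up for illustration), the memory can hold whole records indexed by an integer subtype, so a simulator displays states and counts directly rather than raw bits:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package tracker_types is
  type state_t is (IDLE, ACTIVE, DONE);
  type entry_t is record
    state : state_t;
    count : integer range 0 to 255;
  end record;
  subtype slot_t is integer range 0 to 63;  -- the address type
end package;

library ieee;
use ieee.std_logic_1164.all;
use work.tracker_types.all;

entity tracker_ram is
  port (
    clk   : in  std_logic;
    we    : in  std_logic;
    waddr : in  slot_t;
    wdata : in  entry_t;
    raddr : in  slot_t;
    rdata : out entry_t
  );
end entity;

architecture rtl of tracker_ram is
  -- the array index is the integer subtype, the element a record
  type ram_t is array (slot_t) of entry_t;
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(waddr) <= wdata;
      end if;
      rdata <= ram(raddr);
    end if;
  end process;
end architecture;
```

None of this is possible when instantiating a primitive whose ports are fixed to std_logic_vector.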
The synthesis tools have started recognizing sub-word write enables in inferred memories, which allows inferring memories with wider data paths on the output than on the input; now they just need to recognize muxing on the output to allow inferring memories with wider inputs than outputs.
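The "wider output than input" case can be written along these lines: a 32-bit-wide array written one byte at a time, with the write address's two LSBs decoded into byte-lane enables. This is an illustrative template (names are made up); whether a given tool maps it onto a single asymmetric BRAM rather than discrete logic still varies:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity narrow_write_ram is
  port (
    clk   : in  std_logic;
    we    : in  std_logic;
    waddr : in  unsigned(9 downto 0);   -- byte address (1K bytes)
    din   : in  std_logic_vector(7 downto 0);
    raddr : in  unsigned(7 downto 0);   -- word address (256 words)
    dout  : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of narrow_write_ram is
  type ram_t is array (0 to 255) of std_logic_vector(31 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      for lane in 0 to 3 loop
        -- the 2 LSBs of the byte address pick the 8-bit lane to write
        if we = '1' and to_integer(waddr(1 downto 0)) = lane then
          ram(to_integer(waddr(9 downto 2)))
             (8*lane + 7 downto 8*lane) <= din;
        end if;
      end loop;
      dout <= ram(to_integer(raddr));   -- full 32-bit read
    end if;
  end process;
end architecture;
```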
The few times I needed an SDRAM interface, I had to write it myself. Everyone has example interface cores and such, but mostly they're tailored to talk to a generic microprocessor bus. If your logic doesn't work that way, then you spend more time fighting the generic back-end interface and it ends up being faster to write the whole module yourself.
It's designed to ease the job of the person writing the host driver, and, more important, the end user stuffing the card into his no-name PC. Making an interface generic enough to be usable over a wide range of disparate uses isn't trivial. USB and FireWire are the same way: complexity for the engineers allows simplicity for the users.
You'd rather we go back to jumpers or DIP switches for I/O card base address select? Why should the end user care about where the card lives in the address space?