"Gate" = ???


I have been asked, many times, how an ASIC "gate" compares to a FPGA "gate."

Now don't just groan and hit ignore; bear with me (if you have an opinion, or feel like commenting).

An ASIC "gate" (in my feeble mind) is 4 transistors, arranged as a NOR, or a NAND. From that basic element, you can make everything else, or at least represent the complexity of everything else.

Now take an FPGA. Look at the LUT. Take the 4-input LUT in Virtex-4. It is 16 memory cells. Is that 32 "gates"? What happens when you use it as a 16-bit LUTRAM, or an SRL16? Isn't that closer to 64 "gates"?

If I use the LUT as a 2-input NAND gate, then it is one "gate," and I have to use some LUTs as small gates, so I obviously can't count all my LUTs as 64 gates!
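To make the arithmetic above concrete, here is a minimal sketch that totals "gate equivalents" for a batch of LUTs under the different usage weights suggested in the text. The per-usage weights are the estimates from this discussion, not official numbers:

```python
# Rough "gate equivalents" for a Virtex-4 4-input LUT, depending on how
# it is used. All weights are illustrative assumptions from the thread:
# 1 gate when it implements a single NAND, ~32 for generic logic
# (16 memory cells), ~64 when used as LUTRAM/SRL16 storage.
LUT_GATE_EQUIV = {
    "2-input NAND": 1,
    "generic logic": 32,
    "LUTRAM / SRL16": 64,
}

def equivalent_gates(usage_counts):
    """Sum gate equivalents for a dict of {usage: number_of_LUTs}."""
    return sum(LUT_GATE_EQUIV[use] * n for use, n in usage_counts.items())

# A hypothetical design: 1000 LUTs as plain logic, 200 as LUTRAM,
# 50 spent on small 2-input gates.
print(equivalent_gates({"generic logic": 1000,
                        "LUTRAM / SRL16": 200,
                        "2-input NAND": 50}))  # -> 44850
```

The point of the sketch is that the total swings wildly with the usage mix, which is exactly why a single per-LUT gate number misleads.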

Take the DCM. How many "gates" would it take to do that?

The DSP48. How many "gates"?

So, I have always decided to stay away from any serious engineering evaluation of "gates" vs "gates" as being a no-win discussion. But is it? Is there no real comparison that can be made?

Obviously, people use FPGAs. And, they use ASICs. Sometimes they do one, and then replace it (oh my!) with the other. Is a 2 million "gate" ASIC equal to an XC4VLX25? Or an XC4VLX200? Or not even the largest FPGA we can make (XC5VLX330)?

I have seen customers "fit" their 2 million "gate" ASIC into an LX25, so from just that one customer's point of view, 2 million "gates" could be realized by 24,000 4-LUTs, 24,000 DFFs, 1.3 Mb of BRAM, 8 DCMs, and 48 DSP48 blocks. That is about 6 million configuration bits.

Is the answer a "range": that 2 million "gates" (depending on who is doing the conversion) can sometimes go into FPGAs that vary in size by 5:1?
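That "range" view can be sketched in a few lines. The bounds below (16 to 80 ASIC gates per 4-LUT) are hypothetical numbers chosen only to reproduce the 5:1 spread described above, not vendor figures:

```python
# Illustrative only: map an ASIC gate count to a band of FPGA sizes
# using an assumed range of conversion factors, instead of one
# marketing number. The 16..80 gates-per-LUT bounds are made up to
# match the roughly 5:1 device spread discussed in the thread.
GATES_PER_LUT_LOW = 16    # pessimistic conversion (assumption)
GATES_PER_LUT_HIGH = 80   # optimistic conversion (assumption)

def luts_needed(asic_gates):
    """Return (optimistic, pessimistic) 4-LUT counts for a gate count."""
    return (asic_gates // GATES_PER_LUT_HIGH,
            asic_gates // GATES_PER_LUT_LOW)

lo, hi = luts_needed(2_000_000)
print(lo, hi)  # 25000 125000 -- a 5:1 spread in required LUTs
```

With a single design landing anywhere in that band, quoting one device as "equal to 2 million gates" is more a bet on the designer than a property of the chip.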



Reply to
Austin Lesea

If someone is feeling bored, it should be possible to download all functioning packages from OpenCores, synthesize them for both ASIC and FPGA targets, and look at the difference.

That said, the number of gates an ASIC design will occupy will also differ depending on your optimization goal (area, power, or speed).


Reply to
Andreas Ehliar

As I understand it, for CMOS it is four transistors in any shape, but as you say, they can make a NAND or NOR gate.

For a single LUT used as RAM, I would probably say closer to 128, including the address decoders, but when implementing a large RAM it would average closer to 64.

I haven't written this for a while, so I can probably say it again. It probably makes more sense at the current sizes than it used to.

What you need is a scalable design (or designs), yet one that is reasonably representative of real designs. (I used to design systolic arrays, which scale fairly well. Most designs probably don't.)

Given a scalable design, you increase the scale to the biggest that will fit in a given device, compute a reasonably equivalent gate count (in terms of CMOS) and use that.

It might almost work in terms of a modern CPU. Set up the design such that the data path width is variable (I believe that is easy to do in Verilog). Not that anyone will ever want to use a 97-bit-wide processor, but as a measuring tool it should work.

(It might be that one should optimize for width divided by clock cycle time, to fairly penalize designs based on routing through slow pathways.)
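The measuring-tool idea above can be sketched as follows: scale a parameterized design until it no longer fits the device, then rate the device by the CMOS gate equivalent of that largest fit. All cost numbers here are made up for illustration:

```python
# Sketch of the proposed measuring tool: a datapath whose FPGA cost
# grows linearly with its width. LUTS_PER_BIT (resource cost per bit
# of width) and GATES_PER_BIT (CMOS gate count per bit of an
# equivalent hand design) are hypothetical constants.
LUTS_PER_BIT = 12
GATES_PER_BIT = 400

def max_width(device_luts):
    """Widest datapath variant that fits in the device's LUT budget."""
    return device_luts // LUTS_PER_BIT

def equivalent_gate_count(device_luts):
    """Gate rating derived from the biggest design that fits."""
    return max_width(device_luts) * GATES_PER_BIT

print(max_width(24_000))              # -> 2000 (bits of width)
print(equivalent_gate_count(24_000))  # -> 800000 "gates"
```

Dividing width by achieved clock period, as suggested above, would extend this to penalize designs that only fit by routing through slow pathways.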

-- glen

Reply to
glen herrmannsfeldt

Hi Austin, remembering the good old XC3000 or XC4000, which had only CLBs and IOBs, it was easy to give a rough estimate of a gate equivalent. The usefulness of this number could be put in question, though.

Today's FPGAs are different. Not only is there block RAM and other large macro functions (CPUs, DSP cells, multipliers...); even the LUTs can be used in more than one way.

People tend to reduce complex measures to simple numbers, e.g. clock speed for microprocessors or pixel counts for digital cameras. Both are about as useful (or not) as a gate count for FPGAs.

So, why not calculate these numbers to feed the marketing people, and publish the calculation algorithm for the adult customers who want to take a look behind the bare numbers?

The experienced customers don't need these numbers anymore and don't care anyway. They know how to find the right chip for their designs.

A "gate count range" would not be useful. The 100% increase in numbers (since you now have two numbers) would be too much for marketing and for simple-minded customers ("Two numbers? But which one is true for me???").

As you and mk already mentioned, there are some designs which prove any number wrong in both directions. But these designs are just peaks in the wide field of average applications, so they are not important to the average customer.

A simple solution for the publication of gate counts may look like this:

The XYZ1234 FPGA has a mean equivalent gate count of xxxx k-gates.*


  • Equivalent gate count depends strongly on the user's application. Gate count calculation facts can be downloaded from
    formatting link

Best regards Eilert

Reply to

Austin, of all the people to kick over a beehive, I least expected you to be the one to do it :-). What did you go and do that for?

As you know, the answer is "it depends". It is heavily dependent on the application as well as on the skills of the FPGA designer. But then, I'm not telling you anything that you didn't know. I think it is fair to say it ranges from 10:1 to 1:10, depending on the design and on the respective skills of the designer(s) in the particular medium.

Reply to
Ray Andraka

Ray, and all who replied,

Thanks. I know, I know. I googled and found a long history of marketing gates, and a very amusing article which talked about "building system" gates (BS gates for short).

The reason? It seems there is a tempest in another teapot.

SEU vulnerability is said to be 1000 FIT/million gates, or 1000 FIT/million 6T cells (pretty much the same 4 active transistors), at 90 nm, and even worse at 65 nm, if you do nothing to improve the hardness of your design (add capacitive loading, clever layout, SOI, extensive well taps, clever circuit techniques, etc.).
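The FIT figure above turns into concrete numbers quickly. A minimal sketch, using the 1000 FIT/million-gate rate from the text and the thread's hypothetical 2-million-gate design (1 FIT = 1 failure per 10^9 device-hours):

```python
# Convert a gate count into an unmitigated soft-error rate and MTBF,
# using the 1000 FIT per million gates figure quoted for 90 nm.
# The 2-million-gate design size is the example used in this thread.
FIT_PER_MEGAGATE = 1000  # failures per 1e9 hours, per million gates

def design_fit(gates):
    """Total FIT for a design of the given gate count."""
    return gates / 1_000_000 * FIT_PER_MEGAGATE

def mtbf_hours(fit):
    """Mean time between upsets, in hours, for a given FIT rate."""
    return 1e9 / fit

fit = design_fit(2_000_000)
print(fit)              # -> 2000.0 FIT
print(mtbf_hours(fit))  # -> 500000.0 hours, roughly 57 years per part
```

One upset every ~57 years sounds harmless per part, but across a fleet of 10,000 deployed units that is an upset somewhere every couple of days, which is why the mitigation techniques listed above matter.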

Since the ASIC standard cell flow has no tools to predict single event effects, like upset rate, and we only have papers from vendors (Fujitsu, Sony, TI, Intel, etc.), it is hard to say just what the soft error failure rate is for ASICs, other than that it is getting worse as they move to smaller geometries (don't take my word for it, go look it up).

Now, over here in FPGA land, we have been working for almost 7 years to make our upset rate smaller with each generation, and testing the chips to prove it (Rosetta).

What's the deal? Well, we just started doodling on a piece of paper, and found that using a 90 nm standard-cell ASIC (or any cell-based ASIC flow) is 20 to 50 times MORE likely to fail than our 90 nm FPGAs. Wow. Who would have thunk it.

The 20 to 50 is assuming the "ratios" of "gates" that I have seen and heard.

In the FPGA, of course, you can provide ECC for BRAM (so can ASICs, but only if you make that decision in advance), and you can provide duplication, or triplication, or duplication in time, etc. And, in the FPGA, there is a tool to do automatic correct-by-construction TMR (Xilinx TMR Tool is the only one in existence). Nothing like that exists as a toolkit for the ASIC designer; all mitigation has to be done the really hard way, in advance, by hand, and you hope it works when you test it (if you test it).


Reply to
Austin Lesea
