FPGA vs ASIC area

I was just wondering *roughly* how much area is required for a design targeted to an FPGA in comparison to a standard cell ASIC? I'm really looking for a ballpark figure here: 2x, 5x, 10x, 50x, 100x the area requirements? (I understand it's heavily dependent on the particular design.)

--
totallyadmin
www.totallychips.com - VHDL, Verilog & General Hardware Design discussion Forum

Reply to
totallyadmin

: I was just wondering *roughly* how much area is required for a design
: targeted to an FPGA in comparison to a standard cell ASIC? I'm really
: looking for a ballpark figure here: 2x, 5x, 10x, 50x, 100x the area
: requirements? (I understand it's heavily dependent on the particular
: design.)

And on technology...

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
Reply to
Uwe Bonnes

This seemingly simple question does not have a simple answer. There are too many variables. Modern FPGAs are leaping ahead of ASICs in the use of the tightest technology (90 nm today). Some FPGA structures are inherently as efficient as their ASIC counterparts (block RAM, I/O, transceivers, hard microprocessors). In other respects FPGAs can be far less efficient in their silicon use. But FPGAs benefit from a very regular and tight chip layout, and they are manufactured in the multi-millions (as opposed to most ASICs). And finally: silicon is cheap; silicon area is not decisive, the total manufacturing and distribution cost is! Don't count the square microns, count the dollars.

Peter Alfke

Reply to
Peter Alfke

Another large hunk of chip cost is testing. And let's face it, testing a large, complex FPGA takes time on an expensive tester. An ASIC that would fit on an FPGA will be a lot cheaper to test. Of course, Xilinx has that program that tests FPGAs only against the user's test vectors, so I guess that can help limit that part of the total cost.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman

(snip regarding FPGA vs. ASIC silicon usage)

What I hear is that even more important with current technology is NRE cost, such as masks. Stories circulate of mask sets in the million-dollar range.

-- glen

Reply to
glen herrmannsfeldt

I doubt that very much. I am convinced that FPGA testing is simpler and cheaper than ASIC testing. The secret is in the reconfigurability. We do not just apply external test vectors. We reconfigure tens or hundreds of times, and we have a lot of self-test engines that run in parallel inside the chip. And we can afford to spend a lot of engineering on our generic test methodologies, since they are amortized across >1 billion dollars in annual sales. ASICs have to develop new test methods for each design.

But: what was the original question really about? I think it was a meaningless "cute" question.

Peter Alfke

Reply to
Peter Alfke

Here are some numbers. ASICs are only for extreme designs: extreme volume, speed, size, or low power. Cost of a mask set for different technologies:

250 nm: $100k
180 nm: $300k
130 nm: $800k
90 nm: $1,200k
65 nm: $2,000k

plus design, verification, and risk.
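(To put those mask prices in per-unit terms -- my own illustrative arithmetic, using only the 90 nm figure above: a $1,200k mask set amortized over 10,000 units adds $120 to every unit; over 1,000,000 units it adds only $1.20. That is why volume dominates the decision.)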

We in the FPGA business really know the price of ASICs, for (think about it) we really design and produce circuits as if they were ASICs. Our saving grace is that we sell them in large numbers to many customers.

Peter Alfke

Reply to
Peter Alfke

This is very dependent on the design (and process). One good question is also: can you achieve the needed performance with FPGAs without a huge effort in optimising the design? FPGAs look very promising in data sheets, but sometimes the reality is completely different.

I have one good example I have seen: one big 0.13u design was scaled down to 1/4 size, with 1/4 the clock frequency, to test the ASIC. This already needed six V2 6000 chips, and even the 1/4 clock frequency was very difficult to achieve.

Some structures seem to be very hard for FPGAs. For example, complex muxing with normal synthesis can produce huge structures in FPGAs. One could optimise those with hand-constructed LUT structures, but that is insane: the code is no longer portable across chip families or to ASICs, and the time and effort to do things like that is huge.
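To see why synthesis blows up on wide muxes, here is a back-of-envelope model (a sketch only, assuming plain 4-input LUTs and ignoring dedicated mux resources like F5/F6 muxes that a real tool might use):

# Rough LUT cost of an N:1 mux built as a tree of 2:1 muxes.
# Each 2:1 mux (2 data inputs + 1 select = 3 inputs) fits in
# one 4-input LUT, so the tree needs about N-1 LUTs.
def mux_luts(n_inputs):
    return n_inputs - 1

for n in (4, 16, 64, 256):
    print(f"{n:3d}:1 mux -> ~{mux_luts(n)} LUTs")

The point is only the scaling: wide muxes grow linearly in LUTs, and the routing needed to feed all those inputs is often the real killer.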

Also, power consumption is one real problem. If you built a product with 10+ of the biggest FPGAs, the power consumption would be huge (and so would the PCB area). And if the communication needs between the chips are large, the PCB is also going to be full of signals and very hard to design.

Just food for thought. There are places for FPGAs, and sometimes an ASIC is the only sane alternative for a sellable product. Prototypes are completely different; you don't need profit from them :)

--Kim

Reply to
Kim Enkovaara

Hmm. Take those figures with a pinch of salt. At the larger geometries, you can definitely get much cheaper than that.

Cheers, Jon

Reply to
Jon Beniston

Really? The same techniques have been used on all the ASICs I've worked on: scan test and RAM BIST. With ATPG software it is pretty easy to do as well.

Cheers, Jon

Reply to
Jon Beniston

And we in the ASIC business really know the price of FPGAs...:-)

Given that an ASIC is by definition a close fit to the application and an FPGA is a less good fit (but possibly in a more advanced technology), the answer to the original question (with lots of caveats) is probably around a 5x-10x difference in silicon area (and power consumption!), and probably around a 2x-5x difference in unit price because of the undeniable economies of scale for FPGAs.

So it's simple -- developing an ASIC has higher cost/risk/timescale and the result is less flexible but with higher performance and lower power -- and cheaper *if* you buy enough of them for the lower unit price to make up for the higher NRE costs. This is engineering, not marketing.
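To put that "if you buy enough of them" in numbers, here is a minimal break-even sketch (every figure below is invented for illustration; nobody in this thread has quoted real unit prices):

# Break-even volume: the extra ASIC NRE must be recovered
# through the lower ASIC unit price.
asic_nre  = 2_000_000   # assumed: masks + design + verification
fpga_nre  =   100_000   # assumed: tools + board spins
asic_unit = 15.0        # assumed unit prices, roughly matching
fpga_unit = 60.0        # the 2x-5x gap mentioned above

breakeven = (asic_nre - fpga_nre) / (fpga_unit - asic_unit)
print(f"ASIC pays off above ~{breakeven:,.0f} units")  # ~42,222

Below that volume the FPGA is cheaper in total; above it the ASIC wins.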

As the cost of traditional ASIC development goes up with more advanced technologies, the break point for total revenue over lifetime at which a project can justify this is moving up -- nobody in either the FPGA or ASIC business can deny this, which is why the number of such design starts is falling.

To close the gap, FPGA companies are promoting strategies such as "hard FPGAs" (effectively metal-programmed FPGAs) to reduce power/area/unit cost (but increase NRE cost/time), and ASIC companies are promoting strategies such as "structured ASIC" (effectively metal-programmed ASICs) to reduce NRE cost/time (but increase power/area/unit price).

Since both often include hard-coded blocks such as multipliers/RAM/ROM, there's obviously some convergence going on here...

Ian Dedic
Chief Engineer, Mixed Signal Division
Fujitsu Microelectronics Europe

P.S. I'm only really arguing with Peter's use of "extreme" here!

P.P.S. On the technology issue, if your pockets were deep enough you could have had your very own 90 nm ASIC well before any 90 nm FPGAs emerged -- complete with your very own bugs, of course...:-)

Reply to
Ian Dedic

Those figures are for pure ASICs; what are the costs for structured ASICs?

Reply to
General Schvantzkoph

Jon,

Peter is right: ASIC testing rarely gets more than 95% coverage. The best is about 98% coverage.

We can get arbitrarily high coverage (99.9%+) just by adding more patterns, at zero added silicon cost. ASICs cannot do that.

To get any better, they either have to add more logic for BIST (30%+ of a Pentium IV is BIST logic), which increases area and cost and decreases yield, or just be happy with the coverage of the scan chain (which is not all that good).

Each BIST or scan chain is unique, and software, test vectors, etc. must be developed each time anything is new or different.

FPGAs have 0% extra area for BIST (they are 100% BIST with a different bitstream!).

You must understand that the 405PPC, MGT, DCM, and other "hardened IP" are just like ASICs, so we already know everything there is to know about ASICs, their design, and their testing. In fact, Xilinx is the 3rd largest 'ASIC' manufacturer in the world (behind IBM and NEC -- Gartner/Dataquest 'ASIC/FPGA Vendor Ranking 2003').

FPGA vendors may be the last stronghold of full custom ASIC design left in the world. ASIC houses are mostly standard cell, or structured (basically same thing), with little or no full custom.

Our customers tell us that if they want to play with the latest and greatest technologies and designs (like 10 Gb/s MGTs), they need to use our FPGAs, because the ASIC cells are a generation behind.

Austin

Reply to
Austin Lesea

On Thu, 30 Sep 2004 13:23:31 -0700, Peter Alfke wrote:

Peter,

While all of those structures that you mention are as efficient as an ASIC's, and probably more efficient because Xilinx can afford to spend more to optimize them, the fact remains that most of the resources on an FPGA are unused in any particular design.

Some resources, like multipliers, are only used for certain types of applications. I've been using Xilinx FPGAs since the 3000 series, and I've never needed a multiplier once in all of those years. Some resources, like the PPCs, are useful, but no one needs as many as Xilinx puts on a chip. The Virtex2P has up to four PPCs; who needs four? Virtually every embedded system that I've worked on has one processor, generally an IBM 405 or equivalent, but none has ever had more than one. Xilinx put four on the Virtex2P because silicon guys made the decision, not system designers. Three out of four of the PPCs are just taking up space and burning power; they are useless.

Some resources are useful for every design, like block RAMs, but most of the time you don't use all of them. On Spartan3 there isn't a lot of block RAM available, so you tend to use most of the RAMs (although here you generally reserve several for ChipScope), but on the big Virtex (2, 2P, 4) class parts there is so much RAM that you are unlikely to need all of it.

And some resources are hugely underused by their very nature, i.e. the interconnect. Most of the area in the FPGA fabric is devoted to programmable interconnect, and only a few percent of the interconnect resources can be used in any particular design; that's the nature of the beast. If you have a mux in a crosspoint switch that has 8 inputs, 7 out of 8 inputs must go unused. There isn't any way to get around this in a programmable device: if you cut down on the interconnect resources, the part becomes unroutable. An ASIC that uses metal to make connections has a 50 to 1 advantage over an FPGA in this area.

The place where FPGAs win big is NRE; an FPGA design costs millions less than an ASIC design. Even if you take out the cost of the mask set there is a big difference, because you don't have to spend as much on verification for an FPGA design as for an ASIC. With an FPGA you can go into the lab with a design that's almost right, because fixing a bug is cheap; with an ASIC you have to have a design that's completely right before you build it, because you can't fix it later.

Clearly, if you are building a small number of systems, a few thousand or less, you want to use FPGAs. If you are building a huge number, hundreds of thousands or more, you want an ASIC. In between is a grey area that has to be evaluated for each system.

Reply to
General Schvantzkoph

GS,

Well, for the first mask set, those are the costs.

For subsequent masks for an individual metal layer, they can range from $10K (upper metal) on up to as much as $350K (poly).

So how many layers get changed to make the structure?

Austin

Reply to
Austin Lesea

(snip, someone wrote)

(snip)

I don't know about structured ASICs, but before there were FPGAs there were ordinary gate arrays. As a cheaper way to build an ASIC, companies would make arrays of transistors such that only the metalization layers (maybe one or two metal layers) needed to be added to build an ASIC. Gate arrays don't allow the variability of transistor size that other technologies allow, but the cost savings make a big difference. I believe that early SPARCs, among others, were built using gate array technology.

-- glen

Reply to
glen herrmannsfeldt

Do FPGAs typically have significant 'hidden' test structures on board? Did they in the past?

I'm pretty much Xilinx in my DNA, but as I recall Altera used to advertise a spare-column arrangement to improve yield, as in DRAM. I guess this would be more or less impossible with ASICs, and maybe impossible with modern FPGAs.

Reply to
Tim

Tim,

We do have hidden structures, and you might say they are here for test, but they are often there for other reasons.

Most commonly the designers want to see something, so we make it accessible (i.e. it is connectable, but we do not tell customers how to do it, and the software does not support it).

Spare columns are something else entirely: if a column is bad, then by a laser fuse (blown at test time on the tester) the bad column can be replaced by a good one.

Altera pioneered this usage in their products and has many patents on it. As far as it goes, it adds slightly to the area and provides higher yields (depending on the process yield -- see below). The issue with it is that the timing models have to account for the worst-case repair, and variability in timing may result if you do not have enough slack in your design (i.e. you cut it too close).

Now granted, any marginal design can cause grief, so the claim of variable behavior is really just FUD. The claim that the worst-case numbers have to be the published numbers is real. But it is also not very useful -- who cares?

ASICs also use column replacement (in memory arrays), as they also want better yields. In fact, most memory products have some form of redundancy. The EM poly fuse was developed specifically for that purpose, so they did not have to zap laser fuse elements.

Laser fuse elements are pretty large, and special equipment is needed.

A poly fuse can be programmed by voltages and currents, so it could be programmed by a tester through the normal means of providing signals and voltages. Poly fuses are not all that reliable, so you really need two feeding a gate to be sure that you can get at least one to program. They are also fairly large.

Regardless, having a good NV RAM cell, or a fuse cell is beneficial to ASICs, FPGAs, memories, etc. as redundancy can be controlled if it is deemed to be beneficial.

Often, assumptions about yield are made to justify redundancy. That is one of the most dangerous gambles you can take, as foundries are highly motivated to improve their yields. A redundancy scheme that improves yield while intrinsic yield is poor turns out to decrease effective yield (because of the wasted space) once yields become good.
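Here is a toy Poisson defect model of that gamble (a sketch only; every parameter below is made up):

# Good dies per unit wafer area, with and without one spare column.
# Poisson model: P(a column is defect-free) = exp(-col_area * D).
import math

COLS, COL_AREA = 64, 0.02       # assumed: 64 needed columns, 0.02 cm^2 each

def good_per_cm2(d, spare):
    p = math.exp(-COL_AREA * d) # one column defect-free
    n = COLS + (1 if spare else 0)
    if spare:                   # die works if at most one column is bad
        y = p**n + n * (1 - p) * p**(n - 1)
    else:                       # every column must be good
        y = p**n
    return y / (n * COL_AREA)   # normalize by die area

for d in (1.0, 0.005):          # defects/cm^2: immature vs mature process
    print(f"D={d}: plain={good_per_cm2(d, False):.3f}, "
          f"spare={good_per_cm2(d, True):.3f}")

At D = 1.0 the spare column roughly doubles the good dies per wafer; at D = 0.005 the extra area makes it a net loss. That is exactly the crossover described above.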

Hope that helps.

Austin

Reply to
Austin Lesea

I don't know much about

Reply to
Jon Beniston

GS,

Well, I looked at this all day, and felt I had to say something.

See below,

Austin

--snip--

No argument here. In fact, somewhere between 3 and 7% of the memory cells are actually used to determine the user logic pattern. That is why the SEUPI (single event upset probability impact) factor can be from 10 to 100: the factor that expresses, on average, how many single event upsets from cosmic rays are needed to actually create a fault in the user pattern.
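(Sanity check, my own arithmetic: if only 5% of the configuration cells matter to a given design, then on average 1/0.05 = 20 random upsets occur for every one that actually disturbs the user pattern; the 3-7% range gives factors of roughly 14 to 33, squarely inside the quoted 10-100.)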

> Some resources, like multipliers, are only used for certain types of applications. ...

Not using all four? Wow. Are you not in tune with the times? If PPCs are free, why not use them? As it turns out, we also had our doubts, but leave it to the creativity of the engineers out there: if it is in the chip and can be used, it will be used. On a recent customer visit, the sales folks said, "Here is a customer that doesn't use any PPCs." We went in, and it turns out every design now uses every PPC. Amazing.

> Virtually every embedded system that I've worked on has one processor ...

Sure, that is because they are a big expense. If they are free, then guess what? Folks use them.

> Xilinx put four on the Virtex2P because silicon guys made the decision, not system designers. ...

Wrong. Folks with some vision and understanding made the decision.

> Three out of four of the PPCs are just taking up space and burning power ...

True, but it is remarkable how useful even BRAMs can be when you do not think they have any uses. See Peter's usage of a BRAM as a state machine, or other uses to replace logic with a big LUT.
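The idea is easy to sketch in software terms (a toy model only, not Peter's actual app note): the BRAM holds a table addressed by {current state, input} and reads out {next state, output}.

# Toy model of a BRAM-based FSM: the RAM is just a lookup table.
# Example (invented): detect two 1s in a row on a serial input.
rom = {}                                  # contents the BRAM would hold
for s in range(2):                        # state 1 = "last bit was 1"
    for i in range(2):
        next_s = 1 if i == 1 else 0
        out    = 1 if (s == 1 and i == 1) else 0
        rom[(s, i)] = (next_s, out)

state = 0
for bit in [0, 1, 1, 1, 0, 1]:
    state, out = rom[(state, bit)]
    print(f"in={bit} out={out}")          # fires on every repeated 1

In hardware that dict lookup is one synchronous BRAM read per clock, so the state machine logic itself costs roughly zero LUTs.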

> On Spartan3 there isn't a lot of block RAM available, so you tend to use most of the RAMs ...

True. I still would like to know when enough is enough. I think V2P has too much BRAM; we may have gone overboard. It would be the first time in history that anyone anywhere had too much memory. If true, it will make 66-point headlines in all of the press: "Too Much Memory on FPGA!"

> And some resources are hugely underused by their very nature, i.e. the interconnect. ...

Yes, that is obvious. It is also the reason why folks were absolutely certain that FPGAs would never get anywhere at all. Whoops! Were they wrong, or what?

> An ASIC that uses metal to make connections has a 50 to 1 advantage over an FPGA ...

Maybe 20:1. Don't get so excited. No one will ever cause ASICs to go away, but the number of ASIC design starts is diminishing steadily every year. Even structured ASIC starts are an insignificant factor.

> The place where FPGAs win big is NRE; an FPGA design costs millions less than an ASIC design. ...

Even though you should (spend money on verification), people do not.

> With an FPGA you can go into the lab with a design that's almost right ...

Again, fixing it as it ships is not a way to run a company. It is not cheap if you ship 10,000 units that you later have to retrofit (reprogram). But if it provides a competitive advantage, people will take advantage of it.

> Clearly, if you are building a small number of systems ... you want to use FPGAs. ...

There are systems being sold in the hundreds of thousands that do use FPGAs. The reasons? Sometimes their market is still evolving. Sometimes their product has incremental features that become available later in time (a steady source of revenue -- not a bad business model?). Sometimes they just don't have the time or energy to make an ASIC (too busy working on the next product, as taking time to save some pennies might mean they go out of business). Sometimes they have tried to make an ASIC and failed. This last case is now becoming more common. ASIC design is really tough (we know!!!), especially if you need ultra-deep sub-micron technology.

Reply to
Austin Lesea
