Why No Process Shrink on Prior FPGA Devices?

I'm wondering what intrinsic economic, technical, or "other" barriers have precluded FPGA device vendors from taking this step. In other words, why are there no advertised, periodic refreshes of older-generation FPGA devices?

In the microprocessor world, many vendors have established a long and successful history of developing a pin-compatible product roadmap for customers. For the most part, these steps have allowed customers to reap periodic technology updates without having to perform major re-work on their printed circuit card designs or underlying software.

On the Xilinx side of the fence there appears to be no such parallel. Take, for example, Virtex-II Pro. This has been a proven workhorse for many of our designs. It takes quite a bit of time to truly understand and harness all of the capabilities and features offered by a platform device like this. After making the investment to develop IP and hardware targeted at this technology, is it unreasonable to expect a forward-looking roadmap that incorporates modest updates to the silicon? A step that doesn't require a full-blown jump to a new FPGA device family and subsequent re-work of the portfolio of hardware and, very often, the related FPGA IP?

Sure, devices like Virtex-5 offer capabilities that will be true enablers for many customers (and for us at times as well). But why not apply a 90 or 65 nm process shrink to V2-Pro, provide modest speed bumps to the MGTs, along with minor refinements to the hardware multipliers? Maybe toss in a PLL for those looking to recover clocks embedded in the MGT data stream, etc. And make the resulting devices 100% pin and code compatible with prior generations.

Perhaps I'm off in the weeds. But, in our case, the ability to count on continued refinement and updates of a pin-compatible product like V2-Pro would result in more orders of Xilinx silicon, not fewer.

The absence of such refreshes in the FPGA world leads me to believe that I must be naive, so I am trying to understand where the logic is failing. It's just that there are times I wish the FPGA vendors could more closely parallel what the folks in the DSP and microprocessor world do ...

Reply to
tweed_deluxe

tweed_deluxe wrote:

it's just virtually impossible.

if you make a process shrink on some MCU, for example, then after the shrink the datasheet can remain almost the same, with minor changes.

but if we were to technology-shrink some FPGA family, the amount of work to be done for new 'characterization' of the silicon is enormous.

and if you know the mask set pricing then you can easily understand that this is not an option for any FPGA vendor.

some pin compatibility could be achieved between families, maybe, but if you want to have V2Pro in 'shrunk' technology (e.g. cheaper) then it won't happen.

Antti


Reply to
Antti

So let me conclude this to make sure I understand:

  1. You cannot shrink FPGAs because when you shrink them, you have to update all the design PAR files to match the new timing.
  2. Characterizing the timing on these internal lines is a pain in the butt.
  3. Hence, nobody wants to invest that money when they could be spending their time getting the timing right on their latest designs.

Sound right?

Reply to
Brannon


Brannon,

That, and more.

Back when going from 1u, to .8u, to .65u, etc. was as simple as just making a mask where everything was smaller, shrinking was just good business. Cheaper parts, maybe even faster parts, same functionality.

But now, making something smaller is a complete re-design, with all circuits getting completely re-simulated, and redone. And finally the layout has changed such that a plain shrink would violate all the design rules.

Basically, not an option anymore.

The last shrink we did was 0.18u to 0.15u in Spartan 2E for cost reasons (years ago). It involved a lot of work, but just slightly less than a completely new product, so it made sense.
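To make the cost-reduction motivation concrete, here is a back-of-envelope sketch (my own illustrative arithmetic, not Xilinx data): a linear shrink from 0.18u to 0.15u scales die area by the square of the ratio, so roughly 44% more dice fit on the same wafer.

```python
# Back-of-envelope die-shrink economics (illustrative numbers only,
# not from any vendor): a linear shrink scales die area quadratically,
# so dice-per-wafer grows by the inverse of the area ratio.

old_node_um = 0.18
new_node_um = 0.15

# New die area as a fraction of the old die area.
area_ratio = (new_node_um / old_node_um) ** 2

# Approximate gain in dice per wafer (ignoring edge effects and yield).
dice_gain = 1.0 / area_ratio

print(f"die area shrinks to {area_ratio:.1%} of the original")
print(f"~{dice_gain:.2f}x more dice per wafer")
```

That ~1.44x more dice per wafer is what makes a shrink attractive when the engineering cost is modest; the point above is that at modern nodes the engineering cost is no longer modest.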


Reply to
Austin Lesea

Reasonable question

The FPGA market is not growing all that quickly, so the funds are not available for this.

You will also find that the design life of FPGA products is shorter than that of DSPs/microprocessors; plus, shrunk older parts would cannibalize sales of the 'hot new' devices, as well as confuse designers.

Sometimes there are physical barriers, like changes to flip chip and whole-die bonding, that mandate BGA. There, backward compatibility has to go - and that's the key reason for doing this.

All those factors mean this is unlikely to happen.

What they CAN do is try to keep ball-out compatible over a couple of generations, but I'm not sure even that relatively simple effort is pushed too hard?

-jg

Reply to
Jim Granville

They actually kinda do die shrinks, but they give the new die a new name and alter a few other things. For example, a Virtex 2 becomes a Spartan 3; you get the idea...

For each step in process technology they make the trade-offs that make sense for those geometries (e.g. bigger memory). They first release a high-priced whiz-bang part with one name, then follow up with a lower-priced, smaller die using the same process and give it a different name.


Reply to
kayrock66

No more die shrinks, since any smaller process needs a different supply voltage, and thus yields an incompatible part.


Reply to
Peter Alfke

Thanks for the replies folks, I appreciate it.

These are answers I suspected but wanted to hear from the horse's mouth.

I think the second paragraph of my post is the crux of what I was getting at, and Jim's comments are particularly germane. Is maintaining pinout compatibility across a few device generations a poor business case as well? It would appear to be so ...

I understand the "Porsche" design philosophy and the apparent need to remain on the bleeding edge (with the silicon) in order to create and/or sustain a position of market leadership. It's one particular facet of a business model and, sometimes, it makes for wonderful magazine advertisements. For now, it seems that having the biggest, baddest FPGA is a prerequisite for making the most money. (Although placing and routing a fully loaded V4 LX200 on a Windows box is an exercise in extreme patience :) )

I realize that these automobile analogies can be taken far out of context. But there are times when I can't help but desire a little more "Toyota" and a little less "Porsche" from the big FPGA outfits.

The rapid evolution of the silicon and the underlying changes in FPGA feature sets do impose challenges (i.e. consequences). The need to significantly re-work existing hardware just to get a modest tech refresh is self-evident .... and a rubbing point for ordinary average Joes like me.

However, the stresses placed on the tool-chain developers cranking out ISE, EDK, SysGen, and related IP must be formidable. It probably also makes things interesting for the FAE staff and support services. The latest "gee-whiz" device will (and does) pre-empt sorely needed improvements to the design tools, as well as refined integration and interplay between them. Integration that truly renders platform FPGA design fluid, user-friendly, bug-free, and productive. I'm not saying that Xilinx hasn't made key strides in merging the EDK, ISE, DSP, and other flows. It is the basis for many shining moments in our shop. But we've been users since day 1, and it's got a long, long way to go before my co-workers and I get through a day without muttering a four-letter word :).

Maybe all of this banter is/was simply about wondering where to put the eggs in the FPGA basket. Asking if there is possibly more money to be made by offering a "lesser device" or a "more compromised design approach" that frees up investment in other areas, yielding visibly superior tools, more FPGA IP, a quantum leap in productivity, ease of upgrading to future silicon devices, etc.

Regards,

Chris

Reply to
tweed_deluxe


Managing product costs matters as well. Heavy re-engineering costs, new regulatory certifications, multiple stocked SKUs for warranty replacement, cross-version updates because of component changes, and a myriad of similar problems make product life management a nightmare in the fast-moving FPGA world. Plain parts-cost reduction, combined with fewer product-rev costs, makes a whole lot of sense ... and that's the basic thrust of the OP's arguments.

To sell Xilinx parts to an end-user market, it's not just mask costs that affect volume. The ripple changes down the customer chains are many more real dollars than high mask costs .... just in regulatory recertification, build, and life-management costs.

Reply to
fpga_toys

pin compatibility is just customer support. how about: one pin high implies a self-program from a small hardwired rom, which gets enough of the chip off the ground to work as a programmer for itself and others. use some of that extra space :-)

internally they don't have to be the same, just roughly the same, as i'm sure there will be extra logic area.

or how about a single-sided io series, with 2 edges of io for the corners; then a scale-down is just more logic mapped to fewer pins, and extra die copies per cut chip.

it just needs an interface mapping layer (i.e. new standard-size pads to old shrunk-size pads (hyper buffers? or a capacitive resource?)).

and could someone put some analog low-power fast comparators on, please??

cheers

jacko

a 24 blue block CPU element (16 bit)

Reply to
jacko

: Higher performance requires radical innovation and real cleverness
: these days.
: Peter Alfke

Such as this?


JPL and Northrop Grumman built a 5k-gate 8-bit CPU running at 20 GHz by using superconducting logic on a chip. It needs helium-cycle cryogenics to hit 4.5 K, but on the other hand it doesn't generate much heat, being superconducting...

I'd have thought gate arrays would make an excellent tool for investigating the technology...

cds

Reply to
c d saunter

We have had that since the beginning, 20 years ago. It is called "Master Mode Configuration"

Peter Alfke, Xilinx

Reply to
Peter Alfke

Peter Alfke wrote:

no - I think this is more like one of my past "ideas for xilinx"

the FPGA has a built-in hardware loader for a __small__ rom. this rom contains the logic to implement the actual loader, be it compact flash or nand or whatever. easily doable. just make a small part of the FPGA come alive first, allowing the rest of the FPGA to be configured from the 'bootstrap ipcore'.

nobody is doing it - but without that, the RAM-based FPGA configuration solutions are still kinda PITA.

sure, as Xilinx is now bringing back the parallel flash solutions from XC2K into S3E and Virtex-5, it gets better, but the bootstrap idea would still be the kicker!
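The two-stage idea Antti describes can be sketched in plain Python as a toy model (all names and sizes here are invented for illustration; no real Xilinx mechanism works this way): a tiny hardwired loader configures a small "bootstrap ipcore" region first, and that core then fetches the full bitstream from whatever storage is attached.

```python
# Toy model of a two-stage FPGA bootstrap (invented names/sizes,
# purely illustrative). Stage 1 is a fixed-function loader that can
# only copy a small on-board ROM; stage 2 is the configured
# 'bootstrap ipcore', which understands real storage (CF, NAND, ...).

BOOT_ROM_SIZE = 512  # hypothetical small hardwired-loadable region


def hardwired_loader(boot_rom: bytes) -> bytes:
    """Stage 1: copy at most BOOT_ROM_SIZE bytes of the small ROM
    verbatim into the bootstrap region of the device."""
    return boot_rom[:BOOT_ROM_SIZE]


def bootstrap_core(storage_read, bitstream_len: int) -> bytes:
    """Stage 2: the bootstrap ipcore reads the full bitstream from
    arbitrary storage via a byte-read callback."""
    return bytes(storage_read(addr) for addr in range(bitstream_len))
```

The design point is that only stage 1 needs to be baked into silicon; stage 2 can be updated to support any storage device, which is exactly why the scheme is flexible.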

Antti

Reply to
Antti

This is way off-topic, but I would like to expand on Antti's comment.

I'm planning on using my S3E sample pack board in this way. The FPGA will be configured from the end of memory, while the software for the embedded processor runs from the beginning of memory. The supplied FLASH memory can easily handle both the configuration and the program. The nice part of this arrangement is that I don't need to waste resources on an internal BRAM-based ROM in my design, and can use all the BRAMs as RAM.
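The split-flash layout described above can be sketched as a small image-builder script (the flash size and helper name are hypothetical, chosen only to illustrate the layout): CPU program at offset 0, FPGA bitstream packed at the very end, erased flash (0xFF) in between.

```python
# Sketch of the combined-flash layout: program at the start of the
# flash, bitstream at the very end. FLASH_SIZE and build_flash_image
# are invented for illustration, not taken from any datasheet.

FLASH_SIZE = 4 * 1024 * 1024   # assumed 4 MiB parallel NOR flash
ERASED = 0xFF                  # erased NOR flash reads as all ones


def build_flash_image(program: bytes, bitstream: bytes) -> bytes:
    if len(program) + len(bitstream) > FLASH_SIZE:
        raise ValueError("program + bitstream exceed flash size")
    image = bytearray(bytes([ERASED]) * FLASH_SIZE)
    image[:len(program)] = program                    # software at offset 0
    image[FLASH_SIZE - len(bitstream):] = bitstream   # config at the end
    return bytes(image)
```

Keeping the two regions at opposite ends means either one can grow without the build having to relocate the other.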

It would be nice, however, if the ability to boot from a NOR flash could be expanded to NAND flash. Since many applications already include a large NAND flash, this would allow you to use higher-density memories while still dropping the separate configuration memory. I realize that normal NAND flash is a bit more complex to access, but there are now several vendors that supply NAND flash memories that automatically make the first page readable in a pseudo-microprocessor mode, and for the same reason - to allow a processor to bootstrap from flash without a separate BIOS ROM.

Even if there were a requirement to use a NAND flash that auto-accesses the first page, it would be a nice improvement.

Reply to
radarman

Hi cds, the JPL and Northrop Grumman project is amazing. But you showed a slide made in 2002. Four years have passed; what is the latest advance? Did they reach their goal, or did they not get enough financial support and the project was aborted?

Weng

Reply to
Weng Tianxiang

Weng,

One thing that I think has been ignored by this thread, and yet is probably the most important point, is that the semiconductor industry has a roadmap.

That roadmap defines exactly what will happen, for as far into the future as they are capable of either guessing, or hoping. Which is pretty far.

This is one of the reasons why the industry has been so successful: there is no risk (really). Everything from lithography, to wafers, to chemicals, gases, reaction chambers, implanters, metals, packages, has been set out for you. There is a "goal" of what is needed, and you can go and execute to that goal. Yes, occasionally they ask for something that can't be done (yet), like the Hi-K gate dielectric (unobtainium?). But generally, the fab industry and its ecosystem is a well "regulated" technology monopoly.

If the technology is completely defined, then there is no way to build anything that is not on the roadmap. Doing so, is doomed to failure.

There are many examples of this, by the way.

First comes CMOS, then comes DRAM, and finally comes flash. Even the sequence of arrivals of the flavors of each technology node is completely pre-ordained.

To even suggest that you would like to have a different thickness of a single metal layer is completely heretical: it may happen once, but since it isn't in the roadmap, it will not happen again.

Even those who own their own fabs are so constrained. They must buy their equipment from the same people that are supplying the "roadmap." Thus, even if you want to do something "out of the box," you are unable to find the equipment to do it. Or a "best known method."

As the dimensions crash into quantum mechanics limitations, it will be interesting to see if the roadmap diverges, or if every "backroad" is just as highly constrained as it is today.

My bet is that the roadmap will be with us for a long time, as it is a proven method to nurture, supply, and execute, in semiconductors.

Austin

Reply to
Austin Lesea

Sounds like conspiracy theory :). I'd guess the driving force is the market. Technology avenues are always opening up. Which ones get pursued? Only the ones that have a good chance of getting widely adopted. A technology or fab process that is only applicable to one very specific problem...probably won't become mainstream.

Xilinx probably doesn't do its own fabrication, which means they need to work within the system.

BTW what's next on the roadmap? What stocks should I invest in now? :)

-Dave

--
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Reply to
David Ashley

hi

i was thinking: a hardwired reset of the device sets it up as an i2c microprocessor, which serial-loads from something like a 256Kbit (or larger) 24AA256/24LC256/24FC256 EEPROM.

after the load, the circuit is clocked in to configure the fpga, and it provides an automatic on-chip i2c interface. that, along with a few fast comparator inputs, some RAM blocks, LUTs and a few mul blocks, would make a nice two-chip solution for many applications.

having a manufacturer-specific load-up chip with a resulting larger area may not be good for an efficient bootstrap, and you lose the option to have user code and data in the EEPROM, and a high-level macro specification of the logic interconnect.

having such a low cost standard boot would be a boon for fpga demand.

cheers.

p.s. don't forget the electron bolus on chip which extracts power potential from inward-spiral Coriolis acceleration of high-mobility electrons in an n-type spiral, using the substrate zener effect for voltage stabilization. (They work better when smaller)

Reply to
jacko
