FPGA Devices' stability and process parameters

I found the discussion "Why No Process Shrink On Prior FPGA Devices" interesting and would like to add a few words regarding the operating stability of devices with shrunk geometries:

Being a consulting engineer, I work for various companies and in different fields of application, and thus often come across stability problems with modern devices like RAM, FPGAs and MCUs: one can observe that smaller process geometries quickly lead to a lower tolerance against particle and radiation influences, causing e.g. EMC problems. Therefore, some companies have to spend much time and money searching for devices that are resistant enough to meet their requirements. In doing so, "older" device families with a larger geometry are sometimes preferred!

Do we run into problems as technology keeps shrinking and older devices (here, perhaps FPGAs) are finally removed from the market?

Reply to
alterauser

alterauser,

On the contrary, the smaller geometries also result in a smaller cross section, and a lower probability of upset.

The "problem" is that the device size may shrink, but then people put on more devices!

Thus, the susceptibility to upset MUST, by design, be kept growing more slowly than the memory or logic size grows (there are ways beyond geometry alone to lessen the cross section, and designers need to use them now).

For example, at 0.15u you have ~365 FIT/Mb for the configuration memory (real numbers, by the way, for the Virtex-II xc2v6000), and the device is ~20 Mb, which makes for a total of 365 x 20 = 7300 failures per billion device-hours (really upsets -- not every upset causes a "failure", so it is better than that; see all the material we have on SEUs on our website).

Then, at 90 nm, we have ~50 FIT/Mb for the configuration memory (again, a real number), but we now have 60 Mb for the largest devices (xc4vfx140 or xc4vlx200). That is 50 x 60 = 3000 FIT/device.
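To make the arithmetic concrete, here is a minimal Python sketch (my own illustration using the per-Mb figures quoted above, not an official Xilinx calculator) that scales a per-Mb FIT rate to a per-device upset rate and the mean time between upsets it implies:

def device_fit(fit_per_mb, config_mbits):
    # FIT = upsets per 10^9 device-hours; scale by configuration memory size
    return fit_per_mb * config_mbits

def mtbu_years(fit):
    # mean time between upsets implied by a given FIT value
    hours = 1e9 / fit
    return hours / (24 * 365)

# ~0.15u Virtex-II xc2v6000: ~365 FIT/Mb, ~20 Mb of configuration memory
print(device_fit(365, 20), "FIT,", round(mtbu_years(device_fit(365, 20)), 1), "years between upsets")
# 90 nm Virtex-4 xc4vlx200: ~50 FIT/Mb, ~60 Mb of configuration memory
print(device_fit(50, 60), "FIT,", round(mtbu_years(device_fit(50, 60)), 1), "years between upsets")

That works out to roughly 16 years between upsets for the 20 Mb / 365 FIT/Mb case and about 38 years for the 60 Mb / 50 FIT/Mb case.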

Bottom line, we are "winning" the race to reduce soft failures due to cosmic ray upsets faster than the technology allows more density. This is the Xilinx pledge: we MUST get better faster than the shrink allows us to get worse.

And, if your problem is solved by a 2V2000, a 2VP20, or a 4VLX25, then your actual failure rate per device is getting much smaller, as these devices are all about the same density. Virtex-4 is ~6 to 8 times better than Virtex-II in soft error failure rate (fewer failures).
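A quick check of that claim with the per-Mb numbers quoted above (again just my sketch, with a hypothetical design that needs the same amount of configuration memory on either family):

fit_per_mb_v2 = 365   # ~0.15u Virtex-II figure quoted above
fit_per_mb_v4 = 50    # 90 nm Virtex-4 figure quoted above
design_mbits = 20     # hypothetical design size, same on both families

fit_v2 = fit_per_mb_v2 * design_mbits   # 7300 FIT/device
fit_v4 = fit_per_mb_v4 * design_mbits   # 1000 FIT/device
print("improvement factor:", fit_v2 / fit_v4)   # ~7.3x, within the quoted 6 to 8 times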

Then, as a totally different subject, there is "Total Dose" which is only a concern to devices that experience ionizing radiation all the time (medical equipment, nuclear reactors, space, etc.).

Total dose is an area I really don't want to talk about.

The total dose issue is a bit touchy, as the US government realizes that most (perhaps all) 90 nm technology is very tolerant of total dose, and they need to change their rules in order to deal with reality.

I suggest you do some research, and follow up with RADECS, SELSE, MAPLD, and other conferences, and the papers that are being published.

EMI/RFI was not, and is not, an issue in terms of affecting the devices, as far as I have been able to learn. We have our FPGAs in MRI machines and railroad engine controls: both extreme magnetic and electric field applications. Just don't create a loop in the external pcb wiring -- that is where the problem is -- and the device takes care of itself (it is too small!).

The rest of the industry is having a hard time keeping their soft failure rate below 1,000 FIT/Mb, so that is why we are so willing to publish our results.
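For scale, here is what that industry figure would mean for a 60 Mb part next to the ~50 FIT/Mb quoted above (my comparison, using only the numbers already given):

config_mbits = 60
print("at 1000 FIT/Mb:", 1000 * config_mbits, "FIT/device")   # 60,000 FIT
print("at   50 FIT/Mb:",   50 * config_mbits, "FIT/device")   #  3,000 FIT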

Austin (virtexicdesigner)

Reply to
Austin Lesea
