Reliability CPLD/FPGA vs Microcontroller

I am doing some research on the reliability of microcontroller software in comparison to hardware description language designs for PLDs (CPLD/FPGA).

Another interesting point is whether one type of hardware has general reliability benefits, e.g. in an automotive environment.

I read about certification problems when an SRAM-based FPGA has to be reprogrammed at every system start, and that flash- or fuse-based devices are preferable. I also read that (flash-based) CPLDs are in general more robust than FPGAs.

Can you confirm or refute this?

Thanks for your help.

Falk

Reply to
Falk Salewski

I expect that the statistics needed to prove anything will be hard to find.

Reliability is a system issue. The concern is performance degradation with time. Stressors include vibration, thermal cycling, power supply variations, and variations in the sequence and phasing of asynchronous inputs. Both FPGAs and microcontrollers can fail in the face of any of these.

Certification requirements are one of many system specifications. Most systems use flash of some sort to initialize RAM of some sort in the face of the stressors listed above.

You need facts, not opinions, to confirm such a statement.

-- Mike Treseler

Reply to
Mike Treseler

Not sure what Xilinx have to offer, but Altera Cyclone and Stratix FPGAs certainly have a built-in CRC check which runs continuously from an internal clock at up to 100 MHz. When the FPGA boots, it is loaded with its configuration together with a configuration CRC calculated externally. The FPGA continuously calculates a CRC over its internal configuration RAM contents, and if it ever disagrees with the pre-programmed CRC (I have never seen it happen), the chip reboots. On a Stratix part I am using on a military project the reboot takes
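
The principle is just a running CRC over the configuration RAM compared against a reference value stored at programming time. Purely as an illustration of the idea (a plain CRC-32 in C; the real devices do this in dedicated logic, and the polynomial and frame layout here are not Altera's):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative only: CRC-32 over a configuration image, compared against
 * a reference value computed when the image was generated. */
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    uint8_t image[256] = { 0x5A };            /* stand-in for config RAM      */
    uint32_t golden = crc32(image, sizeof image);

    image[17] ^= 0x04;                        /* simulate a single-bit upset  */
    if (crc32(image, sizeof image) != golden)
        puts("CRC mismatch -> trigger reconfiguration");
    return 0;
}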

Reply to
Slurp

I know one instance where your last sentence is false. I use both Xilinx CPLDs and FPGAs in a particular project, all 5 V parts, so XC95xx and original Spartan. After having some devices popped in the field, I dug through all the data at Xilinx until I found a well-hidden document on ESD tolerance. I forget the exact numbers, but the ESD withstand on the XC95xx CPLD parts was WOEFULLY low, and half that of the FPGA. As these parts were in parallel on the same data bus, field experience pretty much agreed with the comparison, at least. Some improvements in the ground bonding have helped, but certain users still blow out the CPLDs on occasion. I've only had one or two Spartans get blown in that whole time.

Jon

Reply to
Jon Elson

Where I work, we aren't allowed to connect FPGA or CPLD pins directly to external connectors, except for on-board test points (like Mictor connectors). Everything goes through external buffers or registers. Yes, it does add latency, but it does protect hard-to-replace BGAs from damage.

Of course, I work on military hardware, and reliability is a major factor. While most things are replaced at LRU (chassis) level, there are some systems where the customer is allowed to replace individual boards. Usually, this happens in a customer repair facility, and is done by military technicians, but still - it pays to go the extra mile.

The other factor is that every board costs so much that it is almost never thrown away, and is reworked instead. It is much simpler to replace a buffer chip than a BGA.

It is more expensive, but if you are worried about damaging boards with ESD or want to hot-slot safely, it's worth it.

BTW - we use SRAM-based FPGAs for everything except space applications. There, we use fusible-link devices from Actel or ASICs. A typical system will load dynamically over VME or PCI from a host controller, rather than from local configuration memories - but that really shouldn't be a factor. (We do it to simplify inventory issues where a board may be sold to different customers.)

We do occasionally need a PAL or CPLD to implement something that just needs to be off-chip. A good example is controlling the PCI/VME-based FPGA configuration process (specifically, we use them as SVF players). We generally use flash-based devices for that, since they generally only need to be updated once - and speed isn't usually a concern.
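
An SVF player in this role is essentially a small state machine that replays pre-recorded JTAG operations toward the target FPGA. Roughly, as a sketch of the control flow in C (the record format and the pin driver here are simplified and hypothetical, not a real SVF parser or any particular board's interface):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Stub pin driver: on a real CPLD this is dedicated logic, on a uC these
 * would be GPIO register writes plus a TCK pulse. Here it just prints. */
static void jtag_set(int tms, int tdi)
{
    printf("TMS=%d TDI=%d (pulse TCK)\n", tms, tdi);
}

/* A drastically simplified "SVF-like" record: shift nbits of TDI data
 * through the data register, then return to Run-Test/Idle. */
struct shift_rec {
    const uint8_t *tdi;    /* bitstream data, LSB first */
    size_t         nbits;
};

static void play_shift(const struct shift_rec *r)
{
    jtag_set(1, 0);                            /* Run-Test/Idle -> Select-DR   */
    jtag_set(0, 0);                            /* -> Capture-DR                */
    jtag_set(0, 0);                            /* -> Shift-DR                  */
    for (size_t i = 0; i < r->nbits; i++) {
        int bit  = (r->tdi[i / 8] >> (i % 8)) & 1;
        int last = (i + 1 == r->nbits);
        jtag_set(last, bit);                   /* TMS=1 on last bit: Exit1-DR  */
    }
    jtag_set(1, 0);                            /* -> Update-DR                 */
    jtag_set(0, 0);                            /* -> Run-Test/Idle             */
}

int main(void)
{
    const uint8_t data[] = { 0xA5, 0x0F };     /* hypothetical 16-bit payload  */
    struct shift_rec rec = { data, 16 };
    play_shift(&rec);
    return 0;
}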

As far as I can tell, the SRAM FPGAs have been working just fine across a very wide spectrum of environmental conditions for a long time. Their reliability is actually quite good.

Reply to
radarman

Thanks for your reply! We also had problems with CPLDs dying, probably due to excessive voltages (in a lab course with students). We are now using Spartan FPGAs in combination with bus switches as interface circuits and have had no problems since. However, if you are using many I/O lines, this additional protection needs some PCB space... which seems like an advantage for microcontrollers. We let students work with the Atmel ATmega16 and none of them died during the last year. And they did a lot to them...

Regards Falk

"radarman" schrieb im Newsbeitrag news: snipped-for-privacy@t31g2000cwb.googlegroups.com...

Reply to
Falk Salewski

Falk Salewski wrote:

This all depends on the type of errors you are talking about. To get an overall estimate will be really difficult.

E.g. in automotive applications a big issue is real-time constraint violations when many things happen at once. You can easily specify the timing of most hardware-implemented algorithms with a granularity of nanoseconds because there is real concurrency in the implementation. For a uC it is hard to get below tens of microseconds.
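
A back-of-the-envelope sketch of where the "tens of microseconds" comes from (all cycle counts below are assumed, illustrative values, not data for any particular controller):

#include <stdio.h>

/* Rough worst-case response time of an interrupt-driven software reaction.
 * Every number here is a made-up placeholder for illustration. */
int main(void)
{
    double f_cpu_hz      = 16e6;   /* e.g. a small 8-bit MCU clock            */
    double irq_entry_cyc = 20;     /* hardware latency + handler prologue     */
    double handler_cyc   = 80;     /* the response code itself                */
    double blocked_cyc   = 150;    /* longest interrupts-disabled section     */

    double best_us  = (irq_entry_cyc + handler_cyc) / f_cpu_hz * 1e6;
    double worst_us = (blocked_cyc + irq_entry_cyc + handler_cyc) / f_cpu_hz * 1e6;

    printf("response time: %.2f .. %.2f us (jitter %.2f us)\n",
           best_us, worst_us, worst_us - best_us);
    /* A hardware implementation reacts within a clock cycle or two of its
     * own clock, i.e. tens of nanoseconds, with essentially no jitter. */
    return 0;
}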

Also, error detection and correction on ALUs, buses and memory is just not available on commercial uCs, while you can easily implement it in your FPGA design. In theory a uC using all these techniques would be more reliable, but if you cannot buy one... (BTW: I talked to Bosch about that topic, and apparently their order volume is not big enough to have Motorola design such a uC for them.)
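
As an illustration of the kind of protection meant here, the textbook building block is a Hamming-style single-error-correcting code on the memory or bus path. A sketch in C (on an FPGA this would be a handful of LUTs in the read/write path; the code below shows only the principle, not any vendor's implementation):

#include <stdint.h>
#include <stdio.h>

/* Hamming(7,4): 4 data bits protected by 3 parity bits, correcting any
 * single bit flip in the codeword. */

static int cw_bit(uint8_t w, int pos) { return (w >> (pos - 1)) & 1; }

static uint8_t ham_encode(uint8_t nibble)      /* 4 data bits -> 7-bit codeword */
{
    int d1 = nibble & 1, d2 = (nibble >> 1) & 1;
    int d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    int p1 = d1 ^ d2 ^ d4;                     /* covers codeword bits 1,3,5,7  */
    int p2 = d1 ^ d3 ^ d4;                     /* covers codeword bits 2,3,6,7  */
    int p3 = d2 ^ d3 ^ d4;                     /* covers codeword bits 4,5,6,7  */
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6);
}

static uint8_t ham_decode(uint8_t cw)          /* corrects a single bit error   */
{
    int s1 = cw_bit(cw,1) ^ cw_bit(cw,3) ^ cw_bit(cw,5) ^ cw_bit(cw,7);
    int s2 = cw_bit(cw,2) ^ cw_bit(cw,3) ^ cw_bit(cw,6) ^ cw_bit(cw,7);
    int s3 = cw_bit(cw,4) ^ cw_bit(cw,5) ^ cw_bit(cw,6) ^ cw_bit(cw,7);
    int syndrome = s1 | s2 << 1 | s3 << 2;     /* 0 = clean, else faulty bit    */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));
    return (uint8_t)(cw_bit(cw,3) | cw_bit(cw,5) << 1 |
                     cw_bit(cw,6) << 2 | cw_bit(cw,7) << 3);
}

int main(void)
{
    for (uint8_t d = 0; d < 16; d++) {
        uint8_t cw = ham_encode(d) ^ (uint8_t)(1u << 4);   /* inject a bit error */
        if (ham_decode(cw) != d)
            puts("correction failed");
    }
    puts("all single-bit errors corrected");
    return 0;
}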

Formal model checking and property checking are becoming mainstream for hardware development but are hardly ever used for software development.

These are all factors in favor of FPGAs that are often not considered, but I am sure that you can come up with many reasons why uCs are more reliable (fewer transistors, for example).

Kolja Sulimma

Reply to
Kolja Sulimma

radarman wrote:

I thought that in military applications reliability is more important than cost. For standard buffers I would argue that you get a much higher failure rate with the buffers than without: you have three times the number of solder joints and many more parts, after all. Also, many buffer chips are less robust than FPGA pins; some don't even have protection diodes. Of course, if you use special ESD-protection buffers all this changes. But some passive protection at the FPGA pin might give you the same effect.
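
The arithmetic behind the "more parts, more solder joints" argument is plain series reliability: the failure rates of everything in the signal path add up. A rough sketch (all FIT values below are made-up placeholders, not real part data, and the ESD benefit of the buffer is deliberately ignored):

#include <stdio.h>

/* Series components: constant failure rates (FIT = failures per 1e9
 * device-hours) simply add. All numbers are illustrative assumptions. */
int main(void)
{
    double fit_fpga_pin  = 0.5;    /* assumed: one FPGA I/O driving the net   */
    double fit_buffer_ic = 5.0;    /* assumed: one external buffer device     */
    double fit_joint     = 0.1;    /* assumed: one solder joint               */
    int    extra_joints  = 20;     /* buffer package pins added to the path   */

    double direct   = fit_fpga_pin;
    double buffered = fit_fpga_pin + fit_buffer_ic + extra_joints * fit_joint;

    printf("direct drive: %.1f FIT\n", direct);
    printf("with buffer:  %.1f FIT\n", buffered);
    printf("ratio:        %.1fx higher failure rate with the buffer "
           "(ESD events not counted)\n", buffered / direct);
    return 0;
}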

With the right tools it is not really more complicated to replace a BGA than an SOIC: local IR heating, pulling the chip, cleaning the board, placing a new chip, local IR heating again. Cleaning takes longer because there are more pads, but that's about it. I doubt that the cost of replacing the BGA is more than 5% of the cost of isolating the defect.

Kolja Sulimma

Reply to
Kolja Sulimma

It is probable that the buffer, although offering more pins to cause faults (military boards will be X-rayed and each solder joint inspected, don't forget), offers a level of protection that FPGA pins can't. A typical "interface" buffer chip has higher drive strength and better ESD protection through larger geometry, and the "real" outside connections have ESD diodes and the proper interface for the conditions, including current limiting, voltage control, hot-plugging support, etc.

Simon


Reply to
Simon Peacock

Trust me, it is more complicated than that, but there are plenty of both legit and questionable reasons for going with external buffers.

For one, we are typically driving very long cable harnesses or large backplanes with lots of fan-out. While an FPGA pin might be able to do it, we are guaranteed performance with the external parts. There is also the fact that a technician can reasonably replace, or probe, a buffer chip - while a BGA repair requires a trip back to the factory. Then, there is debug and integration. Our integration and test cycles are already too short to allow for a two-week trip back to the factory for rework.

Also, even at just 5%, the buffers are cheaper.

Reply to
radarman

"Kolja Sulimma" schrieb im Newsbeitrag news:4448ed30$0$18265$ snipped-for-privacy@newsread2.arcor-online.net...

Thanks for your reply! I am also of the opinion that applications realizing hard real-time parallel functionality are easier to verify on a device allowing real parallelism. Possible integration of error detection and correction functionality in FPGAs is also a big plus, in my opinion. Finally, it seems that the question of MCU vs. FPGA reliability is, once again, application dependent.

Falk Salewski

Reply to
Falk Salewski

What are the allowed failure modes? All of them? That includes alpha particles, fast protons, thermal cycles, vibration, supply and signal issues, electric and magnetic fields, the lot. Plus, how failure-proof is the design? How does it handle unexpected values? While 90 nm technology is more sensitive in some respects, that doesn't mean an acre of 2N3055s doing the same job would be more reliable.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Kolja Sulimma wrote in news:4448ed30$0$18265$ snipped-for-privacy@newsread2.arcor-online.net :

"[..]

Formal model checking and property checking are becoming mainstream for hardware development but are hardly ever used for software development.

[..]"

There is a difference between that and: "[..] I am also of the opinion that applications realizing hard real-time parallel functionality are easier to verify on a device allowing real parallelism. [..]"

Of course implementing parallelism with real parallelism is easier, but verifying something, whether it is implemented with true parallelism or as interleaved sequential code, should take the same effort no matter the implementation: check whether the inputs and the outputs match.
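
That kind of black-box check can be sketched very simply: drive both implementations with the same stimuli and compare the outputs. For instance, in C, with two trivial stand-ins for a "parallel" and an "interleaved sequential" realisation of the same function:

#include <stdint.h>
#include <stdio.h>

/* Reference: a single combinational operation (what hardware computes in
 * one step). */
static uint8_t impl_parallel(uint8_t a, uint8_t b)   { return (uint8_t)(a ^ b); }

/* Stand-in for an interleaved sequential implementation: the same function
 * computed bit by bit in a loop. */
static uint8_t impl_sequential(uint8_t a, uint8_t b)
{
    uint8_t out = 0;
    for (int i = 0; i < 8; i++)
        out |= (uint8_t)((((a >> i) ^ (b >> i)) & 1) << i);
    return out;
}

int main(void)
{
    int mismatches = 0;
    /* Exhaustive stimulus over the full input space, output comparison only. */
    for (int a = 0; a < 256; a++)
        for (int b = 0; b < 256; b++)
            if (impl_parallel((uint8_t)a, (uint8_t)b) !=
                impl_sequential((uint8_t)a, (uint8_t)b))
                mismatches++;
    printf("%d mismatches over the full input space\n", mismatches);
    return 0;
}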

Reply to
Colin Paul Gloster

"Colin Paul Gloster" schrieb im Newsbeitrag news: snipped-for-privacy@docenti.ing.unipi.it...

I still believe that verifying parallel structures on a PLD is easier than on a CPU. Imagine a program that has to handle certain communication interfaces (CAN, RS232, ...) and has to measure some real-time signals at the same time. In the case of a PLD, these modules can be checked separately, since no dependencies arising from sharing a single CPU are present. In the case of a CPU-based system, these dependencies are crucial (in real-time systems) and a lot of test effort is spent examining them.

Reply to
Falk Salewski
