FPGAs as DSPs

I have heard that FPGAs can deliver much higher throughput, dollar for dollar, than a special-purpose DSP chip because of their parallelism. Does anyone here have experience with or pointers on this topic?

Reply to
lm317t


It depends. If the processing requirement can be met with a DSP microcontroller (like TI's family of DSP micros), then those microprocessor-based DSP engines are lower power and lower cost.

If the processing cannot be done by a microprocessor (they are too slow), then using the massively parallel capabilities of the FPGA is often the ONLY solution.

Also, being massively parallel and fast, the FPGA can sometimes process many different streams of lower-speed data and replace multiple DSP microprocessors, which also results in a better cost tradeoff. Often, the FPGA will save a lot of power by replacing hundreds of microprocessor DSP chips.

Austin

Reply to
austin

What about googling?


I guess an important difference between an FPGA and a DSP is programming. A DSP is more for the "procedural" software engineer; an FPGA is more for the "parallel" hardware/logic design engineer.

And yes, you may get many more MACs (multiply-accumulates per second) per dollar in an FPGA, but at a higher development effort. You don't write a procedural signal-processing routine; you design a parallel logic circuit and also need to care about timing. Although utilizing a modern DSP can also get difficult, and you then have to care a lot about pipelining.
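To make the contrast concrete, here is a minimal sketch (in C, with illustrative names, not from any vendor library) of the procedural FIR inner loop a DSP core would execute. An FPGA design would instead instantiate one multiplier per tap and pipeline the adder tree, producing a result every clock, which is where the MACs-per-dollar advantage comes from:

```c
#include <stddef.h>

/* Procedural FIR inner loop: NTAPS sequential multiply-accumulates
   (MACs) per output sample, as a DSP core would execute it.
   NTAPS and the function name are illustrative assumptions. */
#define NTAPS 4

float fir_sample(const float coeff[NTAPS], const float delay[NTAPS])
{
    float acc = 0.0f;
    for (size_t i = 0; i < NTAPS; i++)
        acc += coeff[i] * delay[i];   /* one MAC per tap, in sequence */
    return acc;
}
```

An FPGA version unrolls this loop into hardware: all NTAPS multiplies happen in parallel in the same clock cycle.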

And with a DSP you get lots of IP already on chip, like a DMA controller, an SDRAM interface, etc. With an FPGA you need to do much more than read the DMA doc and set up its registers to get an SDRAM/DMA interface that makes use of the FPGA's power, although some IP is available there too.

I guess dedicated DSP chips will specialize in niches where they are surrounded by application-specific mixed-signal circuits (like A/D) and specialized circuits (like flash) to give a low-cost system-on-chip solution. FPGAs with DSP capabilities will take over the high-performance, more general-purpose DSP market (for example, video processing) when the extra development effort pays off, or when IP and development tools enable it.

All this in the lower quantities that are not covered by ASICs or custom ICs.

Reply to
filter001

My MS thesis was based on FPGAs, which give very high throughput for compute-intensive applications. I designed double-precision floating-point division and square-root units which gave me throughputs of over 100 MFLOPS after extensive pipelining. The sequential version would run at a throughput of approximately 1 MFLOPS.
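That roughly 100x gap follows directly from pipelining: a fully pipelined unit accepts a new operand pair every clock, while a sequential divider ties the unit up for many cycles per result. A back-of-envelope model (clock rates and cycle counts here are illustrative assumptions, not figures from the thesis):

```c
/* Fully pipelined unit: one result per clock, so throughput in
   Mresults/s equals the clock rate in MHz. */
double pipelined_mflops(double clock_mhz)
{
    return clock_mhz;
}

/* Sequential (iterative) unit: blocked for cycles_per_result clocks
   between results, e.g. roughly one clock per mantissa bit for a
   simple divider. */
double sequential_mflops(double clock_mhz, int cycles_per_result)
{
    return clock_mhz / cycles_per_result;
}
```

With an assumed 100 MHz clock and ~55 cycles per sequential division, the model gives 100 MFLOPS pipelined versus under 2 MFLOPS sequential, in the same ballpark as the numbers above.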

It really depends on what your application is.

Reply to
FPGA

I did; I wanted to see if anyone here had experience in that field. I had to design fixed-point IIR and FIR filters in hardware for a DSP architecture class. The design was able to run at over 50 MHz.
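As a hypothetical software model of that kind of fixed-point hardware, here is a Q15 direct-form-I biquad IIR section in C. The coefficient format, structure, and names are my own assumptions for illustration, not details from the class project:

```c
#include <stdint.h>

/* Q15 direct-form-I biquad IIR section. Coefficients and state are
   16-bit; the accumulator is wide enough that five Q30 products
   cannot overflow. No saturation is applied in this sketch. */
typedef struct {
    int16_t b0, b1, b2, a1, a2;   /* Q15 coefficients */
    int16_t x1, x2, y1, y2;       /* delay line (previous x and y) */
} biquad_q15;

int16_t biquad_step(biquad_q15 *f, int16_t x)
{
    int64_t acc = (int64_t)f->b0 * x
                + (int64_t)f->b1 * f->x1
                + (int64_t)f->b2 * f->x2
                - (int64_t)f->a1 * f->y1
                - (int64_t)f->a2 * f->y2;
    int16_t y = (int16_t)(acc >> 15);   /* Q30 product back to Q15 */
    f->x2 = f->x1;  f->x1 = x;          /* shift the delay line */
    f->y2 = f->y1;  f->y1 = y;
    return y;
}
```

In an FPGA, the five multiplies and the adds would typically be done in parallel hardware every clock, which is how such a filter reaches 50+ MHz sample rates.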

With the performance-per-dollar gains, there must be a way to use some of this parallelism at a higher level that would be useful in DSP applications.

How about the ones at opencores.org? I have only used one of their cores, but it did its job.

I guess it really boils down to development tools and IP availability. There is all this parallelism available in an FPGA; it would be interesting to be able to use it. But I guess it's just easier to buy a Blackfin and use some of the free libraries.

There are quite a few niche markets where I think the dollar gains may be there, like professional audio and video processing, where the item sells for big $$ but the quantity is less than 2k units. I'd have to do some number crunching to see just how much.

Reply to
lm317t

Search under "systolic array".

Marco
________________________
Marc Reinig
UCO/Lick Observatory
Laboratory for Adaptive Optics

Reply to
Marc Reinig

lm317t wrote:
> DSP is more for the "procedural" Software Engineer.

You could start a new company and get into the reconfigurable computing / reconfigurable processor array market. Right now it lacks broader acceptance, maybe also because of the radically new "paradigm". Programming under this new "paradigm" is very different from how the average engineer has been taught (all his life) to solve technical problems, and it's not the "hierarchical/centralized" style of procedural thinking inherent in most of us.

Yes, but as long as the IP is not device-specific (optimized) enough, its feasibility may be questionable.

I guess the easy way is still what DSPs are about (today).

And if you think about a software tool that would assist with or automate the job of mapping a signal-processing routine onto an FPGA, you'll encounter some difficulties, though they are likely to become less significant with time and with available computing power and software.

runtime

An already complicated C compiler for a DSP will be, I guess, at least 100 times faster than compiling DSP code into a netlist, feeding that into the vendor-specific FPGA P&R tool, and then checking the timing results and iterating as needed. Debugging and simulation present similar difficulties, since you'd have to do them with a general-purpose logic simulator, possibly using more computing power than a dedicated DSP simulator.

algorithms / computing power

High-level DSP programming with FPGA targets is still missing widely accepted, suitable languages, software algorithms, and computing power, and, most important, enough people willing to aim their work at it.

And a minor problem is that you are bound to the vendor's proprietary P&R tools, and you'd need to embed them in such a tool.

For simpler DSP tasks and the "hardcore engineer", DSP on an FPGA is already everyday business.

Reply to
filter001

My experience is that for a given data rate, the FPGA solution will use about 20% of the power of a microprocessor solution. That is mainly because all of the control and unused-datapath overhead of the microprocessor is stripped out of the unrolled pipeline used in an FPGA. That assumes the FPGA fabric is being used efficiently, which means running closer to the top end of the clock envelope so that the smallest FPGA that can practically handle the task is used. That keeps the static dissipation from eating your power savings.

My general guideline to customers is that if it can be done with a single microprocessor, do it there because the parts are cheaper, the design tools are more mature, and the talent required to program them is far more plentiful and cheaper. When your process starts to exceed the capacity of a single microprocessor, the FPGA starts to become more attractive.

Reply to
Ray Andraka
