Floating point arithmetic on FPGA

Hi, I'm doing a project to implement single and double precision floating point arithmetic units on an Altera FPGA. Can someone please point me to a link where I can find VHDL code for this? I'm a beginner in the VHDL field.

Reply to
shaz.pecobian

What do you mean by arithmetic units? Normally in FPGAs, we implement only the function that is needed for that part of the circuit, not a complete arithmetic unit like you'd find in a microprocessor. Most likely, you won't find something that exactly meets your needs and you'll have to roll your own. You also said nothing about performance or size requirements. Floating point isn't all that difficult to implement, just costly in terms of amount of logic, especially for the adds and subtracts. I suggest you pick up a book such as Israel Koren's Computer Arithmetic Algorithms that discusses floating point number systems as a starting point so that you understand the hardware that is necessary for floating point operations.
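As an illustration of where that logic goes, here is a minimal VHDL sketch of a single-precision adder datapath (not from the original post). It assumes both operands are normalized, same-sign numbers and ignores rounding, denormals, infinities and NaN; the exponent compare, the alignment shift and the renormalization are the parts that cost the most logic.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fp32_add_sketch is
  port (
    a, b : in  std_logic_vector(31 downto 0);
    s    : out std_logic_vector(31 downto 0));
end entity fp32_add_sketch;

architecture rtl of fp32_add_sketch is
begin
  process (a, b)
    variable exp_a, exp_b   : unsigned(7 downto 0);
    variable exp_big, exp_r : unsigned(7 downto 0);
    variable man_a, man_b   : unsigned(23 downto 0);  -- hidden 1 + 23-bit fraction
    variable big, small     : unsigned(23 downto 0);
    variable sum            : unsigned(24 downto 0);  -- one extra bit for the carry
    variable diff           : natural;
  begin
    -- unpack: restore the hidden leading 1 of each mantissa
    exp_a := unsigned(a(30 downto 23));
    exp_b := unsigned(b(30 downto 23));
    man_a := '1' & unsigned(a(22 downto 0));
    man_b := '1' & unsigned(b(22 downto 0));

    -- swap so that 'big' is the operand with the larger exponent
    if exp_a >= exp_b then
      big := man_a;  small := man_b;  exp_big := exp_a;
      diff := to_integer(exp_a - exp_b);
    else
      big := man_b;  small := man_a;  exp_big := exp_b;
      diff := to_integer(exp_b - exp_a);
    end if;

    -- align: right-shift the smaller mantissa; the LSBs fall off the end
    small := shift_right(small, diff);

    -- add the aligned mantissas
    sum := ('0' & big) + ('0' & small);

    -- normalize and pack: a carry out means the mantissa shifts right one
    -- place and the exponent increments
    if sum(24) = '1' then
      exp_r := exp_big + 1;
      s <= a(31) & std_logic_vector(exp_r) & std_logic_vector(sum(23 downto 1));
    else
      exp_r := exp_big;
      s <= a(31) & std_logic_vector(exp_r) & std_logic_vector(sum(22 downto 0));
    end if;
  end process;
end architecture rtl;

A real unit adds sign handling (effective subtraction with a leading-zero count for renormalization), guard/round/sticky bits and the IEEE special cases, which is where most of the extra logic and the pipeline stages come from.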

Reply to
Ray Andraka

If you're looking for source code then you're not *doing* any sort of project at all - you're copying one.

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, 
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266
Reply to
Mark McDougall

Hi, I'm going to develop adders, multipliers, and dividers, since they are somewhat tough but have the added advantage of high precision, and then I'm planning to build an FIR filter based on them. I got the idea from a number of research papers that have implemented them. I'm new to this field, and I have already written some code (for swapping and alignment before the actual addition), but I'm running into constraints on input/output pins and the like. Being new to this, I can't yet write code that is efficient enough to need fewer pins or less memory.
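As a companion to the adder sketch above, here is a minimal single-precision multiplier in the same spirit (an illustrative sketch with assumed entity and port names, not code from the thread): the mantissas multiply, the exponents add with one bias subtracted, and at most a one-place normalization shift is needed. Rounding, denormals, exponent over/underflow and special cases are ignored.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fp32_mul_sketch is
  port (
    a, b : in  std_logic_vector(31 downto 0);
    p    : out std_logic_vector(31 downto 0));
end entity fp32_mul_sketch;

architecture rtl of fp32_mul_sketch is
  constant BIAS : unsigned(9 downto 0) := to_unsigned(127, 10);
begin
  process (a, b)
    variable man_a, man_b : unsigned(23 downto 0);
    variable prod         : unsigned(47 downto 0);
    variable exp_sum      : unsigned(9 downto 0);  -- wide enough to see over/underflow
    variable sign         : std_logic;
  begin
    -- unpack: hidden 1 restored, result sign is the xor of the input signs
    man_a := '1' & unsigned(a(22 downto 0));
    man_b := '1' & unsigned(b(22 downto 0));
    sign  := a(31) xor b(31);

    -- exponents add (one bias removed), mantissas multiply
    exp_sum := resize(unsigned(a(30 downto 23)), 10) +
               resize(unsigned(b(30 downto 23)), 10) - BIAS;
    prod    := man_a * man_b;

    -- normalize: the product of two mantissas in [1,2) lies in [1,4),
    -- so at most one right shift is needed (result truncated, not rounded)
    if prod(47) = '1' then
      p <= sign & std_logic_vector(exp_sum(7 downto 0) + 1) &
           std_logic_vector(prod(46 downto 24));
    else
      p <= sign & std_logic_vector(exp_sum(7 downto 0)) &
           std_logic_vector(prod(45 downto 23));
    end if;
  end process;
end architecture rtl;

Note also that modules like these are normally instantiated inside a larger design rather than synthesized as the top level, so their ports become internal signals instead of device pins; that usually resolves the I/O pin constraint problem.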

Reply to
shaz.pecobian

It doesn't really make sense to use floating point for a FIR filter. The only reason to do so is if the scale of the input is unknown, which in most cases means you haven't done your homework.

Here's the reason for my assertion: A FIR filter is a sum of products, each product being the product of a constant and the input (delayed). Addition requires the addends to all be scaled so that the radix point is in the same position for every addend. Floating point arithmetic accomplishes this by right shifting the smaller addend, discarding (and usually rounding) the LSBs shifted off the right end of the addend. Therefore the precision of the sum is the same as the precision of the larger addend. If you extend this to a sum of many addends, the total sum again has no better precision than the largest addend (and in fact will usually have less due to right shifts needed to prevent overflow). If you implement the adder structure found in an FIR filter as floating point, you are needlessly denormalizing and renormalizing between each adder, which greatly increases the complexity of the circuit with no real advantage: your precision is limited to the width of the mantissa.

The floating point multiplier is only slightly more complicated than a fixed point multiplier. The FIR filter coefficients are generally constants (adaptive filters are the exception), so the floating point multiplier for the FIR filter taps is essentially a fixed point multiplication of the mantissa and a fixed add to the exponent. You can reduce the precision of the multipliers in a fixed point filter if you normalize the coefficients, multiplying the delayed inputs by the mantissa of the coefficient and then hardwiring a shift at the multiplier output to account for the coefficient's exponent.

The only fly in the ointment is if the input varies over a very wide range, which is not the case for most DSP applications.
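To make the normalized-coefficient idea concrete, here is a hypothetical single tap of a fixed-point FIR in VHDL: the multiplier sees only the coefficient's mantissa, and the coefficient's exponent is applied as a hardwired arithmetic shift of the product before it enters the adder chain. The widths, generic names and coefficient values are illustrative assumptions only.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fir_tap_sketch is
  generic (
    COEF_MANT  : integer := 105;  -- hypothetical coefficient mantissa (8-bit signed)
    COEF_SHIFT : natural := 3     -- hypothetical coefficient exponent: scale by 2**(-3)
  );
  port (
    clk     : in  std_logic;
    x       : in  signed(15 downto 0);   -- delayed input sample
    acc_in  : in  signed(31 downto 0);   -- running sum from the previous tap
    acc_out : out signed(31 downto 0));
end entity fir_tap_sketch;

architecture rtl of fir_tap_sketch is
begin
  process (clk)
    variable prod : signed(23 downto 0);
  begin
    if rising_edge(clk) then
      -- fixed-point multiply by the coefficient's mantissa only
      prod := x * to_signed(COEF_MANT, 8);
      -- the coefficient's exponent is a hardwired arithmetic shift of the
      -- product: no barrel shifter, no renormalization between adders
      acc_out <= acc_in + resize(shift_right(prod, COEF_SHIFT), 32);
    end if;
  end process;
end architecture rtl;

Because the shift amount is a constant, it costs nothing in logic: it is just a rewiring of the product bits into the adder.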

Reply to
Ray Andraka
