Fixed-point Math help

Before I start Googling my brains out I thought I would ask the group:

I'm looking for any information on implementing floating-point algorithms in
fixed-point math.

Thanks.



Re: Fixed-point Math help
You've searched Embedded.com?

If not:

Why?  Do you really mean floating point, or do you mean fixed-point
non-integer?  If you really mean floating point, what toolset are you
using that doesn't have a perfectly good floating point library to use
-- or are you coding in assembly and not C?

For the most part all the processors I've used in the last 8 years have
had decent floating point support.  The exceptions have been the 186
using the Borland tools that required a patch from US Software, and the
196 with the old Intel tools that crashed the ICE any time you attempted
floating point operations.

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help
For high volume applications, fixed point is usually chosen over
floating point because of the reduced die size, cost, and power
requirements of the processor. I think 90% of DSP processors sold are
still fixed point for this reason. Emulation of floating point math on
a fixed point processor is usually not an option, as the throughput is
reduced by 100x or more. In this case, even if the application could
still run with the reduced throughput, it may still be converted to
fixed point math so that the processor clock can be dropped from, say,
40 MHz to 1 MHz (reduced power, lower EMI, etc.). For low volume
applications, it is easier to use floating point.


Re: Fixed-point Math help

I've used floating point on production code for one of four reasons:

1.  For startup, to read parameters out of EEPROM and calculate the
appropriate fixed-point parameters in a way that is easily maintainable
(see the sketch after this list).

2.  For scientific instruments with complicated math and without a need
for said math to happen quickly.

3.  In secondary processes that were not time critical, but where
maintainability was enhanced by using floating point.

4.  Because, much to my surprise, floating point on a TI '2812 is only a
few times slower (rather than 100x) than fixed point math.
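
As an illustration of reason 1, here is a minimal sketch in C of turning a
float parameter read from EEPROM into a Q15 coefficient at startup; the
names, the Q15 format, and the saturation behavior are assumptions for the
example, not anything taken from this thread:

#include <stdint.h>

/* Convert a floating-point gain in roughly [-1.0, 1.0) to Q15 (1.15)
   at startup.  Saturate at the Q15 limits so a slightly out-of-range
   EEPROM value cannot wrap around. */
static int16_t float_to_q15(float x)
{
    float scaled = x * 32768.0f;

    if (scaled >=  32767.0f) return  32767;   /* clamp to Q15 maximum */
    if (scaled <= -32768.0f) return -32768;   /* clamp to Q15 minimum */
    return (int16_t)scaled;                   /* truncate toward zero */
}

/* Hypothetical startup hook: all of the run-time math then uses only
   the fixed-point coefficient. */
static int16_t filter_gain_q15;

void init_params(float eeprom_gain)
{
    filter_gain_q15 = float_to_q15(eeprom_gain);
}

The floating point work happens exactly once, at startup, which is the point
of reason 1: the maintainer edits a natural-units float, and the control
loop never sees it.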

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help

Sorry, but there is no way floating point emulation on a TI 2812 is only
a few times slower than fixed; if it were, TI wouldn't need to make
floating point DSPs anymore! Floating point on the 2812 (or any fixed
point DSP) is about 100 times slower than fixed point.


Re: Fixed-point Math help

Benchmarks, please?

I _do_ have to fess up that I have only compared it to an integer
package that includes bounds checking, which seriously slows down the
integer computation -- and we didn't use it on that processor for
anything other than what I've already advocated; the "real" computation
happened in fixed point.

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help
TI doesn't publish floating point emulation specs for its fixed point
processors, that I know of anyhow; it depends on the compiler. If you
have a C compiler, just write a simple fixed point routine and compare
it to an emulated floating point one.

Here are benchmarks for a Microchip 16-bit DSP (single-cycle integer
multiplies/adds) that I posted earlier, probably similar in performance
to a 16-bit TI chip:
http://ww1.microchip.com/downloads/en/DeviceDoc/51459b.pdf


Re: Fixed-point Math help

No, I'm asking _you_ for the benchmarks that _you_ are using to back up
your ever so strongly voiced opinions.

_I_ know how fast the damn processor is -- when I benchmarked it against
my fixed point math package I nearly fell out of my chair.  Its ratio
of floating point math vs. fixed point math is between 20x and 50x
better than a Pentium's.

With a fixed point math package that does 1r15 arithmetic, in ANSI C,
with saturation, a Pentium is about 20-50 times faster than it is with
floating point.  The '2812 runs neck and neck.  It certainly doesn't do
floating point as fast as pure integer math, but it certainly _does_
knock the socks off of anything else I've had occasion to use.
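
For reference, a saturating Q1.15 ("1r15") add and multiply in portable
ANSI C might look roughly like the following; this is only a sketch of the
kind of operations such a package performs, not Tim's actual library:

#include <stdint.h>

typedef int16_t q15_t;

/* Saturating Q1.15 addition: clamp to the limits instead of wrapping. */
static q15_t q15_add(q15_t a, q15_t b)
{
    int32_t r = (int32_t)a + (int32_t)b;

    if (r >  32767) r =  32767;
    if (r < -32768) r = -32768;
    return (q15_t)r;
}

/* Saturating Q1.15 multiply: 16x16 -> 32-bit product, shift right by 15
   to restore the radix point, then clamp.  (-1.0 * -1.0 is the one case
   that overflows.)  Assumes arithmetic right shift of negative values,
   as on most embedded compilers. */
static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t r = ((int32_t)a * (int32_t)b) >> 15;

    if (r >  32767) r =  32767;
    if (r < -32768) r = -32768;
    return (q15_t)r;
}

The saturation tests are exactly the kind of per-operation overhead that
makes a checked fixed-point package slower than raw integer math.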

Frankly I would have responded as you did if I hadn't done the
experiment myself.

So, upon what benchmarks are you basing your claim?

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help


The topic at hand is relative performance between emulated floating
point vs. fixed point math executed on the same processor; I'm not sure why
you're talking about Pentiums vs. whatever, that's totally irrelevant to the
discussion. I can only guess you didn't understand the .pdf file I
referenced, or we are talking about two different topics. The reference
says that a floating point add takes 122 cycles, vs. 1 cycle for fixed
point. This is one example of the 100 to 1 ratio I'm talking about. My
personal experience with Analog Devices fixed point DSPs indicates two
orders of magnitude difference between fixed point and emulated
floating point, but I just thought you would be more interested in a
published benchmark that backs up my claim, that's all.


Re: Fixed-point Math help


I tend to agree that one shouldn't rely on P-II comparisons, for example, as a
means of comparing integer vs floating point performance when discussing the
general turf of embedded processors.  Nor should one compare apples and oranges.

The OP was asking about how to IMPLEMENT floating point routines using fixed
point, by the way, and this branch has decidedly moved away from anything
helpful there -- though perhaps still interesting.  I'm not sure how Tim's segue
comment addressed this (it seems to me it was arguably on topic to suggest
searching Google, but otherwise it boils down to telling someone that they
don't need to know how to implement floating point because the support is
already there, 'so why ask' at all...)

I agree with your comment, "fixed point is usually chosen over floating point
because of the reduced die size, cost, and power requirements of the processor."
In our case, cost was certainly a consideration though not a large one (at
first.)  However, power requirements and dissipation were vital issues for us as
well as getting the necessary processing done on time (of course.)  I don't
think die size directly mattered, though I'm sure that an excessively large
package would have been a problem then.  Package size *is* more important for
some applications I work on, so that puts a more direct pressure on the die, of
course.

I cannot agree with your rejoinder, though, that "Emulation of floating point
math on a fixed point processor is usually not an option as the throughput is
reduced by 100x or more."  First off, my very first application on the ADSP-21xx
from Analog Devices dealt with values that had to be maintained over a very
wide dynamic range.  At least 16 bits of precision had to be maintained over some
6-8 orders of magnitude, for example.  Some kind of floating point features were
essential, even on a very cool-running DSP like these were at the time.
(What kept us from using TI integer DSPs at the time was a different issue that
was inescapable with their parts and required for the application.)  So floating
point on a fixed point processor wasn't only an option, it was vital.

One of the things that is glossed over in your comment here is that floating
point processing doesn't have to be used 24/7 by the application.  If it were
always the case that the CPU was bottlenecked doing floating point continually,
then yes -- for a given performance level you'd probably be better off with a
DSP supporting floating point if you needed floating point in that way, rather
than using a super-high speed integer processor and emulating it at a similar
rate.  But if what you need is modest bursts of floating point operations as
well as very low power requirements and low cost, etc., and when you could well
use the boost of a MAC or fully combinatorial barrel shifter to help it along,
then an integer DSP is probably quite reasonable.  The price of adding floating
point in hardware is usually a continuous drain and excessive power consumption,
if you don't need it all the time, and that adds unnecessary cost both to the
processor and all of the surrounding circuitry and dissipation support required.
Further, if your application requires a wide dynamic range, some kind of
floating point support remains necessary.

So, floating point is not only an option on integer processors... sometimes, it
is a requirement for them.

But you've made me slightly curious.  I use ADSP-21xx integer DSPs routinely
(not moved up to BlackFin, though) and my experience using the barrel shifter
with integer operations for floating point purposes hasn't been as bad as 100:1
versus fixed point for operations that reasonably might be considered similar in
precision (but not in dynamic range, of course.)  But I write my own code and do
NOT use libraries nor do I use C, and I use the full capability of packing
instructions.  Can you be precise about what you are comparing here so I can
consider some specific cases just for my own sake?
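
The general technique Jon is describing -- a non-IEEE mantissa/exponent pair
kept in integer registers and renormalized with a shift (a single
exponent-detect plus barrel shift on the DSP) -- might be sketched in C
roughly as follows; the format and names are illustrative assumptions, not
ADSP-21xx code:

#include <stdint.h>

/* A toy non-IEEE "float": signed Q1.15 mantissa plus a binary exponent.
   value = (mant / 32768.0) * 2^exp */
typedef struct {
    int16_t mant;   /* normalized so |mant| >= 16384, or 0 */
    int16_t exp;
} myfloat_t;

/* Renormalize so the mantissa uses its full 16-bit precision.  On an
   integer DSP the two loops collapse into an exponent detect and one
   barrel shift. */
static myfloat_t mf_normalize(int32_t mant32, int16_t exp)
{
    myfloat_t r;

    if (mant32 == 0) { r.mant = 0; r.exp = 0; return r; }

    while (mant32 >  32767 || mant32 < -32768) { mant32 >>= 1; exp++; }
    while (mant32 <  16384 && mant32 > -16384) { mant32 <<= 1; exp--; }

    r.mant = (int16_t)mant32;
    r.exp  = exp;
    return r;
}

/* Multiply: multiply the mantissas (arithmetic >> assumed), add the
   exponents, renormalize. */
static myfloat_t mf_mul(myfloat_t a, myfloat_t b)
{
    int32_t m = ((int32_t)a.mant * (int32_t)b.mant) >> 15;
    return mf_normalize(m, (int16_t)(a.exp + b.exp));
}

Because the exponent rides along explicitly, the dynamic range is as wide as
the exponent field allows, while each operation stays a handful of integer
instructions -- the kind of ratio Jon reports, well short of 100:1.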

Jon

Re: Fixed-point Math help
Fixed point assembly vs. floating point C code is what I am comparing.
I suppose your own floating point routine can beat the 100:1 quite
easily, but I need IEEE compliance (so simulations I run on a PC are
identical to the results when run on the embedded processor). Blackfins
are nice; I moved up to the SHARC recently, a pleasure to code in
assembly.


Re: Fixed-point Math help
On 15 Dec 2004 19:21:14 -0800, "bungalow_steve" wrote:


If that is your problem, why do you use IEEE floats on the PC
simulations?

Using C++ it would be quite easy to overload the +, -, *, / etc.
operators using your own floating point routines and your own
floating point format.

Paul
  

Re: Fixed-point Math help
Because I'm not the only one running or controlling the PC simulation.
Frequently a customer will come to me asking to implement an industrial
controller whose control logic has been tweaked for the last ten years
in a simulation done on a PC using IEEE floats, and they also give me a set
of test vectors generated from the simulation that I must use to verify
the operation of the controller. I have two options: take their
existing code and cross compile it to a target processor which
supports IEEE, or cross compile it to a target processor that doesn't
and hope for the best. It's a risk reduction decision.


Re: Fixed-point Math help
You misunderstand.  Please actually read my posts before you argue with
things I did not say.

That's _your_ topic at hand, and I understand you; for the most part I
agree with you.  If you read my retraction, where I remembered (and
fessed up) that I was comparing floating point against a whole integer
package that slows things down, you'd realize that.

_My_ topic is that however it's done the '2812 in specific is _very_
good at emulated floating point -- probably not 1:1, but I believe it's
way better than 100:1.  That's why I didn't waste my time reading the
paper about the dsPIC (unless you're trying to point out that its
relative performance is as good as the '2812's?  Do you have benchmarks?).

I quoted the speedup (or lack of slowdown) between the '2812 and the
Pentium because the Pentium has floating point hardware AND IT IS
SLOWER than the '2812.

So far you've quoted the ADI part and the Microchip part, but you
haven't addressed _my_ topic, which is that the '2812 IN SPECIFIC has
better floating point vs. integer performance than anything else I've
personally worked with -- including the Pentium, which has floating
point hardware and should blow it away.

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help
Yes it seems we are both talking about different topics, limitations of
newsgroup communication I suppose. See ya.


Re: Fixed-point Math help
On 14 Dec 2004 10:05:09 -0800, bungalow_steve wrote:


Maybe it's 100 times slower in the worst case, for example a MAC operation
done in assembly with the MAC-specific hardware compared against one done
in C with double precision math. But for "generic" operations, if you compare
fixed-point 32-bit C code and float C code, the "few times slower" statement
looks very familiar to me. The same applies to the C5400 family.

--
asd

Re: Fixed-point Math help

This is my experience -- and I failed to point out that I was comparing
MAC-less integer arithmetic to floating point.  With a MAC, of course,
integer arithmetic is way faster.

--

Tim Wescott
Wescott Design Services
Re: Fixed-point Math help
No, I'm talking about a simple add being 100 times slower. You're saying
floating point is a "few times slower" than fixed point. OK, I assume a
C5400 performs a 16 bit add in 1 cycle, so you're saying that in 2 to 3
cycles (i.e., a few times slower) it can perform the overhead of a subroutine
call, denormalize/normalize, and take care of all the special conditions
and return a 32 bit result? Sorry, I can't see it; do you have an
assembly listing of a C5400 floating point add routine?
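
To make the cycle-count argument concrete, here is a deliberately simplified
C sketch of what an emulated single-precision add has to do (unpack, align,
add, renormalize, repack).  It ignores rounding, infinities, NaNs and
denormals, which a real IEEE routine cannot, so it illustrates where the
cycles go rather than being a compliant implementation:

#include <stdint.h>

/* Simplified add on raw IEEE 754 single-precision bit patterns.
   Not IEEE compliant: no rounding, no NaN/Inf/denormal handling.
   Assumes arithmetic right shift of negative values. */
static uint32_t soft_fadd(uint32_t a, uint32_t b)
{
    int32_t ea, eb, ma, mb, m, e, d;
    uint32_t sign = 0;

    if ((a << 1) == 0) return b;              /* a is +/- zero */
    if ((b << 1) == 0) return a;              /* b is +/- zero */

    /* Unpack sign, exponent and mantissa (with the implicit 1 bit). */
    ea = (int32_t)((a >> 23) & 0xFF);
    eb = (int32_t)((b >> 23) & 0xFF);
    ma = (int32_t)(a & 0x7FFFFF) | 0x800000;
    mb = (int32_t)(b & 0x7FFFFF) | 0x800000;
    if (a & 0x80000000u) ma = -ma;
    if (b & 0x80000000u) mb = -mb;

    /* Align the smaller operand to the larger exponent. */
    d = ea - eb;
    if (d >= 0) { if (d > 31) d = 31; mb >>= d; e = ea; }
    else        { d = -d; if (d > 31) d = 31; ma >>= d; e = eb; }

    /* Add the aligned mantissas and extract the result sign. */
    m = ma + mb;
    if (m < 0) { sign = 0x80000000u; m = -m; }
    if (m == 0) return 0;

    /* Renormalize so the implicit 1 is back at bit 23, then repack. */
    while (m >= 0x1000000) { m >>= 1; e++; }
    while (m <  0x800000)  { m <<= 1; e--; }
    return sign | ((uint32_t)e << 23) | ((uint32_t)m & 0x7FFFFF);
}

On a 16-bit fixed point DSP every one of those 32-bit masks, shifts and
compares costs several instructions, which is how a one-cycle integer add
turns into a two- or three-digit cycle count.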


Re: Fixed-point Math help
One fine day bungalow_steve wrote:


It wasn't by mistake that I wrote "32-bit fixed point" and "generic
operations" and "C code". It's unfair to compare 16-bit fixed point with
float (float gives you much more resolution), and it's unfair to make
comparisons using only add operations.

I've just made a benchmark with a 2810 (100 MHz clock, code executed in
RAM):

long a;
float f = 0.3;
long l = 300L;

for( a=0; a<10000000L; a++)
{
    f += f/10.5;
}

for( a=0; a<10000000L; a++)
{
    l += l/10500;
}

The long loop duration was 8 seconds; the float loop duration was 35
seconds.

--
asd

Re: Fixed-point Math help


Don't do that.  Understand the problems/algorithms and then implement them
directly in fixed point.  Sometimes you really do need to carry the
scale around with every computation, but that's actually pretty unusual.
More usually, you can work with some notion of "full scale" and a
corresponding "noise floor" and just leave it at that.  If you try to
translate directly from floating point you're unlikely to really
understand the numeric issues, and the resulting code will be
inefficient.
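
A small C example of what "working with a notion of full scale" can look
like in practice: a first-order low-pass filter whose coefficient is fixed
in Q15 at design time.  The filter, the coefficient value, and the names are
illustrative assumptions, not code from this thread:

#include <stdint.h>

/* y += alpha * (x - y), done directly in Q15.  Full scale is +/-32767;
   alpha (~0.05) is chosen once at design time, so no scale factor is
   carried around at run time. */
#define ALPHA_Q15  1638                       /* ~0.05 * 32768 */

static int16_t lowpass_q15(int16_t x)
{
    static int16_t y = 0;                     /* filter state            */
    int32_t diff = (int32_t)x - y;            /* fits easily in 32 bits  */

    /* The >>15 truncation is the "noise floor": about one LSB of error
       per update, accepted up front instead of tracked dynamically. */
    y = (int16_t)(y + (((int32_t)ALPHA_Q15 * diff) >> 15));
    return y;
}

Deciding up front that full scale is 32767 counts and that one LSB of
truncation noise is acceptable is the design work being pointed at here;
once that's settled, nothing in the loop needs to remember where the float
version's decimal point was.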

Cheers,

--
Andrew

