Estimating CPU load / MFLOPS for software emulation of floating point

In an upcoming hardware design I'm thinking about using a CPU without a floating point unit. The application uses floating point numbers, so I'll have to do software emulation. However, I can't seem to find any information on how long these operations might take in software. I'm trying to figure out how much processing power I need & choose an appropriate CPU.

I have plenty of info on MIPS ratings for the CPUs, and I figured out how many MFLOPS my application needs, but how do I figure out how many MIPS it takes to do so many MFLOPS?

Does anyone know of any info resources or methods?

Thanks for any help! Chris

Reply to
Christopher Holmes

Lots of the latter, but the former are mostly in people's heads or on paper. Old paper.

If you want to emulate a hardware floating-point format, you are talking hundreds of instructions or more, depending on how clever you are and the interface you use. If you merely want to implement floating-point in software, then you can get it down to tens of instructions. For example, holding floating-point numbers as a structure designed for software, like:

struct { unsigned long mantissa; int exponent; unsigned char sign; }

is VASTLY easier than emulating IEEE. It's still thoroughly messy.
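
To make that concrete, here is a minimal sketch of such a format together with a multiply routine, assuming a fixed-width 32-bit normalized mantissa (uint32_t rather than unsigned long) and truncation instead of rounding; zero and overflow handling are left out, and the names are purely illustrative:

#include <stdint.h>

/* Software float: value = (-1)^sign * mantissa * 2^exponent,
   with the mantissa kept normalized so its top bit is set. */
struct sw_float {
    uint32_t mantissa;   /* bit 31 set unless the value is zero */
    int      exponent;
    uint8_t  sign;       /* 0 = positive, 1 = negative */
};

/* Multiply: one 32x32->64 integer multiply, an exponent add,
   and at most one normalizing shift. Truncates instead of rounding. */
static struct sw_float sw_mul(struct sw_float a, struct sw_float b)
{
    struct sw_float r;
    uint64_t p = (uint64_t)a.mantissa * b.mantissa;

    r.sign     = a.sign ^ b.sign;
    r.exponent = a.exponent + b.exponent;

    /* The product of two normalized 32-bit mantissas lies in [2^62, 2^64),
       so at most one position of adjustment is needed. */
    if (p & (1ULL << 63)) {
        r.mantissa = (uint32_t)(p >> 32);
        r.exponent += 32;
    } else {
        r.mantissa = (uint32_t)(p >> 31);
        r.exponent += 31;
    }
    return r;
}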

Regards, Nick Maclaren.

Reply to
Nick Maclaren

If you absolutely must use normalized FP (a la IEEE) it could be hundreds or even thousands depending on the CPU resources and the cleverness of the code. Look at un-normalized FP or even integer. Normalization results in non-deterministic timing. Of course, if your CPU doesn't have hardware multiply, then all your math timing is non-deterministic ;-)

Very few things really need FP - the algorithm designers are just lazy. A 32 bit integer has better than 1 ppb (1 part per billion) resolution. Most things in the real world (like ADCs and DACs) aren't anywhere near that.
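
To illustrate, here is a hedged sketch of plain Q16.16 fixed point in C: a single widened integer multiply and a shift do the work of a floating-point multiply. The format choice is only an example; pick the integer/fraction split to suit the sensor range.

#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fraction bits.
   One LSB is 2^-16 (about 15 ppm); a Q0.31 format gets below 1 ppb,
   finer than any common ADC or DAC. */
typedef int32_t q16_16;

/* Multiply: widen to 64 bits so the intermediate cannot overflow,
   then shift the extra 16 fraction bits back out. */
static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

/* Example: 1.5 * 2.25 = 3.375
     a = 1.5  * 65536 = 0x00018000
     b = 2.25 * 65536 = 0x00024000
     q16_mul(a, b)    = 0x00036000 = 3.375 * 65536 */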

Bob

Reply to
Bob

Why would you muck about with a separate sign, rather than just using a signed mantissa, for a non-standard software implementation? Does it buy you something in terms of speed? Precision, I guess, given that long is only 32 bits on many systems, and few have 64x64->128 integer multipliers anyway. The OP didn't say what the application was, so it's hard to say whether more than 32 bits of mantissa would be needed.

Frankly, he's almost certainly going to be able to translate to fixed-point or block-floating-point anyway, and not bother with the per-value exponent field. That's what all of the "multi-media" applications that run on integer-only ARM, MIPS, SH-RISC etc do. Modern versions of these chips all have strong (low latency, pipelined) integer multipliers, so performance can be quite good.
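
For anyone unfamiliar with the term, here is a rough sketch of the block-floating-point idea: a whole block of integer samples shares one exponent, so per-sample work stays pure integer and only the block as a whole is rescaled. The struct and function names below are made up for illustration.

#include <stdint.h>
#include <stddef.h>

/* Block floating point: value[i] = data[i] * 2^exp, one shared exponent. */
struct bfp_block {
    int32_t *data;   /* integer mantissas */
    size_t   len;
    int      exp;    /* shared exponent for the whole block */
};

/* Rescale so the largest magnitude fills the available range (leaving one
   headroom bit), which keeps precision before the next processing stage. */
static void bfp_normalize(struct bfp_block *b)
{
    uint32_t max = 0;
    size_t i;

    for (i = 0; i < b->len; i++) {
        int32_t v = b->data[i];
        uint32_t m = (v < 0) ? (uint32_t)0 - (uint32_t)v : (uint32_t)v;
        if (m > max) max = m;
    }
    if (max == 0)
        return;

    /* Shift the block up until the largest magnitude reaches bit 30. */
    int shift = 0;
    while (max < 0x40000000u) {
        max <<= 1;
        shift++;
    }

    for (i = 0; i < b->len; i++)
        b->data[i] = (int32_t)((uint32_t)b->data[i] << shift);
    b->exp -= shift;
}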

Cheers,

--
Andrew
Reply to
Andrew Reilly

Floating point multiplication and division are not much worse than doing integer multiplication or division with operands of similar sizes. Only an extra addition or subtraction of the exponents is involved.

However, floating point addition and subtraction are nasty, since you first have to denormalize the smaller value and then perform the addition/subtraction in the normal way. Especially after subtraction, you often have to find the most significant bit set and renormalize, which can be quite time consuming.
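
As a bare-bones illustration of those steps, here is a sketch of same-sign addition for a 32-bit normalized software format like the one discussed above; rounding, zeros, and differing signs are left out to keep the shape visible.

#include <stdint.h>

struct sw_float {
    uint32_t mantissa;   /* top bit set unless the value is zero */
    int      exponent;
    uint8_t  sign;
};

static struct sw_float sw_add_same_sign(struct sw_float a, struct sw_float b)
{
    struct sw_float bigger = a, smaller = b, r;
    if (a.exponent < b.exponent) { bigger = b; smaller = a; }

    /* Step 1: denormalize the smaller operand so both share one exponent. */
    int diff = bigger.exponent - smaller.exponent;
    uint32_t aligned = (diff >= 32) ? 0 : (smaller.mantissa >> diff);

    /* Step 2: add in a 64-bit sum so the carry is kept. */
    uint64_t sum = (uint64_t)bigger.mantissa + aligned;
    r.sign     = bigger.sign;
    r.exponent = bigger.exponent;

    /* Step 3: renormalize. For addition at most one right shift is needed;
       after subtraction this turns into a search for the new MSB instead. */
    if (sum >> 32) {
        sum >>= 1;
        r.exponent += 1;
    }
    r.mantissa = (uint32_t)sum;
    return r;
}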

Still, even if you have to normalize a 64 bit mantissa on an 8 bit processor, you could first test which byte holds the first "1" bit and, by byte copying (or preferably pointer arithmetic), move that byte to the beginning of the result. After that you have to perform 1-7 single-bit full-width (64 bit) left shifts (or 1-4 bit left/right shifts) to get into the correct position. Rounding requires up to 8 adds with carry.
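
A possible C rendering of that byte-first search, assuming the 64-bit mantissa is held as eight big-endian bytes; this is only meant to show the shape of the work, not tuned 8-bit assembly.

#include <stdint.h>
#include <string.h>

/* Normalize a 64-bit mantissa stored as bytes m[0] (most significant)
   through m[7]. Returns the total left shift, which the caller
   subtracts from the exponent. */
static int normalize_bytes(uint8_t m[8])
{
    int shift = 0;
    int i, j;

    /* Locate the highest non-zero byte. */
    for (i = 0; i < 8 && m[i] == 0; i++)
        ;
    if (i == 8)
        return 0;                /* mantissa is zero, nothing to do */

    /* Byte move: equivalent to a left shift by 8*i bits, done with copies. */
    if (i > 0) {
        memmove(m, m + i, 8 - i);
        memset(m + (8 - i), 0, i);
        shift = 8 * i;
    }

    /* Finish with single-bit shifts until the top bit is set (at most 7). */
    while ((m[0] & 0x80) == 0) {
        uint8_t carry = 0;
        for (j = 7; j >= 0; j--) {
            uint8_t next = (uint8_t)(m[j] >> 7);
            m[j] = (uint8_t)((m[j] << 1) | carry);
            carry = next;
        }
        shift++;
    }
    return shift;
}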

Even so, I very much doubt that you would need more than 100 instructions on top of the actual integer multiply/add/sub operations with the same operand sizes.

An 8 by 8 bit multiply instruction would reduce the computational load considerably.

Paul

Reply to
Paul Keinanen

Hi, such an open-ended question is impossible to answer. It takes forever on a typical 8 bit micro; 16 bit is much quicker but still slow. It's possible to remove FP operations from most applications, so try that first. You can easily measure performance on the hardware with some test routines, so try that second.

Reply to
CBarn24050

It buys some convenience, and probably a couple of instructions fewer for some operations. Not a big deal.

See "scaling" in any good 1930s book on numerical analysis :-)

Regards, Nick Maclaren.

Reply to
Nick Maclaren

(snip regarding software floating point)

The 6809 has an 8 by 8 multiply, but the floating point implementations I knew on it didn't use it. I looked at it once, and I don't think it was all that much faster to use it.

-- glen

Reply to
glen herrmannsfeldt

There was a time when you had no choice. You should also decide on the precision levels needed in the FP system. Many years ago I decided that my applications could be adequately handled with a 16 bit significand, and the result was the FP system for the 8080 published in DDJ about 20 years ago. The actual code is probably of little use today, but the breakdown may well be.

That was fairly efficient and speedy because the 8080 was capable of 16 bit arithmetic, and it was not hard to extend it to 24 and 32 bits where needed.
--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

And speaking of emulating IEEE 754 float operations, speed and code size go south in a big hurry if infinities, denormalized numbers, NaNs, and rounding are handled properly. Add some more adverse impact if double-precision float is implemented instead of or in addition to the usual single-precision float.

Regardless, MFLOPS will be measured in fractions and quite small fractions at that. Any relation between MIPS and MFLOPS will be purely coincidental.

Reply to
Everett M. Greene

Those are rare cases -- they affect code size, yes, but have only a small effect on speed.

I would expect them to be linearly related.

Mike Cowlishaw

Reply to
Mike Cowlishaw

What is "block floating point"?

Reply to
Christopher Holmes

Regrettably not :-(

That has been stated for years, but isn't true. Yes, it is true if measured over the space of all applications on all data. No, it is not true for all analyses, even excluding perverse and specially selected ones. It isn't all that rare to get into a situation where 5-10% of all floating-point calculations are in a problem area (i.e. underflowing or denormalised), despite the data and results being well scaled.

Yes and no. They are only if the characteristics of the machine remain constant. As branch misprediction becomes more serious, MFLOPS degrades relative to MIPS.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

But every operation pays the price of checking for the rare values, whether or not they actually occur.

Not across processor families...

Reply to
Everett M. Greene

Not really:

All the special values (NaN, Inf, zero and denorm) can be handled (at least approximately) with a simple test of the exponent field, letting the normal case fall straight through.

Since the denorms are all caught by that same 'Special_exponent()' test, the overhead is only in the fixup part.
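
For IEEE-754 single precision, that single test might look like the sketch below; the fixup routine named here is hypothetical and just stands for the slow path.

#include <stdint.h>

/* Exponent field 0 covers zero and denormals, 255 covers infinities and
   NaNs, so one comparison routes every special operand to the fixup path
   and lets the normal case fall through. */
static int is_special(uint32_t bits)
{
    uint32_t exp = (bits >> 23) & 0xFFu;
    return exp == 0 || exp == 0xFFu;
}

/* In an emulated add:

     if (is_special(a) || is_special(b))
         return slow_fixup_add(a, b);   // hypothetical slow path
     ... fast normal-case add ...
*/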

Terje

--
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen
