Damned floating point

I assume the answer is "yes", but is it normal behavior in an x86  
processor to get slightly different answers when you add up the same  
short vector of doubles at different times?

(Very slightly different -- basically so that sum1 == sum2 isn't true,  
but not noticeable in any other way.)

I assume this has something to do with how the floating-point processor  
state is treated when a calculation is done.  Does anyone know the  
details?

TIA.

--  

Tim Wescott
Wescott Design Services
Re: Damned floating point
On Fri, 26 Feb 2016 12:58:07 -0600, Tim Wescott

Floating point addition is not commutative.  Never has been, never
will be.  IOW, (a+b)+c is *not* the same as a+(b+c).

In short it's a problem with the rounding of intermediate results.
Consider a three-digit decimal FP system, and suppose you add 100, -100
and .01.  If you compute (100 + -100) + .01, you'll get .01.  If you
compute (100 + .01) + -100, you get 0, since (100 + .01) rounds back to
100.  It's actually fairly easy to get into a situation (like the one I
just illustrated) where the result is *entirely* noise.  People doing
long sums (matrix multiplication, for example) often take considerable
pains to try to minimize the accumulated error, including carefully
controlling the order in which things are added.
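
To make that concrete in C, here's a minimal sketch of the same effect
with binary doubles (my own numbers, not Tim's):

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0e16, b = -1.0e16, c = 1.0;
        /* Same three values, two groupings. */
        printf("(a+b)+c = %g\n", (a + b) + c);  /* prints 1 */
        printf("a+(b+c) = %g\n", a + (b + c));  /* prints 0 with strict
           64-bit doubles (e.g. SSE2); an x87 build carrying 80-bit
           intermediates may print 1 here too -- exactly the sort of
           platform-dependent wobble under discussion */
        return 0;
    }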

In general, there are few cases where two FP numbers can be
meaningfully compared for equality.  Usually the correct approach is
to do something like "fabs(a-b) < epsilon", although picking epsilon
can be a bit tricky.
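
A sketch of the usual idiom (the helper name and tolerance parameters
are mine; choosing the tolerances is still the tricky,
application-specific part):

    #include <math.h>

    /* Equal within a relative tolerance, with an absolute floor so
       comparisons near zero don't degenerate. */
    static int nearly_equal(double a, double b, double rel_tol, double abs_tol)
    {
        double diff  = fabs(a - b);
        double scale = fmax(fabs(a), fabs(b));
        return diff <= fmax(rel_tol * scale, abs_tol);
    }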

"What Every Computer Scientist Should Know About Floating-Point
Arithmetic":

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Re: Damned floating point
On Fri, 26 Feb 2016 13:24:02 -0600, Robert Wessel

Can't believe I wrote commutative...  Argh.  FP addition does not
follow the usual rules of associativity.  OTOH, there have been
machines where elementary FP operations were not commutative (Cray-1
FP multiplication, for example, where a*b often didn't equal b*a).

Re: Damned floating point
On Fri, 26 Feb 2016 13:24:02 -0600, Robert Wessel wrote:

In this case, though, I'm computing score = a + b + c.  Strictly in that  
order.  Twice, with other floating point calculations in between.  And  
getting different answers.

--  

Tim Wescott
Wescott Design Services
Re: Damned floating point

That's not at all unusual. There are extra guard bits in the FPU, and if  
an intermediate value at any time gets put in a CPU register or memory  
it can affect the rounding. I'll 2nd the idea of reading the  
docs.oracle.com link. (And never* test floating point values for  
equality.) (*For limited definitions of never.)

Re: Damned floating point
On Fri, 26 Feb 2016 13:57:50 -0600, Mark Storkamp wrote:

Until now I had only ever done it to test if a value had been copied  
(i.e., a = b).  It seemed safe...

--  

Tim Wescott
Wescott Design Services
Re: Damned floating point
On Fri, 26 Feb 2016 13:42:52 -0600, Tim Wescott wrote:

Make sure you are not mixing floats and doubles.  If you are casting
ints to floats be careful with your cast ordering.  Are you messing with
a, b or c between the two calculations?  There was a test we used to run
on Prime Computers that went something like:

    float f1 = 1.0;
    for (int i = 1; i < 1000; i++) {
        f1 = (f1 * 100.0) / 100.0;
        if (f1 != 1.0) {
            printf("fail at %d\n", i);
            break;
        }
    }

It would fail after a few hundred loops.  f1 would be something like
.99999998 or so.  Modern CPUs are much better, but a floating point
compare can still fail due to rounding errors.

Search on "floating point comparison" for some examples.

--  
Chisolm
Republic of Texas


Re: Damned floating point
Are a & b & c identical both times?  Maybe it's rounding 1/2 up half  
the time and 1/2 down the other half...

Hul

Re: Damned floating point
On Fri, 26 Feb 2016 13:42:52 -0600, Tim Wescott

If you're doing this on the x87 FPU, Mark is right, but his
explanation is a bit wrong.  Normally the x87 does everything with
80-bit temp reals, and that can lead to odd results if in some cases
the compiler decides to store a value back to memory (in which case
it'll use the defined format, 64 bits for a double, and then you'll
lose some precision that might have been there the first time the
value was used).  So if you have:
So if you have:

   a =...
   b =...
   c =...
   score = a + b + c;
   ...other code...
   score2 = a + b + c;

The first sum might be using values of a, b, and/or c still in
registers from the calculations that produced them in the 80-bit
format, but by the time you get to the second sum, those values may be
gone, and the code will reload the saved values of a/b/c from memory,
which will have been stored rounded to 64-bit format.
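
If you need the two sums to agree, one common trick (an untested C
sketch, not necessarily what Tim's compiler requires) is to force the
operands through memory so both sums start from identical 64-bit
values:

    /* 'volatile' forces a, b, c through the 64-bit memory format and
       stops the compiler from reusing 80-bit register copies. */
    volatile double va = a, vb = b, vc = c;
    double score  = va + vb + vc;
    /* ...other code... */
    double score2 = va + vb + vc;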

There are ways to set the precision of the FPU (so instead of 64 bits
of precision you get 53 or 24), but compilers often don't do that
(MSVC gives you _control87() so you can do it yourself).  Your
compiler might also have a "strict IEEE math" option, which will force
stores and (re-)loads to avoid the issue (IOW everything gets forced
to work in the 64-bit format); you might be able to set that
("-fp:strict" in MSVC).
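
With MSVC that would look something like this (a sketch; check your
compiler's documentation):

    #include <float.h>

    /* Limit the x87 significand to 53 bits (double precision) so
       in-register and in-memory values round the same way. */
    _control87(_PC_53, _MCW_PC);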

If your compiler has an option to do FP with SSE2, that will also
eliminate the variable precision problem (and that should be true of
all compilers generating x86-64 code).

Also note that the two sums do *not* have to be performed in the same
order, and that's actually at least somewhat common in practice if one
of the values is still in a register.  For example, if c happened to be
used in "...other code...", that register might still be live, and the
second sum might just add a and b (from memory) to the c still in the
register, whereas the first sum might have added the three values in
the "normal" order.

Re: Damned floating point
On Fri, 26 Feb 2016 17:52:27 -0600, Robert Wessel

The x87 floating point stack is small, so you may run out of stack
registers with a complex arithmetic expression.  If part of the
expression is calculated in a floating point function, you can easily
run out of stack registers.  If the function is declared locally, the
compiler will know whether the whole calculation can be done in the FP
stack.  However, if the function is defined externally, the compiler
doesn't know the stack usage, so it may need to save FP stack
registers across the call.

Floating point processors also have various rounding/truncation modes
set by some flags. If these bits are altered between calculations, the
result might be different.  
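
For example, with C99's <fenv.h> (a sketch; some compilers need extra
flags, such as GCC's -frounding-math, before they honor rounding-mode
changes):

    #include <fenv.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double one = 1.0, three = 3.0;  /* defeat constant folding */
        fesetround(FE_DOWNWARD);
        printf("down: %.17g\n", one / three);
        fesetround(FE_UPWARD);
        printf("up:   %.17g\n", one / three);    /* one ulp higher */
        return 0;
    }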

The MMX multimedia instructions use the same registers as the FPU, so
the float values need to be saved during the MMX calculations, and the
compiler may fail to restore the original rounding/truncation mode
when it restores the registers.


Re: Damned floating point
On 26.02.2016 20:42, Tim Wescott wrote:

Maybe score1 is computed as (a+b)+c, score2 as a+(b+c) (or even  
(a+c)+b).  I don't know if the compiler is free to rearrange the terms.

So you are doing other floating point calculations in between? Do they  
involve a,b,c as terms or results?

How much do score1 and score2 differ? How much do a,b,c differ?

Regards
Martin


Re: Damned floating point

Taking a punt here, but things I would wonder about:

Has something changed the operating set-up of the FP unit?
Could an interrupt be coming in, saving and restoring the FP state?



Re: Damned floating point
On 2/27/2016 10:24 AM, Bill Davy wrote:

I'm not certain an interrupt would cause this problem.  80-bit floating  
point is a valid size for calculations.  The interrupt code won't know  
how the floating point registers are being used, so it will have to save  
and restore the full state of the FP unit rather than truncate the  
values before saving.

--  

Rick

Re: Damned floating point

<clip>

It is very hard for me to understand why someone would need to use
floating point instructions in an ISR; if you don't, there is no need
to save and restore the FP stack.

Usually there is something seriously wrong with the program
architecture if you _have_to_ use FP in an ISR.




Re: Damned floating point
On 28/02/16 19:25, snipped-for-privacy@downunder.com wrote:

An ISR is commonly where a thread context switch occurs.
Any thread is allowed to use the FPU.


Re: Damned floating point

I agree it is neither normal nor desirable to use FP in an ISR, but (a)  
someone else may have written the ISR, and (b) s**t happens, and (c) when  
you have discounted everything else ...



Re: Damned floating point
wrote:

Floating point instructions have various kinds of traps or faults
(whatever you call them), such as division by zero.  A trap service
routine is more or less similar to an ISR, and in general you do not
want to trigger another ISR from within an ISR.

There are priority issues: should the original ISR be interrupted by
the trap, or should the trap be suspended until after the ISR returns
(but then the ISR can't use the result of the trap)?

Neither do you want, e.g., a segmentation fault within the ISR, which
usually results in a more or less well-handled OS crash.


Re: Damned floating point
On 2/28/2016 3:25 AM, snipped-for-privacy@downunder.com wrote:

I'm not going to judge code I haven't seen or requirements I don't know  
about.  Code is code and floating point is just another form of data.

--  

Rick

Re: Damned floating point
snipped-for-privacy@downunder.com writes:

I have to wonder what x86 model this is running on and what compiler,
since it occurs to me that x86 floating point computation is typically
done these days with XMM instructions rather than the x87 (stack-based)
instruction set.  I don't know what that does to the 80-bit
intermediate results.  I guess Tim has deadlines to meet and is happy
to have put the issue behind him with a workaround, but it would be
interesting to see the assembly code that came out of the compiler for
his code.

Re: Damned floating point
On 28/02/16 19:10, Paul Rubin wrote:

No, floating point is typically done with x87 instructions when you are
compiling as "32-bit x86 code that works on anything" - which is
typically the default setting for x86 32-bit compilers.  XMM registers
and instructions will only be used if you set the minimum target
requirements to something a little higher, using appropriate compiler
flags.  Obviously if you are interested in performance you will do that
- in most cases, you can rely on the user having something more recent
than an early Pentium.  But you have to make an active decision to do
this.  I can only make guesses about how Tim set up his build, but my
guess is that he only bothered with simple flags like "-O2 -Wall
-Wextra", rather than figuring out what architecture flags were
appropriate for a good balance between speed and portability, because
code efficiency was not particularly important.
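
For instance, with GCC (illustrative only -- the thread doesn't say
which compiler or flags Tim actually used) the difference is roughly:

    # Plain 32-bit build: x87 code, 80-bit intermediates
    gcc -m32 -O2 -Wall -Wextra prog.c

    # Opt in to SSE2 math: doubles stay 64 bits throughout
    gcc -m32 -O2 -Wall -Wextra -msse2 -mfpmath=sse prog.c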


