Two FPUs compared

Hi everyone,

assuming two differently implemented FPUs, both validated against IEEE 754, and limiting operations to the golden five (+, -, *, /, sqrt), can I be sure that, provided the input operands are the same, the results will be bitwise identical?

BACKGROUND: our dear customer wanted to verify that the control law as specified is correctly implemented, by comparing its output with a golden model. Now, the golden model is running on a standard PC, with Matlab, a pile of mathematical libraries and a processor that is humongous compared to our little embedded unit. I've already warned the customer that a bitwise comparison might only be viable if the two calculations are configured identically precision-wise (single precision, rounding mode, etc.). Are there any other elements to consider?

Moreover, we are driving a PWM with the control law at 8-bit, or at most 10-bit, precision, so most of that fine-grained precision would not be used anyway.

Any comment/pointer/suggestion is appreciated.

Al

--
A: Because it messes up the order in which people normally read text. 
Q: Why is top-posting such a bad thing? 
A: Top-posting. 
Q: What is the most annoying thing on usenet and in e-mail?
Reply to
alb

It is normal to view floating point operations as slightly imprecise - there is a rule that you never compare floating point data with ==. Even if you are careful to have the same precision and rounding in the calculations, there may be other factors - for at least some operations, I believe there can be different results from two IEEE 754-compatible implementations.
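To make the point concrete, a minimal C sketch (the helper name and tolerance are mine, not anything standard) of why == on floats is risky and what a tolerance-based compare looks like:

  #include <math.h>
  #include <stdio.h>

  /* Hypothetical helper: compare two doubles within an absolute tolerance. */
  static int nearly_equal(double a, double b, double tol)
  {
      return fabs(a - b) <= tol;
  }

  int main(void)
  {
      double x = 0.1 + 0.2;               /* not exactly 0.3 in binary floating point */
      printf("%d\n", x == 0.3);           /* typically prints 0 */
      printf("%d\n", nearly_equal(x, 0.3, 1e-12));  /* prints 1 */
      return 0;
  }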

More important, however, is that normally you don't want full IEEE 754 compliance - the overhead is too high. Typically, FPU hardware does not provide full support - you need extra software handling as well. In some respects, non-compliant code can be significantly more efficient. For example, with "-ffast-math" in gcc, the compiler can turn "x = y + z - y" into "x = z", while in compliant mode it has to do the full calculation as stated, even though that is slower and can give a less accurate result. And for square root, some FPUs may have a fast "approximate square root" function that is good enough and much faster.
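For illustration, a minimal sketch (assuming IEEE 754 doubles with the default round-to-nearest mode; the values are made up) of why "y + z - y" is not the same as "z" under compliant evaluation:

  #include <stdio.h>

  int main(void)
  {
      volatile double y = 1e16;   /* volatile: keep the compiler from folding it away */
      volatile double z = 1.0;
      double x = y + z - y;       /* 1e16 + 1.0 rounds back to 1e16, so x becomes 0.0 */
      printf("x = %g, z = %g\n", x, z);   /* typically: x = 0, z = 1 */
      return 0;
  }

With -ffast-math (and non-volatile operands) the compiler may simplify the expression to x = z, which here happens to be the more accurate answer - just not the bit-exact IEEE 754 one.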

The only way to be really sure of bit-accurate results is to use the same software floating point library on each system. But since the "golden model" here is a system you have no control over (matlab plus external libraries - and different "standard PCs" can have different details in their FPUs), it is basically impossible to guarantee bit-accurate copies of the results.

Reply to
David Brown

The x87, which as far as I know follows the IEEE standard, holds intermediate results in extended precision (64-bit significand). In that case, the results could easily be different from those of an implementation that didn't use extra precision.

For x86-64 code, it is less usual to use the x87, so that is less likely to be a problem.
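A small sketch of how that extra precision can show up, assuming a target where long double maps to the x87 80-bit extended format (typical for gcc on x86-64 Linux; not universal - MSVC, for one, maps long double to plain double):

  #include <stdio.h>

  int main(void)
  {
      volatile long double a = 1.0L, b = 1e-19L;
      long double ext = a + b;              /* 64-bit significand keeps the tiny term */
      double dbl = (double)a + (double)b;   /* 53-bit significand loses it */
      printf("%d %d\n", ext == 1.0L, dbl == 1.0);   /* typically prints: 0 1 */
      return 0;
  }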

-- glen

Reply to
glen herrmannsfeldt

On Thu, 12 Feb 2015 09:30:32 +0100, alb wrote:

Iff the software on both ends *uses* the FPU in accordance with the standard (i.e. strictfp, no -ffast-math) and in the same mode, then the results should be the same.

It is one thing to have a mathematical model that happens to give accurate results when run on a PC, but it is quite another thing to perform numerical analysis and determine the required accuracy (exactness) for each step of the algorithm. Regardless, if your customer can determine whether a 'golden' implementation provides accurate results, then I would think they can do the same for your implementation.

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

No, you can't even get the same result on the same FP unit.

Years ago a young colleague of mine came to me with a problem. He was using floating point and at one point was getting a mis-compare on an equality test after some floating point computations on identical inputs. The computations went down two different paths and were compared at the end. (In some situations the inputs would not be identical on the two paths.) This was on a server-grade PPC (some of you can see where this is going), which has a merged multiply-add. Due to the way it was coded, separate operations were done on one path, while on the other they were merged. The difference between one rounding and two was enough to give different results.
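A minimal C99 sketch of the effect (values chosen by me): a fused multiply-add rounds once, a separate multiply-then-add rounds twice, and the two can differ.

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double a = 1.0 + 0x1p-30;       /* 1 + 2^-30 */
      double b = 1.0 - 0x1p-30;
      double c = -1.0;

      double prod     = a * b;        /* exact product 1 - 2^-60 rounds up to 1.0 */
      double separate = prod + c;     /* 1.0 + (-1.0) = 0.0 */
      double fused    = fma(a, b, c); /* single rounding keeps the -2^-60 term */

      printf("%g %g %d\n", separate, fused, separate == fused);
      /* typically: 0 -8.67362e-19 0
         (link with -lm; build with -ffp-contract=off so the compiler does
          not itself fuse the separate multiply and add) */
      return 0;
  }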

We had a discussion about how fp wasn't as easy as it looks, and why you need to be very careful with an equal compare using floating point.

Reply to
Dennis

Hi David,

David Brown wrote: []

[]

The extra bit of information I got recently is that the result is converted to an 11-bit integer, which, a priori, means that a lot of the low-order precision of the 32-bit float is lost anyway.

Indeed I could calculate what the integer LSB corresponds to in terms of the floating point representation and be more confident in the comparison (i.e. floating point results that are not bitwise equal do not necessarily produce different integer results).
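As a sketch of that observation (the numbers and the quantizer helper are made up; it assumes the result is normalized to [0, 1) before quantization):

  #include <math.h>
  #include <stdio.h>

  /* Hypothetical quantizer: map a normalized float to an n-bit PWM code. */
  static int to_pwm(float x, int bits)
  {
      return (int)lrintf(x * (float)((1 << bits) - 1));   /* link with -lm */
  }

  int main(void)
  {
      float golden = 0.73400001f;   /* made-up golden-model result */
      float target = 0.73400015f;   /* made-up embedded result, a couple of ulps away */

      printf("bitwise equal: %d\n", golden == target);    /* 0 */
      printf("11-bit codes : %d vs %d\n", to_pwm(golden, 11), to_pwm(target, 11));
      /* both typically map to the same 11-bit code */
      return 0;
  }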

AFAIK the implementation we have is 'full IEEE 754' and floating point operations use the hardware directly (unless gcc is configured otherwise).

I see your point; this is certainly an extra bit of info, since our hardware is not that powerful either and a software implementation might actually be faster.

It can still be a viable approach if the result is converted to an integer with much lower precision. It is a bet, though, since we cannot be absolutely sure that will be the case.

Al

Reply to
alb

What this all boils down to is that the client wants to be sure that the code is "correct" - but has an unreasonable and restrictive definition of "correct". It would be better to make this clear /now/, before you end up contracted to produce code that is impossible to make (since it must match the bit-exact results from matlab on "a standard PC", even though these results could vary between different "standard PC's").

Figure out what the client /actually/ needs, such as a variation of +/-3 on the 10-bit PWM output, and tell them that this is a sensible goal.

Reply to
David Brown

On February 13th, 2015, David Brown explained:

> What this all boils down to is that the client wants to be sure that the
> code is "correct" - but has an unreasonable and restrictive definition
> of "correct". It would be better to make this clear /now/, before you
> end up contracted to produce code that is impossible to make (since it
> must match the bit-exact results from matlab on "a standard PC", even
> though these results could vary between different "standard PC's").
>
> Figure out what the client /actually/ needs, such as a variation of +/-3
> on the 10-bit PWM output, and tell them that this is a sensible goal.

This reminds me of Chapter 97, "Your Customers Do Not Mean What They Say", in Kevlin Henney (editor), "97 Things Every Programmer Should Know: Collective Wisdom from the Experts", O'Reilly Media, 2010,

formatting link

With best regards, Colin Paul de Gloucester

Reply to
Colin Paul de Gloucester

It technically may be compliant, but not necessarily in hardware.

I'm not aware of *any* implementation that's fully 754/854 compliant in hardware. Every chip I have ever seen leaves most rounding modes and NaN handling to software/firmware.

George.

Reply to
George Neuner

(snip)

Are you sure the calculation should be done in floating point? Without more details, I suspect it could be done in fixed point; the results would not vary between implementations, and everyone would be happy.

(There is only one question left, and that is the way division works when one operand is negative.)

Small rounding differences can easily lead to differences when converted to fixed point.

I forget now exactly what "full IEEE754" covers, but denormals are one thing you have to watch. Personally, I think denormals were a bad idea, though. (snip)

Or use fixed point.
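For what it's worth, a minimal fixed-point sketch (Q16.16 format; the helper names are mine) - the point being that these results are bit-exact on any two's-complement machine:

  #include <stdint.h>
  #include <stdio.h>

  typedef int32_t q16_16;            /* 16 integer bits, 16 fraction bits */
  #define Q_ONE (1 << 16)

  static q16_16 q_mul(q16_16 a, q16_16 b)
  {
      return (q16_16)(((int64_t)a * b) >> 16);
  }

  static q16_16 q_div(q16_16 a, q16_16 b)
  {
      return (q16_16)(((int64_t)a << 16) / b);
  }

  int main(void)
  {
      q16_16 x = (q16_16)(1.5  * Q_ONE);
      q16_16 y = (q16_16)(0.25 * Q_ONE);
      printf("%f %f\n", q_mul(x, y) / (double)Q_ONE,    /* 0.375 */
                        q_div(x, y) / (double)Q_ONE);   /* 6.0 */
      return 0;
  }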

-- glen

Reply to
glen herrmannsfeldt

Sometimes the choice to do something using an FPU is made not because it cannot be done using the integer unit, but simply because one needs the horsepower the FPU adds to the chip. That was the case with me on my netMCA-3 design - the FPU did make things a bit more complicated, but using it was the only way to do what I did. Apart from the sheer ops per clock cycle, it (being 64-bit of course; 32-bit FPUs are pretty much useless) gave me the dynamic range I needed (32-bit integers would not have sufficed and multiprecision would have slowed things down beyond any practical usability).

Why is that? What pitfalls do you envisage? "Normally" one would divide the absolute values and just XOR the sign bits after that; you must have something more than that in mind.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
Dimiter_Popoff

That would be those regarding the expected result of such operations.

To name but the simplest case, there are opposing, but roughly equally valid lines of reasoning about what the result of (-5)/3 should be: -2 or -1. One keeps the remainder positive and yields a curve for x/3 without a "kink" as x passes zero. The other allows independent treatment of the operands' signs, i.e. it guarantees (-a)/b = -(a/b).
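In C (since C99), division truncates toward zero; the flooring behaviour has to be built by hand. A small sketch (the helper name is mine):

  #include <stdio.h>

  /* Flooring division: the remainder takes the sign of the divisor. */
  static int floor_div(int a, int b)
  {
      int q = a / b;
      if ((a % b != 0) && ((a < 0) != (b < 0)))
          q -= 1;
      return q;
  }

  int main(void)
  {
      printf("truncating: %d rem %d\n", (-5) / 3, (-5) % 3);   /* -1 rem -2 */
      printf("flooring  : %d rem %d\n", floor_div(-5, 3),
                                        -5 - 3 * floor_div(-5, 3));  /* -2 rem 1 */
      return 0;
  }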

Negative divisors have a similar issue, with different arguments being brought forth.

As is to be expected in such a case, not just MCUs, but also higher-level programming languages disagree on these issues. Some language definitions opted not to specify them at all.

This means that the same calculation, even though it uses only basic operations on integers, can easily yield different results just by being run in different programming languages. And sometimes just picking a different toolchain for the same language will change the outcome ... or a compiler option, or a change to the execution environment at runtime, by other code, will.

For floating point operations, these same issues exist under the topic name of "rounding modes", and the applicable standards (IEEE 754, IEEE 854 and IEC 60559) require support for several of them, plus the ability to switch between them at run-time. For integer division rounding we're usually not that lucky: the MCU vendor or HLL specification decides for us, and decides once and for all.
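A minimal C99 sketch of run-time mode switching via <fenv.h> (assuming the target defines FE_UPWARD and FE_DOWNWARD; strictly conforming code should also enable the FENV_ACCESS pragma, and gcc may want -frounding-math):

  #include <fenv.h>
  #include <stdio.h>

  int main(void)
  {
      volatile double a = 1.0, b = 3.0;
      double down, up;

      fesetround(FE_DOWNWARD);
      down = a / b;

      fesetround(FE_UPWARD);
      up = a / b;

      fesetround(FE_TONEAREST);        /* restore the default */

      printf("%d\n", down == up);      /* typically 0: the quotients differ by 1 ulp */
      return 0;
  }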

Reply to
Hans-Bernhard Bröker

(snip on validating floating point calculations, where someone wrote)

(snip, then I wrote)

Many science and engineering problems have a large dynamic range, and require results with a given relative error (uncertainty).

Lengths can range from nanometers to gigameters, times from picoseconds to gigayears, and masses from eV to the mass of large stars. One can measure the atomic spacing in a crystal lattice or the distance between planets with a relative uncertainty of about one part in a million. Floating point is great for this.

But there are calculations where the required uncertainty does not scale with the size of the problem. I expect a bank to keep my balance to the cent, if I have one dollar or billions of dollars in my account. (Well, I won't have billions of dollars, but a big corporation might.)

Note also that for such quantities the values never get really small or really big. There is no use in computing with picodollars or exadollars. The US IRS allows rounding to the nearest dollar, though there are a few cases where ratios are computed to a specified number of decimal places. These calculations are best done in fixed point.

Because of the demand from scientists, many computers do have faster floating point processors than fixed point, but often enough the difference isn't all that large. Within a reasonable range one can get exact results from floating point add, subtract, and multiply. You have to be a lot more careful with divide.

With appropriate values, add, subtract, and multiply will never round off the result, but divide can always do that.
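A small sketch of that, assuming IEEE 754 doubles (the values are mine):

  #include <stdio.h>

  int main(void)
  {
      /* Whole-number doubles within 53 bits: +, -, * stay exact. */
      double a = 123456789.0, b = 987654.0;
      printf("%d\n", (a + b) - b == a);            /* 1: exact */
      printf("%d\n", (a * b) / b == a);            /* 1: the product fits in 53 bits */

      /* Division generally rounds. */
      printf("%d\n", (1.0 / 49.0) * 49.0 == 1.0);  /* 0: the quotient was rounded */
      return 0;
  }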

That is the usual way, at least partly for historical reasons.

It is usual for fixed point divide to return a quotient and remainder. (Floating point pretty much never returns a remainder.)

In the case of a negative divisor, there are cases where one still wants a positive remainder.

Consider the simple case where one is keeping track of a date and time, with some epoch (time origin). It simplifies many calculations if you keep the time positive even when the date goes negative. If it is now 06:22 on the 15th day, then 30 days ago the clock will have read 06:22 on the -15th day, and not -17:38 on the -14th day. (No matter what the day, a clock never reads a negative time.)
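A small sketch of that clock example, using a floored modulo built by hand (the helper name and the minute-based encoding are mine):

  #include <stdio.h>

  /* Floored modulo: the result is always in [0, n) for n > 0. */
  static long floor_mod(long a, long n)
  {
      long r = a % n;
      return (r < 0) ? r + n : r;
  }

  int main(void)
  {
      const long MIN_PER_DAY = 24 * 60;
      long now  = 15L * MIN_PER_DAY + 6 * 60 + 22;  /* 06:22 on day 15 */
      long then = now - 30L * MIN_PER_DAY;          /* 30 days earlier */

      long tod = floor_mod(then, MIN_PER_DAY);      /* 382 minutes = 06:22, still positive */
      long day = (then - tod) / MIN_PER_DAY;        /* -15 */

      printf("day %ld, %02ld:%02ld\n", day, tod / 60, tod % 60);  /* day -15, 06:22 */
      return 0;
  }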

There are a fair number of problems in mathematics where one wants a modulo (remainder) operation with this property.

The early binary computers used sign magnitude representation, where the XOR of the two signs is applied to the quotient and remainder. That isn't so obvious for twos complement, but is commonly done.

Traditionally, C allowed either convention for the quotient and remainder in the case of a negative divisor, as long as the two were consistent. (That is, the / and % operators had to give mutually consistent results.)

Fortran, which originated on sign magnitude machines, requires the remainder sign to match the dividend sign. (I believe C has changed to require this, too.) When Fortran required one and C allowed both, it was obvious to hardware designers which way to go. Many mathematicians would disagree. (It is mostly in discrete mathematics where one wants an unsigned modulo.)

-- glen

Reply to
glen herrmannsfeldt

Hi David,

David Brown wrote: []

[]

Sound advice. Indeed, I underestimated the impact of this /restriction/, and since we are still in the conceptual phase it would be worth giving everyone a heads-up so that we don't fall short during testing.

We know the precision required on the current output (something like 0.1% FSR) and, given the current regulation response, I could derive the precision required at the output of the floating point calculation (with the float_to_int as the last operation).

This would indeed give us a reasonable target for the verification and avoid unattainable objectives.

Reply to
alb

Hi Paul,

Colin Paul de Gloucester wrote: []

Hey thanks for the pointer, I love those kinds of books and I'm already on it! Not that it will help me solve my problems with the customer but it's a good start ;-)

Al

Reply to
alb

The sooner you get this sorted, the better. It is usual for there to be a certain amount of re-negotiation of specifications during a project's lifetime (since few customers really know /exactly/ what they need at the beginning, and few developers know /exactly/ what they can deliver). But it is never a good idea to start out agreeing to something you know is impossible!

That sounds like a good plan. It will also ensure that your hardware meets your goals (hint - 10-bit PWM is /not/ good enough for 0.1% precision: 2^10 = 1024 steps means one LSB is already roughly 0.1% of full scale, and the LSB in any signal is always noise).

Reply to
David Brown

Hi David,

David Brown wrote: []

Couldn't agree more, we've spent nearly 4 months on spec clarifications and we're not done yet!

Sometimes a little bit of noise can help. Indeed, the noise we get from the current reading is injected into the regulation loop and may help us reach the precision wanted. If precision is defined as the RMS of the output current, my 1-2 LSBs of noise can provide a Gaussian shape whose RMS (or sigma, or stdev, or FWHM) is exactly my 0.1%. I could have a 1-bit output and attain the required precision if the switching frequency is high enough.

This is a bit OT w.r.t. the current thread and is still an open question on my side. I'll post something one day on this other subject, but I'm not yet there.

Al

Reply to
alb

On February 16th, 2015, Al posted:

> Colin Paul de Gloucester wrote:
> []
> > This reminds me of Chapter 97, "Your Customers Do Not Mean What They
> > Say", in Kevlin Henney (editor), "97 Things Every Programmer Should
> > Know: Collective Wisdom from the Experts", O'Reilly Media, 2010,
> > formatting link
>
> Hey thanks for the pointer, I love those kinds of books and I'm already
> on it!

Hi Al,

You are welcome.

> Not that it will help me solve my problems with the customer but
> it's a good start ;-)
>
> Al

You can't have everything :)

Regards, Colin Paul

Reply to
Colin Paul de Gloucester

At that point the spec needs to be a limit on the error of the average signal over a specified time period, likely also with a limit on peak-to-peak ripple on the signal (remember, your PWM is likely to generate some ripple on the signal unless something on the other end is just decoding the PWM).

Note also that the ripple provides an effective upper limit on the possible precision of the results (which improves as the time period for the measurement gets longer).

And yes, noise in the output can get you past the quantization noise limit, as long as you can average over a long enough period of time. Years ago I built systems that measured values to much higher precision than the native ADC resolution by time averaging, sometimes needing to add a tiny bit of noise to make it work. You did need to make sure the system was stable enough that the value didn't change over the measurement period.
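A toy simulation of the idea (the signal value, quantizer width and sample count are all made up):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      const double true_value = 0.4937;   /* sits between two quantizer steps */
      const int levels  = 64;             /* pretend 6-bit quantizer */
      const int samples = 100000;
      double sum = 0.0;

      srand(1);
      for (int i = 0; i < samples; i++) {
          /* uniform dither of +/- half an LSB before quantizing */
          double dither = ((double)rand() / RAND_MAX - 0.5) / levels;
          int code = (int)((true_value + dither) * levels + 0.5);
          sum += (double)code / levels;
      }

      printf("one LSB          : %.5f\n", 1.0 / levels);               /* ~0.01563 */
      printf("averaged estimate: %.5f (true %.4f)\n", sum / samples, true_value);
      return 0;
  }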

Reply to
Richard Damon

Hi Paul,

Colin Paul de Gloucester wrote: []

I did not know alpine could quote in such a 'fancy' way! Do you use anything on top of it? What about double quoting?

Al

Reply to
alb
