Ok, so how is that enforced? If you release a program linked with Qt, then you have released the sources as well and you can't unring the bell. But if you did not release the program, how are you prevented from using the commercial version with your code that was developed under an xGPL version?
Which doesn't alter Nobody's point - 3, 4 and 12 are all exactly representable in essentially all FP formats (I don't know of any FP format actually implemented where that's not true). So there's no good reason that dividing 12 by 4 should not yield exactly 3. Of course there have been systems sufficiently perverse to fail even that.
[I think that the original question was dividing 12 by 3 to produce 4, or not, which is a somewhat more interesting question.]
Of course there's a good reason: division is orders of magnitude slower than multiplication, and since the three on the denominator is a constant, the compiler can arrange to give you an answer with one of those faster multiplies. If you know that you care only about the integer part of the result then you can round to integer and still get exactly the right answer. Or you can use integer arithmetic in the first place. If you don't know that you only want integers then, in general, you don't know that your answer can be represented exactly as a ratio of an integer and a power of two, so you expect your answer only to be as precise as your floating point representation allows.
If you weren't particularly interested in speed, but were interested in exact arithmetic, then you'd use a language like Scheme or Lisp that by default has rational arithmetic.
One of the things that I have always found interesting about IEEE floating point, and its goals of eking out an extra LSB of precision here and there, is that it only really works that way in assembly language (and so essentially no one besides the writers of hand-tweaked numerical libraries sees those benefits). In most computer languages that have algebraic-looking expression syntax, one generally expects that the compiler will do what it can to produce the result
*faster*, and in the fewest steps, according to the rules of algebra.
I think the theory of floating point is that 4.0 is not exactly 4. It's a number that is closer to 4 than to 4.1 or 3.9. The same would apply to the value represented in a C program as `(float)4`, except that the containing limits would be much closer together. Floating-point numbers are not exact numbers, regardless of how an implementation might represent them.
But you're implicitly talking source code. By the time the hardware gets the numbers there's no way to tell that they weren't 11.9999999999999962 and 3.00000000000000003, or something similar, when they started out.
That is ALL that is known about the value. There may be other factors in the programming that say it is an integer, but that information isn't in the floating-point value.
--
[mail]: Chuck F (cbfalconer at maineline dot net)
If the project is developed in secret, it would be hard to get caught unless you do something silly like releasing a package with 100k lines of GUI code a week after obtaining the licence.
But most commercial software could easily use the LGPL version. Probably the biggest exception would be console games, where the console vendors normally require you to lock everything down.
If all the inputs are always 32-bit integers, and you're using double precision, it's often the case that those bits are *guaranteed* to be zero.
I frequently deal with code where the source data can be either integer or FP. It's often convenient to write one version of the algorithm using double-precision FP, with integer data converted to/from double on entry and exit, rather than maintaining a separate integer version.
That may be true if you adopt a vague definition of "floating-point", but IEEE-754 goes into some detail about the what, where, when and how of representation and rounding.
A particular C implementation isn't required to conform to IEEE-754, but most sane compilers will do so if it is remotely feasible, even to the point of offering the option of a (much slower) software implementation on platforms where the FPU doesn't conform down to the last bit.
In particular, each precision can represent a specific subset of the rational numbers. If the "true" result (using rational arithmetic) of any calculation is exactly representable in the specified precision, then it
*will* be exactly represented. And if it can't be represented exactly, the rounding mode specifies which of the two closest representable values is chosen.
For sure, there's plenty of code where this doesn't matter. But there's also plenty of code where it does matter; otherwise Intel wouldn't have had to replace a lot of the early Pentium CPUs over the FDIV bug.
Then clearly you have not talked to any other members of those organisations. I cannot and will not speak for any of the others, but there is a range of views in all those panels on many things.
You seem to have a quite limited view of all of this coupled with very strong opinions. Not a good combination.
--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here.
All logos and trade names are the property of their respective owners.