In article , rjf writes:
|>
|> y'think? Does your opinion of my opinion matter?
Probably not a lot, as I and people who believe like me have lost the politics. But damned if I am going to let the facts and history be rewritten.
While I cannot prove it, as I did not publish, I predicted that IEEE 754 would NOT be supported by programming languages when I first saw it, for good reasons. 20+ years on, I have been shown to be right.
|> I think there is
|> considerable empirical support for the view that PL designs leave out
|> access to features that are not supported by hardware and vice-versa.
At the basic arithmetic level, yes.
|> Scientific computing emphasizes speed, sometimes at the expense of
|> accuracy. On the other hand, most architects have very little
|> knowledge of, or sympathy for, the use of floating-point arithmetic.
|> Frankly, it's not a big seller, once you can put a check in the box
|> "IEEE floating-point format".
That has been true for at least the past 25 years.
|> perhaps check ..
|>
|> Fateman, Richard J. "High-Level Language Implications of the Proposed
|> IEEE Floating-Point Standard." ACM Transactions on Programming
|> Languages and Systems Vol. 4, No. 2 (1982).
Thank you for reminding me of that. I have looked at it again.
|> ...Richard Fateman, member of the (original) IEEE 754 (binary floating-
|> point) standards committee..
No comment.
Now back to why I said what I said, and why I predicted 20+ years back that IEEE 754 would not be taken up by programming languages, and was right. Currently, none supports it in full, and none even supports enough to make it possible to write portable, reliable code using the features. This will not change.
Denormalised numbers are an irrelevance, provided that an implementation either supports them properly or ALWAYS forces hard underflow (both pre- and post-operation). It is only mixing the two that causes trouble.
Infinities would be easy to support, if they were a pure overflow value (i.e. if 1/0 delivered a NaN). But, because of their specification, most languages avoid them (see later).
The first big killer is that IEEE 754 included only one trapping mode, trap-recover-and-resume. That is a gibbering nightmare to implement or even specify, as soon as you allow ANY optimisation or any form of parallelisation or asynchronicity at the language, compiler, operating system or hardware levels. The traditional and straightforward one, which is useful primarily for debugging, is trap-diagnose-and-terminate. LIA-1 gets that right, but came too late to recover the situation.
If you look at the Fortran standard carefully, you will see that it is impossible to specify trap-recover-and-resume. Dammit, in 40 years, Fortran hasn't even managed to find a way of specifying the behaviour of impure functions, despite it being a major known problem with the standard all that time. C and C++ came later, and the same remark applies to them.
If you have ever implemented language run-time systems with proper trap-recover-and-resume (and I have, several times), you will know what an evil task it is. Few modern programmers can even imagine how to start.
So we have to stick with the IEEE 754 default carry-on-regardless mode and rely on flags.
Well, let's start with them. Global flags (EVEN the limited global forms permitted in IEEE 754R) were a known disaster area well before 1980. To start with, they create havoc with optimisation, especially anything involving asynchronicity or parallelisation. And, boy, do they ever! The number of inconsistencies I have found on various systems is legion - and 95% of those have not been bugs.
But this also applies to specification. Fortran has ALWAYS permitted code movement, including function calls. The proponents of flags say that you can handle this by moving the flags with the code; this is hideously expensive for small code fragments, but let that pass. More importantly, it is not true. Fortran has NEVER required code to be executed if its value is not needed, and does not even specify its behaviour in terms of an abstract, serial von Neumann machine. Including useful flag setting is a nightmare.
Fortran 2003 merely PERMITS IEEE 754 flags to be set - it doesn't say anything about whether they must be. In fact, an implementation can conform by doing everything except setting them! C99 is much worse, but I am not starting on that here.
Now let's get onto signed zero and its effect on infinities. The problem with signed zeroes is that they destroy many mathematical invariants, and the fact that 1/+0 = +infinity while 1/-0 = -infinity means that a program can't simply treat -0 and +0 as interchangeable. Lots of optimisations become invalid, and users are expected to write to an arithmetic model that confuses the hell out of 99% of them. What's more, it encourages languages and implementations not to trap errors, such as 1/0 or sign(0).
But NaNs are the worst. While they LOOK simple, they aren't. They are easy to propagate only for simple arithmetic expressions; as soon as they go into a function or any non-trivial code, it must either be "NaN-aware" or it will fail. Take, for example, IFs and other conditionals. I tried writing some code using the IEEE 754 "NaNs compare false" convention (and I am a careful programmer with massive experience), and I couldn't. I have never met anyone who could, except just perhaps Kahan himself.
Now, this could easily be solved in a pure functional language, but let's get real. The only simple solution for conventional languages is an error exit on using NaNs where they make no sense. And that is what we don't have.
And so on. But let's stop here.
Regards, Nick Maclaren.