In article , rjf writes:
|> The idea of a NaN is that operations on it should generally not
|> produce a normal result which is, however, incorrect. It does require
|> a different view of comparison, e.g. not(a>b) may not be the same
|> logical value as a<=b, and a>b and a<=b may both be false.
|>
|> do you have a specific example where a normal operation leads to what
|> you refer to as both "apparently correct" and "meaningless"?
Boggle. Anything where the sign of a NaN is interpreted, including the various sign functions, for a start. Even the latest draft hasn't changed the situation where the sign of a NaN result is unspecified.
And, of course, 1/(A-B) = -1/(B-A), though that one involves infinities.
|> Doing correct range reduction was pointless? Why would that be?
That sentence is propaganda. I dispute that it is any more correct than many other options. It is pointless because it makes the false assumption that the input is perfectly accurate, and that leads to the error that is traditionally called "delusions of accuracy".
|> To say that a program, to be free of bugs, must run on all
|> architectures, is a peculiar demand unless it is part of the
|> requirement of the program. It would probably make the program longer,
|> slower, and more likely to have bugs.
That, again, shows your limited experience. Those of us who attempted it found that it often made the program shorter, by forcing us to do the job to a higher standard, and usually REDUCED the number of bugs for the same reason. Yes, of course, we failed to achieve perfect portability, because the perversions of architectures and compilers are legion, but the exercise improved the quality of the program.
Something that you have forgotten is that there is also portability over time. Such a program will keep working as the compilers and hardware change around it, whereas your non-portable code will not. For example, the NAG library got a considerable boost with the demise of the System/370 and the rise of IEEE 754, as most of it "just worked". Many of its competitors stopped working, and the effort required to fix them was prohibitive.
You may expect the same to recur as decimal floating-point appears.
|> > Please don't be ridiculous. Firstly, neither Fortran nor C/C++ promise
|> > reproducible results,
|>
|> Huh? Except if the program calls random(), the results should be
|> reproducible. Maybe you mean compatibility across machines? Maybe we
|> should be writing in Java?
I am surprised that you know so little about those languages. In neither of them is the order of evaluation of operands or arguments specified, and those evaluations can call functions, which may in turn have side effects.
|> There are situations in which setting the optimization level changes
|> the answers. We hope these situations do not occur in LAPACK, at
|> least in any significant respect.
The authors of LAPACK were and are aware that they do, at least at the level of rounding differences, moderated by the numerical stability of the algorithms.
|> > secondly, LAPACK delivers reliable results on all
|> > sane architectures and, thirdly, it is NOT critical for parallel versions
|> > and sparse problems. I have just spent a decade supporting a major one,
|> > where none of the systems had compatible arithmetic, and only the buggy
|> > programs (see above) had trouble.
|>
|> Perhaps you can write a paper about this, describing your experiences
|> in detail and justifying your opinion.
I can. I don't have time, and experience indicates that few people are listening. I suggest that you investigate NAG software to see how it can be done on a large scale.
|> I'm sure that people working on new programs would like to hear you
|> explain how the arithmetic doesn't matter; perhaps you could be more
|> concrete and define "sane architecture". There have been attempts in
|> the past (by, for example, W.S. Brown and D.E. Knuth) and yet they were
|> found wanting.
Perhaps you should justify your opinion. I know of no paper that validly refutes Brown's main points.
Regards, Nick Maclaren.