TILE64 embedded multicore processors - Guy Macon

In article , rjf writes:
|> The idea of a NaN is that operations on it should generally not
|> produce a normal result which is, however, incorrect. It does require
|> a different view of comparison, e.g. not(a>b) may not be the same
|> logical value as a<=b, and a>b and a<b may both be false.
|>
|> Do you have a specific example where a normal operation leads to what
|> you refer to as both "apparently correct" and "meaningless"?

Boggle. Anything where the sign of a NaN is interpreted, including the various sign functions, for a start. Even the latest draft hasn't changed the situation where the sign of a NaN result is unspecified.

And, of course, 1/(A-B) = -1/(B-A), though that one involves infinities.
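
Both failures are easy to demonstrate. A minimal C sketch (illustrative only; assumes C99 and round-to-nearest):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double a = 1.0, b = 1.0, zero = 0.0;

        /* a-b and b-a both round to +0.0, so 1/(a-b) is +inf while
           -1/(b-a) is -inf: the identity fails. */
        printf("1/(a-b)  = %g\n", 1.0 / (a - b));
        printf("-1/(b-a) = %g\n", -1.0 / (b - a));

        /* The sign bit of a generated NaN is unspecified, so any code
           that inspects it (e.g. a sign() function) is unportable. */
        double nan_val = zero / zero;
        printf("signbit(NaN) = %d\n", signbit(nan_val) != 0);
        return 0;
    }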

|> Doing correct range reduction was pointless? Why would that be?

That sentence is propaganda. I dispute that it is any more correct than many other options. It is pointless because it makes the false assumption that the input is perfectly accurate, and that leads to the error that is traditionally called "delusions of accuracy".

|> To say that a program, to be free of bugs, must run on all |> architectures, is a peculiar demand unless it is part of the |> requirement of the program. It would probably make the program longer, |> slower, and more likely to have bugs.

That, again, shows your limited experience. Those of us who attempted it found that it often made the program shorter, by forcing us to do the job to a higher standard, and usually REDUCED the number of bugs for the same reason. Yes, of course, we failed to achieve perfect portability, because the perversions of architectures and compilers are legion, but the exercise improved the quality of the program.

Something that you have forgotten is that there is also portability over time. Such a program will keep working as the compilers and hardware change around it, whereas your non-portable code will not. For example, the NAG library got a considerable boost with the demise of the System/370 and the rise of IEEE 754, as most of it "just worked". Many of its competitors stopped working, and the effort required to fix them was prohibitive.

You may expect the same to recur as decimal floating-point appears.

|> > Please don't be ridiculous. Firstly, neither Fortran nor C/C++ promise
|> > reproducible results,
|>
|> Huh? Except if the program calls random(), the results should be
|> reproducible. Maybe you mean compatibility across machines? Maybe we
|> should be writing in Java?

I am surprised that you know so little about those languages. In neither of them is the order of evaluation of operands or arguments specified, and those evaluations can call functions, which may in turn

|> There are situations in which setting the optimization level changes
|> the answers. We hope these situations do not occur in LAPACK, at
|> least in any significant respect.

The authors of LAPACK were and are aware that they do, at least at the rounding level, as modified by the numerical stability.

|> > secondly, LAPACK delivers reliable results on all
|> > sane architectures and, thirdly, it is NOT critical for parallel
|> > versions and sparse problems. I have just spent a decade supporting
|> > a major one, where none of the systems had compatible arithmetic,
|> > and only the buggy programs (see above) had trouble.
|>
|> Perhaps you can write a paper about this, describing your experiences
|> in detail and justifying your opinion.

I can. I don't have time, and experience indicates that few people are listening. I suggest that you investigate NAG software to see how it can be done on a large scale.

|> I'm sure that people working on new programs would like to hear you
|> explain how the arithmetic doesn't matter; perhaps you could be more
|> concrete and define "sane architecture". There have been attempts in
|> the past (by, for example, W.S. Brown and D.E. Knuth), and yet they
|> were found wanting.

Perhaps you should justify your opinion. I know of no paper that validly refutes Brown's main points.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

In article , snipped-for-privacy@cus.cam.ac.uk (Nick Maclaren) writes:
|> I am surprised that you know so little about those languages. In
|> neither of them is the order of evaluation of operands or arguments
|> specified, and those evaluations can call functions, which may in turn

Oops.

and those evaluations can call functions, which may in turn do anything that a function may do.
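
The point is easy to demonstrate. A minimal C sketch (illustrative only): the two operands of + share state through a file-scope variable, and the language does not say which operand is evaluated first.

    #include <stdio.h>

    static double t;  /* shared state touched by both operands */

    static double f(void) { t = 1.0; return 1.0; }
    static double g(void) { return t; }

    int main(void)
    {
        t = 0.0;
        /* C and C++ leave the evaluation order of f() and g()
           unspecified, so this can print 1 or 2 depending on the
           compiler and, often, the optimization level. */
        printf("%g\n", f() + g());
        return 0;
    }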

Regards, Nick Maclaren.

Reply to
Nick Maclaren

10^300 is (approximately) congruent to 4.543371 modulo 2*pi.

(Is that correct?)

Therefore sin(10^300) is approximately equal to -0.98575

As for the second expression,

(1+x)^(1/x) ~ e * (1 - x/2 + 11*x^2/24)
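
Both figures can be checked mechanically with arbitrary precision. A sketch using GNU MPFR (assuming the library is available; 1200 bits of working precision is ample, since 10^300 needs fewer than 1000 bits to store exactly):

    #include <stdio.h>
    #include <mpfr.h>

    int main(void)
    {
        mpfr_t x, two_pi, r;
        mpfr_inits2(1200, x, two_pi, r, (mpfr_ptr) 0);

        mpfr_set_ui(x, 10, MPFR_RNDN);
        mpfr_pow_ui(x, x, 300, MPFR_RNDN);      /* x = 10^300, exact */

        mpfr_const_pi(two_pi, MPFR_RNDN);
        mpfr_mul_ui(two_pi, two_pi, 2, MPFR_RNDN);

        mpfr_fmod(r, x, two_pi, MPFR_RNDN);     /* compare with 4.543371 */
        mpfr_printf("10^300 mod 2*pi ~ %.6Rf\n", r);

        mpfr_sin(r, x, MPFR_RNDN);              /* compare with -0.98575 */
        mpfr_printf("sin(10^300)     ~ %.6Rf\n", r);

        mpfr_clears(x, two_pi, r, (mpfr_ptr) 0);
        return 0;
    }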

Reply to
Spoon

In article , Spoon writes:
|> > There are few, if any, classic IEEE 754 floating point libraries.
|> >
|> > No, I am not joking. Almost all libraries use a hacked IEEE 754
|> > model, because the full one is such a pain, and doesn't make a
|> > great deal of sense for many functions.
|> >
|> > And the answer is "it depends". It can be negligible, or it can
|> > be huge, often depending critically on how accurately you want to
|> > calculate the values at the extreme ranges. sin(1.0e300) or
|> > pow(1+1.0e-15,1.0e15), for example.
|>
|> 10^300 is (approximately) congruent to 4.543371 modulo 2*pi.
|>
|> (Is that correct?)

Dunno. The problem there is that writing code to handle all possible input values is foully complex, and would be meaningful only if the input values were perfectly accurate, which they aren't.

|> As for the second expression,
|>
|> (1+x)^(1/x) ~ e * (1 - x/2 + 11*x^2/24)

Yes, but so? Writing code to handle all such cases is a real pain (I have done it), needs a LOT of work to check, and it is VERY easy for errors to slip through.
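
One reason such heroics buy little: in IEEE double, 1+1.0e-15 cannot even be stored exactly, and the roughly 11% rounding of the added 1.0e-15 becomes a comparable error in the result before pow runs at all. A minimal C illustration (a sketch, not code from the thread):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1e-15;
        double base = 1.0 + x;   /* rounds to 1 + 5 ulps ~ 1 + 1.11e-15 */

        /* The fraction stored inside base is ~11% larger than x, and
           (1+d)^(1/x) ~ exp(d/x), so the result is off by about
           exp(0.11) - 1, i.e. ~12%, even if pow is correctly rounded. */
        printf("pow(1+x, 1/x) = %.17g\n", pow(base, 1.0 / x));
        printf("e             = %.17g\n", exp(1.0));
        return 0;
    }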

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Assuming you are the author of a scientific subroutine library implementing "sine", with no information other than the floating-point argument, how could you possibly know if the input value was perfectly accurate, good to 3 decimal digits, or what? Would you say that 0.5 is any number between 0.0 and 1.0? Or between 0.45 and 0.55? Don't laugh -- Mathematica does something like this if you write out enough, but not too many, digits. And it gets its underwear tied in knots as a consequence.

If you assume inputs are perfectly accurate to the represented precision, then the other possibilities can be pursued by relatively simple programs.

Reply to
rjf

In article , rjf writes:
|> > Dunno. The problem there is that writing code to handle all possible
|> > input values is foully complex, and would be meaningful only if the
|> > input values were perfectly accurate, which they aren't.
|>
|> Assuming you are the author of a scientific subroutine library
|> implementing "sine", with no information other than the floating-point
|> argument, how could you possibly know if the input value was perfectly
|> accurate, good to 3 decimal digits, or what?

If you are working with normal, hardware floating-point numbers, you know the precision of the input representation. See below.

|> Would you say that 0.5 is any number between 0.0 and 1.0? Or between
|> 0.45 and 0.55?
|>
|> Don't laugh -- Mathematica does something like this if you write out
|> enough, but not too many, digits. And it gets its underwear tied in
|> knots as a consequence.

It's an ancient convention, and I first saw it in the 1960s in some Autocode systems. I agree that it never did work very well. There were many variants, such as using leading and trailing zeroes to indicate precision.

0.5 is a bad example, as it is exact in both decimal and binary; 0.1 is a better one.
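
The difference is visible directly (a trivial C illustration):

    #include <stdio.h>

    int main(void)
    {
        printf("%.20f\n", 0.5);  /* exact: 0.50000000000000000000 */
        printf("%.20f\n", 0.1);  /* nearest double: 0.10000000000000000555 */
        return 0;
    }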

|> If you assume inputs are perfectly accurate to the represented
|> precision, then the other possibilities can be pursued by relatively
|> simple programs.

As you can if you assume that inputs are accurate to at most 0.5 ULPs. That is the traditional assumption, and it has been shown to be an excellent compromise between implementability and usability, over a period of 40+ years.

I have never seen a sane program that depended on more than that, for any operations beyond addition, subtraction, multiplication, square root, integer part, and division by a power of two. I would not go so far as to say that none have ever been written, but I have never seen any evidence for their claimed existence.
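
For reference, the ulp, and hence the 0.5-ulp input assumption, at any given double can be computed directly (a minimal C sketch):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0;
        double ulp = nextafter(x, INFINITY) - x;   /* 2^-52 at x = 1.0 */

        printf("ulp(1.0)     = %g\n", ulp);        /* ~2.22e-16 */
        printf("0.5 ulp(1.0) = %g\n", ulp / 2.0);  /* ~1.11e-16 */
        return 0;
    }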

Regards, Nick Maclaren.

Reply to
Nick Maclaren

You don't know that, of course.

So, the reasonable assumption is that the inputs are accurate to the limit of their precision, i.e. a 53-bit mantissa for IEEE double.

This means that for inputs greater than about 2*pi*2^53, there will be _zero_ significant bits left behind after range reduction, in which case any number in the [-1.0..1.0] range would be OK.

In fact, there are several good arguments for random rounding, so maybe any random number in the same range should be returned. That would quickly show up as errors in a program that depends on the return value from sin(1e300) at least.
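
The spacing of doubles near 1e300 makes the point concrete (a C sketch, assuming a libm whose sin does full range reduction): adjacent representable inputs differ by roughly 1.5e284, which is about 2e283 full periods of sine, so their sines are effectively unrelated.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1e300;
        double y = nextafter(x, INFINITY);  /* the next double up */

        printf("ulp near 1e300 = %g\n", y - x);   /* ~1.5e284 */
        printf("sin(x) = %.17g\n", sin(x));
        printf("sin(y) = %.17g\n", sin(y));       /* unrelated to sin(x) */
        return 0;
    }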

Right, it seems we agree on this part.

Terje

--
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

That too.

If you compute x>0? you will get one result. If you ask sign(x)=1? you are assuming x is not a NaN. If you ask sign(NaN), then it seems to me that your program is in error. So what you are saying is that the programmer can still write erroneous programs, and that this is a problem with the standard. To make such a demand uniformly is unlikely to reach closure. In this particular matter, eh..

Not all mathematical identities are preserved computationally. Not even A+B+C = C+B+A. We know that.
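
A concrete instance in C doubles with round-to-nearest (illustrative only): with a = 1e16 and b = c = 1, the two summation orders disagree, because 1e16 + 1 is not representable and the tie rounds to the even neighbour.

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0e16, b = 1.0, c = 1.0;

        /* 1e16 + 1 is halfway between the doubles 1e16 and 1e16 + 2;
           round-to-nearest-even sends it back to 1e16, so the two
           orders differ by 2. */
        printf("(a+b)+c = %.17g\n", (a + b) + c);
        printf("(c+b)+a = %.17g\n", (c + b) + a);
        return 0;
    }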

Making the assumption that all numbers are fuzz-balls is, in my view, pointless (hm, a pun comes out of the woodwork). You can, if you wish, build a fuzzy 'significance arithmetic' system, or an interval arithmetic system with tools based on exact interpretation. It is much harder to build an exact system on fuzz-balls.

Making a program work on 2 machines or compilers is, I agree, useful as a check. Making it work on 7 is not. Efforts like PFORT show that others share this view.

"Reproducible" to me means that if you run the program on Monday, Tuesday, and Wednesday, the answers come out the same. Debugging a programming system in which results are not reproducible is quite difficult, though it sometimes happens in real-time systems. I wouldn't expect it to happen in scientific software much. Maybe asynchronously running parallel systems would exhibit such problems? Is that what you are talking about? If so, I would still expect such systems to compute sin(10^100) the same.

I hope that even fewer people are reading comp.arch. Though who knows.

I suggest you look at the work of John Harrison regarding proving properties of programs, and see to what extent they could be done with Brown's definitions. As I recall, Brown attempted to describe a host of different architectures, including some that were notoriously inconsiderate, under one umbrella, rather than synthesizing an especially good one. I think it is unreasonable to view his work as prescriptive.

You say you have spent a decade supporting a "major one" -- what is that? A major installation of LAPACK?

RJF

Reply to
rjf

This is getting very boring. I shall stop after this posting.

|> If you ask sign(NaN), then it seems to me that your program is in
|> error. So what you are saying is that the programmer can still write
|> erroneous programs, and that this is a problem with the standard. To
|> make such a demand uniformly is unlikely to reach closure. In this
|> particular matter, eh..

What I am objecting to is that IEEE 754R actually FORBIDS treating that error as an error. Yes, of course sign(NaN) should raise invalid.

|> Making the assumption that all numbers are fuzz-balls is, in my view,
|> pointless (hm, a pun comes out of the woodwork). You can, if you wish,
|> build a fuzzy 'significance arithmetic' system, or an interval
|> arithmetic system with tools based on exact interpretation. It is
|> much harder to build an exact system on fuzz-balls.

That is true. However, it is a vastly more realistic model. And, when you start to introduce general parallelism, you have to abandon the unrealistic target of 'an exact system'.

|> Making a program work on 2 machines or compilers is, I agree, useful
|> as a check. Making it work on 7 is not. Efforts like PFORT show that
|> others share this view.

(a) That is a ridiculous claim about PFORT. You are also clearly not familiar with Toolpack and NAGWare.

(b) And those of us who have done it for a LOT more than 7 systems know that you are wrong.

|> "Reproducible" to me means that if you run the program on Monday, |> Tuesday, and Wednesday, |> the answers come out the same. Debugging a programming system in |> which results are not reproducible is quite difficult, though it |> sometimes happens in real-time systems. I wouldn't expect it to |> happen in scientific software much.

Even with that restrictive interpretation, it happens in scientific software far more than you expect.

|> Maybe asynchronously running parallel systems would exhibit such
|> problems? Is that what you are talking about? If so, I would still
|> expect such systems to compute sin(10^100) the same.

All realistic parallel systems, which include most modern 'serial' ones, are asynchronous, at least to a visible extent.

|> You say you have spent a decade supporting a "major one" -- what is
|> that? A major installation of LAPACK?

Don't be ridiculous. A major supercomputing system, with a lot of active users. There is also a fair amount of my code in NAG, and I have been or am involved in the Fortran, C, C++, and other standards processes.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I disagree. The programmer had to choose from just 2 or 3 floating point types, and so their choice tells you nothing at all about their problem or their data.

Using the written precision of the function call as a spec seems to me to be something of a lazy option by language committees. Library authors get landed with the hardest requirement possible and everyone pays for it at run-time (either in time or space).

For a reasonable assumption, consider two things.

1. You know that no real measurements have ever been made to anything
remotely close to 53-bit precision. The need for accuracy (in *any*
function) is therefore driven by the numerical stability of the calling
program. This is, of course, unknown, and so we have to guess what
identities and re-arrangements people will depend on.

2. You know that anyone feeding you an angle that corresponds to more
than a few hundred rotations would probably learn more from a program
failure than from quietly returning an answer.

So sin(1e300) is not debatable at all. Any program making that call has already gone wrong. Returning a value is about as helpful as quietly returning -2 when asked for sqrt(-4). Both smack of "Yes, it's rubbish, but let's not be the subroutine that actually crashes the student's program."

On the other hand, folks probably do want sin(x+dx)-sin(x) to have the right sign for sane x and very small dx, and 53-bit accuracy is probably as easy a way of ensuring that as any other.

Reply to
Ken Hagan

In article , "Ken Hagan" writes: |> On Wed, 07 Nov 2007 06:53:03 -0000, Terje Mathisen |> wrote: |> |> > So, the reasonable assumption is that the inputs are accurate to the |> > limit of their precision, i.e. a 53-bit mantissa for IEEE double. |> |> I disagree. The programmer had to choose from just 2 or 3 floating |> point types, and so their choice tells you nothing at all about |> their problem or their data.

I think that you are at cross-purposes. What I think that Terje means is that you should assume that the inputs are accurate to AT MOST that, and so not waste effort trying to deliver results that would be meaningful only if the input was perfectly accurate.

|> So sin(1e300) is not debatable at all. Any program making that call
|> has already gone wrong. Returning a value is about as helpful as
|> quietly returning -2 when asked for sqrt(-4). Both smack of "Yes,
|> it's rubbish, but let's not be the subroutine that actually crashes
|> the student's program."

Grrk. Yes and no. With some kinds of modelling, you often need to calculate things like exp(-x)*sin(x) for 'ridiculously large' values of x. The answer is, of course, zero as near as dammit. But it doesn't matter whether sin(1.0e300) returns -1, 0, +1 or a random number between -1 and 1.
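
A trivial C illustration of that situation (a sketch): once exp(-x) underflows, the value sin returns for the huge argument cannot affect the product.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0e5;
        /* exp(-1e5) underflows to +0.0, so the product is zero no
           matter what sin(1e5) returns. */
        printf("exp(-x)*sin(x) = %g\n", exp(-x) * sin(x));
        return 0;
    }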

sqrt(-4) is just plain wrong, except in a language where that can return a complex result.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

... snip ...

No, it means that the value may be perfectly accurate, but may have an error as large as pi. Thus any number in -1.0..1.0 is a reasonable answer, but the minimum error possibility is handled by assuming the input to be exact.

There are always at least two numbers attached to any experimentally derived value - the value itself, and the expected error.

--
 Chuck F (cbfalconer at maineline dot net)
   
Reply to
CBFalconer

Hmm... the g-factor of the electron has been measured with a relative accuracy of about 10^(-14). That's about 46 bits to me.

Jan

Reply to
Jan Vorbrüggen

In article , Jan Vorbrüggen writes:
|> > 1. You know that no real measurements have ever been made to
|> > anything remotely close to 53-bit precision.
|>
|> Hmm... the g-factor of the electron has been measured with a relative
|> accuracy of about 10^(-14). That's about 46 bits to me.

There are also some real quantities that arise out of pure mathematics, and therefore can be calculated to arbitrary precision. They don't, of course, arise from measurement, but are used in calculations.

I have occasionally calculated formulae containing only such numbers to high precisions, to check that some equality holds. That can save a lot of time that would otherwise be wasted trying to prove something that isn't so.

But, frankly, such uses are esoteric.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

[link]

"The electron spin g-factor, gS (more often called simply the electron g-factor, ge) is roughly equal to two, and is known to extraordinary accuracy."

cf.

[link]

"The electron's magnetic moment has recently been measured to an accuracy of 7.6 parts in 10^13 (Odom et al. 2006)."

7.6 parts in 10^13 would be approximately 40 bits.

;-)
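
As a check on the arithmetic: a relative accuracy eps corresponds to about log2(1/eps) bits, and log2(10^13/7.6) = 13*log2(10) - log2(7.6) ~ 43.2 - 2.9 ~ 40.3 bits.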

Reply to
Spoon

This is the most accurately measured physical quantity ever, and validates the core prediction of Feynman's quantum electrodynamics (QED), which is why QED is the jewel in the crown of modern science, and why Feynman, not Einstein, should be considered the preeminent physics genius of the 20th century. Just IMO, of course.
Reply to
Clifford Heath

Yes, yes, yes. "Silent failure is never the right default".

A tolerant approach to robustness, with the thought that the result will be 99.9% reliable, is why so much modern software is unreliable. People point at MS, but Unix software has this philosophy deeply ingrained as well, for as long as I've been familiar with it (i.e., since v7 Unix in 1979 or so).

Reply to
Clifford Heath

That is an interesting question... In the physical world, to what degree of accuracy has any physical quantity that defines the value of PI been measured? People calculate PI to thousands of digits, yet to what extent is the value of PI verified physically? Clearly, measuring a circle's circumference and diameter isn't going to get you much. I guess astronomical measurements such as orbits, or perhaps some electronic measurements, would be the most precise. So how much verification do they provide?

del

Reply to
Del Cecchi

I don't know about 1e300, but if I am running a circuit simulation with an input sine wave that goes on for a bunch of cycles, say while a phase-locked loop initializes, I would be very upset if some doofus programmer decided that sin(10000) was somehow improper and deserved to crash. Likewise if I am simulating at a particular point in time using a mixed-mode simulator and happen to be a few million cycles into it.

Nope, but how about a right answer?

Reply to
Del Cecchi

Why would that require anything more than sin(2*PI)?
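
One way to read that: the simulator can carry its own phase accumulator, wrapped into [0, 2*pi) at every step, so sin never sees a huge argument. A hypothetical C sketch (illustrative; a real simulator would also track the cycle count, since the inexact 2*pi makes the wrapped phase drift slowly):

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        double phase  = 0.0;
        double dphase = 2.0 * M_PI / 100.0;   /* 100 steps per cycle */
        long   steps  = 100L * 1000000L;      /* a million cycles in */

        for (long i = 0; i < steps; i++) {
            phase += dphase;
            if (phase >= 2.0 * M_PI)
                phase -= 2.0 * M_PI;          /* keep phase in [0, 2*pi) */
        }
        printf("phase = %g, sin(phase) = %g\n", phase, sin(phase));
        return 0;
    }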

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett
