TILE64 embedded multicore processors - Guy Macon

Yes. Unless you actually read the documentation for your sin() function, where it states the allowable range, and documents the behaviour when the range is exceeded. And whether you find such documentation or not, you should construct an automated test that validates that the sin() function acts as expected for some subset (including the boundaries) of the range of inputs you will provide, and another test that shows that your code will fail correctly if sin() does misbehave in any of its documented ways.

IOW, don't make the reliability of your code depend on the whim of the author or maintainer of your math library. Take responsibility for the reliability of your own code.

Sometimes there is no right answer, so an error must be thrown unless explicitly configured to be ignored.

Clifford Heath.

Reply to
Clifford Heath

How about a unified theory of GR that successfully merges gravity, EM, SF, WF, refutes the Heisenberg uncertainty theory, proves that black holes are a physical impossibility and explains many experimentally observed nonlinear optical phenomena observed in experiments which cannot be explained using MH?

See aias.us and atomicprecision.com. Proofs of errors in the theory are welcome.

Reply to
Nutso OpenBSD User Dave

Sorry, I was talking about science.

Unless your theory makes predictions that contradict accepted theory and can be shown experimentally to match both those predictions and all other existing experimental data that is known to match the predictions of accepted theory, it's not yet even a theory, let alone science.

The onus is on you to demonstrate it, not on anyone else to disprove it.

Reply to
Clifford Heath

[you then snipped my argument that sin(1e300) could just as well return a random value]

Intel actually documents that input values with zero significant bits, i.e. outside the +/- 2^64 range (using long double), will trap.

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

Why do you have to compare them?

It seems a very (US) American thing to always need a winner, i.e. overtime, "sudden death" and the like in all sorts of competitions, instead of simply declaring a draw and being done with it. :-)

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

Astronomical measurements would have to assume that spacetime was flat, which we know (er, from astronomical measurements) isn't true. Your best bet is probably that magnetic moment, verified to 10 places, assuming the correctness of a theory where pi crops up in the formulae quite a lot. That also assumes flat spacetime, but that's a better approximation for regions of spacetime that are only a few microns wide.

Reply to
Ken Hagan

Yes. I had that one in mind. It's only 3 orders of magnitude removed but it's also an isolated case. At

formatting link
you'll find very little in the way of absolute values known to better than 10^(-10).

There are also sums of money that traditionally require 20 digits to run the accounts but (at the risk of making a snarky political comment) I'm not sure they are known to very great precision.

Reply to
Ken Hagan

In article , "Ken Hagan" writes:
|>
|> There are also sums of money that traditionally require 20 digits to
|> run the accounts but (at the risk of making a snarky political comment)
|> I'm not sure they are known to very great precision.

They are known to great precision but very low accuracy :-)

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Ah. I can't disagree with that. He also mentions Intel's spec for the underlying machine instruction, the accuracy of which appears to drop off with increasingly absurd argument values in a reasonable manner.

I suppose my point is that languages have traditionally made a stronger demand of library authors and I suspect this is simply because they can rather than because there is any demonstrated need. In the case of C89, it would have been a case of "somebody did once, and it became fairly common, at which point it was enshrined as the standard way to do it".

Conversely, they have failed to provide operations that would address a demonstrable need and failed to make such strong demands consistently across the library.

An example of the first is that sin(x) wants an argument in radians. I'd guess that roughly 50% of programs that call it have to convert from degrees to radians before doing so, and since there is no standard function for that purpose, many of them are probably losing precision in the process. (A tiny minority are probably using 22/7. :)

An example of the second is that we accept that we may have to write x > 1e64 ? 0 : sin(x) to avoid exceptions from a version of sin(x) that objects to meaninglessly large values of the angle.

There's also plenty of numerical code that does "if (x != 0) y = 1/x; else ..." so the whole maths library is aimed at people who (one hopes) realise the limitations of the arithmetic, so why does anyone seriously suggest that library functions should generate "the right answer" for all representable input?

Reply to
Ken Hagan

In article , "Ken Hagan" writes: |>

|> > I think that you are at cross-purposes. What I think that Terje means
|> > is that you should assume that the inputs are accurate to AT MOST
|> > that, and so not waste effort trying to deliver results that would be
|> > meaningful only if the input was perfectly accurate.
|>
|> Ah. I can't disagree with that. He also mentions Intel's spec for the
|> underlying machine instruction, the accuracy of which appears to drop
|> off with increasingly absurd argument values in a reasonable manner.
|>
|> I suppose my point is that languages have traditionally made a stronger
|> demand of library authors and I suspect this is simply because they can
|> rather than because there is any demonstrated need. In the case of C89,
|> it would have been a case of "somebody did once, and it became fairly
|> common, at which point it was enshrined as the standard way to do it".

Eh? None of Fortran, C89 or C99 demand any sort of accuracy, nor even that the compiler documents what it is. I believe that Ada 95 does so, and is the only modern portable language with any significant following to do so. C99 and Fortran 2003 have some sort of IEEE 754 support, but I have ranted on the deficiencies of that often enough.

What C89 and (especially) C99 do that Fortran, and most other languages don't, is to explicitly forbid a compiler/library from raising an exception or even diagnosing a problem when its code is about to go wrong.

|> An example of the first is that sin(x) wants an argument in radians.
|> I'd guess that roughly 50% of programs that call it have to convert
|> from degrees to radians before doing so, and since there is no standard
|> function for that purpose, many of them are probably losing precision
|> in the process. (A tiny minority are probably using 22/7. :)

A rather larger number are using 3.14159 - in double precision :-(

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Consider the problem of evaluating (Theta mod PI). Remember that PI is transcendental, and has no exact value available in a finite series of digits. Once the possible error reaches +/-PI, the resulting value of the sine can fall anywhere between -1 and 1.

--
 Chuck F (cbfalconer at maineline dot net)
   
   Try the download section.
Reply to
CBFalconer

... snip ...

355/113 gives 7 digit accuracy!
--
 Chuck F (cbfalconer at maineline dot net)
   
   Try the download section.
Reply to
CBFalconer

It's not my theory. It's Dr. Myron Evans' Theory.

There are about 100 papers there that develop the theory. There also is a paper that contrasts ECE with standard GR. Dr Evans has about 700 published and refereed papers, most available at atomicprecision.

The theory is amply demonstrated in all the papers and 3 published books, with probably 3 more books in the works.

Reply to
Nutso OpenBSD User Dave

Here, the point is: how about the right question?

The proper function to use in most engineering (cycle simulation, graphics etc.) calculations is sinpi(), not sin(). In other words, only use rational parts of a cycle, so the circle closes exactly. It is only when doing mathematical analysis that sin(radians) is sensible. The trouble is historical -- sinpi() may be too new.

As to why sin() should return the right answer, which in this context means interpreting input values as exact (in the actual format used, e.g. double, and not a pre-conversion value like 1e300 (unless DFP is being used, of course)), it is for reproducibility. A vendor of a math library (or a machine instruction) wants to be able to improve the implementation without changing the results. The only sensible way to do this is to produce the correctly-rounded result, though the option to limit (and document) the range, and out-of-range behaviour, remains open.

Michel.

Reply to
Michel Hack

In article , Michel Hack writes: |>

|> Here, the point is: how about the right question?

Agreed.

|> The proper function to use in most engineering (cycle simulation,
|> graphics etc.) calculations is sinpi(), not sin(). In other words,
|> only use rational parts of a cycle, so the circle closes exactly.
|> It is only when doing mathematical analysis that sin(radians) is
|> sensible. The trouble is historical -- sinpi() may be too new.

Grrk. No. Yes, it is often an appropriate function, but it is NOT the relevant one for most engineering - only the most simplistic forms of it. As soon as you have to do calculations as basic as the forces in a particular direction due to a rotating, unbalanced object, using radians is generally the right approach.

And it isn't a new function. After all, the official standard for measuring angles is the grad :-)

|> As to why sin() should return the right answer, which in this context
|> means interpreting input values as exact (in the actual format used,
|> e.g. double, and not a pre-conversion value like 1e300 (unless DFP is
|> being used, of course)), it is for reproducibility. A vendor of a
|> math library (or a machine instruction) wants to be able to improve
|> the implementation without changing the results. The only sensible
|> way to do this is to produce the correctly-rounded result, though
|> the option to limit (and document) the range, and out-of-range
|> behaviour, remains open.

That is a fantasy. While it is possible for trivial calculations like sin(), bitwise reproducibility forces both serialisation and effectively no change to the implementation, ever again. Even simple calculations like linear equations, eigenvalues and fast Fourier transforms are sensitive to the exact operations used.

On the contrary, the only sensible approach is to pursue reasonable accuracy, which gives reproducibility to within the numerical analysis of the problem.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I have a cute program I snaffled somewhere that does this:

$ frac 3.1415926535897932384626433832795029 5e-19

3/1                  epsilon = 5.000000e-02
22/7                 epsilon = 4.000000e-04
355/113              epsilon = 8.000000e-08
104348/33215         epsilon = 1.000000e-10
312689/99532         epsilon = 9.000000e-12
1146408/364913       epsilon = 5.000000e-13
5419351/1725033      epsilon = 7.000000e-15
80143857/25510582    epsilon = 1.000000e-16
245850922/78256779   epsilon = 1.000000e-17
817696623/260280919  epsilon = 2.000000e-18
1881244168/598818617 epsilon = 5.000000e-19

... for any fraction. Useful! The reference in the source code doesn't give the author, just a reference: Jerome Spanier and Keith B. Oldham, "An Atlas of Functions," Springer-Verlag, 1987, pp. 665-7.

Clifford Heath.

Reply to
Clifford Heath

In article , Clifford Heath writes:
|>
|> I have a cute program I snaffled somewhere that does this:
|>
|> $ frac 3.1415926535897932384626433832795029 5e-19
|> 3/1 epsilon = 5.000000e-02
|> 22/7 epsilon = 4.000000e-04
|> 355/113 epsilon = 8.000000e-08
|> 104348/33215 epsilon = 1.000000e-10
|> 312689/99532 epsilon = 9.000000e-12
|> 1146408/364913 epsilon = 5.000000e-13
|> 5419351/1725033 epsilon = 7.000000e-15
|> 80143857/25510582 epsilon = 1.000000e-16
|> 245850922/78256779 epsilon = 1.000000e-17
|> 817696623/260280919 epsilon = 2.000000e-18
|> 1881244168/598818617 epsilon = 5.000000e-19
|>
|> ... for any fraction. Useful! The reference in the source code
|> doesn't give the author, just a reference: Jerome Spanier and
|> Keith B. Oldham, "An Atlas of Functions," Springer-Verlag, 1987,
|> pp. 665-7.

It's a trivial calculation, starting from the continued fraction, which can be found in any suitable mathematics book.

You could probably do range reduction by manipulating the continued fraction, too :-)

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Note that using 6 digits to specify a 7-digit accurate value isn't much of an improvement.

Stefan

Reply to
Stefan Monnier

Best advice I have seen here for a long time.

--

********************************************************************
Paul E. Bennett ............... Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA.
formatting link
********************************************************************
Reply to
Paul E. Bennett

In IBM Powerspice format....

E3, gnd-input= (sin (omega*time))

or something like that. Now if the frequency is 1 GHz, then running the simulation for a millisecond gives sine(6e6) more or less, and 6e6 is >> 2*pi.

So as I said, I would be upset if it crashed. And even more upset if some guy at Cadence said I deserved it.

del

Reply to
Del Cecchi
