TILE64 embedded multicore processors - Guy Macon

So that takes care of the first 10 digits of pi. How about the next 999,990 or so? :-)
Reply to
Del Cecchi

... snip ...

Here's the program I used:

/* Find best rational approximation to a double */
/* by C.B. Falconer, 2006-09-07.  Released to public domain */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <errno.h>

int main(int argc, char **argv)
{
   int num, approx, bestnum, bestdenom;
   int lastnum = 500;
   double error, leasterr, value, criterion, tmp;
   char *eptr;

   value = 4 * atan(1.0);                  /* default value is PI */
   if (argc > 2) lastnum = strtol(argv[2], NULL, 10);
   if (lastnum <= 0) lastnum = 500;        /* guard reconstructed; the exact
                                              original test was lost in
                                              transcription */
   if (argc > 1) {
      tmp = strtod(argv[1], &eptr);
      if ((0.0 >= tmp) || (tmp > INT_MAX) || (ERANGE == errno)) {
         puts("Invalid number, using PI");
      }
      else value = tmp;
   }
   criterion = 2 * value * DBL_EPSILON;
   puts("Usage: ratvalue [number [maxnumerator]]\n"
        "number defaults to PI, maxnumerator to 500");
   printf("Rational approximation to %.*f\n", DBL_DIG, value);

   for (leasterr = value, num = 1; num < lastnum; num++) {
      approx = (int)(num / value + 0.5);   /* nearest denominator for num */
      if (0 == (int)approx) continue;
      error = fabs((double)num / approx - value);
      if (error < leasterr) {
         bestnum = num;
         bestdenom = approx;
         leasterr = error;
         printf("%8d / %-8d = %.*f with error %.*f\n",
                bestnum, bestdenom,
                DBL_DIG, (double)bestnum / bestdenom,
                DBL_DIG, leasterr);
         if (leasterr <= criterion) break; /* tail reconstructed: stop once the
                                              error falls below the precision
                                              criterion */
      }
   }
   return 0;
}

Reply to
CBFalconer

CCalc suggests some divergence on the last 3?

3/1                  epsilon = 5.000000e-02
22/7                 epsilon = 4.000000e-04
355/113              epsilon = 8.000000e-08    8.49136786583709e-8
104348/33215         epsilon = 1.000000e-10    1.05560409261568e-10
312689/99532         epsilon = 9.000000e-12    9.27662754162471e-12
1146408/364913       epsilon = 5.000000e-13    5.12714472405737e-13
5419351/1725033      epsilon = 7.000000e-15    7.048902178673586e-15
80143857/25510582    epsilon = 1.000000e-16   -1.84329122292753e-16
245850922/78256779   epsilon = 1.000000e-17   -2.48852651567486e-17
817696623/260280919  epsilon = 2.000000e-18   -4.05126327236517e-17
1881244168/598818617 epsilon = 5.000000e-19   -3.84703658079006e-17

-jg

Reply to
Jim Granville

[...]

That's just continued fraction expansion. It's a technique similar in general approach to ye olde decimal fraction, but using a different refinement scheme. In a nutshell you round() / floor() off the integer part, invert the rest, and so on. For pi, this yields:

pi = 3.14159...
   = 3 + 1/(7.0625...)                  ~= 3 + 1/7              = 22/7
   = 3 + 1/(7 + 1/(16 - 0.003405...))   ~= 3 + 1/(113/16)       = 355/113
   = 3 + 1/(7 + 1/(16 - 1/293.634...))  ~= 3 + 1/(7 + 293/4687) = 3 + 4687/33102 = 103993/33102
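To make that recipe concrete, here's a minimal sketch in C (my own illustration, using the plain floor() variant, so the terms for pi come out as 3, 7, 15, 1, 292, ... rather than the rounded 16 above):

/* Continued fraction expansion: peel off the integer part, invert the
   remainder, repeat.  Convergents h/k follow the standard recurrence
   h(n) = a(n)*h(n-1) + h(n-2), likewise for k.  Assumes the remainder
   never hits exactly zero; no overflow checking. */
#include <stdio.h>
#include <math.h>

int main(void)
{
   double x = 4 * atan(1.0);              /* pi; substitute any value */
   long h0 = 1, h1 = 0, k0 = 0, k1 = 1;   /* previous two convergents */

   for (int i = 0; i < 6; i++) {          /* 6 terms, before fp noise matters */
      long a = (long)floor(x);            /* integer part */
      long h = a * h0 + h1, k = a * k0 + k1;
      printf("a%d = %ld  ->  %ld/%ld = %.15f\n", i, a, h, k, (double)h / k);
      h1 = h0; h0 = h;
      k1 = k0; k0 = k;
      x = 1.0 / (x - a);                  /* invert the fractional remainder */
   }
   return 0;
}

For pi this prints 3/1, 22/7, 333/106, 355/113, 103993/33102, 104348/33215 — the same convergents that show up in the table earlier in the thread.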

This may feel like a mostly academic exercise, but there's practical application. E.g. if you ever wondered why our predominant musical cultures subdivide the octave into 5 or 12 steps, and not some other numbers, the explanation is that the first three steps in the continued fraction evaluation of log(3)/log(2) are: 3/2, 8/5 and 19/12. These mean the following approximations work increasingly well:

2^3  ~= 3^2    (     8 ~=      9)
2^8  ~= 3^5    (   256 ~=    243)
2^19 ~= 3^12   (524288 ~= 531441)

The next step from there would be 84/53, i.e. a scale with 53 intervals, which would be severely impractical. Humans just don't have enough fingers to play an instrument with so many settings.
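Quick verification of those three power pairs (a throwaway sketch; the exponent pairs come straight from the convergents 3/2, 8/5 and 19/12):

/* 2^p vs 3^q for the convergents p/q of log(3)/log(2). */
#include <stdio.h>
#include <math.h>

int main(void)
{
   int p[] = {3, 8, 19}, q[] = {2, 5, 12};

   for (int i = 0; i < 3; i++)
      printf("2^%-2d = %8.0f   3^%-2d = %8.0f   ratio = %.6f\n",
             p[i], pow(2, p[i]), q[i], pow(3, q[i]),
             pow(2, p[i]) / pow(3, q[i]));
   return 0;
}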

Reply to
Hans-Bernhard Bröker

I noticed that also, but didn't dig into the code to find out why.

Reply to
Clifford Heath

I'd guess it's the limit of the simple float most compilers use; Windows Calc and CCalc differ way out around e-47. In which case, the code should include a precision reality check :)
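Such a reality check might look like this (a throwaway sketch, assuming a platform whose long double is wider than double, e.g. the x87's 64-bit significand; the fractions are the two suspect rows from the table above):

/* Recompute the last rows' errors in long double.  If these disagree
   with the double-precision figures by more than a few ulps of double,
   the program has hit its precision floor. */
#include <stdio.h>
#include <math.h>

int main(void)
{
   long num[]   = {817696623L, 1881244168L};
   long denom[] = {260280919L, 598818617L};
   long double pi = 4 * atanl(1.0L);

   for (int i = 0; i < 2; i++)
      printf("%ld / %ld : error = %.6Le\n", num[i], denom[i],
             (long double)num[i] / denom[i] - pi);
   return 0;
}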

I did a quick Google and found this:

formatting link

The last one there has a 9-digit denominator and an error of -9.7726e-19.

Reply to
Jim Granville

I think I was sloppy in what I was saying. Let me try again. Given you are generating a continuous (more or less) sine wave for simulation, why would you need anything more than sin( 2 * PI)? I can see it might be convenient, but I don't see why it would be necessary. In this context at least, I don't see why you would ever need to compute sin for any larger argument.

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

Yes!

This is sort of what I've suggested doing for sin(rad), i.e. converting to sinpi() by multiplying by an arbitrary-precision (1/(2*pi)) value.

sinpi() has many other good properties, like the way it makes it nearly trivial to do range reduction.
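A minimal sketch of the sinpi() half of that scheme (my_sinpi is a made-up name, and the arbitrary-precision conversion from radians is not shown; the point is that the reduction itself involves only exact operations):

/* sinpi(x) = sin(pi * x).  Range reduction: split off the integer part
   of x -- an exact operation in binary floating point -- and flip the
   sign for odd integer parts, since sin(pi*(n+r)) = (-1)^n * sin(pi*r). */
#include <math.h>

double my_sinpi(double x)
{
   const double pi = 4 * atan(1.0);
   double n = floor(x);      /* integer part, exact */
   double r = x - n;         /* fractional part in [0, 1), exact */
   double s = sin(pi * r);   /* pi only ever meets a small argument */

   return (fmod(n, 2.0) != 0.0) ? -s : s;
}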

Improving the implementation often does mean changing the result:

At one point in time (probably pre-Pentium), AMD (or Cyrix?) had a better implementation of transcendentals, so where Intel's FPU gave up to 3-5 ulp errors, AMD was in the 1-2 range (both ranges are AFAIR).

The really ugly part here happened when Intel produced a program intended to show that the competitor's chip wasn't compatible, and they did it by showing that some transcendental operations gave different answers.

I.e. being significantly _better_ than Intel was being shown as an error. :-(

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

... snip ...

Say you have a time, and some oscillator running, generating a sine wave. At some time in the far future you need to know the instantaneous amplitude of that sine. How do you calculate it?

--
 Chuck F (cbfalconer at maineline dot net)
   Try the download section.
Reply to
CBFalconer

Nobody is forcing you to work in radians until that very last step. How many other variables do you store as multiples of a unit that has no exact representation on your machine? Very few, I'd guess.

Reply to
Ken Hagan

In article , CBFalconer writes:
|> Robert Adsett wrote:
|>
|> > I think I was sloppy in what I was saying.  Let me try again.
|> > Given you are generating a continuous (more or less) sine wave
|> > for simulation, why would you need anything more than sin( 2 * PI)?
|> > I can see it might be convenient, but I don't see why it would be
|> > necessary.  In this context at least, I don't see why you would
|> > ever need to compute sin for any larger argument.
|>
|> Say you have a time, and some oscillator running, generating a sine
|> wave.  At some time in the far future you need to know the
|> instantaneous amplitude of that sine.  How do you calculate it?

Yes. That is an argument for calculating sin(10000) to a reasonable accuracy. And, indeed, calculating sin(10^9) to at least 10^-7 accuracy.

But none of those examples give any good reason for wanting what the 'accurate sine' people want. I don't believe that they can produce a real example of where taking an input whose magnitude is more than 2*pi times the inverse precision justifies any accuracy in the result.
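That threshold is easy to put numbers on (a throwaway sketch; at 2*pi times the inverse precision, neighbouring doubles already differ in phase by a sizeable fraction of a full cycle):

/* At x = 2*pi / DBL_EPSILON (~2.8e16), the spacing between adjacent
   doubles approaches a full period of sin, so the input no longer
   pins down a phase at all. */
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
   double pi  = 4 * atan(1.0);
   double x   = 2 * pi / DBL_EPSILON;
   double gap = nextafter(x, INFINITY) - x;   /* ulp of x */

   printf("x = %.6g   ulp = %.6g   period = %.6g\n", x, gap, 2 * pi);
   printf("adjacent inputs differ in phase by %.2f cycles\n",
          gap / (2 * pi));
   return 0;
}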

Regards, Nick Maclaren.

Reply to
Nick Maclaren

If the time in the future is known less accurately than one cycle of the sine wave, then you can't know the amplitude. This is the whole point of floating point. You have the dynamic range to refer to points in time way into the future, but only high accuracy for things that are close. If you want both, then use fixed point, but make sure you include enough bits to cover the range you might want.

As an exercise: the amplitude of a 2 GHz sine wave will cross through zero at precisely the stroke of midnight, December 31st, 2007. What will be the amplitude of the sine wave on November 5th, 2008, at precisely the stroke of midnight? Use a single standard double precision floating point value to represent the time between the two dates. Compare the result to other methods of calculation. Calculate the date at which your amplitude calculations become meaningless.
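A rough numerical cut at that exercise (a sketch; the 310-day gap and whole-day rounding are my simplification of the two dates):

/* One double holds the elapsed time; compare the spacing of doubles at
   that magnitude against the 2 GHz period. */
#include <stdio.h>
#include <math.h>

int main(void)
{
   double f   = 2e9;                          /* 2 GHz oscillator */
   double t   = 310.0 * 24 * 3600;            /* Dec 31 2007 -> Nov 5 2008 */
   double ulp = nextafter(t, INFINITY) - t;   /* double spacing near t */

   printf("elapsed time      = %.17g s\n", t);
   printf("ulp at that time  = %.3g s\n", ulp);
   printf("oscillator period = %.3g s\n", 1.0 / f);
   printf("cycles per ulp    = %.3g\n", ulp * f);  /* > 1: phase is lost */
   return 0;
}

At ~2.7e7 seconds the spacing between doubles is about 3.7e-9 s, several full periods of a 2 GHz wave, so the phase at the later date is already unrecoverable.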

-- TJ

Reply to
TJ Merritt

In article , TJ Merritt writes:
|>
|> As an exercise: the amplitude of a 2 GHz sine wave will cross through
|> zero at precisely the stroke of midnight, December 31st, 2007.  What
|> will be the amplitude of the sine wave on November 5th, 2008, at
|> precisely the stroke of midnight?  Use a single standard double
|> precision floating point value to represent the time between the two
|> dates.  Compare the result to other methods of calculation.  Calculate
|> the date at which your amplitude calculations become meaningless.

Ah, but which standard time are you using?

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Of course -- but what if you're NOT ALLOWED to change the result? This is why the first implementation should be correct, though perhaps slow, or with a documented (and enforced) domain restriction. It is usually permitted to relax restrictions, but what is not permitted (in contexts where release-to-release consistency matters) is having old programs relinked with a new library (or run on a new machine) give different results. In such contexts, if the first implementation was not correct, a "compatibility flag" needs to be provided with the new and improved version.

At IBM we faced this situation in 1998 when support for IEEE floating-point was added to the S/390 assembler. I supported both HFP (old hex format) and BFP in my correctly-rounding conversion routines, but those conflicted in a few rare cases in the last bit with the old HFP routines (which were actually pretty good, considering they were already 20 years old). We solved this by adding two new type qualifiers (DH as well as DB) to select the new routines (double HFP and double BFP), leaving plain D to denote the old HFP type, which continued to use the old conversion routines.

Incidentally, the original spec for those conversion routines was simply to be IEEE 754 compliant (1985 version, this being in 1997). I convinced "them" to go all out for correctly-rounding conversions precisely because of that release-to-release consistency requirement. Now we are all glad we did, as IEEE 754(2007) will require correct rounding across the entire exponent range.

Michel.

Reply to
Michel Hack

In article , Michel Hack writes:
|> On Nov 9, 6:21 am, Terje Mathisen wrote:
|> > Improving the implementation often does mean changing the result:
|>
|> Of course -- but what if you're NOT ALLOWED to change the result?

Then you're not allowed to improve the implementation. Simple.

|> This is why the first implementation should be correct, though
|> perhaps slow, or with a documented (and enforced) domain restriction.

That is just about feasible for trivial functions, but isn't for even simple ones involving multiple values (e.g. linear equations, FFTs, eigensystems), and is utterly counter-productive as soon as you start using parallelism for performance. And what is the name of the performance game in the 21st century? Ever faster serial execution?

Look, nobody, but nobody, in this Year of Our Lord 2007, writes serious numerical code that uses nothing more complicated than basic trigonometric functions.

You have also missed the point that pursuing such a restriction ALSO prevents any significant improvement to optimisation in the compiler. Java is notoriously dire for precisely that reason. And the matter of parallelism (and asynchronicity) is redoubled in spades.

|> It is usually permitted to relax restrictions, but what is not
|> permitted (in contexts where release-to-release consistency matters)
|> is having old programs relinked with a new library (or run on a new
|> machine) give different results.  ...

With a half-decent specification, error returns are trappable. Even relaxing a restriction is a behaviour change and, in my experience, is typically a MORE serious one than just changing a few low-order bits. In particular, parts of the code that were previously never executed because the domain error trapped first, now get executed.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

You ask the Oscillator what the current sine amplitude is ;)

-jg

Reply to
Jim Granville

I see three possibilities here:

- You are looking for a future value to affect the current simulation value ;) Since that's non-causal I figure that's enough said :)

- You are needing a value from the far past to calculate the current value. If that's the case the problem is likely ill-determined enough to make the value of sin a minor irritant.

- You are dealing with a conventional simulation, like SPICE or a weather model. In that case you have a previous state from T-dt, you know the frequency of the periodic wave, and from that and its previous phase you can calculate the new phase. If it exceeds 2 * PI, wrap it before calculating the sin (a sketch of this follows below). Repeat as needed until done. You never need the value of sin for an angle greater than 2 * PI. You will get errors in the phase accumulation, but they will show up as either jitter or a frequency slightly off from that specified. In either case I expect the error will be rather less than the uncertainty in the physical component values of the circuit being simulated.
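Here's the kind of loop I mean (a toy sketch with made-up frequency and timestep; a real simulator would fold this into its state update):

/* Phase accumulator: advance by one step per sample and wrap, so sin()
   never sees an argument outside [0, 2*pi). */
#include <stdio.h>
#include <math.h>

int main(void)
{
   const double pi = 4 * atan(1.0);
   double freq = 50.0, dt = 1e-4;            /* 50 Hz wave, 100 us step */
   double phase = 0.0, dphase = 2 * pi * freq * dt;

   for (long step = 0; step < 10; step++) {
      printf("t = %g s   sin = %f\n", step * dt, sin(phase));
      phase += dphase;
      if (phase >= 2 * pi) phase -= 2 * pi;  /* wrap once per cycle */
   }
   return 0;
}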

As Jim put it you ask the oscillator what the amplitude is.

It's the nature of the problem to never need widely separated values of sin.

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

I don't think so, actually. In any sort of simulation I've heard of where you use a varying stimulus, you do it step by step within the wave, not jumping many cycles between steps. The only way I can see you could make large jumps is if the value of the stimulus didn't matter (in which case its accuracy is also irrelevant).

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

Does spice not allow you to set the start time?

Reply to
Del Cecchi

snipped-for-privacy@cus.cam.ac.uk (Nick Maclaren) writes: [snip]

This is an embedded newsgroup. You don't consider computing GPS solutions to be serious numerical code?

Reply to
Everett M. Greene
