Finding a function from its series equivalent

It is usually a fairly simple matter to get from a trig or exponential function of one sort or another to its underlying series form.

But how can you get from a known, accurate series expression back to a nonobvious and possibly rather esoteric equivalent function?

Specifically, the "raw" power series

[-1517.83  5094.6  821.18  -29457.7  61718.9  -61268.8  30448.6  -4770.84
 -269.684  -2892.14  3300.63  -1460.88  213.578  78.8959  -49.2164  12.3083
 -1.74731  0.149743  -0.00245142  0.103691]

where 0.103691 is the x^1 term, -0.00245142 is x^2 etc...

The equivalent Maclaurin series (a Taylor series about zero) is found by dividing each term by its factorial: 0.103691/1!, -0.00245142/2!, ...

... may be of extreme interest in finding a closed-form expression that involves trig products and possibly exponentials. The range of interest is from 0 to 1.

The function appears continuous and monotonic with well behaved derivatives. There is no zero offset.

A trig angle of 84.0000 degrees is also expected to play a major role in the solution, as is the trig identity cos(a+b) = cos(a)cos(b) - sin(a)sin(b), and as is a magic constant of 0.104528. Everything happens in the first quadrant.

Sought is a deterministic, closed-form solution that accepts the 0-1 value, the 84-degree angle, and the magic constant, and that evaluates to the above series.
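For concreteness, here is a minimal Python/NumPy sketch that evaluates the raw series as listed. It assumes, per the description above, that the bracketed list runs from the x^20 coefficient down to the x^1 coefficient, with no constant term:

import numpy as np

# Coefficients as posted, highest power (x^20) first, down to x^1.
coeffs_high_to_low = [
    -1517.83, 5094.6, 821.18, -29457.7, 61718.9, -61268.8, 30448.6, -4770.84,
    -269.684, -2892.14, 3300.63, -1460.88, 213.578, 78.8959, -49.2164, 12.3083,
    -1.74731, 0.149743, -0.00245142, 0.103691,
]

# np.polyval wants highest power first; append 0.0 for the missing x^0 term.
x = np.linspace(0.0, 1.0, 101)
f = np.polyval(coeffs_high_to_low + [0.0], x)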

--
Many thanks,

Don Lancaster                          voice phone: (928)428-4073
Synergetics   3860 West First Street   Box 809 Thatcher, AZ 85552
rss: http://www.tinaja.com/whtnu.xml   email: don@tinaja.com

Please visit my GURU's LAIR web site at http://www.tinaja.com
Reply to
Don Lancaster

I would think you'd find the coefficients of the Taylor series by multiplying the k-th term by k factorial, not dividing. E.g., the coefficients of the power series for e^x are 1, 1, 1/2, 1/6, ..., while the Taylor coefficients, the coefficients of (x^n)/(n!), are 1, 1, 1, 1, ....
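To make the direction of that conversion concrete, a few lines of Python for the e^x example:

from math import factorial

# Power-series coefficients of e^x: a_k = 1/k!  ->  1, 1, 1/2, 1/6, ...
a = [1 / factorial(k) for k in range(5)]
# Taylor coefficients, i.e. the coefficients of x^k/k!: a_k * k!  ->  1, 1, 1, 1, ...
taylor = [a[k] * factorial(k) for k in range(5)]
print(a)
print(taylor)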
--
Gerry Myerson (gerry@maths.mq.edi.ai) (i -> u for email)
Reply to
Gerry Myerson

Dividing plots beautifully; I just checked it. You basically have a ramp that gets VERY steep near zero.

Also, multiplying -1517.83 by 20! might end up a tad on the largish side of humongous. In any sane function, the higher power terms should be quite small.

There are apparently two or three summation terms in the real answer. One is an easily found and apparently fully deterministic linear ramp. The second is an ever-increasing "bent" variable that may or may not need some extra help above 0.9. It may or may not have its own small ramp component.

I suspect it does.
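For what it's worth, a short Python/NumPy sketch of the two readings being debated, again assuming the posted list runs from the x^20 coefficient down to the x^1 coefficient. The divided-by-factorials curve is the ramp described above; note the two are different functions:

import numpy as np
from math import factorial

coeffs_high_to_low = [
    -1517.83, 5094.6, 821.18, -29457.7, 61718.9, -61268.8, 30448.6, -4770.84,
    -269.684, -2892.14, 3300.63, -1460.88, 213.578, 78.8959, -49.2164, 12.3083,
    -1.74731, 0.149743, -0.00245142, 0.103691,
]
powers = range(20, 0, -1)  # exponents matching the list order
divided = [c / factorial(k) for c, k in zip(coeffs_high_to_low, powers)]

x = np.linspace(0.0, 1.0, 1001)
f_raw = np.polyval(coeffs_high_to_low + [0.0], x)   # sum of a_k * x^k
f_div = np.polyval(divided + [0.0], x)              # sum of (a_k / k!) * x^k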

--
Many thanks,

Don Lancaster                          voice phone: (928)428-4073
Synergetics   3860 West First Street   Box 809 Thatcher, AZ 85552
rss: http://www.tinaja.com/whtnu.xml   email: don@tinaja.com

Please visit my GURU's LAIR web site at http://www.tinaja.com
Reply to
Don Lancaster

Question about terminology here. By the above, are you saying that

0.103691 = a1 * x^1 and

-0.00245142 = a2 * x^2 etc.

Or are you saying that 0.103691 is the coefficient of x^1, -0.00245142 is the coefficient of x^2, etc. so

f(x) = (0.103691 * x^1) + (-0.00245142 * x^2) + .....

BTW, you posted a similar, but slightly different problem a half hour earlier with terms starting with x^0 as opposed to x^1 (above). Which is correct?

--
Paul Hovnanian     mailto:Paul@Hovnanian.com
------------------------------------------------------------------
Who is General Failure and why is he reading my hard drive?
Reply to
Paul Hovnanian P.E.

Cool. But the function you're plotting isn't the one you're asking about. The power series a_1 x + a_2 x^2 + a_3 x^3 + ... is (obviously!) equivalent to the Taylor series a_1 x + (2 a_2) (x^2 / 2) + (6 a_3) (x^3 / 6) + ... and not equivalent to a_1 x + (a_2 / 2)(x^2 / 2) + (a_3 / 6) (x^3 / 6) + ....

So either you're not doing what you say you're doing, or else the function you're plotting has nothing to do with the function you say interests you.

--
Gerry Myerson (gerry@maths.mq.edi.ai) (i -> u for email)
Reply to
Gerry Myerson

Is the function known to be periodic?

If so, a Fourier series expansion would be more illuminating than a power series.

regards, chip

Reply to
Chip Eastham

I have had very positive experiences finding good approximating functions. First, a set of polynomials approximating the given data is calculated and the most suitable one is selected, in most cases the one with the least variance. Then a response function is calculated, using a function tree structure to find the combination of functions whose Taylor series fits the approximating polynomial as well as possible.

Some of the best alternatives are stored. The least squares approximation can be found using nonlinear estimation techniques.

In the one variable case it is practical to divide the data into several "wavelets" and use piecewise approximation.

In practice this method has worked well with 1 to 4 independent variables. Five variables should be possible, but that has never been tried.

The method is slow, but quite well behaved. Adding one more layer to the function net would demand grid computation.
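A rough Python/NumPy sketch of the first stage described above (fit polynomials of several degrees and keep the one with the smallest residual variance); the function-tree search over combinations is not shown, and the names are only illustrative:

import numpy as np

def best_polynomial(x, y, max_degree=10):
    # Fit polynomials of degree 1..max_degree; keep the least residual variance.
    best = None
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        residuals = y - np.polyval(coeffs, x)
        var = np.var(residuals)
        if best is None or var < best[0]:
            best = (var, deg, coeffs)
    return best  # (residual variance, degree, coefficients)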

On grid alternatives, please contact snipped-for-privacy@estlab.com

Reply to
Arto Huttunen

There is a paper, "Computing the Generating Function of a Series Given its First Few Terms" by Bergeron and Plouffe. I don't have the URL, but it is a free download. It doesn't provide a bullet-proof method, but it gives some suggestions.

Reply to
Jon

This is the art of curve fitting. You need one additional piece of information, the error (tolerance) of each of the measurements, to do it properly.

Since you have specified this over the range 0 to 1, the classical approach would be to use a set of functions that are orthogonal over this range. Since you already have a polynomial, a kind of Legendre polynomial is your starting point (the Legendre polynomials are orthogonal over (-1, +1), so there's some scaling and shifting involved).

Your goal, from that starting point, is to find a simpler expression that holds the same data (i.e. reproduces the same function to the given tolerance). Sine/cosine, polynomial, Bessel: there are lots of sets of orthogonal functions to choose from, and one of them might have an all-but-one-term-vanishes character.

If there is any OTHER information about the boundary conditions (if it's periodic, use sines; if it doesn't blow up at infinity, don't use polynomials), you need to take it into account now. Finally, there's a theorem, the Wiener-Hopf theorem, which states that a set of fit functions is optimal if its autocorrelation is the same as that of the data. That means data with big ripples doesn't fit well with sine waves with little tiny ripples, but you can find that kind of thing out the hard way, too.

For orthogonal functions, the fit procedure is simple: there's a unique right set of coefficients, and a procedure to find it (inner product of your data and the function). For other kinds of functions, like wavelets and fractals and such, it gets ... challenging.

Your starting point has 20 coefficients, so anything that expresses the data with 19 or less, and hits within the measurement tolerance, is some kind of success.

Finally, I should note that the 'measurement tolerance' defines a kind of weight, a statistical weight function, that cannot be ignored. The orthogonal polynomials with constant weight (the Legendre polynomials) are very different from the orthogonal polynomials with a 1/sqrt(x(1-x)) weight (the Chebyshev polynomials), and the weight is part of that 'inner product' step as well.

Usually, one starts with a graph of the function, makes a guess as to its form, then looks at a graph of the difference-from-the-guess. If you get lucky with a guess, and it was a natural measurement you started with, you've just created a scientific theory. Kudos!
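As a concrete illustration of that 'inner product' step, here is a Python/NumPy sketch that projects a function sampled on (0, 1) onto Legendre polynomials shifted from (-1, +1), using constant weight; the function name and parameters are only illustrative:

import numpy as np
from numpy.polynomial import legendre

def shifted_legendre_coeffs(f, n_terms=8, n_samples=2001):
    x = np.linspace(0.0, 1.0, n_samples)   # the fit interval from the problem
    t = 2.0 * x - 1.0                      # shift/scale to Legendre's native (-1, +1)
    y = f(x)
    coeffs = []
    for k in range(n_terms):
        p_k = legendre.legval(t, [0.0] * k + [1.0])   # k-th Legendre polynomial
        # Coefficient = <y, P_k> / <P_k, P_k>, constant weight, trapezoidal rule.
        coeffs.append(np.trapz(y * p_k, x) / np.trapz(p_k * p_k, x))
    return coeffs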

Reply to
whit3rd
