He is right, though: you have it for N=0 (F(0) is given, and F(1) = 0.97*F(0)), the recurrence then gives you F(N+1), and if you're paranoid you can check your guess formally by substituting it into the recurrence explicitly:

F(N) = k*(0.97^N)
F(N+1) = 0.97*k*(0.97^N) = k*(0.97^(N+1))    QED

which as Phil says is proof by induction.
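The induction above can also be checked numerically. A minimal sketch (the value of k, i.e. F(0), and the horizon of 50 steps are arbitrary choices for the check, not from the thread):

```python
# Check the closed form F(N) = k * 0.97**N against the recurrence
# F(N+1) = 0.97 * F(N).  k (i.e. F(0)) and the 50-step horizon are
# arbitrary illustration choices.
k = 3.5
f = k  # F(0)
for n in range(50):
    f = 0.97 * f                             # recurrence step
    assert abs(f - k * 0.97 ** (n + 1)) < 1e-12  # closed-form guess
```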

This one is just a dull geometric sequence.

Not necessarily - the finite difference equation is always a discrete approximation to the true full differential equation.

How well the finite difference equation reflects reality, and what other approximations you have made along the way, affect whether or not you can get back to the continuous version unambiguously.

Numerical approximations to derivatives are notoriously fickle.
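To illustrate that fickleness, here is a small sketch (the test function, point, and step sizes are arbitrary choices): a forward-difference estimate of a derivative improves as the step h shrinks, until round-off error overtakes truncation error and it gets worse again.

```python
import math

def fwd_diff(f, x, h):
    """Forward-difference estimate of f'(x) with step h."""
    return (f(x + h) - f(x)) / h

# Estimate d/dx sin(x) at x = 1 for three step sizes.
exact = math.cos(1.0)
errors = {h: abs(fwd_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-8, 1e-15)}
assert errors[1e-8] < errors[1e-1]    # smaller h helps at first...
assert errors[1e-8] < errors[1e-15]   # ...until round-off takes over
```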

You've received good answers so far, but another way to do it is through generating functions. The differential equation for the "exponential generating function" of this recurrence can be written by inspection; it's y' = 0.97*y.

Assuming F(0) = 1, the solution to this differential equation is y = e^(0.97*x). The solution to your recurrence is then the coefficient of x^k/k! in the Taylor series expansion of e^(0.97*x), or 0.97^k.
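A quick numerical sanity check of the generating-function claim (the evaluation point x and the 40-term truncation are arbitrary choices): if the coefficient of x^k/k! really is 0.97^k, then summing 0.97^k * x^k / k! should reproduce e^(0.97*x).

```python
import math

# If F(k) = 0.97**k, then sum_k F(k) * x**k / k! should equal
# exp(0.97 * x).  a is the recurrence ratio; x = 1.7 is an arbitrary
# test point; 40 terms is plenty for convergence at this x.
a, x = 0.97, 1.7
series = sum(a ** k * x ** k / math.factorial(k) for k in range(40))
assert abs(series - math.exp(a * x)) < 1e-12
```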

You can also "solve" it just by plugging a trial solution r^n into the equation - any linear recurrence with constant coefficients can be solved this way. See:

I'm not sure what you mean by "continuous" equation -- a difference equation lives in discrete time, and has no direct relationship to a differential equation in continuous time. (You can _make_ direct relationships, but to do so you have to specify how sampling and reconstruction are carried out.)

I was taught in differential equations class that when you dig right down to the bottom of things, you solve a differential equation by guessing an answer, then proving that you were right. Finding symbolic solutions to difference equations is the same general procedure. So I think that's as much rigor as you're going to find in this.

Yes, there are recipes for these solutions, but all of them (including the use of the z transform) are just the results of the guess-then-prove technique being carried out for a whole class of difference equations, rather than any one specific one.

In the case of a linear, shift-invariant difference equation, the recipe is to find the auxiliary polynomial of the difference equation, and "posit" that the solution is the sum of A_k * d_k^N, where d_k is the k'th root of the auxiliary polynomial and A_k is a constant that goes with it.

In your case (assuming that f(N) is the N'th element of f, a possibly infinitely long vector of values), your auxiliary polynomial is z - 0.97, your "posited" value of d is just d = 0.97, and your "posited" solution is

f(N) = A * (0.97)^N
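As a hedged sketch of that recipe on a case with two roots (the recurrence, its coefficients, and the initial values below are arbitrary illustration choices, not from the thread):

```python
# Auxiliary-polynomial recipe on a second-order example:
#   f(N) = f(N-1) + 2*f(N-2)
# Auxiliary polynomial: z**2 - z - 2 = (z - 2)(z + 1), roots d = 2, -1.
# Posited solution: f(N) = A1 * 2**N + A2 * (-1)**N, with A1, A2 fitted
# to the (arbitrary) initial conditions f(0), f(1).
f0, f1 = 1.0, 0.0
# Solve A1 + A2 = f0 and 2*A1 - A2 = f1:
A1, A2 = (f0 + f1) / 3, (2 * f0 - f1) / 3
f_prev, f_cur = f0, f1
for N in range(2, 20):
    f_prev, f_cur = f_cur, f_cur + 2 * f_prev   # run the recurrence
    closed = A1 * 2 ** N + A2 * (-1) ** N       # closed form from roots
    assert abs(f_cur - closed) < 1e-6
```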

Note, too, that just like differential equations, linear difference equations can be homogeneous or non-homogeneous (yours is homogeneous). You can find the non-homogeneous solutions to difference equations the same way as you do for differential equations.

And, finally, if you do this a lot with linear, shift-invariant difference equations, it pays to learn how to use the z-transform. It simplifies things almost as much as the Laplace transform does for linear time-invariant differential equations, and makes all of this folderol much easier to remember.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.

The guy who taught my second term of diff eqs clearly wanted us to remember this for all time, because he made this a mantra. Nearly every class meeting he would put some new form of differential equation up on the board, and he'd say "Now, how do we solve this differential equation?" then (because we didn't all shout it out in unison) he'd answer himself: "We guess, and prove that we're right!"

That was 30 years ago. It's stuck with me, so I guess he met his goal in my case.


There are exceptions to this rule, especially transform methods and variation of parameters. It's much more true with PDEs, where e.g. there are eleven coordinate systems in which the Laplacian separates, including such intuitive ones as parabolic cylindrical coordinates.

I would contend, though, that all those methods basically came about either because someone guessed an answer to a general class of problems and proved its validity for that class (e.g. the Laplace transform as used for solving LTI differential equations), or because someone found a trick (variation of parameters, separation of variables, etc.) that applies to a number of different problems.

So in those cases the "guess and prove" has been done for you, but you still have to make sure that your problem fits into the class of things that the method solves for (and heaven knows, I've seen people trying to cram nonlinear or time varying system descriptions into the Laplace transform, and being all confused when their nonsensical formulations give nonsensical results).


I still recall my first serious brush with exotic differential equations in the freshman year although for different reasons.

The lecturer was an internationally famous astronomer and a brilliant analytical solver of novel differential equations. The snag was that he could not teach for toffee and merely demonstrated pulling rabbits out of hats again and again and again. His coursework was impossible.

This particular course was so incomprehensible that after a while the best of us switched to the other maths stream, on group theory, since it was so much easier and the exam questions were likely to be solvable in finite time. Some students from that course, which had by then become badly overcrowded, moved over to the ODE course so they could sit down.

Indirectly he probably contributed to the increased use of computers to solve differential equations as we later moved into research.

f(1) = A
f(2) = k * f(1)
f(3) = k * f(2)
f(4) = k * f(3)
...
f(N) = k * f(N-1)
=====================

Then the product of all the terms on the left equals the product of all the terms on the right, and after eliminating f(1), f(2), ..., f(N-1), which occur on both the left and the right of the equation, we have

f(N) = A * k^(N-1), for N=1, 2, 3, ...

So, this is your formula.
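A minimal numerical check of that telescoping-product derivation (the values of A and k, and the horizon, are arbitrary choices for the check):

```python
# Multiplying f(2) = k*f(1), ..., f(N) = k*f(N-1) together and cancelling
# the common factors gives f(N) = A * k**(N-1).  Verify directly.
A, k = 2.0, 0.97   # arbitrary initial value and ratio
f = A              # f(1)
for N in range(2, 30):
    f = k * f
    assert abs(f - A * k ** (N - 1)) < 1e-12
```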

Euthymios Kappos

"Jim Thompson" wrote in message news: snipped-for-privacy@4ax.com...

I recall struggling through Diffy-Q, which was widely regarded as a "bag of tricks" subject... you just had to figure out which trick to pull out of the bag for each special case.

Then next term came Laplace Transforms, where we learned that everything that needed doing could be done with simple algebra via Laplace... all that Diffy-Q torture had been just for background information and building character!

Bob Masta
DAQARTA v7.21
Data AcQuisition And Real-Time Analysis
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE Signal Generator, DaqMusic generator
Science with your sound card!

Using the Laplace transform is just a really versatile trick for approximately solving real-world problems. Here's the reasoning that you should keep in mind whenever you use it (or the z transform):

1: All real-world systems are nonlinear and time varying.
2: The Laplace transform only works on systems that are linear and time-invariant.
3: Thus, I cannot use the Laplace transform to solve this problem.
4: But I can come _close_ by linearizing this here system
5: And now I can use Laplace!

This works great a whole lot of the time -- but not always, and engineers who are steeped in Laplace (or the z transform) tend to forget that they've skipped over steps 1-3 and done step 4 without questioning why, or when it is not valid to do so.
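A sketch of step 4 on a classic example (the pendulum, with arbitrary parameter values, chosen here purely for illustration): the full equation theta'' = -(g/L)*sin(theta) is nonlinear, so Laplace doesn't apply directly, but for small angles sin(theta) ~ theta gives an LTI equation it can handle, and the two models track each other closely.

```python
import math

# Simulate one second of a pendulum, full nonlinear vs. linearized.
# g/L, the time step, and the 0.1 rad initial angle are arbitrary.
g_over_L, dt, steps = 9.81, 1e-4, 10_000
th_nl = th_li = 0.1   # initial angle (rad), small enough to linearize
w_nl = w_li = 0.0     # initial angular velocities
for _ in range(steps):
    w_nl += -g_over_L * math.sin(th_nl) * dt   # full nonlinear model
    th_nl += w_nl * dt
    w_li += -g_over_L * th_li * dt             # linearized model
    th_li += w_li * dt
# For a 0.1 rad swing the linearization error stays small.
assert abs(th_nl - th_li) < 1e-3
```

Push the initial angle toward pi/2 and the two trajectories diverge, which is exactly the "when is step 4 valid" question.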

--
Tim Wescott
Control system and signal processing consulting

Well, the question of how mathematical theorems are discovered is an interesting one--George Polya wrote a series of books on the subject back in (iirc) the 1950s, of which the best known is the elementary one, "How To Solve It". Great book.

You can often use it to derive a perturbation series for those times when linearizing isn't quite enough.

I highly recommend Bender & Orszag's "Advanced Mathematical Methods for Scientists and Engineers", which has all that sort of stuff--boundary layer theory, steepest descents, and a whole lot of other asymptotic methods. Arfken's applied math book is good as well--it used to be the standard textbook for physics undergraduates.

It's often possible to derive a series solution, which may or may not converge, and then convert to a continued fraction or apply convergence tricks such as Shanks's algorithm or Richardson extrapolation. When those work, which isn't always, they can be practically supernatural.
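As a hedged sketch of one of those convergence tricks, here is the Shanks transformation applied to a textbook slowly-converging series, ln(2) = 1 - 1/2 + 1/3 - ... (the series and the term count are illustration choices, not from the thread):

```python
import math

def shanks(a):
    """One pass of the Shanks transformation over a list of partial sums."""
    return [(a[n + 1] * a[n - 1] - a[n] ** 2)
            / (a[n + 1] + a[n - 1] - 2 * a[n])
            for n in range(1, len(a) - 1)]

# Partial sums of the alternating harmonic series for ln(2).
partial, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    partial.append(s)

accelerated = shanks(shanks(partial))  # two passes
# The raw 11-term sum is off by a few percent; two Shanks passes
# squeeze out several more digits from the same terms.
assert abs(partial[-1] - math.log(2)) > 1e-2
assert abs(accelerated[-1] - math.log(2)) < 1e-4
```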
