Suppose I create a current source that's a sine wave, amplitude 1, frequency 1, and then run a transient response. Then plot current and FFT. On a linear scale, it has some harmonics (no surprise) but the amplitude of the fundamental is 690 mA.
Why 690? Is that a bad approximation of 707?
Update: if I set the time step to 10 us, it runs slow but the FFT amplitude is 706.9 mA. So the FFT reports RMS.
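A quick numerical sanity check (a sketch, assuming NumPy is available) confirms the number: the RMS of a unit-amplitude sine over a whole number of cycles is 1/sqrt(2), i.e. about 707 mA for a 1 A peak source.

```python
import numpy as np

# one exact cycle of a unit-amplitude, 1 Hz sine
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
x = np.sin(2 * np.pi * t)

rms = np.sqrt(np.mean(x ** 2))
print(rms)   # ≈ 0.70710678, i.e. 1/sqrt(2)
```

So an FFT that reports 706.9 mA for a 1 A peak sine is reporting RMS, not a bad approximation of peak.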
--
John Larkin Highland Technology, Inc
picosecond timing precision measurement
Try it also with the alternate solver? It will probably be the same as decreasing the step size, I imagine. Not a lot to go wrong... go wrong... go wrong...
As I've also harped... selecting "Alternate Solver" forces LTspice to behave in conventional Berkeley Spice fashion... no short-cuts, none of Mikey's "behavior" models that take gross liberties so that FAST (or is it HALF-FAST?) becomes the marketing line? ...Jim Thompson
--
There's no operator error at all. LTspice gives one a choice of solver, a choice of dt, and several other parameters to tweak to trade off speed against accuracy. What's wrong with that?
I needed to know what units the FFT uses when it displays current. The answer is RMS amps. Sine-wave sources, voltage or current, are specified in peak values.
I think I see the problem. How many cycles at 1 Hz did you let the simulation run for? If you set the transient sim to 1 second at 1 Hz, it hits 707 mA on the linear FFT just about on the money. At 10 seconds it's worse; at 100 seconds, a lot worse.
Floating-point round-off errors accumulate cycle by cycle, and a small step size makes it worse. The sine starts at zero amplitude at 0 seconds, but after "exactly", say, 100 cycles it doesn't end there. That end-point error is throwing off the RMS calculation.
That is to say, if you are seeing higher harmonics at all, then by Parseval's theorem (energy conservation) the fundamental definitely cannot have the full amplitude of a mathematically ideal pure sine wave.
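The end-point effect is easy to reproduce outside the simulator. Here is a sketch (assuming NumPy; the 1 kHz sample rate is arbitrary) comparing a record containing an integer number of cycles against one with half a cycle left over:

```python
import numpy as np

def fundamental_peak(T, f=1.0, fs=1000.0):
    """Peak-scaled FFT amplitude of a unit sine over a T-second record."""
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.sin(2 * np.pi * f * t)
    X = np.abs(np.fft.rfft(x))
    return 2.0 * X.max() / len(x)   # scale so an on-bin unit sine reads 1.0

print(fundamental_peak(1.0))   # integer number of cycles: very close to 1.0
print(fundamental_peak(1.5))   # half a cycle left over: noticeably low
```

When the record length isn't an integer number of periods, the signal no longer sits on an FFT bin and its energy smears across the spectrum, pulling the reported fundamental down.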
If you want a better approximation relatively cheaply, run three simulations at 2dt, dt, and dt/2 and use Richardson extrapolation to remove the leading error term.
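For a method whose leading truncation error is first order in the step size h, even the two-run version of that idea cancels the O(h) term. A sketch using forward Euler on dy/dt = y as a toy stand-in for a simulator run (the exact answer at t = 1 is e):

```python
import math

def euler(h, t_end=1.0):
    # forward Euler for dy/dt = y, y(0) = 1
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * y
    return y

a_h, a_h2 = euler(0.01), euler(0.005)
# Euler's leading error is O(h), so 2*A(h/2) - A(h) cancels it
richardson = 2.0 * a_h2 - a_h
print(abs(a_h - math.e), abs(richardson - math.e))
```

The three-run version (2dt, dt, dt/2) additionally lets you estimate the order of the error empirically instead of assuming it.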
The mean square of a unit sine wave over a full cycle is 1/2.
Hence the RMS amplitude = 1/sqrt(2).
It is common in numerical FFTs for the scaling to be done only in one direction as a shortcut rather than symmetrically as in symbolic maths.
In all numerical methods that is always true. More accurate methods tend to be slower unless they are highly optimised for a particular sort of problem. There are some incredibly cute tricks for motion under the influence of an inverse square law for instance.
Or if you really need to push the accuracy envelope for some reason, look up Richardson extrapolation and the Shanks transformation, which will deliver more accurate answers from slightly dodgy raw data on a good day.
The latter will also determine an "answer" even when the series it is being asked to give a judgement on is divergent.
(the same result as in two's-complement arithmetic)
Pure mathematicians will be running away, crossing themselves and shouting heresy at this point, but physicists still use these tricks. They are a bit out of fashion now, but they were more popular long before digital computers and numerical methods came of age. The human computers found it easier to transform the algebra into something that converged faster than to hammer it out at a finer step size.
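As an illustration (a sketch; the alternating harmonic series for ln 2 is just a convenient slowly converging example), a couple of passes of the Shanks transformation over the partial sums beat many more raw terms:

```python
import math

def shanks(s):
    # one pass of the Shanks transformation on a list of partial sums
    return [(s[i + 1] * s[i - 1] - s[i] ** 2) /
            (s[i + 1] + s[i - 1] - 2.0 * s[i])
            for i in range(1, len(s) - 1)]

# partial sums of ln 2 = 1 - 1/2 + 1/3 - 1/4 + ...
partial, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (k + 1)
    partial.append(total)

accel = shanks(shanks(partial))   # two passes
print(partial[-1], accel[-1], math.log(2))
```

Ten raw terms are still off in the second decimal place; the transformed sequence is correct to several more digits, using exactly the same input data.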
LTspice, like most simulators apart from FDTD EM simulators, uses a variable time step. That means that it has to interpolate to get regularly-spaced samples for the FFT. That interpolation introduces significant amounts of truncation error, which is probably what's responsible for both the harmonics and the reduced fundamental.
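You can mimic that effect directly (a sketch, assuming NumPy; random timestamps stand in for the adaptive solver's steps): linearly interpolate an irregularly sampled sine onto a uniform grid and look at what the FFT then reports.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
t_uniform = np.arange(N) / N                    # uniform 1 s grid for the FFT
t_solver = np.sort(rng.uniform(0.0, 1.0, 200))  # irregular "solver" timesteps
x_solver = np.sin(2 * np.pi * 5.0 * t_solver)   # 5 Hz unit-amplitude sine

x_grid = np.interp(t_uniform, t_solver, x_solver)  # linear interpolation
X = np.abs(np.fft.rfft(x_grid)) * 2.0 / N          # peak-amplitude scaling

print(X[5])          # fundamental (ideally 1.0)
print(X[10], X[15])  # residue at harmonic bins from interpolation error
```

The interpolated record shows a slightly attenuated fundamental plus spurious content at other bins, even though the underlying signal is a pure sine.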
Right, because human-powered differential equation solvers almost always use a fixed step size, which works well with extrapolation methods. The assumption underlying Richardson is that the truncation error goes as a lowish-order polynomial in the step size h.
SPICE in general won't provide evenly-spaced data, so that it'll have to interpolate to a regular grid to apply the FFT. Each run's truncation error (both SPICE and interpolation) will depend on the details of the adaptive time step algorithm, so the extrapolation is liable to make it worse rather than better.
Harped is the word, for sure (or perhaps bloviated). ;)
I'm not invested in the quality of LTspice's 'normal' solver, and I don't know its internal details, but it's not like numerical algorithms have stood still for the last 50 years.
"Conventional Berkeley Spice fashion" was a good 1968 answer for a CDC 6400, which was pretty swoopy for its day but less powerful than a 21st-century elevator controller. It had 200k words of memory (about 1.5 MB) and a 10 MHz clock.
You generally get more performance improvement by using a better algorithm than you do by buying a faster computer.
The DFT/FFT only applies to periodic sequences. For a sequence periodic in the FFT interval and having compact support in the frequency domain, the FFT produces correct samples of the underlying continuous-time transform.
Real data, or in this case interpolated simulation data, doesn't fit those conditions exactly, which leads to errors. It's not very difficult to compute an error bound in any given case.