Yup. With that many cycles, the window function just smears out the peaks by a few bins. The interpolation problem remains. Using smaller time steps improves the interpolation accuracy by some power like deltaT**3 depending on the method.
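That power-law is easy to see numerically. Here's a quick sketch (Python/numpy, nothing LTspice-specific): linear interpolation of a sine onto a uniform grid has worst-case error that falls as deltaT**2, so halving the step cuts the error about 4x; higher-order methods buy you higher powers.

```python
import numpy as np

def interp_error(n):
    """Worst-case error of linear interpolation of sin(2*pi*t)
    reconstructed from n uniform samples of one period."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    fine = np.linspace(0.0, 1.0, 4096, endpoint=False)
    approx = np.interp(fine, t, np.sin(2 * np.pi * t), period=1.0)
    return np.max(np.abs(approx - np.sin(2 * np.pi * fine)))

e1 = interp_error(64)
e2 = interp_error(128)
ratio = e1 / e2   # ~4: halving deltaT cuts the error ~deltaT**2 for linear
```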
Cheers
Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
That total signal energy is conserved under the Fourier transform follows from the mathematical fact that the Fourier transform is unitary (Parseval's theorem). It has to behave that way in a computer simulation as well, or it's not the Fourier transform!
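Easy to check numerically (numpy sketch; norm="ortho" is the unitary scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

# norm="ortho" makes the DFT exactly unitary, so the total energy is
# identical in both domains (Parseval), to rounding error.
X = np.fft.fft(x, norm="ortho")
e_time = np.sum(np.abs(x) ** 2)
e_freq = np.sum(np.abs(X) ** 2)
```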
The fact that the fundamental is 690 instead of 707 is not a "bad approximation"; it's a perfectly valid and correct Fourier transform of some other sequence of samples, one generated through a given interpolation algorithm, which is not a mathematically abstract pure sine wave. SPICE doesn't "know" anything about sine waves, or that they should be treated any differently from any other random sequence of samples.
The implication by the OP seems to be that the FFT in LTSpice is somehow a wrong or "sloppy" implementation. Maybe so, but I don't think that follows from the observed symptoms at all.
That's your implication, not mine. I didn't use your terms "wrong" or "sloppy."
690 is a fairly bad approximation of 707.
I did note that the default settings of LT Spice had the FFT fundamental spectral line amplitude at 690 mA, and lots of harmonics. And that setting a small time step improves things greatly.
What's wrong with that?
--
John Larkin Highland Technology, Inc
picosecond timing precision measurement
jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
My copy of LTSpice gets it almost exactly right at 707 mA if I let a 1 Hz sine wave sim for 1 cycle, with the default settings.
I definitely couldn't tell ya what information there is to be gained from letting an FFT sim of a sine wave run any longer than that. Everything you need to know about the spectrum of an infinitely-repeating waveform defined by a closed-form mathematical expression is contained in one cycle; why would one think the DTFT/FFT of a signal like that is going to get _more_ accurate over ten cycles, or a hundred? It's not calculating some cycle-by-cycle average. The DTFT/FFT doesn't "know" anything at all about cycles, in fact.
The _transform_ itself is working just fine. The floating-point non-idealities are causing it to return a perfectly valid and correct transform of some other signal. That probably sounds like splitting hairs. Okay, fine. But the DTFT/FFT is just an algorithm that takes bytes in and spits out a different bunch of bytes according to a cookbook list of instructions. It doesn't "know" anything about cycles or sines or floating points or current sources or RMS-es or any damn thing.
Here's what I get, alternate solver, 2 amp P2P into 1 ohm @ 1 Hz for 1 second, linear scale FFT, all other default settings:
One harmonic with a peak value of just about 0.707, right on the nose. Seems OK to me! It's a periodic function; all the frequency-domain information it will ever have is contained in one period. And there it is.
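The scaling is easy to reproduce outside SPICE. A numpy sketch of the same arithmetic (nothing LTspice-specific, just one exact cycle of a 1 A peak sine and the usual one-sided RMS conversion):

```python
import numpy as np

N = 1024
n = np.arange(N)
i = np.sin(2 * np.pi * n / N)   # exactly one cycle of a 1 A peak sine

X = np.fft.fft(i)
# One-sided RMS amplitude of bin k is sqrt(2) * |X[k]| / N
rms = np.sqrt(2) * np.abs(X[1]) / N   # ~0.7071 A
```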
Amplitude drift, for one thing. You have to be very careful to preserve full accuracy when generating a sin/cos pair using the recurrence relations if you want to make sure that ((-1,0)^(1/N))^N = -1 almost exactly.
The simulation will drift in amplitude due to the limitations of the general DE methods used to conserve energy. Could go up or down.
How bad the drift is can be tested by simulating for 2,4,8,16,32,64 cycles. It doesn't take eps to be very big for (1+eps)^N to grow.
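Here's a toy demonstration of how fast that compounds (Python sketch; eps is an artificial magnitude error injected into a sin/cos recurrence oscillator, stepping 100,000 times):

```python
import math

# Inject a tiny magnitude error eps into the rotation used by a
# sin/cos recurrence oscillator; it compounds as (1 + eps)**N.
eps = 1e-6
N = 100_000
theta = 2 * math.pi / 64
cr = (1 + eps) * math.cos(theta)
sr = (1 + eps) * math.sin(theta)
c, s = 1.0, 0.0
for _ in range(N):
    c, s = c * cr - s * sr, s * cr + c * sr
amp = math.hypot(c, s)   # ~exp(0.1) ~ 1.105: a 10% amplitude drift
```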
I was running exactly 100 cycles, my original goal being to determine the absolute scaling of the FFT, which turned out to be RMS amps in my case. Generators are scaled in peak volts or amps.
Just checking.
--
John Larkin Highland Technology, Inc
lunatic fringe electronics
So what? I didn't complain that it was a "bad approximation", just that it depended on numerical details that were irrelevant to the problem at hand. Correctly coding a poor algorithm doesn't make your program "perfectly valid and correct" unless you're scrambling to keep your job.
So what? We're not talking about Volkswagen's engine management software here. The use of interpolation to force variable-step results onto the FFT's uniform grid introduces truncation error, so the resulting frequency-domain plots can very easily exhibit artifacts.
The FFT isn't the issue. As you say, it's an algorithm to turn discrete time-domain samples into samples of the Fourier transform of the input sequence. But when the assumptions underlying the theorems aren't met, the algos will produce wrong answers to the question the user is asking.
Unit tests can ensure that the code is doing what it's supposed to, but they're powerless if the design is wrong.
Cheers
Phil Hobbs
There is a way to make an FFT give a very good approximation to a DFT for unequally spaced data, provided that the data are reasonably compact on the new uniform grid. You convolve the raw data with an interpolation function of compact support (Jodrell used a truncated Gaussian; the VLA used a variant prolate spheroidal Bessel function). Your final result is then multiplied by a scale factor that has to be corrected out, but by keeping track of how much contribution goes into each cell, along with the cumulative value, you can get a pretty good FFT out of data that are not uniformly spaced, with minimal artefacts.
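For anyone who wants to play with the idea, here's a toy version in Python (cell-averaged truncated Gaussian, not Schwab's exact prescription; the kernel width and sample counts are arbitrary): unevenly spaced samples of a 5-cycle sine are convolved onto a 256-cell grid, the accumulated per-cell weights are divided back out, and the FFT recovers the line at bin 5.

```python
import numpy as np

def grid_fft(t, x, N, half_width=4, sigma=1.0):
    """Convolve unevenly sampled x(t), t in [0, 1), onto an N-cell
    uniform grid with a truncated Gaussian, tracking the per-cell
    weight so the kernel's scale factor can be divided back out."""
    grid = np.zeros(N)
    wsum = np.zeros(N)
    for ti, xi in zip(t, x):
        k0 = int(round(ti * N))
        for k in range(k0 - half_width, k0 + half_width + 1):
            w = np.exp(-0.5 * ((ti * N - k) / sigma) ** 2)
            grid[k % N] += w * xi
            wsum[k % N] += w
    grid = np.where(wsum > 0, grid / np.maximum(wsum, 1e-30), 0.0)
    return np.fft.fft(grid) / N

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, 2000))   # deliberately uneven sampling
x = np.sin(2 * np.pi * 5 * t)              # 5 cycles across the record
X = grid_fft(t, x, 256)
peak = np.abs(X[5])   # ~0.5, the two-sided amplitude of a unit sine
```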
See
formatting link
The paper you want is Schwab 1984 from there
formatting link
Probably no help to the OP but I think Phil will enjoy it!
Windowing functions just alter the sidebands caused by the discontinuity between the start and end of your time-series data. If your signal is exactly periodic on the FFT length, a window should be irrelevant (and will only make things worse).
You might find that choosing a timescale that is an exact power of two gives fewer side effects, depending on exactly how LT spice does it internally. Modern FFT packages can do most sizes efficiently.
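A quick numpy illustration of both points (a sketch; bin numbers and sizes are arbitrary): an integer number of cycles puts all the energy in one bin with no window at all, a half-integer count leaks everywhere, and a Hann window on the already-periodic case just smears the line into the neighbouring bins.

```python
import numpy as np

N = 1024
n = np.arange(N)

def spectrum(cycles, window):
    """One-sided peak-amplitude spectrum of a windowed unit sine."""
    x = np.sin(2 * np.pi * cycles * n / N) * window
    return 2.0 * np.abs(np.fft.fft(x)) / np.sum(window)

rect = np.ones(N)
hann = np.hanning(N)

clean = spectrum(16, rect)    # integer cycles: one bin, no window needed
leaky = spectrum(16.5, rect)  # half-integer: discontinuity leaks everywhere
smear = spectrum(16, hann)    # Hann on a periodic signal: smeared peak
```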
I suspect you are suffering amplitude drift. We used to optimise FFT recurrence relations to the extent of tabulating the discrete roots rN = (-1)^(1/2^N) that gave the least error when rN^(2^N) was computed by the recurrence algorithm. It was never quite the same as theory.
Try a power of two number of cycles and I think some of your artefacts will spontaneously vanish.
It will probably keep on improving until dT is O(2*sqrt(eps)), but it will get very, very slow and tedious.
Checking the actual harmonic content by simulating a couple of the stronger harmonics' cos & sin and computing their products explicitly in SPICE would show whether it is an FFT artefact or a numerical one.
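Something like this, done in Python rather than with behavioral sources (a sketch; the trapezoidal projection works on a non-uniform time grid too, which is the point):

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal integral of y over t (works on non-uniform grids)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

def harmonic_rms(t, x, f):
    """RMS amplitude of the f-Hz component of x(t), taken over an
    integer number of cycles, by direct projection onto cos and sin."""
    T = t[-1] - t[0]
    a = 2.0 / T * trapz(x * np.cos(2 * np.pi * f * t), t)
    b = 2.0 / T * trapz(x * np.sin(2 * np.pi * f * t), t)
    return np.hypot(a, b) / np.sqrt(2)

t = np.linspace(0.0, 1.0, 100_001)       # 1 s span
x = np.sin(2 * np.pi * t) + 0.1 * np.sin(2 * np.pi * 3 * t)
r1 = harmonic_rms(t, x, 1.0)             # fundamental: ~1/sqrt(2)
r3 = harmonic_rms(t, x, 3.0)             # 3rd harmonic: ~0.1/sqrt(2)
```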
The trouble is that you don't know to do that when you're simulating something more complicated, so you can get snookered.
How important was each item? I can well believe that turning off compression helped a lot, but I doubt that the window did anything except smear out the spectrum by a few samples.
Cheers
Phil Hobbs