Help. My Fourier Transform prog gives crazy results.

I'm learning the fourier transform but I'm confused by a problem.

As an exercise, I've created a Python program to model the effect of a low pass RC filter on a digital waveform. The prog creates a waveform, converts it to the frequency domain with a fourier transform, applies the gain of an RC filter to each harmonic [ gain = 1/(1+jw/w0) ] then applies the inverse fourier transform to show the filtered waveform.

Image of output:

formatting link

By choosing an extremely high cutoff frequency, I recovered the original waveform (shown in black) as expected. When choosing cutoff frequencies that were small multiples of the repetition rate of the waveform, I got waveforms with softened edges (blue and green) that were the expected shape except for being half the height of the original waveform which was unexpected.

To try to figure out why the waveforms were half the height, I tried increasing the cutoff frequency to see what would happen. At fairly high cutoff frequencies ( >100 times repetition rate ), I got crazy waveforms (red and orange) that were obviously wrong. Why?

The fourier transform that I'm using is the well-respected and well-tested 30-year-old Fortran FFTPACK, wrapped into Python's "numpy" library, so it's a safe assumption that the tools are good but that I've misapplied them somehow. Any fourier transform experts here?

The short Python program below will work as-is in the default installation of recent Debian-derived distros; otherwise the "PIL" image library and the "numpy" maths library may need to be installed first. The relevant part of the program is the half-dozen lines in the "low_pass" function. Anybody know what's wrong with it?

#! /usr/bin/env python2
import numpy, Image, ImageDraw

# define input waveform as 010100 trapezoid
lo = numpy.array( [0] * 400 )
hi = numpy.array(range(20) + [20] * 400 + range(19,-1,-1)) / 20.0
wave = numpy.concatenate( [lo,hi,lo,hi,lo,lo] )

# initialise PIL image
width = len(wave)
height = 2000
im = Image.new("RGB",(width,height),"white")
draw = ImageDraw.Draw(im)

# Func for drawing a waveform to the image
def draw_wave( waveform, hue, offset ):
    ypoints = (- waveform + 1.2 + offset / 4.0) * height / 5
    xy = zip( numpy.arange(len(waveform)), ypoints)
    draw.line(xy, fill=hue, width=10)

# Func for low-pass RC filter of waveform
def low_pass( waveform, freq, fmax, hue, offset ):
    fftwaveform = numpy.fft.fft(waveform)         # fast fourier transform
    harmonics = numpy.arange(len(fftwaveform))    # 0,1,2,3,4...
    gains = 1.0 / (1.0 + 1j * (harmonics * freq) / fmax)  # RC filter gains
    fftfiltered = fftwaveform * gains             # filter the wave
    filtered = numpy.fft.ifft(fftfiltered).real   # inverse fft
    draw_wave(filtered, hue, offset)              # draw to image

# input_waveform, base_freq, RC_filter_freq, screen_colour, screen_offset
low_pass( wave, 1, 1.0e10, "black", 0)
low_pass( wave, 1, 8, "blue", 4)
low_pass( wave, 1, 20, "green", 6)
low_pass( wave, 1, 400, "red", 10)
low_pass( wave, 1, 4000, "orange", 14)

smallim = im.resize( (width/4,height/4), Image.ANTIALIAS)
smallim.save("image.png")
smallim.show()

Reply to
Dave Rove

I use octave, but there may be some similarities here.

For example, a time waveform of N length [EVEN] produces a frequency spectrum of N length, BUT!!! the spectrum is a very confusing first half that goes from DC up to Nyquist, then a second half that is like the mirror image of that first half, going from high frequency down to near DC. Because your time waveform is REAL the resulting spectrum will ALWAYS be symmetrical, where the two halves are mirror images AND conjugate values.

so they add like this: 1/2 low end + 1/2 mirror image conjugate values.

If for any reason something gets thrown away, you end up with half the value.
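That mirror/conjugate symmetry is easy to check in numpy (a quick sketch, separate from the OP's program):

```python
import numpy

# Any real-valued waveform will do for the demonstration
x = numpy.sin(2 * numpy.pi * numpy.arange(64) / 64.0) + 0.5

X = numpy.fft.fft(x)          # 64 complex bins
N = len(X)

# For real input, bin k and bin N-k are complex conjugates,
# so the second half of the spectrum mirrors the first half
for k in range(1, N // 2):
    assert numpy.allclose(X[k], numpy.conj(X[N - k]))

# DC (bin 0) and Nyquist (bin N/2) have zero imaginary part
assert abs(X[0].imag) < 1e-9 and abs(X[N // 2].imag) < 1e-9
```

So a filter that keeps only the first half and throws the rest away loses the conjugate half's contribution, hence the factor of two.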

just a thought.

get a copy of Ron Bracewell's text on the Fourier Transform. I was in his class at Stanford and he [in my thinking] produced the best 'non-mathematical' transforms ever. You walked away from his class being able to do them in your head.

I have a complete set of functions to do what you're doing [but for octave] to enable you to bounce back and forth between time and frequency and add all that filtering. Contact me offline and put FFT Help at the start of the subject so I don't delete your email.

Reply to
RobertMacy

I strongly suspect that you're not dealing with negative frequencies correctly.

You need to keep in mind, when using the FFT, that it is an exact transform of a finite chunk of sampled-time data. But if you're going from continuous time, that finite chunk of sampled-time data is only an approximation of what's happening in the real world. So you have to bear in mind the effects that sampling and truncation have on your data.

Most FFT packages do a complex FFT on data that is presumed to be complex. That means that for an N-point input, they cough up an N-point answer. Because of aliasing due to sampling, the frequency at point n (assuming 0-based vectors) is 2 * pi * (m + n/N), where m is any integer.

The best approximation for dealing with this in your case is to take the radian frequency at point n to be 2 * pi * n/N for n < N/2, and 2 * pi * (n/N - 1) for n >= N/2.
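numpy encodes exactly that bin-to-frequency mapping in numpy.fft.fftfreq (a side check, not part of the OP's program); it returns cycles per sample, so multiply by 2*pi for radians:

```python
import numpy

N = 8
# Bin frequencies in cycles/sample: 0, 1/N, ... up to just below
# Nyquist, then the NEGATIVE frequencies for the second half
f = numpy.fft.fftfreq(N)
# f == [0, 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125]

# Radian frequency: 2*pi*n/N for n < N/2, 2*pi*(n/N - 1) otherwise
w = 2 * numpy.pi * f
assert numpy.isclose(w[5], 2 * numpy.pi * (5.0 / N - 1))
```

Applying the RC gain to these signed frequencies, instead of to 0,1,2,...,N-1, keeps the negative-frequency half consistent with its conjugate partner.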

Try that, see how it works, get back to us.

--
Tim Wescott 
Control system and signal processing consulting 
Reply to
Tim Wescott

Thanks. I think I mostly get it. I'd studied pictorial representations of the fourier transform that showed how each harmonic increased in frequency and decreased in amplitude, so I'd supposed that once you got past 100 or so harmonics it wouldn't make much difference for simple cases like representing a waveform. But yes, the numpy library FFT produces an N-length spectrum from N-length samples, so a back-of-envelope calculation tells me that you hit the Nyquist frequency halfway through. So the second half of the spectrum creates aliasing that's something like the wagon-wheel effect, I guess.

In real world filtering, you'd start by discarding any frequency that's more than half the sample rate, so I think that's the easiest fix here: discard the second half and multiply by 2. And that does indeed give me exactly the expected result now. Using the same filter values as before:

New image of output:

formatting link

Thanks, but now that I'm past that hurdle, I should be fine. I've been thinking of getting into Octave when I've got more time, but probably not for a while yet.

Fixed program below:

#! /usr/bin/env python2
import numpy, Image, ImageDraw

# define input waveform as 010100 trapezoid
lo = numpy.array( [0] * 400 )
hi = numpy.array(range(20) + [20] * 400 + range(19,-1,-1)) / 20.0
wave = numpy.concatenate( [lo,hi,lo,hi,lo,lo] )

# initialise PIL image
width = len(wave)
height = 2000
im = Image.new("RGB",(width,height),"white")
draw = ImageDraw.Draw(im)

# Func for drawing a waveform to the image
def draw_wave( waveform, hue, offset ):
    ypoints = (- waveform + 1.5 + offset / 3.0) * height / 6
    xy = zip( numpy.arange(len(waveform)), ypoints)
    draw.line(xy, fill=hue, width=10)

# Func for low-pass RC filter of waveform
def low_pass( waveform, freq, fmax, hue, offset ):
    fftwaveform = 2.0 * numpy.fft.fft(waveform)[:len(waveform)/2]
    harmonics = numpy.arange(len(fftwaveform))
    gains = 1.0 / (1.0 + 1j * (harmonics * freq) / fmax)
    fftfiltered = fftwaveform * gains
    filtered = numpy.fft.ifft(fftfiltered, len(fftfiltered) * 2).real
    draw_wave(filtered, hue, offset)

# input_waveform, base_freq, RC_filter_freq, screen_colour, screen_offset
low_pass( wave, 1, 1.0e10, "black", 0)
low_pass( wave, 1, 8, "blue", 4)
low_pass( wave, 1, 20, "green", 6)
low_pass( wave, 1, 400, "red", 10)
low_pass( wave, 1, 4000, "orange", 14)

smallim = im.resize( (width/4,height/4), Image.ANTIALIAS)
smallim.save("image.png")
smallim.show()

Reply to
Dave Rove

Thanks. Simply discarding the second half was good enough for the simple case.

I'll have to look again at the fourier transform equations to get my head around it in more detail. I guess this issue has an analogy with the way that if you're raising a complex number to a fractional power, then the result depends on where you placed the "branch cut", i.e. if you represented the phase of the complex number with the range -pi to pi or with 0 to 2 * pi.

Reply to
Dave Rove

Because of that intrinsic symmetry, most serious codes for time series use a form of FFT that is real to complex-conjugate-symmetric, to take advantage of the prior knowledge that the imaginary components of the source are identically zero. It gets dirty in FFT space when, to fit the result into the same memory, the alternating (Nyquist) component is packed in where the imaginary part of DC would be. Some codes require the input array be N+2 long...

Plenty of people have tripped over that one in the past. It gets quite tricky in 2 or 3D keeping track of where they end up being stored.

Depends whether you are starting from N complex samples with imaginary components all set to zero or N real samples.

The FFT representation returned will have either N complex values, or DC, N/2-1 complex values, and the alternating (Nyquist) component.

You need to know how your chosen transform places the FFT components if you are going to apply a filter in the Fourier domain correctly.
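For numpy specifically (an illustration assuming numpy's unpacked layout, rather than the packed in-place formats described above), the real-to-complex transform returns N/2+1 bins: DC, N/2-1 harmonics, and the alternating component, with DC and Nyquist purely real:

```python
import numpy

x = numpy.random.rand(16)      # N = 16 real samples

R = numpy.fft.rfft(x)          # real-to-complex-conjugate-symmetric FFT
assert len(R) == 16 // 2 + 1   # DC + 7 harmonics + Nyquist = 9 bins

# DC and the alternating (Nyquist) component are purely real
assert abs(R[0].imag) < 1e-9 and abs(R[-1].imag) < 1e-9

# And it matches the first half of the full complex FFT
assert numpy.allclose(R, numpy.fft.fft(x)[:9])
```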

Regards, Martin Brown

Reply to
Martin Brown

On a sunny day (Wed, 30 Jul 2014 12:55:36 +0000 (UTC)) it happened Dave Rove wrote in :

You know that in the case of an FFT over a finite group of samples you need to apply some windowing first?

When you have n samples of a digital signal and do a Fourier transform, then it seems like the start and end are an 'abrupt' event, simply speaking. For that reason you want to gradually attenuate the beginning and end of that sample bin you have. The effect of not doing that depends on whether the signal you transform 'fits' exactly in the bin. If it fits exactly (say a whole number of periods, for example) then the FFT is correct, but for any other frequency it will be wrong, and it will seem like high frequency components are present due to the sudden start and end of the data in the bin. There are several 'window' methods to make the gradual rise and fall of the data in the bin.

For example try this on your input bin (for each bin):

/* Hamming window */
for(i = 0; i < samples; i++)
    {
    angle1 = (360.0 / (float)samples) * (float)i;
    window[i] = 0.5 * (1 - cos(rad(angle1)));
        /* Hamming window, 0 at start and end of samples, 1 at centre */
    z[i] = z[i] * window[i];  /* Hamming */
    }
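In the thread's Python/numpy terms the same taper is a one-liner (a sketch; note that numpy ships this 0.5*(1-cos) shape as numpy.hanning, and `z` here is a stand-in for one bin of data):

```python
import numpy

samples = 256
i = numpy.arange(samples)
# 0 at the ends of the bin, 1 at the centre
window = 0.5 * (1 - numpy.cos(2 * numpy.pi * i / samples))

# Same shape as numpy.hanning (the periodic and symmetric forms
# differ by one endpoint sample, hence the +1 and trim)
assert numpy.allclose(window, numpy.hanning(samples + 1)[:-1])

z = numpy.random.rand(samples)   # stand-in for the data bin
z = z * window                   # taper before the FFT
```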

Yea well I dunno, and I cannot read python.

Anyways there will probably be some ringing.

Reply to
Jan Panteltje

Python is a widely used general-purpose, high-level programming language.[15][16][17] Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C.[18][19] The language provides constructs intended to enable clear programs on both a small and large scale.[20]
Reply to
bloggs.fredbloggs.fred

Yes, the problem turned out to be something else. I did know that in effect, the end of the time series connected to the start, forming a loop so to speak, so since I was defining it, I simply started and ended on zero. I hadn't got as far as thinking about real-world sampling but I can see why what you describe would be necessary.

Reply to
Dave Rove

I'd not seen any options for the FFT function that I was using for specifying that, so I'd fixed it by discarding the second half, but I've just looked through the functions in the numpy.fft library again, and I've just realized that it contains a "standard" FFT (the complex-number one that I'd used) and a "real" FFT (that I _should_ have used). The "real" one does indeed produce a half-length output. Ho hum. In my code, I've replaced "fft" and "ifft" with "rfft" and "irfft" and that works fine. More elegant, but either way it works in the blink of an eye. I guess it's called a "fast" fourier transform for a reason.
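For anyone following along, the rfft version of the filter presumably looks something like this (a sketch with the drawing call omitted; `low_pass_rfft` is my name for it, not the OP's):

```python
import numpy

def low_pass_rfft(waveform, freq, fmax):
    spectrum = numpy.fft.rfft(waveform)        # N/2+1 bins, DC..Nyquist
    harmonics = numpy.arange(len(spectrum))    # all non-negative now
    gains = 1.0 / (1.0 + 1j * (harmonics * freq) / fmax)  # RC filter
    # irfft rebuilds the real, full-length waveform; no factor of two
    # or manual mirroring needed
    return numpy.fft.irfft(spectrum * gains, len(waveform))

# Sanity check: with the cutoff far above Nyquist, the waveform
# passes through unchanged
wave = numpy.concatenate([numpy.zeros(100), numpy.ones(100), numpy.zeros(100)])
out = low_pass_rfft(wave, 1, 1.0e10)
assert numpy.allclose(out, wave, atol=1e-6)
```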

Reply to
Dave Rove

You probably forgot the negative frequencies.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

Yes, as all the other posters have pointed out now. I'd only got them because I picked the wrong damn function from the library. I had probably glanced through the list of functions, seen the "real" FFT, and thought: "Well that's no good because I definitely want complex numbers out of it because I'm filtering the result with complex numbers". Of course, the "real" FFT produces a complex number _output_ alright.

Reply to
Dave Rove

Glad throwing away half helped. I don't do that myself, though I forget why; I fold mine over on itself before using it. Perhaps with the misconception that I am somehow averaging out 'numerical' noise?

Careful 'window' implementation is a major art. I highly recommend using the hanning window, not the hamming window. The hanning window is like one cycle of a cosine: starts at zero, goes to one in the middle, then returns to zero. Sometimes you have to multiply by two to keep the energy the same. Actually to keep the area the same [I think].

I have run half a hanning window for the first 50 points, then flat, then the last 50 points the second half of the hanning. Don't forget to 'normalize' that window, but it preserves a lot more.
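The multiply-by-two remark checks out numerically, with a caveat (a quick numpy check): the mean of a Hann window is exactly 1/2, so doubling restores amplitude, but its mean square is 3/8, so keeping the energy the same needs a different correction.

```python
import numpy

N = 1024
w = 0.5 * (1 - numpy.cos(2 * numpy.pi * numpy.arange(N) / N))  # Hann window

# Coherent (amplitude) gain: the window averages to exactly 1/2,
# which is why a windowed sine's spectral peak comes out half-size
assert numpy.isclose(w.mean(), 0.5)

# Power gain is different: mean square is 3/8, so the amplitude
# correction (x2) and the energy correction are not the same number
assert numpy.isclose((w ** 2).mean(), 0.375)
```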

Try to read about the windows used to make those chip manufacturer plots, TI's ADC, the 32-bit one. Their window is absolutely inspired, but complicated beyond belief.

Reply to
RobertMacy

I wrote an octave plot program that gives you a 3D plot of an fft using octave. octave allows mouse control to spin around 3D plots so you can look at the 'side' [along the frequency variable] or straight into it [looks like a real/imaginary polar plot]

It's a hoot to see the textbook spikes sticking out, then watch how a tiny time delay corkscrews up your plot, twisting it like a coil spring. I actually used that plot program to find a delay, faster than using math! AND found the delay was sadly a 'partial' sample! So I linearly interpolated all the samples just to get that partial.

Reply to
RobertMacy

OK. This is all theoretical for me at the moment. As I'm taught various things, I like to find ways of "trying them out" because for some reason I have difficulty remembering things that I don't interact with. Unlike most people perhaps, I actually prefer computer-based learning systems where they're available, and have difficulty concentrating on a professor droning away at the front of the class. Octave does sound like it'll be useful for that, so that's another thing on my list of things to learn.

Reply to
Dave Rove

Depends on the signal, but in general the boundary edge discontinuity, if it is present, hurts badly. An FFT implements the transform of an implicitly periodic function that wraps around at the end of the array.

There are several conventions for doing this and they can also cause interesting subtle effects. The most common is translational tiling, which means that the target DFT is of ...0ABCD0... in splendid isolation, but you are computing the FFT of a periodic function.

Translational tiling means computing the FFT of ABCDABCDABCD... Some FFT codes use mirror boundaries, like the JPEG DCT does, so the implicit transform is of ABCDDCBAABCDDCBA... or possibly ABCDCBABCDC...

These latter ones can catch out the unwary.
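Translational tiling is easy to see numerically (a sketch using the ABCD notation above): the FFT of ABCD repeated m times is zero everywhere except at bins that are multiples of m, where it is m times the one-period coefficient.

```python
import numpy

abcd = numpy.array([1.0, 3.0, 2.0, 5.0])   # one period "ABCD"
tiled = numpy.tile(abcd, 3)                # ABCDABCDABCD

F1 = numpy.fft.fft(abcd)
F3 = numpy.fft.fft(tiled)

# Energy only at bins 0, 3, 6, 9 (multiples of the repeat count),
# each 3x the single-period coefficient; every other bin is zero
assert numpy.allclose(F3[::3], 3 * F1)
other = numpy.ones(12, dtype=bool)
other[::3] = False
assert numpy.allclose(F3[other], 0)
```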

Depends whether or not you have a time series that starts at zero and naturally goes to zero with increasing time.

There are cunning ways to preconvolve the source data to block out of band picket fence problems (especially useful with non uniform sampled data). The simplest being the prolate spheroidal Bessel function over a small kernel of 7 to 9 points (and then correct the transformed data).

You then compensate for the multiplier in the transform space and throw away a guard band at the edge and you have a near perfect DFT from FFT.

Basically there are many ways to trade resolution for SNR and each has its merits. Schwab's VLA paper of 1984 is still online, although the scanning hasn't treated it kindly and it isn't an easy read.

formatting link

Regards, Martin Brown

Reply to
Martin Brown

One of the nice things about having a reliable reference implementation is that you pretty well know that the bugs are yours.

Right now I'm trying to bring a pre-production instrument up using all-new PC code. The contract engineering firm that the client hired to do a pre-production version of my original prototype spent almost a million bucks of his money, and failed.

The main software problem is that their PC software was written in Labview, by someone who really really didn't know what he was doing.

The firmware was written by a much more talented guy, but it's still buggy, and of course now that their relationship with the client has gone sour, I can't get the bugs fixed, and have to find them by Braille and code around them. Razza frazza &$%*@!!

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

Big tip when learning. Learn not from the standpoint of trying to 'pour' knowledge into your head, rather learn from the standpoint that you must go home and teach someone else. Makes a big difference in something inside your brain. When I try to learn by the pouring method, it might as well be chloroform in print. ....zzzzz

PS, for what it's worth, I've used both python AND octave. Between the two, I think python has a bit more power to it. But I could never master it at all! So not wishing to change from a 'winning' game... well, you know how that works.

Reply to
RobertMacy

I tend to use delphi or packages like it these days.

For me I can whip up a quick app with pre-made graphing controls etc..

I have a pascal file on my system of some basic FFT, IFFT etc... code that seems to work, to get started.

I wrote a DSP package using that code with some refinements. I did continuous FFT with overlapping raw data, resolved it, extracted what I needed and then shuffled the raw data over by 1/3 of the buffer to make room for a new chunk of raw data coming in.

In the old Delphi compiler this code executed very fast.

Originally the code was some old 8 bit FFT/IDFT/DFT code, but easy to convert over to 16 bit. I suppose one could do 32 bits with it.

I really do like languages like Delphi for fast and dirty ways of banging out some quick plots.

Jamie

Reply to
Maynard A. Philbrook Jr.

On a sunny day (Wed, 30 Jul 2014 22:35:44 +0100) it happened Martin Brown wrote in :

But we still cannot receive alien TV

Reply to
Jan Panteltje
