A low-pass time-domain filter has a roll-off. A low-pass frequency-domain (Fourier transform) filter can eliminate all the frequencies above a cutoff frequency completely, without attenuating any of the lower frequencies at all.

If the goal is to determine a phase angle between two signals, is there any advantage in low pass filtering in the time domain before frequency filtering in the frequency domain?

Well... From an operational point of view, of course a filter can "eliminate all the frequencies above a cutoff frequency 100%" by multiplying by (or simply replacing with) zeros.

But, you have to ask yourself what this does in the time domain.

Any flat-zero segment in one domain will create an "infinite" sequence in the transform. This is what happens when we zero-pad a time sequence in order to have higher frequency resolution or to implement circular convolution. This doesn't mean, necessarily, that doing something like this is "bad". You just have to know a bit about what you're doing.

One of my favorite applications of doing just that is this:

- The objective is temporal interpolation of a sequence - which is basically an increase of the sample rate with some filtering involved.

- One simple way is to intersperse a bunch of regular, zero-valued samples in time and then pass the new padded sequence through a lowpass filter in order to get nonzero values where those added zeros were placed.

- Another way is instead to "assume" the new sample rate and plot the new frequency content. This will look like the original spectrum but repeated as many times as the interpolation factor "I". THEN, we lowpass filter (by multiplying in frequency) to remove all of the energy bumps at fsOLD, 2fsOLD, ... up to but not including fsNEW. That suggests the same process you describe.
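For the curious, here's a minimal numpy sketch of the first, time-domain route. The tone, the interpolation factor, and the 63-tap Hamming-windowed sinc are all illustrative choices, not anything prescribed above: zero-stuff by I, then lowpass with cutoff fsOLD/2.

```python
import numpy as np

# Hypothetical example: interpolate by I = 4 by zero-stuffing in time,
# then lowpass filtering to fill in the added zeros.
I = 4                                      # interpolation factor
n = np.arange(64)
x = np.sin(2 * np.pi * 0.05 * n)           # tone well below fsOLD/2

x_up = np.zeros(len(x) * I)                # zero-stuffed sequence
x_up[::I] = x

# Windowed-sinc lowpass with cutoff fsNEW/(2*I), i.e. fsOLD/2.  Its zero
# crossings fall on multiples of I, so the original samples pass through
# unchanged while the stuffed zeros get interpolated values.
taps = 63
k = np.arange(taps) - (taps - 1) // 2
h = np.sinc(k / I) * np.hamming(taps)

y = np.convolve(x_up, h, mode='same')      # interpolated sequence
```

Note the filter's DC gain is about I, which automatically compensates for the factor-of-I power loss caused by inserting the zeros.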

The real trick here is to not create any "sharp edges" when multiplying by a contiguous sequence of zeros. If you do, there will be energy spread out in time, which is just the transform dual of the spectral spreading we get when we use a rectangular window in time.

So, one approach in this interpolation case is to do this.

- Assume we know little or nothing of the signal spectrum because, after all, we're building an interpolation "system". So, if we just multiply by a sequence of zeros, we have no idea what anomalies we're creating.

- Do the interpolation in stages. In fact, the first stage would be to repeat the spectral sequence to only 2fsOLD and then lowpass filter the data with an actual LPF, so that the spectral energy is fairly well limited to 1/4 of fsNEW1, or 1/2 fsOLD. Only now can you know that the spectral data from fsNEW1/4 to 3fsNEW1/4 will be "nearly zero".

- Now you can pad the spectral sequence with zeros symmetrically about fs/2 - effectively increasing the value of fs or N ... whichever way you like to think of it. This padding operation is exactly like what we do in the time domain when we want higher N but have limited data - for whatever reason. NOTE: This padding is exactly the same as looking at the repeated spectrum up to I*fsOLD and then multiplying that sequence with zeros from fsOLD/2 to fsNEW-fsOLD/2.

In effect, the edges of the zero sequence will fall on frequencies where there is little energy (due to the prefiltering). And so, there will be assuredly little temporal spreading caused by doing this.
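A tiny numpy sketch of that symmetric spectral zero-padding, under two simplifying assumptions (both invented for the example): the tone sits exactly on a DFT bin, and there is no energy at fs/2, so the edges of the zero run fall where there is essentially no energy.

```python
import numpy as np

# Hypothetical sketch: interpolate by I = 2 by zero-padding the DFT
# symmetrically about fs/2 (the "split point").
I = 2
N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)          # 3 cycles: energy far from fs/2

X = np.fft.fft(x)
# Insert (I-1)*N zeros between the positive- and negative-frequency halves.
X_pad = np.concatenate([X[:N // 2], np.zeros((I - 1) * N), X[N // 2:]])

# Scale by I so the time-domain amplitude is preserved.
y = np.real(np.fft.ifft(I * X_pad))
```

Because the zero edges land where the (prefiltered) spectrum is already empty, no temporal spreading is introduced and the original samples are reproduced exactly at every I-th output point.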

So this is an example of doing what you want but taking the precaution of pre-filtering before multiplying by a sequence of zeros. It's the same sort of thing as windowing in time to reduce the effects of rectangular windowing, done in advance of zero padding in time. But, in this case our "window" in frequency has the shape of an actual lowpass filter, which is a "rectangular window with smoothed edges and long (nonzero) stopband tails".

So, that's a long-winded caveat. Now on to your question:

First of all, "time domain" filtering and "frequency domain" filtering are two different animals. You can't do temporal IIR filtering in frequency unless it's a special case made to look like an FIR. So that's a limitation of sorts. And, you are forcing yourself to do block- or finite-sequence-oriented processing. If your objective is to deal with streaming data then maybe time-domain processing is better - or you will have to deal with how to combine the blocks.

The advantage of frequency domain filtering is that it can save a LOT of compute time if the (FIR) filter has any appreciable length.

As long as you pay attention to the sequence lengths and what that means in your system then I see no reason to do time domain filtering *before* frequency domain filtering. Also remember that frequency domain filtering is equivalent to circular convolution in time and the temporal sequences have to account for that (be zero padded) to avoid overlap in time.
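To make the circular-convolution caveat concrete, here's a small numpy sketch (the random data and the 8-tap moving average are made up): padding both FFTs to len(x) + len(h) - 1 points makes the frequency-domain product match linear convolution exactly, while an unpadded FFT would wrap the tail back onto the start.

```python
import numpy as np

# Sketch: frequency-domain FIR filtering must zero-pad to at least
# len(x) + len(h) - 1 points, or the circular convolution wraps around.
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
h = np.ones(8) / 8.0                       # simple 8-tap moving average

Nfft = len(x) + len(h) - 1                 # enough room: no time aliasing
y_fd = np.real(np.fft.ifft(np.fft.fft(x, Nfft) * np.fft.fft(h, Nfft)))
y_td = np.convolve(x, h)                   # direct linear convolution

# y_fd and y_td agree; with Nfft = len(x) the tail would wrap around.
```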

Or, one might choose to have the zeros run from fsOLD to fsNEW-fsOLD so that the stopband of the lowpass filter is included, rather than forcing those values to zero as well.

One might also imagine that avoiding sharp edges caused by multiplying by a sequence of zeros would be helped if the value, and perhaps the first derivative, of the original lowpassed sequence were zero at the edges of the "split point" (the fs/2 point at this stage). So, if the requirements are particularly stiff then that might be a consideration. Again, similar to tapered time-domain windowing ideas.

Unlike phase-sensitive filtering, where the phase must be known to recover the amplitude, with matched filtering the phase of the reference must be irrelevant.

If there are 2 sinusoids, at different frequencies, then there is no "phase angle" between them - although there may be a delay between them.

If there are 2 sinusoids at the same frequency then, in the context of a single DFT, they will add and appear to be one sinusoid. But, maybe the system is different.
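A quick numpy check of that claim (the amplitudes and phases are arbitrary): two sinusoids at the same frequency add, by phasor addition, into a single sinusoid.

```python
import numpy as np

# Sketch: two equal-frequency sinusoids sum to one sinusoid (phasor addition).
n = np.arange(64)
a = 1.0 * np.cos(2 * np.pi * 4 * n / 64 + 0.3)
b = 0.7 * np.cos(2 * np.pi * 4 * n / 64 + 1.2)

z = 1.0 * np.exp(1j * 0.3) + 0.7 * np.exp(1j * 1.2)    # phasor sum
c = np.abs(z) * np.cos(2 * np.pi * 4 * n / 64 + np.angle(z))

# c equals a + b sample for sample: within one DFT they look like one sinusoid
```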

If they happen at different times then there are other system considerations - as in "where is the time reference?"

Matched filter with an out-of-phase reference: for a while I thought it wouldn't matter, and in fact the absolute value in the frequency domain is the same for two signals that are identical except for a phase angle.

But after trying it out in Excel, it seems the phase angle must still be calculated from the real and imaginary components: the difference in the angles between the signals.

The noise will probably require some kind of average of the phase angles.

An iterative technique may be necessary. First you get an estimate of the phase angle from the noisy signal and reference, then you phase-adjust the reference, then you apply the matched filter, then you recheck to see if the phase angle is converging.

Assume just one fundamental frequency.
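Under that single-fundamental assumption, a first phase estimate is easy to sketch in numpy (the frequency, phase, and noise level are all invented for the example): correlate against a complex reference at the known frequency and take the angle of the result. This could seed the iterative refinement described above.

```python
import numpy as np

# Hypothetical sketch: estimate the phase angle of a noisy tone against a
# reference at a known frequency f0.
rng = np.random.default_rng(3)
f0 = 0.05                                  # cycles/sample, assumed known
n = np.arange(200)
true_phi = 0.9
sig = np.cos(2 * np.pi * f0 * n - true_phi) + 0.1 * rng.standard_normal(200)

# Correlate against a complex reference; the angle of the result is -phi.
c = np.sum(sig * np.exp(-2j * np.pi * f0 * n))
phi_est = -np.angle(c)
```

The noise perturbs the angle only slightly because it is spread over all 200 samples while the tone adds coherently, which is the averaging effect mentioned above.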


The SNR may be as high as 20, so hopefully one cycle will be enough. In that case only 90% of the noise needs to be eliminated.

Well, I don't know what it means to have an "out of phase reference" in the context of a matched filter.

The classical matched filter uses a time-reversed complex conjugate of the known signal to convolve with what is received in order to detect the presence of the waveform. There is no temporal reference (thus no phase reference) because it's going to be a convolution - that is, the two signals are multiplied together and integrated for *all* reasonable time shifts - thus for all reasonable phases. That's one of the fundamental underpinnings of the thing - to get phase alignment at one of the shifts.

Of course, temporal convolution *is* a filtering operation.

If you want to do this in the frequency domain then you would take the Discrete Fourier Transform of the temporal filter (the time reversed complex conjugate of the desired signal to detect) suitably zero-padded to match the required sequence length.

Then, grabbing likely overlapping sequences of suitable length coming out of the "receiver", you would DFT them and multiply with the saved FFT of the filter.

Then, you might keep multiple results and search over some history for peaks in time and frequency - a 2-D search.
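Here's a rough numpy sketch of that frequency-domain matched filter. The chirp template, the delay, and the noise level are all made-up examples: the time-reversed conjugate filter in time becomes a conjugated spectrum in frequency, and the output magnitude peaks at the unknown delay.

```python
import numpy as np

# Hypothetical example: detect a known template buried in noise at an
# unknown delay, doing the matched filtering with FFTs.
rng = np.random.default_rng(1)
M = 32
s = np.exp(1j * np.pi * np.arange(M) ** 2 / M)     # known chirp template
delay = 100
r = 0.2 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
r[delay:delay + M] += s                            # "received" sequence

Nfft = len(r) + M - 1                              # zero-pad: no wrap-around
# Conjugating the spectrum is equivalent to filtering with the
# time-reversed conjugate template (up to a circular shift).
H = np.conj(np.fft.fft(s, Nfft))
y = np.fft.ifft(np.fft.fft(r, Nfft) * H)           # matched filter output

peak = int(np.argmax(np.abs(y)))                   # peak falls at the delay
```

In a real system you would do this on overlapping blocks of received data and search the history of outputs for peaks, as described above.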

All that does is reverse the sign of one of the inverse-transform signals. The amplitude drops nonlinearly in the signal that is a little out of phase.

I don't know what Excel has to do with it...... Lots of things can be done with Excel - indeed sometimes to good advantage.

You can surely compute the inverse transform to get a temporal view of the result... You may want to do multiple sequences overlapped to get the best results to look at.

I don't think it best to consider the reference as having a time reference - other than it begins and ends. The time reference is likely best determined by when you snagged the received signal sequence - the absolute time of the signal sequence. Then, the output of the matched filter would be referenced to that time frame.

Not true, since each frequency bin has finite width (= fs/N). You cannot have a brick-wall filter even in the frequency domain. You can, however, make an approximation by using the noncausal part of a filter and introducing a time delay.

Wrong. At least under the usual assumptions about context.

The time domain (TD) filter can work on a signal of arbitrary length, so it should be analyzed in Frequency Domain using the (infinite length) Discrete Time Fourier Transform (DTFT). The DTFT produces a continuous spectrum.

If you *choose* to use the DFT and do the filtering in Frequency Domain (FD), you also *choose* to

- Work with a finite amount of data

- Work with a finite number of spectral samples

So you are comparing apples and oranges.

If you work through the maths, you will find that the 'true' spectrum of the finite number of samples is in fact continuous, and that it ripples between the normalized frequencies w_k = 2*pi*k/N.

You will find the analysis in any half decent textbook on DSP. Look for the term 'spectral sampling'.
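The spectral-sampling point is easy to see numerically. In this numpy sketch the tone frequency is deliberately placed halfway between two bins (all numbers invented): zero-padding the time sequence evaluates the same underlying DTFT on a denser grid, exposing the ripple between the original bins.

```python
import numpy as np

# Sketch: the N-point DFT samples a continuous (DTFT) spectrum at
# w_k = 2*pi*k/N.  Zero-padding evaluates the same DTFT on a denser grid.
N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 5.5 * n / N)        # frequency between two bins

X_coarse = np.abs(np.fft.fft(x))           # N samples of the DTFT
X_dense = np.abs(np.fft.fft(x, 16 * N))    # same DTFT, 16x finer sampling

# Every coarse sample reappears in the dense grid, but the dense grid
# shows the true peak and the ripple the coarse grid misses entirely.
```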

I was using it to get an idea of what kind of reduction of noise is possible with reference filtering in the time domain [phase-sensitive rectification (PSR)] vs reference filtering in the frequency domain.

A PSR simulator in Excel cycles different noise -- using RAND() for different "noise" phase angles and frequencies -- at several hertz, so it's easy to imagine what a histogram would look like in just a minute or so. What I found was that dozens or hundreds of cycles might be necessary to get the noise down to an acceptable level -- not fast enough.

Even worse, PSR + an unknown phase angle could create bigger errors in the magnitude than leaving the unfiltered signal alone.

Frequency-domain filtering should solve both problems. A matched filter can determine the time period between the reference and the signal, which is what I've been calling a phase angle.

Once that is known and corrected another FFT based reference filter can reduce the noise in one or two cycles.
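For illustration, here is a hedged numpy sketch of such an FFT-based reference filter in the simplest single-tone case (the bin number and noise level are invented): keep only the bin pair where the reference has energy and zero everything else.

```python
import numpy as np

# Sketch of a frequency-domain "reference filter": pass only the bins
# where the reference has energy, zero the rest (single-tone case).
rng = np.random.default_rng(4)
N = 256
k0 = 8                                     # reference tone sits on bin k0
n = np.arange(N)
sig = np.cos(2 * np.pi * k0 * n / N) + 0.5 * rng.standard_normal(N)

S = np.fft.fft(sig)
mask = np.zeros(N)
mask[k0] = mask[N - k0] = 1.0              # pass only the tone's bin pair
clean = np.real(np.fft.ifft(S * mask))
```

Keeping 2 of 256 bins discards almost all of the broadband noise power in a single record, which is the "one or two cycles" speedup over PSR suggested above.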

Excel's FFT is too unwieldy to quickly try dozens of different noise levels, but from the few I've tried it seems to reduce noise much faster than PSR.

The CORREL function may be the fast way to go. It isn't giving the same results as the FFT reference filter, however.

I may have done that. The IMABS of the inverse transform maxes out when the reference and the signal are in the same time period.

That's how I may wind up determining "phase angle."

Isn't that how radar works? They keep taking FFTs and dot products until the correlation spikes?

That seems like it would be nearly impossible without computers.

If you're seeing the magnitude of the matched filter output change based on a constant phase shift between the signal you're processing and the reference, then you're doing something wrong. Convolution is a linear operation. A constant phase shift on the template signal is equivalent to multiplying the reference by exp(j*theta), a constant. Therefore, if y[n] is the matched filter output with the original reference, the output with the phase-shifted reference is exp(j*theta) * y[n].

The magnitudes of y[n] and exp(j*theta) * y[n] are equal for all n (a phase shift does not affect the magnitude of a complex number). So, if you're observing that this is not the case, something is wrong with your simulation.
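That argument is easy to verify numerically. A numpy sketch, with the tone, noise level, and theta all arbitrary:

```python
import numpy as np

# Sketch: a constant phase shift on the reference only multiplies the
# correlator output by a unit-magnitude constant, so |output| is unchanged.
rng = np.random.default_rng(2)
s = np.exp(2j * np.pi * 0.08 * np.arange(64))      # reference
r = s + 0.2 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

theta = 0.7                                        # arbitrary phase offset
y0 = np.correlate(r, s, mode='full')               # original reference
y1 = np.correlate(r, np.exp(1j * theta) * s, mode='full')

# |y1| == |y0| everywhere; only the output phase rotates by -theta.
```

(np.correlate conjugates its second argument, so y1 is exactly exp(-j*theta) * y0.)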

Note that you will see attenuation if there is a *frequency* offset between the signal and the reference; this is a well-known phenomenon for communication receivers that use "long" correlators, such as direct-sequence spread-spectrum systems. If the frequency offset between the transmitter and receiver is a significant fraction of the inverse of the reference's length in time, then you will start to observe attenuation in the matched filter's output. This problem is often addressed by using a bank of parallel correlators, spaced in frequency, such that the signal of interest will be near the center of one of the filters in the bank.
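A numpy sketch of that attenuation effect (the correlator length and the offsets are arbitrary): with a frequency offset of half the inverse of the reference's length, the normalized correlation gain drops to about 2/pi, roughly 4 dB of loss.

```python
import numpy as np

# Sketch: a "long" correlator loses gain as the frequency offset between
# signal and reference approaches the inverse of the reference length.
N = 1000                                   # reference length in samples
n = np.arange(N)
s = np.exp(2j * np.pi * 0.1 * n)           # reference tone

def corr_gain(df):
    """Normalized correlator output for a frequency offset df (cycles/sample)."""
    r = np.exp(2j * np.pi * (0.1 + df) * n)
    return np.abs(np.vdot(s, r)) / N       # 1.0 when df = 0

g0 = corr_gain(0.0)                        # no offset: full gain
g1 = corr_gain(0.5 / N)                    # offset = half the inverse length
```

This is why a bank of correlators spaced in frequency is used: each filter in the bank covers a slice of offsets narrow enough that the loss stays small.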

How in the world did PSR get into this thread? I thought we were talking about matched filters and whether there were any differences between time domain implementation and frequency domain implementation.

OK, I found a description:

formatting link

Geez. This looks a lot like a phase-locked loop (PLL) receiver of the sort that's used, for example, in deep space communications. In those applications, the data bandwidth is very low so that the receiver bandwidth can be very low (thus improving SNR at the output).

Matched filters are most often used in pulse systems like radar and sonar; although they tend to work better in radar. There's a ton of literature on the subject.

PLL receivers tend to work on continuous signals. There is generally a "lock" period to get the phase right and the receiver can also "lose lock".

The difference seems rather stark to me. One is for short, known signals (known modulation if you will) and the other is for long, known frequency, signals of unknown modulation. That may not be the best description but it's close enough for now.

So, it appears you're pondering a *system design* question in addition to your original question about matched filtering.

It was a stepping stone to using Excel for matched filtering.

It's no longer of interest so feel free to ignore it.

216153544735.pdf

The time multiplication step is the heart of PSR and lock in.

".

The phase may not always be known, in which case it won't work for precision amplitude measurements. It may very well be worse than the noise.

It would be surprising if the two types of reference or "adaptive" filtering weren't compared before now.

A lot more cycles [time] should be necessary with PSR to get the same reduction of noise as FFT reference filtering.

With an FFT you know everything possible about the signal, and with reference filtering in the frequency domain, all that information is utilized. That may be why they call the matched filter the "optimal" filter. It may be the absolute best you can do.

With PSR you only know, or need to know, the phase angle. Many lock-in systems simply multiply by a square wave. All the information in the waveform is tossed with PSR.

The PSR is only of interest now if it were possible to somehow glean a phase angle, maybe by comparing the reference * signal with the reference * reference or the signal * signal. If that's not possible then the phase angle will have to be determined by some kind of convolution / matched filtering.

Once phi is determined, then the reference can be corrected in either domain.
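For a single tone, that equivalence can be sketched in numpy (the bin number and phi are invented): rotating the tone's positive- and negative-frequency bins by -/+ phi in the frequency domain produces exactly the phase-shifted reference in the time domain.

```python
import numpy as np

# Sketch: correct a known phase angle phi in either domain.
N = 128
k0 = 4                                     # tone sits exactly on DFT bin k0
n = np.arange(N)
phi = 0.6
ref = np.cos(2 * np.pi * k0 * n / N)

# Time-domain correction: regenerate the tone with the phase folded in.
ref_td = np.cos(2 * np.pi * k0 * n / N - phi)

# Frequency-domain correction: rotate the conjugate bin pair by -/+ phi.
R = np.fft.fft(ref)
R[k0] *= np.exp(-1j * phi)
R[N - k0] *= np.exp(1j * phi)
ref_fd = np.real(np.fft.ifft(R))
```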
