filter design for low-pass

Hi,

I had a very general question.

I'd like to design a low-pass filter, and I was wondering what the general layout for one was.

I'm currently using a National Instruments FPGA module. They sell a filter design kit for $1,000 and I'm wondering if I can avoid buying it.

The National Instruments FPGA I'm currently using comes with one FIR filter example which uses four shift registers (plus the input) and three coefficients to bring the signal down by a factor of 10 (200 kHz -> 20 kHz). Which in itself is strange, since the window is only 5 points wide?

I need to filter it down to 200 Hz for my application. I'm afraid of programming 500 shift registers. Even if I did something clever with the FIFO, in the end, there's a lot of multiplication, which is very costly.

It seems like some kind of decimation strategy is my only hope? But this is certainly not the same as filtering, and the primary objective is to reduce the noise in real time. Or is there something similar to the FFT that divides and conquers, breaking the job up into smaller parts to get it done?

So I was wondering what the general strategy was for such filters.

Reply to
will.parks

Building a filter by yourself in an HDL is fairly simple and a good learning experience. If I were you, I would look into IIR filters, especially the Butterworth variety. Also, power-of-two multiplication and division amount to a simple shift, which will reduce the time needed to filter the signal.
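To make that concrete, here is a minimal one-pole low-pass sketched in Python rather than HDL (the shift amount k is arbitrary); it only illustrates the shift-instead-of-multiply idea, not NI's example:

# One-pole IIR low-pass: y[n] = y[n-1] + (x[n] - y[n-1]) / 2**k.
# The divide by 2**k is an arithmetic shift, so no multiplier is needed;
# the -3 dB corner lands near fs / (2 * pi * 2**k).
def iir_lowpass_shift(samples, k=4):
    y = 0
    out = []
    for x in samples:
        y += (x - y) >> k        # the shift stands in for the multiply
        out.append(y)
    return out

# Example: smooth a step of integer samples, as on an FPGA datapath
print(iir_lowpass_shift([0] * 4 + [1024] * 20, k=2))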

---Matthew Hicks

Reply to
Matthew Hicks

Tee hee.

              filter
             .------.
 signal in   |      |   signal out
------------>|      |------------>
             |      |
             '------'
What? This isn't helpful? That's probably because you could write a book about what to put in the 'filter' block.

Common things would be an IIR linear filter or an FIR linear filter, but there are other options (including the decimation you mention later), and just "IIR or FIR" covers quite a bit of ground.

Yes you can, but you have to know what you're doing. Ultimately you'll have to know what you're doing to really use the NI package as well, unless your capabilities far outstrip your requirements. Those sorts of packages are great for getting something working in the lab early, but without knowing what they do you can't effectively get that last 10% worth of performance that makes the difference between a product that's a disaster and a product that's a success. Of course, if you _do_ know what they do, you don't need them.

"Understanding Digital Signal Processing" by Rick Lyons would be a big help, but you need answers faster, I assume.

You aren't saying what you're filtering down _from_, but your mention of 500 shift registers (I assume you mean a 500-tap delay line) implies that you're sampling at somewhat less than 100 kHz (if you're sampling at exactly 100 kHz you need to read the paper at the formatting link). I'll assume that you are sampling at 25 kHz -- if you use one multiplier you'd only need to clock it at 12.5 MHz, which is a pretty unchallenging clock speed. You'd still need that 500-tap delay line, however.

Decimation is not filtering, but if you filter and then decimate you both reduce the necessary processing (you only have to run through that 500-tap filter once for each output sample, not once for each input sample) and leave less data for the following stages to slog through.
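A sketch of that bookkeeping, with toy made-up coefficients standing in for the real ~500 taps:

# Filter-and-decimate in one pass: the FIR dot product is evaluated only at
# the output sample times, so decimating by R cuts the multiplies per input by R.
def fir_decimate(x, taps, R):
    out = []
    for n in range(len(taps) - 1, len(x), R):   # one output for every R inputs
        acc = 0
        for k, h in enumerate(taps):
            acc += h * x[n - k]
        out.append(acc)
    return out

taps = [0.25, 0.5, 0.25]    # toy low-pass, stand-in for a real ~500-tap design
print(fir_decimate(list(range(32)), taps, R=4))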

Using a sinc^n filter has two advantages: it's light on processing resources, because all the 'multiplies' are by 1 or 0, and it sounds damn impressive when you throw the name at the boss. It works _very_ well in an environment where you're down sampling, because the filter has natural nulls at anything that would alias down to DC, and that's where you're usually most interested in what's going on.
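The simplest member of that family, sketched below, is a single boxcar (sinc^1) stage; cascading n of them gives the sinc^n response (the lengths here are arbitrary):

# One sinc stage is a length-R boxcar: every coefficient is 1, and its nulls
# land exactly on the frequencies that alias to DC when you decimate by R.
# Keeping one output per R inputs makes it just non-overlapping block sums.
def boxcar_decimate(x, R):
    return [sum(x[n:n + R]) for n in range(0, len(x) - R + 1, R)]

print(boxcar_decimate(list(range(20)), R=5))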

In theory you reach a point where it's more efficient to filter by performing an FFT on a block of data, windowing it by your desired filter function, then performing an IFFT back to the 'real world'. Rick Lyons's book goes into this.

In practice this sort of optimization is very problem-dependent; using some sort of simple filter-and-decimate may take significantly less resources.
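For what it's worth, the frequency-domain version looks something like this sketch (numpy, toy coefficients); with an FFT length of at least len(x) + len(h) - 1 it reproduces ordinary linear convolution exactly:

import numpy as np

# FFT the block, multiply by the filter's spectrum, inverse FFT back.
def fft_filter(x, h):
    n = len(x) + len(h) - 1                    # long enough to avoid wrap-around
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

x = np.random.randn(64)
h = np.array([0.25, 0.5, 0.25])                # toy low-pass taps
print(np.allclose(fft_filter(x, h), np.convolve(x, h)))   # True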

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

Thanks for your reply. It does seem like I should start with an easy filter first and work up to the filter I want to build.

Something Tim Wescott said perplexes me, though. Filtering and then decimating does sound like it will be computationally cheap, but I wonder just what is the advantage of decimating a control variable?

I'm fixed at 200 kHz sampling, and the process I'm interested in controlling is at 200 Hz. Having 200 kHz of bandwidth makes the noise larger (the noise density should be a constant in volts per root hertz, so more bandwidth means larger noise in volts). It seems to me that filtering to 200 Hz is the correct way to reduce the bandwidth and thus reduce the noise in volts. Minimizing the noise is an important concern for controlling the process quickly.

It's not clear to me (and I'm sure this is my fault for not being better informed) that decimating is an equivalent way of reducing the bandwidth. What effect will decimating have on noise as expressed in volts?

And this brings up a broader question I've had in the back of my mind. Is it better to sample as fast as possible and then filter down to the desired bandwidth? Or is it better to sample only as fast as needed, with the gains of sampling like mad and filtering like crazy being minimal? Forget the practical aspects of limited memory and large files.

I do have hardware anti-aliasing filters. Your notes made some good points; anti-aliasing is always an important concern!

Will


Reply to
will.parks

First, addressing the broader question: the A/D converter will have a noise floor specified in dBFS. This noise floor is what you would integrate over the filtered bandwidth to determine the total signal-to-noise ratio. If you use a much higher bandwidth, the total noise power will increase proportionately, but you filter that back out and you're back to where you started. Typically the noise increases when you start to approach the device limits; using a 1 MS/s A/D at 1 kS/s versus 5 kS/s may make no difference, but 500 kS/s versus 800 kS/s may be noticeable.

At particularly low frequencies, you may see an uptick in the noise floor as other noise sources, such as shot noise, start to invade your signal chain.

On another subject, decimation could be a good thing. Imagine being able to evaluate your position on the road as you're driving at 60 frames per second video rates. Do you need to adjust your steering at 60 Hz? No. Your adjustments are partly dictated by the response of your system. There's no sense in providing 200 kS/s of adjustments to a system with 200 Hz response. You can, but there will be no gain as driving isn't improved with faster corrections.

- John_H

Reply to
John_H

I think you are right that sampling and then filtering does not buy you more S/N ratio.

However, I'm not sure I explained the problem correctly. A closer analogy would be a noisy dotted line that goes down the middle of the road. You want to follow the line as quickly as possible, but your car can only respond at 200 Hz. So you can either decimate your observations of the dotted line to 200 Hz or filter them to 200 Hz.

Maybe it would help if I understood what decimation does in Fourier space? Is decimation like sampling? Is it like applying a cut-off frequency, so the Nyquist theorem applies? Does it need its own anti-aliasing filters?

Thanks for this discussion! I think I've made a lot of headway in understanding. If I can write a filter that filters once or twice and gets most of the work done by decimation, this will save a lot of gates on my FPGA.

Also, thanks for the choice of textbooks. The book I've been consulting is good at almost telling you how things work, but then they give example code in MATLAB, which is too meta and you never see anything!

Will


Reply to
will.parks

Decimating is different from bandwidth reduction. Decimating is basically lowering the sample rate to something lower that still samples the signal often enough to convey its bandwidth. For example, if your signal has a bandwidth of 200 Hz, it can be fully reconstructed with sample rates as low as 400 Hz (although the anti-aliasing filters have to approach an ideal brick-wall filter as you approach a 400 Hz sample rate, and therefore get tougher to design as your sample rate approaches the Nyquist rate for the signal).

Now, if your signal that is sampled at 200 kHz has other noise or other signals that you are not interested in, you need to filter those out before you decimate; otherwise those out-of-band signals will fold into your band of interest (aliasing) when you decimate the sample rate. Whether you need to filter or not depends on the signal.

Oversampling has the advantage of making the analog anti-alias filters considerably easier to design, and the more the signal is oversampled the less stringent the filter characteristics are. It also spreads the quantization noise and other noise in the ADC over a wider bandwidth, so after filtering you can end up with a lower effective noise floor. The disadvantage of oversampling is that it greatly increases the processing load for your digital processing. In the case of an FIR filter, the steepness of the features in the filter's spectral response translates to the length of the filter required. If you try to filter the 200 Hz signal out of the 200 kHz signal in one step, the passband of the filter is only 1/500th of fs/2, which in turn means a rather long FIR filter. Also, the higher sample rate means less time per sample to compute the filtered sample; so by using a higher sample rate, you have both a greater number of multiplications to perform per sample and less time to do them in.
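As a rough sanity check on "rather long", one common rule of thumb (fred harris's estimate) ties the tap count to the sample rate, transition width, and stopband attenuation; the numbers below are only illustrative:

# Rough FIR length estimate: N ~ (fs / transition width) * (attenuation_dB / 22)
def fir_length_estimate(fs_hz, transition_hz, atten_db):
    return int(round(fs_hz / transition_hz * atten_db / 22.0))

# One-step design at fs = 200 kHz: pass 200 Hz, stop by roughly 400 Hz, 60 dB down
print(fir_length_estimate(200_000, 200.0, 60))   # on the order of 2700 taps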

The brute-force approach would be to make such a filter, and then decimate by 500 after the filter. As it turns out, you don't need to do the computations for the discarded output samples. There is a structure called a poly-phase decimator that rearranges the math so that the filter works at the output rate and only performs the computations that are needed for the retained output samples. Also, in this case, it is more efficient to decimate in steps with a series of filters. The first filters in the series need only a few taps because the transition region between the passband and stopband doesn't have to be sharp, as it gets cut off by the next filter in the chain. Half-band filters are a special class of FIR filters that have a response that is anti-symmetric about fs/4 and have nearly half the coefficients set to zero. For more details, google multi-rate filtering.
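A sketch of the polyphase rearrangement (plain Python, toy taps): each branch holds every R-th tap and is evaluated once per retained output sample, so no multiply is spent on a sample that gets thrown away.

# Polyphase decimator: branch p holds taps h[p], h[p+R], h[p+2R], ... and sees
# the input phase x[n-p], x[n-p-R], ...; the outer loop runs at the output rate.
def polyphase_decimate(x, taps, R):
    branches = [taps[p::R] for p in range(R)]
    out = []
    for n in range(len(taps) - 1, len(x), R):
        acc = 0
        for p, branch in enumerate(branches):
            for m, h in enumerate(branch):
                acc += h * x[n - p - m * R]
        out.append(acc)
    return out

taps = [1, 2, 3, 4, 3, 2, 1]                 # toy symmetric taps
print(polyphase_decimate(list(range(40)), taps, R=2))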

For high decimation ratios like you have here (500:1), there is yet another filter structure that is quite helpful, called the CIC or cascaded integrator-comb filter, also sometimes known as a Hogenauer filter after its inventor. That filter is multiplier-less: it is a recursive implementation of a boxcar filter (a moving average without the divide by N). If I were designing your filter, I would use a CIC filter to decimate by 125 or 250 to get the rate down, and follow that with one or two decimate-by-2 FIR stages, with the second FIR being the one that defines the actual shape of the passband. The first FIR filter is probably no more than about 15 taps, and can be a half-band filter. The second is typically less than 100 taps, but depends highly on the shape of the passband relative to the output sample rate.
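A behavioral sketch of a third-order CIC decimator in integer Python (not HDL); the DC gain is R**order, and the output scaling or truncation a real Hogenauer design needs is left out:

# Integrators run at the input rate, the comb (difference) stages at the output rate.
def cic_decimate(x, R, order=3):
    ints = [0] * order
    kept = []
    for n, sample in enumerate(x):
        acc = sample
        for i in range(order):               # cascaded integrators
            ints[i] += acc
            acc = ints[i]
        if n % R == R - 1:                   # keep one of every R integrator outputs
            kept.append(acc)
    combs = [0] * order
    out = []
    for v in kept:                           # cascaded combs, differential delay 1
        for i in range(order):
            v, combs[i] = v - combs[i], v
        out.append(v)
    return out

print(cic_decimate([1] * 64, R=8))           # settles to R**3 = 512 for a DC input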

Reply to
Ray Andraka

Depends on the source of the noise. It isn't going to help if you have wideband noise in the signal. However, if the noise is due to quantization in the ADC, then by decimating you gain effective ADC bits. This is sort of like averaging the error on many samples to get a better estimate of one sample assuming the input hasn't changed significantly over the many samples. You can also shape the noise induced within the system by employing feedback to push the system noise into the stopband of your filter. This is the principle behind delta-sigma converters.

The classic advantage of oversampling, as I mentioned before, is that it relaxes the specification for your anti-alias filters, making them much cheaper to implement.

Reply to
Ray Andraka

(top posting fixed)

-- snip --

It reduces computational load in two ways. First, and obviously, it reduces the rate at which you have to recompute the loop. This is a big deal if you're using a microprocessor, not so much with an FPGA. Secondly, it reduces the precision requirements, and hence the data path widths, in any internal filters in your controller. Controllers are almost invariably implemented with IIR filters, and the closer the filter's poles and zeros are to 1, the more precision one needs to implement them properly. If you downsample by a factor of 1000 you theoretically reduce your data path widths by 10 bits.
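A quick back-of-the-envelope version of that precision argument, using a one-pole low-pass with a 200 Hz corner (the bit counts are only rough):

import math

# The pole of a one-pole low-pass sits at exp(-2*pi*fc/fs), so the coefficient
# (1 - pole) shrinks roughly as 1/fs: every doubling of the sample rate costs
# about one more bit just to represent it.
fc = 200.0                                   # corner frequency, Hz
for fs in (200_000.0, 2_000.0):
    pole = math.exp(-2 * math.pi * fc / fs)
    bits = math.log2(1.0 / (1.0 - pole))
    print(f"fs = {fs:9.0f} Hz   pole = {pole:.4f}   ~{bits:.1f} bits for (1 - pole)")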

Ooh ouch. You're confusing sampling rate with bandwidth. Please read that paper on sampling that I linked to -- you need it.

By "controlling at 200Hz" do you mean that the process has a natural bandwidth of 200Hz before you control it, that you designing things to have a 200Hz bandwidth after you control it, or that for some reason you're constrained to sampling at 200Hz on the output side? These are three very different questions, all of which have the answer "200Hz".

Having a 200kHz bandwidth doesn't necessarily make the noise larger -- it depends on the source of the noise. And you seem to be confusing sampling rate with bandwidth. They _aren't_ the same.

If you're sampling with a typical SAR A/D converter then the converter's front end has a bandwidth that's well above the sampling rate, and it has a noise level that's well above thermal noise levels. One of these A/D converters will pretty much give you the same noise statistics per sample whether you sample it as fast as it'll go, or once every year. In this case oversampling and filtering _after_ the A/D conversion _will_ reduce noise.

No, decimating by itself doesn't reduce bandwidth, and it won't reduce noise at all. The effect of decimating is to just resample the sampled data. To effectively decimate you need to low-pass filter first. This is anti-aliasing, but it isn't usually stated as such -- people usually say "filter and decimate". I dunno why -- it's just common usage.

It depends on your A/D converter. With a typical SAR converter, if noise is everything you should sample like mad, filter and decimate. In fact, a sigma-delta converter does just that.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

In Fourier space, decimation folds the wideband spectrum onto the lower frequencies in precisely the same fashion that sampling below Nyquist aliases the higher frequency content back into your band of interest. If you sample significantly below Nyquist (or decimate by a large factor), the folding can happen several times. By guaranteeing you have good stopbands where the aliases fall on your bandwidth of interest, the filter-then-decimate approach works for analog and/or digital front ends.
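A small numerical illustration of that folding (numpy; the tone frequency and rates are arbitrary):

import numpy as np

# A 1.8 kHz tone sampled at 10 kHz, decimated by 5 with no filter in front,
# shows up as a 200 Hz tone at the 2 kHz output rate (2000 - 1800 = 200).
fs, f0, R = 10_000, 1_800, 5
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * f0 * t)
y = x[::R]                                       # decimate without anti-aliasing
spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=R / fs)
print(freqs[np.argmax(spectrum)])                # ~200.0 Hz: the alias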

Ray's description is superb but I, too, like to visualize Fourier space.

- John_H

Reply to
John_H

Thanks for all the feedback! I'm going to chew on this for a few hours!

But just to clarify, the source of the noise is primarily, but not solely, the system itself. It's a bead in a viscous fluid. It is being driven by Brownian motion, which has a power spectrum (amplitude versus frequency) that goes as 1/f^2. The bead is a sphere and it has viscous drag, so the power spectrum is flat until 10 kHz and then it rolls off.

Following the bead at all frequencies is not yet practical. But we certainly don't want to alias the 2 kHz motions down to our 200 Hz control loop. The flat region of the power spectrum is 0.1 mV per root hertz, so the whole idea is to filter down to the time response of the instrument (200 Hz) and at the same time get improved resolution. I.e., at 10 kHz the signal's peak-to-peak fluctuations should be 10 mV, but at 200 Hz they should be 1.4 mV (providing I don't alias anything!). So I think in this case, decimation doesn't help?
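For anyone following along, the arithmetic behind those two numbers is just the density times the square root of the bandwidth (strictly an RMS figure):

# Broadband noise level ~ noise density * sqrt(bandwidth)
density = 0.1e-3                      # 0.1 mV per root hertz, from the post
for bw_hz in (10_000, 200):
    print(bw_hz, "Hz ->", round(density * bw_hz ** 0.5 * 1e3, 1), "mV")
# 10000 Hz -> 10.0 mV, 200 Hz -> 1.4 mV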

These questions about bandwidth and noise -- are there any good books that address them? I've actually been picking this up as I go, as I am a scientist and not an engineer, so just the first step of quantifying the resolution of our instrument in nm per root hertz (taking the lead from electrical engineers) was a big step forward in my understanding. For example, another question I've had in my mind is how to deal with an attenuated signal, i.e., what if I sampled at 50 kHz, where the signal is rolling off due to viscous drag, and tried to compare it to the rest of my signal? I'm sure this is a really basic concept in electrical engineering, so maybe someone knows of a book that covers bandwidth and noise, and even the fundamental concept of quoting noise in nm per root hertz and how that works -- i.e., does the distribution of the noise have to be normal or Gaussian, does it have to be flat, etc.

Or maybe there isn't such a book and you could write one?

Will


Reply to
will.parks

Then the advantage to sampling fast and filtering is that you ensure that your ADC noise _isn't_ part of your system noise. If you cut out noise at all you'll be cutting out noise that you can't do anything with, anyway.

I'm not clear on where your control loop is, but if your instrument is inherently low-pass you should be able to control it with the full bandwidth signal, noise and all, and let it follow as best as it can. Assuming that this doesn't overload your actuators, and that you aren't limited to a 200Hz sampling rate, you'll get pretty close to an optimal control.

If it _does_ saturate or overload your actuators, of course, then low-pass filtering is indicated.

The three books that I have, in ascending order of difficulty are:

"Information Transmission, Modulation and Noise" by Schwartz, "Probability, Random Variables, and Stochastic Processes" by Papoulis, and "Detection, Estimation, and Modulation Theory" by Van Trees.

But the one that I hear recommended most often is "Signals and Noise" by Proakis (or something like that). I think if you searched a book site on "signals" and "noise" you'd do well. Stay away from the Van Trees book unless you can take a class and have _lots_ of time. It's very useful, but extremely challenging. It's one of the few books that I have to refer back to constantly, because the material is difficult enough that I just can't remember all of it all the time.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott
