Sampled Data and Frequency Rejection RC Filter Question

Hi,

We are sampling a signal at 1kHz (every 1ms) and debouncing it (it has to read "high" or "low" 20 times in a row to be debounced ON or OFF), but we are still having problems in our application with frequencies coming through (presumably around 1kHz or 2kHz) and "tricking" our debouncing scheme. The mechanism is that we are just catching the "peaks" 20 times in a row (standard aliasing stuff).

We don't have a big-budget filter (just an RC).

What is the standard rule of thumb for the relationship between the sampling rate and the RC time constant? Since an RC doesn't roll off very sharply, we aren't sure if we want to reduce 1kHz to 1/e amplitude or if there is a rule of thumb that says we want to reduce it even further.

How big should our RC time constant be? How much do we want to attenuate a 1kHz signal? 1/e? Less? More?

Thanks, Datesfat

Reply to
Datesfat Chicks

The Nyquist rule states that you have to get rid of all frequency components at and above fs/2 (500 Hz).

It depends on your application how near to the limit you want to go with the passband, but the nearer you want to come, the more complicated a filter you'll need.

A single-pole RC low-pass with a time constant of 1 ms has 3 dB attenuation at 159 Hz, and the response drops by 6 dB / octave above that. The corner frequency scales inversely with the time constant.
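For anyone who wants to check those numbers, here is a quick sketch in C. Only the 1 ms time constant comes from the statement above; the list of test frequencies is just illustrative.

/* Single-pole RC low-pass: corner frequency and attenuation.
 * tau = 1 ms as discussed above; the test frequencies are arbitrary. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi  = acos(-1.0);
    const double tau = 1e-3;                    /* RC time constant, s   */
    const double fc  = 1.0 / (2.0 * pi * tau);  /* -3 dB corner, ~159 Hz */
    const double f[] = { 100.0, 159.0, 500.0, 1000.0, 2000.0 };

    printf("corner frequency: %.1f Hz\n", fc);
    for (size_t i = 0; i < sizeof f / sizeof f[0]; i++) {
        double gain = 1.0 / sqrt(1.0 + (f[i] / fc) * (f[i] / fc));
        printf("%7.0f Hz: gain %.3f (%.1f dB)\n", f[i], gain, 20.0 * log10(gain));
    }
    return 0;
}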
--

Tauno Voipio
Reply to
Tauno Voipio

This doesn't immediately make sense to me in our application. If we have a 500Hz signal and we're sampling at 1kHz, it seems we might "perceive" the 500Hz signal correctly -- no harm done. I can't come up with a scenario where a 500Hz signal would cause us to perceive "ON" when it isn't.

I don't see that the frequencies in [500Hz, 1000Hz) will do us any harm. I could be missing something.

We aren't trying to reconstruct signals. We have a discrete (on/off) input, and we are trying to prevent a certain class of noise on the input from causing us to sense the discrete value incorrectly.

Our "input" is effectively a "light switch" (we are only interested in on/off), just that there is some noise on it periodically that has tricked our debouncing filter.

Can you give an example of how frequencies in [500Hz, 1000Hz) could hurt us?

The mechanics of the filter I understand. It is just a question of what is a good design rule for how far the signal has to be attenuated at 1kHz (or perhaps at 500Hz, depending on the question above). How far "down" do we want it? 1/e seems safe, but I'm wondering if there is some scientific way to think about it.

Our application is automotive (12V). Our threshold is about 8V. Every 1ms we sample, and if it is 8V or above we say "ON", otherwise we say OFF. Our debouncing filter is that if we get 20 consecutive OFFs or ONs, we then say the output of the filter is OFF or ON.

If we attenuate to 1/e at 1kHz (or 500Hz, depending on the response to the question above), then a 1kHz square wave at 12V should get knocked down to about 4.4V, always below our "ON" threshold. 1/e attenuation seems adequate. Not sure if there is another way to think about it.

Thanks, Datesfat

Reply to
Datesfat Chicks

I think you are misunderstanding the problem. If you have defined your debounce time as 20ms and you are getting 20ms of a solid state, and that isn't correct for the application, then you need to lengthen the debounce time.

But your algorithm as described isn't precisely the kosher approach. This is more usual (a rough C sketch follows the list):

- start timer = 20 ms and clear edge detect interrupt request

- wait for timer to expire

- if edge detect interrupt fires, reset timer to 20 ms

- latch new state of pin only when timer expires and edge interrupt request = clear
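A minimal sketch of that scheme in C, assuming a 1 ms tick and hypothetical hardware hooks (edge_irq_pending(), clear_edge_irq(), read_pin()) that the application would have to supply for its particular part:

/* Sketch of the "restartable timer" debounce described above.
 * The hardware hooks are placeholders for whatever comparator /
 * GPIO / edge-capture peripheral the design actually uses. */
#include <stdbool.h>
#include <stdint.h>

#define DEBOUNCE_MS 20u

extern bool edge_irq_pending(void);  /* hypothetical: edge flag set?        */
extern void clear_edge_irq(void);    /* hypothetical: clear that flag       */
extern bool read_pin(void);          /* hypothetical: raw comparator output */

static uint8_t timer_ms = DEBOUNCE_MS;
static bool    debounced_state;

/* Call once per 1 ms tick. */
void debounce_tick(void)
{
    if (edge_irq_pending()) {          /* any edge restarts the timer       */
        clear_edge_irq();
        timer_ms = DEBOUNCE_MS;
    } else if (timer_ms > 0u && --timer_ms == 0u) {
        debounced_state = read_pin();  /* latch only after 20 ms with no edges */
    }
}

bool debounced_input(void) { return debounced_state; }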

Reply to
larwe

... and obviously in this case you'd need to drive "edge interrupt" from a comparator, just in case that wasn't clear.

Reply to
larwe

For instance, noise at 990 Hz will alias to 10 Hz, as will 1010 Hz and 1990 Hz.

The disturbing signals obviously do not know this ...
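The folding arithmetic (bring the frequency into [0, fs) and reflect anything above fs/2) is easy to check in code; a small sketch in C using the example frequencies above:

/* Fold a frequency back into the first Nyquist zone [0, fs/2]. */
#include <math.h>
#include <stdio.h>

static double alias(double f, double fs)
{
    double r = fmod(f, fs);                  /* into [0, fs)           */
    return (r > fs / 2.0) ? fs - r : r;      /* reflect into [0, fs/2] */
}

int main(void)
{
    const double fs  = 1000.0;
    const double f[] = { 990.0, 1010.0, 1990.0 };
    for (size_t i = 0; i < sizeof f / sizeof f[0]; i++)
        printf("%6.0f Hz aliases to %4.0f Hz\n", f[i], alias(f[i], fs));
    return 0;
}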

--

Tauno Voipio
Reply to
Tauno Voipio

Since your response time is 20/Fs due to debouncing, a reasonable RC time constant should be set somewhere around that, i.e. 20 periods of 1kHz = 20ms.
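To put rough numbers on that suggestion (a sketch; the 0-12 V step and 8 V threshold are taken from the earlier post, the rest is the standard single-pole formulas):

/* What a tau = 20 ms RC buys: attenuation at 1 kHz, and how long a
 * clean 0 -> 12 V step takes to cross the 8 V threshold. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi   = acos(-1.0);
    const double tau  = 20e-3;                    /* s                  */
    const double fc   = 1.0 / (2.0 * pi * tau);   /* ~8 Hz corner       */
    const double f    = 1000.0;
    const double gain = 1.0 / sqrt(1.0 + (f / fc) * (f / fc));
    const double t_th = -tau * log(1.0 - 8.0 / 12.0); /* step hits 8 V  */

    printf("corner: %.1f Hz, gain at 1 kHz: %.4f (%.1f dB)\n",
           fc, gain, 20.0 * log10(gain));
    printf("0->12 V step reaches 8 V after %.1f ms\n", t_th * 1e3);
    return 0;
}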

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

Reply to
Vladimir Vassilevsky

Thanks for your patience in responding to me.

The behavior of 999Hz or 990Hz I understand: they will alias to 1Hz and 10Hz, respectively.

It isn't clear, however, what will happen with signals like 501Hz or 600Hz. Any insight there?

Because we are requiring 20 consecutive "ON" or "OFF" to make it through our debouncing filter and be debounced as ON or OFF ...

... it is clear that 999Hz would hurt us (that would make it through the debouncing filter, because you could get 20 consecutive ON or OFF samples at 1kHz).

... however, it is unclear if 501 Hz or 600 Hz could hurt us ... unclear if they could result in 20 consecutive ON or OFF samples.

I'm getting the impression that if we are debouncing by requiring 20 consecutive samples to be ON or OFF, the period there would be 40ms and the frequency 25 Hz.

I'm getting the impression that due to the debouncing nothing below about 975Hz could hurt us, but I'm not sure. ??? (975 = 1000 - 25).

Thanks, Datesfat

Reply to
Datesfat Chicks

501 Hz would alias to 499 Hz, and 600 Hz would alias to 400 Hz. There's nothing too unclear there. Given your 20 ms debouncing filter, the frequencies that should be able to sneak through are in the 975-1025 Hz range, then in the 1975-2025 range, etc.
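If you want to convince yourself by brute force, here is a sketch that sweeps tone frequencies and starting phases and asks whether 20 consecutive 1 kHz samples can land on the same side of the threshold (idealized: a single full-swing tone, no RC, threshold at mid-scale). It should flag roughly the 975-1025 Hz band, and the bands around 2 kHz, 3 kHz, ... if the sweep is extended.

/* Which tone frequencies can produce 20 identical consecutive samples
 * at a 1 kHz sample rate?  Idealized: one full-swing sine, no filter,
 * threshold at mid-scale. */
#include <math.h>
#include <stdio.h>

#define FS       1000.0   /* sample rate, Hz            */
#define NEEDED   20       /* consecutive samples needed */
#define NSAMPLES 2000     /* samples per trial          */
#define NPHASES  360      /* starting phases tried      */

static int sneaks_through(double f)
{
    const double pi = acos(-1.0);
    for (int p = 0; p < NPHASES; p++) {
        double phase = 2.0 * pi * p / NPHASES;
        int run = 0, last = 0;
        for (int n = 0; n < NSAMPLES; n++) {
            int s = (sin(2.0 * pi * f * n / FS + phase) >= 0.0) ? 1 : -1;
            run = (n > 0 && s == last) ? run + 1 : 1;
            last = s;
            if (run >= NEEDED)
                return 1;
        }
    }
    return 0;
}

int main(void)
{
    for (double f = 400.0; f <= 1100.0; f += 5.0)
        if (sneaks_through(f))
            printf("%.0f Hz can give %d identical samples in a row\n", f, NEEDED);
    return 0;
}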
--
Rob Gaddi, Highland Technology
Email address is currently out of order
Reply to
Rob Gaddi

Then my last question would be if that way of thinking is linear, i.e. could any two or more frequencies above 975 Hz mixed together cause the same undesirable behavior?

I think an RC filter is linear in that way ... just not sure if the 20-sample debouncing preserves that property.

Datesfat

Reply to
Datesfat Chicks

Datesfat Chicks wrote:

I think you *believe* you understand, but from your posts you show no articulation of this understanding. So let me try a different approach:

If you understand the 1Hz alias, can you see that you'll have a component that can be ON for more than 20 ms, with a period of one second?

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/
Reply to
Cesar Rabak

You _might_. But you can't be sure.

Draw or imagine this picture: a 500 Hz perfect sine wave (zero at time zero). Sample at exactly 1 kHz, i.e. a sample every millisecond, starting at time zero. Each and every sample is zero, i.e. you've completely missed the signal --- how is that "no harm done"?
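That picture in a few lines of C (the 10-sample window is arbitrary):

/* A 500 Hz sine that is zero at t = 0, sampled every 1 ms: every sample
 * lands on (or numerically right next to) a zero crossing. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);
    for (int n = 0; n < 10; n++) {
        double t = n * 1e-3;                       /* 1 kHz sample instants */
        printf("t = %2d ms  sample = % .1e\n", n, sin(2.0 * pi * 500.0 * t));
    }
    return 0;
}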

Nyquist's criterion is an inequality, i.e. it defines a sharp boundary. Staying inside the Nyquist limit is a necessary condition, not a sufficient one. If you value a peaceful sleep, allow your designs some safe distance from such boundaries.

You're overlooking that false detection of "ON" may not be your only failure mode.

Then, with all due respect, you really will need to break out a textbook on signal theory and rehash what Nyquist, Shannon and the sampling theorem are about.

You are.

But of course you are!

Which is not DC (otherwise there would be no point sampling it...), so it is a signal with a frequency range you need to cover.

And to do that, you have to keep aliased copies of that noise out of the frequency range you need to detect the signal in. To do that, the cut-off frequency of the digital filter has to be safely below that of the analog filter.

Any textbook on signal theory can. Or take a couple of variations of the image example, at frequencies near integer multiples of the 500 Hz Nyquist frequency.

Far enough down that your digital input (Schmitt trigger or whatever) is guaranteed to judge a decayed high level as a low, and a decayed low as a high, on roughly the second sample after the spike.

You need to think about your input circuit's threshold voltages, and what exactly the frequency and amplitude range of that noise is.

Reply to
Hans-Bernhard Bröker

Nyquist, aliasing and dropoffs from an RC filter are easy enough to understand and calculate for a fixed sample rate, but how would the maths look if the sample rate is not constant? For example, suppose the timer periods varied between 0.5 and 1.5 ms, with an average of 1 ms? And how would that be affected by having a simple pattern (such as alternating between 0.5 ms and 1.5 ms - running the timer at 2 kHz and using two samples then ignoring two samples) or by having a random period?

Reply to
David Brown

Can we back up a bit and get a handle on *what* you are really trying to "see" and how that thing is expected to behave? It's important to be sure you know what you expect *from* your signal before you start deciding on how to *measure* it!

First, how are you sampling the signal -- with a periodic interrupt? If so, what sorts of latencies and variations in that latency do you expect to encounter? I.e., can you *guarantee* that you take a snapshot every 1.00000ms? Or, are you hoping to see the signal at some *nominal* 1kHz rate? (i.e., if you don't have hardware to capture the times of *both* edges -- looking at just one edge only works if you are only concerned with that one edge, not the other edge nor the instantaneous *state*)

If you only watch edges (in hardware), are you sure your edge detector catches *both* edges -- even if the processor can't get around to telling it to "look for the other edge, now"?

When decoding barcodes, I typically use edge capture hardware in which the processor reprograms the edge detector (add an XOR), but this leaves me vulnerable to overruns if an IRQ isn't processed before the next edge occurs -- which could be as little as a few tens of microseconds! To accommodate latency in the IRQ servicing this "edge interrupt", I have timers that measure *when* the edge occurred so I can account for the time that it took me to respond to the edge (when processing barcodes, the times between successive edges are crucial).

Can other interrupts be active at the time *this* (periodic) interrupt is scheduled to occur? These could delay the processing of this interrupt. Likewise, can other interrupts occur *during* this interrupt's service routine? Perhaps higher priority interrupts (or, even equal or lesser priority IRQ's -- e.g., I have a policy of enabling interrupts very soon after responding to one... deliberately allowing other interrupts to preempt *this* interrupt).

Are there "critical regions" in your code where you deliberately disable *all* (or just *this* interrupt)? This is common when you are trying to implement atomic operations on shared resources. E.g., if the IRQ has to talk to the rest of the system, then the resources that it uses to do this are often protected by such a critical region (mutex, etc.)

Is the signal that you are trying to observe inherently digital? Or, are you trying to massage an "analog" (multivalued) signal into a digital form?

For example, a mechanical on/off switch is inherently digital though the signal that you are processing may not exhibit a clean on/off state.

Assuming the signal to be bivalued, do you care equally about each transition (on->off vs. off->on)? Or, is one transition more significant than the other?

If this is really a switch, then typically "bounce" happens on the closure of that switch (not as common when a switch opens though it can be a problem there as well). Does the switch have any characteristics that further bias this contact action (e.g., Hg wetted)?

Note that you can bias your analog filter to favor certain transitions. E.g., you could add a diode to quickly discharge the capacitor on the low-going transition (i.e., this speeds up bounces that bring the signal to ground so that the switch needs to stay open for a full RC time constant and doesn't benefit from having been open previously -- the cap forgets quicker)

The list can go on depending on what your actual application entails... this is just a good starting point.

When dealing with switch-type inputs that need to be debounced, I usually use a periodic interrupt of known characteristics (i.e., I can qualify its rep rate, latency, variations in latency, etc.) and implement a simple FSM in the ISR (or, in a layer immediately above the ISR). Depending on the characteristics of the signal, the FSM can debounce all edges or just a certain type of edge.

E.g., the transition from IS_LOW to IS_HIGH might require going through IS_GOING_HIGH for some number of consecutive sampling periods -- any return to "low" sends you back to IS_LOW, which is the equivalent of discharging the RC capacitor to reset the "timer" -- while a transition from IS_HIGH to IS_LOW might occur with a single "low" sensed input.

With this scheme, I can further decide how to convey to the application layer the activity of this switch. If, for example, I know the switch bounces on closure and I want to know when this *starts*, I can signal the closure at the start of the debounce interval (instead of waiting for the interval to expire before signaling the application). This can be done by modifying the FSM to generate the required output function on the initial entry to the "IS_CLOSING" (whatever *that* means!) state and NOT on entering the "IS_CLOSED" state.
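A bare-bones sketch of that FSM in C (the state and function names here are mine, and the asymmetry -- one low sample is enough to drop back -- is just one of the variants described above):

/* Asymmetric debounce FSM: going high requires N consecutive high
 * samples, going low happens on a single low sample.  Call from the
 * periodic (e.g. 1 ms) interrupt or the layer just above it. */
#include <stdbool.h>
#include <stdint.h>

#define N_CONSECUTIVE 20u

typedef enum { IS_LOW, IS_GOING_HIGH, IS_HIGH } state_t;

static state_t state = IS_LOW;
static uint8_t count;

/* Returns the debounced level after folding in one raw sample. */
bool fsm_sample(bool raw_high)
{
    switch (state) {
    case IS_LOW:
        if (raw_high) { state = IS_GOING_HIGH; count = 1u; }
        break;
    case IS_GOING_HIGH:
        if (!raw_high)                     state = IS_LOW;   /* "discharge the cap" */
        else if (++count >= N_CONSECUTIVE) state = IS_HIGH;
        break;
    case IS_HIGH:
        if (!raw_high) state = IS_LOW;     /* single low sample is enough here */
        break;
    }
    return state == IS_HIGH;
}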

Why do you want to "attenuate" it at all? If the *signal* is bandwidth limited to X Hz, then as long as you're sampling at a sufficiently high rate to completely recover the signal's frequency characteristics (note that you typically want to be noticeably above Nyquist's limit), you don't really care about the sampling rate (except as it affects the latency at which you report the signal's state to the application).

Reply to
D Yuniskis

Perhaps you need hysteresis as well as filtering.

Is your input a sensor that provides a continuous analog output, or is it a mechanical switch? Your mention of an 8V threshold makes me think the former.

Mark Borgerson

Reply to
Mark Borgerson

Or otherwise _wait_ until the thing has _settled_. Switches can bounce for amazingly long times.

--
www.wescottdesign.com
Reply to
Tim Wescott

You are mixing your metaphors. Logic inputs don't like indeterminate voltages, such as you'd get from the output of an RC (or any other analog) filter.

An RC filter followed by a Schmitt trigger input is good; putting the input close to its transition voltage for long periods of time is bad bad bad. At best there'd be a 'magic voltage' where the gate would become an amplifier and pick up any random noise on your board, at worst the gate (even if it's buried inside a processor or FPGA) will oscillate and generate its _own_ noise.

Check the minimum slope (or maximum transition time) for the input you're using -- I'll bet you're violating it.

--
www.wescottdesign.com
Reply to
Tim Wescott

Why not just sample it every 20 ms and accept what you see? A modest RC ahead of that, 100 usec or so, will nuke any EMI/ESD sorts of noise.

Debounce software is often unnecessary.

John

Reply to
John Larkin

We have had enough fun and suggestions with sampling theory. Many are correct, but some are questionable. But what if your problem is not sampling? How are you scaling the input signal, which can range from several volts to 20+ volts in an automobile? What are the input thresholds of your sampler? How do you deal with DC or low-frequency noise of large amplitude? Without seeing your design, we can only guess.

Reply to
linnix

I suggest the following software method, with additional hardware filtering if needed.

Take a moving sum of the input, say the last 16 samples, covering at least the duration of switch chatter. You get a count between 0 and max. Set a threshold with hysteresis applied, say max*1/3 when currently high and max*2/3 (rounded to an integer) when currently low. That will give you a latency between max/2 and max samples.
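A sketch of that in C, assuming one-bit samples and the 16-sample window (the rounding of the thresholds is as suggested above):

/* Moving-sum debounce with hysteresis: sum the last 16 raw samples,
 * switch high above ~2/3 of full scale, low below ~1/3.  Call once per
 * sample period. */
#include <stdbool.h>
#include <stdint.h>

#define WINDOW 16u
#define HI_THRESHOLD ((2u * WINDOW + 2u) / 3u)  /* ~2/3 of 16 -> 11 */
#define LO_THRESHOLD (WINDOW / 3u)              /* ~1/3 of 16 -> 5  */

static uint8_t history[WINDOW];  /* circular buffer of 0/1 samples */
static uint8_t idx, sum;
static bool    output;

bool moving_sum_debounce(bool raw_high)
{
    sum -= history[idx];                 /* drop the oldest sample */
    history[idx] = raw_high ? 1u : 0u;
    sum += history[idx];                 /* add the newest sample  */
    idx = (idx + 1u) % WINDOW;

    if (!output && sum >= HI_THRESHOLD)      output = true;   /* rising  */
    else if (output && sum <= LO_THRESHOLD)  output = false;  /* falling */
    return output;
}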

--
Thad
Reply to
Thad Smith
