Nyquist Didn't Say That

Sampling at 2.000001X solves that problem: no frequencies are in phase with the sample clock anymore. That was the point I was making.

There is no additional information obtained by sampling at a higher rate.

doesn't make any sense to me. So to resolve a frequency of 10 Hz, one must sample for on the order of 1/10 second? Is that what you are saying, or am I reading it wrong?

Tim is making many assumptions (unfairly, in my opinion) about the signal and anti-alias filter in his original post, and then saying this or that statement is not correct. Is he assuming that frequencies higher than the desired signal exist? I think so, but I don't know. Is he assuming a non-brick-wall anti-alias filter? I think so, but who knows. Nyquist assumes the ideals; you can't have a theorem otherwise.
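The in-phase problem described above can be sketched numerically. This is an illustrative stand-in, not anything from the thread: an assumed 10 Hz sine, and 2.1X instead of 2.000001X, since the latter would take on the order of a million samples to show a difference.

```python
import math

def sample(freq_hz, fs_hz, n, phase=0.0):
    """Samples of sin(2*pi*freq_hz*t + phase) taken at rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / fs_hz + phase)
            for k in range(n)]

f = 10.0  # an assumed 10 Hz sine

# At exactly 2X, with the clock in phase with the zero crossings,
# every sample lands on a zero and the sine is invisible.
exactly_2x = sample(f, 2 * f, 8)

# Nudge the rate off 2X and the phase lock is broken.
slightly_faster = sample(f, 2.1 * f, 8)

print(max(abs(v) for v in exactly_2x))       # numerically zero
print(max(abs(v) for v in slightly_faster))  # clearly nonzero
```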

Reply to
steve

"steve" wrote in news:1156354799.705801.226400 @b28g2000cwb.googlegroups.com:

No additional information, but it's certainly easier to look at your data when there's more than one point in each half cycle.

--
Scott
Reverse name to reply
Reply to
Scott Seidman

This last paragraph seems worth emphasizing, particularly on the subject of sampling rates, as it points out a reason why rather more than 2.00...01X sampling may be important. I'm not sure how a practical reconstruction filter to compensate for the ZOH could be arranged, causal or acausal, otherwise. You need some margin for the skirts, don't you?
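One rough way to quantify the margin in question: the zero-order hold has a sinc(f/Fs) magnitude response, so the droop at the band edge depends directly on how far above the band Fs/2 sits. A minimal sketch, with an assumed 1 kHz band edge chosen purely for illustration:

```python
import math

def zoh_gain_db(f_hz, fs_hz):
    """Zero-order-hold magnitude response, |sinc(f/Fs)|, in dB."""
    x = f_hz / fs_hz
    if x == 0.0:
        return 0.0
    return 20 * math.log10(abs(math.sin(math.pi * x) / (math.pi * x)))

f_max = 1000.0  # assumed 1 kHz band edge

# Barely above 2X: the band edge sits essentially at Fs/2,
# where the ZOH droop is deepest.
print(zoh_gain_db(f_max, 2.000001 * f_max))  # about -3.9 dB
# At 4X there is far less droop left for the filter to equalize.
print(zoh_gain_db(f_max, 4 * f_max))         # about -0.9 dB
```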

Jon

Reply to
Jonathan Kirwan

Yes, it's easier to reconstruct a signal with more samples.

Reply to
steve

Wikipedia articles often have external links, which people frequently leave alone (i.e., don't vandalize). Just contribute *something* useful to the Wikipedia article, then link to your own area for in-depth coverage. Win-win scenario.

I don't think Wikipedia's going away. I find myself using it more and more as a first step to getting any info on some new subject -- even before google actually.

Now here's a thought -- if Wikipedia can become financially viable in its own right (currently it depends on donations) maybe a business model can appear where based on number of "views" of pages, the contributing authors can get some $$$ sent their way.

Yes -- it's viable!

#1) Suggest the possibility
#2) ???
#3) Profit!

-Dave

Reply to
David Ashley

David Ashley said the following on 23/08/2006 19:28:

In some areas, Wikipedia is great; in others it's dire (no disrespect intended to anyone who contributes, myself included). Articles about comms and signal processing (as relevant examples) are, on the whole, scant, badly written and error-prone. However, I'm sure this will change over time.

Nice idea, but I don't think it's ever going to happen. For one, the Wikipedia administrators are already working hard to reduce the systematic bias that exists in Wikipedia (see

formatting link
). Introducing a financial incentive to write good articles could only make this worse.

--
Oli
Reply to
Oli Filth

I used to work in an FFT factory, and we typically sampled at 2.56 x BW.
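For context, the 2.56 factor is the classic FFT-analyzer convention: it leaves guard bins above the analysis span for the anti-alias filter skirt, so an N-point FFT yields N/2.56 usable "lines". A trivial sketch of the arithmetic:

```python
def usable_lines(n_fft, factor=2.56):
    """Alias-protected 'lines' in an N-point FFT at Fs = factor * BW."""
    return round(n_fft / factor)

print(usable_lines(1024))  # 400: the classic "400-line" analyzer
print(usable_lines(2048))  # 800
```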

Reply to
pomerado

Oh, in that case I'm not clear on where there's so much confusion that an entire article is needed to clear it up. I was hoping you'd hit the idea that Nyquist really said 2x the bandwidth of interest (as others have already mentioned). Clarifying that doesn't lose the context of baseband sampling, does address where the most common pitfalls lie, and does provide a full treatment of the issue, covering what Nyquist really said.

That's a fine notion to address, that all systems are essentially bandwidth limited by nature or can be made so easily. Tying that to the sampling rate is a fundamental issue, but I'm not certain that it can't be cleared up in a few well-written paragraphs with an illustration or two.

But maybe I'm too optimistic... Eric Jacobsen Minister of Algorithms, Intel Corp. My opinions may not be Intel's opinions.

formatting link

Reply to
Eric Jacobsen

Right or wrong, that's what I meant.* What's more, to resolve Fs/2 - 10 Hz, you also need to sample for a time on the order of 1/10 second. Why does it seem strange?

It seems to me that Tim is assuming anti-alias filters that produce results sooner than next week, and signals that would have components above Fs/2 without them. I don't think those assumptions are unfair.

The Nyquist criterion does indeed assume ideal conditions. Tim will show that the assumption is rarely justified for real work.

Jerry ___________________________________________

  • Shorter time serves if you know more about your signal. If you know frequency, phase, and amplitude, no sampling is needed at all. If noise and quantization are insignificant and you know that only a single frequency is present, three samples suffice. If you know what that frequency is, two samples suffice. With most real-world conditions, you need about Fs * 10 samples.
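The two-sample claim in the footnote can be sanity-checked: if the frequency is known exactly and the data are noise-free, amplitude and phase follow from two samples in closed form. A minimal Python sketch (the signal values below are made up for the round trip, and the method assumes the two samples are not spaced a half period apart):

```python
import math

def fit_known_freq(x0, x1, omega, dt):
    """Recover amplitude and phase of A*sin(omega*t + phi) from two
    noise-free samples x0 = x(0) and x1 = x(dt).
    Requires sin(omega*dt) != 0, i.e. not half-period spacing."""
    s, c = math.sin(omega * dt), math.cos(omega * dt)
    a_sin = x0                 # A*sin(phi)
    a_cos = (x1 - x0 * c) / s  # A*cos(phi), from the angle-sum identity
    return math.hypot(a_sin, a_cos), math.atan2(a_sin, a_cos)

# Round-trip check with a made-up 10 Hz signal.
A, phi, omega, dt = 1.5, 0.4, 2 * math.pi * 10, 0.01
x0 = A * math.sin(phi)
x1 = A * math.sin(omega * dt + phi)
print(fit_known_freq(x0, x1, omega, dt))  # recovers ~(1.5, 0.4)
```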
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

steve wrote: > Jerry Avins wrote: >> steve wrote: >

In theory only. To resolve a signal at 2.000001X would require 1,000,000 sample times. (OK: maybe only 150 hours.)

True, but you can get that information a lot faster.

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

it's not a howler, but the sampling frequency, Fs, must be strictly greater than twice the highest frequency, B, at least if that highest frequency is sinusoidal, resulting in two dirac spikes at +/-B in the spectrum. the simplest way to say it is that Fs > 2*B.

the other thing i was gonna say is that at Wikipedia we are struggling with some of this same stuff (what the Sampling Theorem as commonly depicted in textbooks really says; the historic sampling theorem from Shannon is a bit different) and, perhaps to avoid duplication of effort, you might want to jump into that fray instead. it's at:

formatting link

i think there is some writing that craps up the article, but that is the lot and legacy of Wikipedia: an encyclopedia written by committee (the biggest, most inclusive committee possible). so "design by committee" is a problem.

Bitte.

r b-j

Reply to
robert bristow-johnson

150 hours? Why?

1/f is twice the Nyquist time (1/2 the rate, i.e. undersampling), yet you seem to be implying oversampling.
Reply to
bungalow_steve

but, Tim, the point is that you have to sample *faster* than 2X. sampling at 2X ain't good enough, even theoretically. sampling at 2.000001X might be good enough theoretically (the reconstruction filter will be a bitch) if acausality (or a long delay for the causal case) ain't a problem. the other thing to think about is that no D/A really outputs dirac impulses, so something like a zero-order hold (ZOH) might have to be modeled for reasons of practicality.
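Just how much of a bitch that reconstruction filter would be can be put in rough numbers with fred harris's rule of thumb for FIR length, N ~ atten_dB / (22 * transition/Fs). The 1 kHz band and 60 dB stopband below are assumptions for illustration, not from the thread:

```python
def fir_taps_estimate(atten_db, transition_hz, fs_hz):
    """fred harris's rule of thumb: N ~ atten_dB * Fs / (22 * transition)."""
    return atten_db * fs_hz / (22.0 * transition_hz)

bw = 1000.0  # assumed 1 kHz signal band, 60 dB stopband

for rate in (2.000001, 2.5):
    fs = rate * bw
    transition = fs / 2 - bw  # room between band edge and Fs/2
    print(rate, round(fir_taps_estimate(60.0, transition, fs)))
# 2.5X needs a few dozen taps; 2.000001X needs on the order of
# ten million.
```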

lastly, even though we fight about a bunch of other things, i was surprised at the support i had at Wikipedia for including that "T factor" in the dirac comb sampling operator. putting it there and not in the passband gain of the reconstruction filter is dimensionally most appropriate and helps set up the ZOH model without dropping the T factor (or going through a contorted argument for how to include it).

i haven't read through this thread yet, so i apologize in advance if i am repeating someone else's words.

r b-j

Reply to
robert bristow-johnson

i just came upon this. as one who has recently jumped into that fray, alls i can say is "HELP!".

even if you lock horns with me, i would really like it if more comp.dspers would come to Wikipedia and contribute. someday, that can be the new FAQ, for *any* newsgroup.

r b-j

Reply to
robert bristow-johnson

... snip ...

There is a world of difference between the output filters needed after a sample and hold, and after a quasi impulse function. Also in the gain needed.

The impulse function has the advantage that several can be mixed. I took advantage of this in a PABX years ago to provide call merging. The actual pulses were about 1% of the repetition period. The accumulated DC components limited the merging to three calls.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@maineline.net)
   Available for consulting/temporary embedded and systems.
    USE maineline address!
Reply to
CBFalconer

A bad assumption on my part. 2.000001X isn't Hz; it needs to be normalized. The result is not a million seconds but a million sample times. That's still a long time. Most of the time, there's pretty good resolution at half that; in this case, 500,000 sample times.

I don't see what you mean. Could you explain with an equation or two?

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

Of course, and for other things too. Even if you can be certain that there is no signal energy above Fmax, you need to sample faster than 2Fmax in real situations. As it says on a traffic summons in Boston, "Fail ye not thereof at your peril."

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

"robert bristow-johnson" wrote in news: snipped-for-privacy@i3g2000cwc.googlegroups.com:

Robert--

CommitteeSize = CommitteeSize + 1 ; %!!!!!

I think what's missing is a demonstration that multiplication by the Dirac train in time pairs with convolution by the scaled Dirac comb in frequency. Without a good figure showing that, the figures under the "aliasing" subtitle lack some meaning.
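Short of a figure, the replica behavior can at least be demonstrated numerically: a tone above Fs/2 shows up folded to Fs - f in the DFT. A small sketch (the 7 Hz tone and 10 Hz rate are chosen so the alias lands on an exact bin):

```python
import cmath, math

def dft_mag(x):
    """Direct DFT magnitudes (O(n^2), fine for a demo)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

fs, n = 10.0, 100  # 10 Hz sample rate, 10 s of data
f = 7.0            # tone above Fs/2 = 5 Hz, so it must alias

x = [math.cos(2 * math.pi * f * t / fs) for t in range(n)]
mag = dft_mag(x)
peak_bin = max(range(n // 2), key=mag.__getitem__)
print(peak_bin * fs / n)  # 3.0: the spectral replica folded to Fs - f
```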

As an aside, I'm interested in the analogs in AM. By the book, the carrier needs to be twice as fast as the highest signal component, but for Hilbert demodulation, all I can find is the specification that the signal and carrier not overlap. Is this because the transform essentially throws out the negative frequencies, so you don't have to worry about positive/negative overlap? Alternatively, am I just wrong, and does the carrier need to be twice the highest frequency even for Hilbert demod?

--
Scott
Reverse name to reply
Reply to
Scott Seidman

-- snip --

Yes I am, and this discussion is great.

--

Tim Wescott Wescott Design Services

formatting link

Posting from Google? See

formatting link

"Applied Control Theory for Embedded Systems" came out in April. See details at

formatting link

Reply to
Tim Wescott

-- snip --

Mostly I'm assuming that things need to be done in the real world, with real equipment that can be bought for real amounts of money. Given those assumptions I think I'm on track.

Yes I am. That's a direct consequence of assuming a real system that is only turned on for a finite period of time.

Yes I am. That's a direct consequence of assuming that you don't want to wait an infinite amount of time for your filter's output.

Falling significantly short of that, I'm staying aware of just how much you have to pay for a filter that's 'practically' brick wall, whatever that means for your particular application.

Most other old timers who are pitching in here seem to understand.

That's true. The problem comes about when newbies who have forgotten all of the addenda, exceptions and quid-pro-quos* assume that Nyquist is a design guideline instead of a theoretical limit.

  • "Aladdin", Walt Disney Co., 1992.
--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott
