Sampling: What Nyquist Didn't Say, and What to Do About It

Read Tim's paper!

The thing about an electric meter is that you're not trying to reconstruct the waveform, you're only gathering statistics on it. The 27.xxx Hz sample rate was chosen so that its harmonics would dance between the line harmonics up to some highish harmonic of 60 Hz, so as not to create any slow-wobble aliases in the reported values (true-RMS volts, amps, power, PF) that would uglify the local realtime display or the archived time-series records.

From a signal-theory standpoint, the bandwidth of the signal is in fact narrow, so the sample rate can be low. The "signal bandwidth" is actually the sum of the bandwidths of the various spectral harmonic lines, multiples of 60 Hz, mostly of the ugly current waveforms, which is pretty weird when you think of it. The sample-hold is simultaneously undersampling a bunch of narrow but disjoint spectral zones, still following the Shannon rules for each one.

Given that, it was a considerable nuisance to come up with that 27.xxx Hz sample rate using available crystals.

I also used a 7-bit single-slope ADC, which I didn't reveal to the customers because they would have argued over that, too.

I did waveform acquisition on demand, in a burst of samples, at some other goofy sample rate, some hundreds of Hz. I sampled over many line cycles, stuck the samples into RAM, and then reordered them to make them equivalent-time sequential. That was fun.
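
A rough Python sketch of that reordering step -- the line frequency, burst rate, and sample count here are stand-ins, not the actual values from John's meter:

import math

F_LINE = 60.0     # line frequency, Hz
FS = 347.0        # illustrative "goofy" burst rate, a few hundred Hz (not John's value)
N = 256           # samples in the burst

# Simulate the burst: fundamental plus a bit of 3rd harmonic, sampled at FS
t = [n / FS for n in range(N)]
x = [math.sin(2*math.pi*F_LINE*tn) + 0.2*math.sin(2*math.pi*3*F_LINE*tn) for tn in t]

# Equivalent-time reordering: sort the samples by where they fall within one
# line cycle; the result is one "slowed-down" cycle of the waveform.
in_cycle = [(tn * F_LINE) % 1.0 for tn in t]        # 0..1 position within the cycle
order = sorted(range(N), key=lambda n: in_cycle[n])
eq_time = [in_cycle[n] / F_LINE for n in order]     # equivalent time axis, one cycle
eq_wave = [x[n] for n in order]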

12K lines of MC6803 code!

John

Reply to
John Larkin

I'm using Adobe Reader 9 and it looks fine here, even blown way up.

Eric Jacobsen
Minister of Algorithms
Abineau Communications


Reply to
Eric Jacobsen

Tim,

First let me say that overall the new paper looks really great! I am pleased that you've chosen to utilize (La)TeX - it has served me well for over two decades. There are a few rough edges such as bitmapped fonts (which aren't necessary) and errors in spacing, but I'm sure you'll get those worked out.

What does concern me, however, is some of the theory you've presented. Specifically, this section on p.11:

Sampling at some frequency that is equal to the repetition rate divided by a prime number will automatically stack these narrow bits of signal spectrum right up in the same order that they were in the original signal, only jammed much closer together in frequency, which is the roundabout frequency-domain way of saying that you can sample at just the right rate and interpret the resulting signal as a slowed-down replica of the input waveform.

There are two points on which I challenge your assertions:

  1. Sampling at a rate of F/N when N is an integer will never help subsample a signal, since the sampling period, N/F, is always a multiple of the repetition period 1/F.

  2. It seems that to truly, completely sample a repetitive signal in such a way, you would need a sampling period that will never be a multiple of the repetition period. For example, in the 60 Hz case you could use a sample rate of 60 Hz / sqrt(2). But then, even if you sample at such a rate, it would take an INFINITE amount of time to fully sample this signal. It's equivalent to sampling an interval on the real line a point at a time; real analysis tells us that there are an uncountably infinite number of points in such an interval!

So, I'm afraid I cannot agree that an accurate sampling of a repetitive waveform can be made in this manner. If you disagree, please show me where my reasoning is wrong.

--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO
Reply to
Randy Yates

Ha! Interesting...

What may be happening is that some font is not embedded, and that if you don't have the TeX fonts installed, the reader is substituting a postscript font, which is vector, so it looks fine. But if you do have TeX fonts on your system, you get them rendered as bitmaps.

--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO
Reply to
Randy Yates

Interesting; I am looking on a 1680x1050 as well (although on Windows) and they look great. Just an observation.

-- Les Cargill

Reply to
Les Cargill

This is another one of those things where I think university courses have done most of the damage -- not emphasizing that Nyquist only cares about the bandwidth of your signals, not about which particular frequencies you're using.

The 2nd most common misinterpretation (or perhaps, "non-optimal use") of Nyquist I've seen: figuring that, if you were initially sampling at Fs and needed a brick-wall filter at Fs/2, then by going to, say, 8x oversampling you now need a filter with negligible response by 8*Fs/2 (an easier filter to build)... not realizing that all you really need is a filter with negligible response by 8*Fs - Fs/2 (even easier still!).
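
A small numeric illustration of that point, with made-up numbers (a 0..Fs/2 band of interest, Fs = 48 kHz, 8x oversampling):

# Band of interest 0..24 kHz, converter running at 8 * 48 kHz = 384 kHz.
Fs = 48e3
Fovs = 8 * Fs

def folded(f, fs):
    """Apparent frequency (0..fs/2) that f lands on after sampling at fs."""
    f = f % fs
    return min(f, fs - f)

print(folded(200e3, Fovs) / 1e3)   # 184.0 -> folds to 184 kHz, still far above Fs/2,
                                   #          so the digital decimation filter removes it
print(folded(370e3, Fovs) / 1e3)   # 14.0  -> only content above 8*Fs - Fs/2 = 360 kHz
                                   #          folds back into the 0..24 kHz band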

---Joel

Reply to
Joel Koltner

Is this something like heterodyning, then? You're building a detector, not a ... recorder. Right?

-- Les Cargill

Reply to
Les Cargill

You're right. Acrobat does better still. I guess I'm not used to this since I don't see many bitmapped fonts. (Even with xpdf it is not at all "terrible" by the way, and thanks Tim for posting it).

--

John Devereux
Reply to
John Devereux

Pretty much -- read my paper!

You're taking advantage of the fact that the signal you're acquiring is very cyclic in character. So (for instance), instead of taking samples every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds, and get the _effect_ of taking samples faster.
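
A few lines of Python make the effect visible; 60 Hz and the 1/60 + 1/600 second period are just the numbers from the example above:

F = 60.0                   # repetition rate of the waveform, Hz
Ts = 1 / 60 + 1 / 600      # the sampling period from the example, s

for n in range(10):
    phase = (n * Ts * F) % 1.0     # position within the 60 Hz cycle
    print(n, round(phase, 3))
# prints 0.0, 0.1, 0.2, ... 0.9: the same ten in-cycle points that sampling
# every 1/600 s would hit, just collected over ten line cycles instead of one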

John chose a frequency that would let him get decent statistics faster and more reliably, but he's just building on the basic idea that I present.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

0: Thanks for the kind words. I wrote my Master's thesis in LaTeX, and have been living in a continual state of disappointment since. I'm actually using LyX, because I'm lazy, but it's still LaTeX underneath.

1 & 2: I felt that my arguments were not well stated in the paper. Since I have to re-post it _anyway_, I'll spend a bit of time with the math. I just replied to another post, and in the process tentatively realized a relationship: if you have a cycle interval T = 1/F and you want to capture N samples of a cycle, then sampling with a period Ts = (M + P/N) * T will do the job as long as M, N and P are integers, and P and N are relatively prime and both non-zero. Reordering things for P != 1 is a challenge, but not impossible.
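
A quick numeric check of that relationship; M, N and P below are arbitrary test values, not anything from the paper:

from math import gcd

M, N, P = 2, 16, 5
assert gcd(P, N) == 1 and P != 0 and N != 0

# With a sampling period Ts = (M + P/N) * T, sample n lands on phase slot
# (n * (M*N + P)) mod N of the cycle, which equals (n * P) mod N because the
# M whole cycles drop out modulo N:
slots = [(n * (M * N + P)) % N for n in range(N)]
print(sorted(slots) == list(range(N)))    # True: all N phases hit, each exactly once

# Reordering for P != 1: sample n belongs at equivalent-time position (n * P) mod N
reordered = [None] * N
for n in range(N):
    reordered[(n * P) % N] = n            # reordered[k] = index of the sample at phase k/N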

Whether I'm right and didn't argue my case well, or I'm just wrong, I need to change things there.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

In comp.dsp Randy Yates wrote: (snip)

One has to choose carefully.

That would be true for signals with infinite bandwidth. At least for the AC power meter, you won't have that. Harmonics from SCR (or triac) based light dimmers likely get into the MHz range, so one should be able to see that far. The usual computer power supply is a voltage doubler off the AC line, which shouldn't be as bad as the SCR, but still has significant harmonics.

But as was previously said, the goal is not to sample the 60Hz waveform, but, as used in describing modulated signals, the envelope.

If one samples 60 Hz power usage at 60 Hz, one would lose much important information. At 27 Hz, where do the aliases end up?

 60 Hz -->   6 Hz
120 Hz -->  12 Hz
180 Hz -->  -9 Hz
240 Hz -->  -3 Hz
300 Hz -->   3 Hz
360 Hz -->   9 Hz
420 Hz --> -12 Hz
480 Hz -->  -6 Hz
540 Hz -->   0 Hz
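
Those folded frequencies can be reproduced with a couple of lines of Python (signs dropped; a negative entry above just means the harmonic folds down from above):

fs = 27.0
for h in range(1, 10):
    f = 60.0 * h
    r = f % fs
    print(f, "Hz folds to", min(r, fs - r), "Hz")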

It seems that you don't want exactly 27Hz, maybe that is what he said previously.

What you want to measure, though, is the RMS power over some period of time, taking into account the significant harmonics.

Now, say you have a signal with harmonics up to a few MHz, and say, for example, that one of those aliases to 0 Hz, and so you don't see it. How much of a problem is that? If you have all the floor(1000000/60) harmonics up to that point, then you are likely pretty close.

Floor(1000000/60) is 16666, so you could sample at 1000000/16666, for a sampling rate of 60.0024... Hz. If you want something near 27 Hz that doesn't have harmonics that are multiples of 60 until 1000020, then it looks like 27.000027 Hz is about right.

It seems to me that you pick the harmonic that you can afford not to see, and plan the sampling rate accordingly.

However, as that is getting close to crystal tolerance, I might suggest that phase-locking to a multiple of 60 Hz and then dividing down would be a good way to generate the sampling clock.

-- glen

Reply to
glen herrmannsfeldt

Actually, I've found the opposite to be the case, more often than not. Printers are pretty much what they are. OTOH, on screen, you can zoom arbitrarily far into an image to see more detail. With images resampled at lower resolutions, you quickly end up with jaggies that wouldn't have been obvious to the unaided eye in paper form.

But, very high resolution photographs quickly eat up lots of bytes. So, you have to come to a balance, somewhere.

Dunno, I don't use Word or pdfLaTeX. I use FrameMaker for all my DTP as it's "quickest" to merge sources into a presentable form (and ~20 years of experience with it has a significant bit of inertia).

One typical technique I use is to include a photo. Then, create another "window" (not in the GUI sense) overlapping the original photo's "window". In this smaller, overlapping window, I paste yet another copy of the photo -- but zoomed to much higher magnification. Then, pan that image to the part of the underlying photo that is "of detailed interest". I.e., I end up with a "closeup" of some portion of the basic photo to which I want to draw attention. It's more economical on real estate than a separate "closeup photo" would be. And, it gives viewers of the print edition the detail that would otherwise only be visible "on screen" in an interactive environment.

Since FrameMaker writes PS, it relies on PS's innate abilities to do this cropping on its behalf. As a result, you end up with the whole image *in* the document, layered *under* a viewport built in PS. :-/

My point was to understand what your tool is doing to your "input"/data so that you aren't "leaking" anything that you don't want to leak (nor adding to the size of the resulting file, needlessly).

Next, I want to try embedding audio in some documents (e.g., it would be far more informative for folks to *hear* certain phonetic sounds than to *see* visual symbols thereof).

Reply to
D Yuniskis


CMR fonts are not actually bitmapped fonts, but they are by the time they end up in the pdf file. They are Metafont fonts, described by a Metafont program. But the pdf format does not support Metafont fonts - so pdfLaTeX uses a bitmapped CMR font built for something like a 300 dpi laser printer, and this is not optimal for screen use.

When used as intended - using dvi files on a system with the Metafont sources and the Metafont program available - Metafont fonts have much more flexibility than TrueType, PostScript or Type 1 fonts, and will give you results that are fine-tuned to the exact printer you are using. But that information is lost with pdf files.

The easiest way to improve the pdfs generated by pdfLaTeX is to add some usepackage lines:

\usepackage{times}
\usepackage{mathpazo}
\usepackage{courier}
\usepackage{helvet}

This will result in the common fonts Times, Helvetica (Arial), and Courier being used as the serif, sans serif, and typewriter fonts, which work well on all systems. Of course, you still get the better font handling of LaTeX - things like kerning and ligatures work as you would want.

And it's always possible to use any one of a gazillion other font packages that are common in TeX installations - or to build the required metric files from any other fonts you might have.

Reply to
David Brown

It records rms volts, amps, power, but doesn't try to reconstruct the raw waveforms; so the Sampling Theorem doesn't apply. That didn't stop all sorts of people from arguing that the sample rate had to be twice that of the highest reasonable AC line harmonic. As Tim says, lots of people fling "Nyquist Rate" around without really thinking about it.

If the voltage waveform is a sine wave (which it pretty much is) then the current harmonics carry no real power anyhow.
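
The standard orthogonality argument behind that claim, in LaTeX notation, taking v(t) = V_1 sin(wt) and the current as a sum of harmonics:

\[
P = \frac{1}{T}\int_0^T V_1\sin(\omega t)\,\sum_n I_n\sin(n\omega t+\varphi_n)\,dt
  = \frac{V_1 I_1}{2}\cos\varphi_1 ,
\]

since \(\int_0^T \sin(\omega t)\sin(n\omega t+\varphi_n)\,dt = 0\) for every \(n \neq 1\). Only the fundamental component of the current delivers average power when the voltage is a pure sine.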

John

Reply to
John Larkin

I thought about sampling close to 60 Hz. I could have taken a block of 256 samples at, say, 60+1/256 Hz, and walked the whole sine wave in a few seconds at equivalent steps of 1.406 degrees. But that had ugly side effects for sampled harmonics, specifically reporting the RMS value of ratty current waveforms. And I didn't have enough compute power anyhow. So I sampled at 26.9947 Hz, which is 800.156 degrees at 60 Hz, which still gives 256 evenly-spaced samples, but the harmonic aliasing behavior is entirely different.
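
A quick consistency check on those numbers; the exact ratio below is my reconstruction from the 800.156-degree figure, not a value John stated (and he quotes the rate slightly differently later in the thread):

from math import gcd

# Guess: 800.156 degrees per sample = (2 + 57/256) cycles, which would put the
# sample rate at 60 * 256/569 Hz -- an inference, not John's actual divider chain.
N, P = 256, 57
fs = 60.0 * N / (2 * N + P)       # ~26.9947 Hz
print(fs, 360.0 * 60.0 / fs)      # 26.9947..., 800.15625 degrees per sample
print(gcd(P, N) == 1)             # True, so 256 samples land on 256 distinct,
                                  # evenly spaced phases of the line cycle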

Messy stuff.

John

Reply to
John Larkin

I agree with the people who don't like the font, though I would go as far as saying it "looks nasty". On a 1920x1200 screen with Acrobat Reader the text isn't as comfortable to read as in most PDFs. When viewing it with pages side by side (as I do with most documents) or at 100%, the fonts are too thin/light. When I zoom in, the fonts do indeed look bitmapped; the jaggies get worse as the zoom increases.

The fonts used in the graphs look perfectly fine though, even when zoomed in.

Reply to
Dombo

No, I gotcha - I just wasn't thinking in terms of conversion to an... "IF regime" for line voltage measurements!

Nice paper, BTW.

-- Les Cargill

Reply to
Les Cargill

I used Acrobat Pro 9 on Windows XP. Acroreader 9 on OpenSuse also looks terrible. Looks fine in Evince on the same machine. So it looks like it is Acrobat. Unfortunately that is one PDF viewer it has to look nice on.

Regards Anton Erasmus

Reply to
Anton Erasmus

I used 26.99947. I wrote a horrible Basic program that explored the possible selections of available crystals, divided IRQ rates, divided channel rates (16 channels), and possible aliases of the sample rate against line harmonics, all against a guess about available compute power. It was one of those ill-posed problems with no hard quality metric, just a guess as to which solution felt better.
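
A rough Python sketch of that kind of search; the crystal list, divider range, channel factor, and "goodness" rule here are all made up for illustration, not John's actual program:

crystals = [3.579545e6, 4.0e6, 4.9152e6, 6.0e6]   # illustrative crystal values
candidates = []
for xtal in crystals:
    for div in range(2000, 20000):       # IRQ divider; 16 channels share the IRQ rate
        fs = xtal / div / 16
        if not 25.0 < fs < 30.0:         # only look near 27 Hz
            continue
        # Slowest beat ("wobble") produced by the first 15 line harmonics;
        # a small value means a slow wobble in the reported rms readings.
        wobble = min(min(60*h % fs, fs - 60*h % fs) for h in range(1, 16))
        candidates.append((wobble, fs, xtal, div))

candidates.sort(reverse=True)   # prefer the candidate whose slowest alias is fastest
print(candidates[:5])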

The current harmonics have no real power, at least as long as the voltage waveform is sinusoidal, which it usually sort of is.

Most power people are only concerned with the 10th or maybe 15th harmonic.

Fun, but overkill. The line voltage and currents are constantly jumping around anyhow.

John

Reply to
John Larkin

In comp.dsp John Larkin wrote: (snip)

I remember a story in an undergrad quantum mechanics class about some radar designers who believed that Heisenberg uncertainty applied to the returning radar reflection.

Unless the source impedance is too high, such as it might be at the end of a long extension cord with small wire.

If you have 1/n harmonic distribution, as from a square wave or from an SCR light dimmer, then you can figure how many harmonics you need from the error tolerance. I might have wanted to go to 1MHz, but 256*60 isn't so far off.
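
For a 1/n spectrum the bookkeeping is quick. A sketch for an ideal +/-1 square wave -- an SCR waveform will differ, so treat it only as an order-of-magnitude guide:

import math

# Fraction of a +/-1 square wave's total power carried by harmonics above N.
# Square-wave harmonics: amplitude 4/(pi*n) at odd n only; total power = 1.
def missed_power(N):
    captured = sum((4 / (math.pi * n))**2 / 2 for n in range(1, N + 1, 2))
    return 1.0 - captured

for N in (15, 255, 16667):
    frac = missed_power(N)
    rms_err = 1.0 - math.sqrt(1.0 - frac)
    print(N, f"missed power {100*frac:.3f}%, rms reads low by {100*rms_err:.3f}%")
# Stopping at the 15th harmonic misses a couple percent of the power; going out
# to 256*60 Hz or to ~1 MHz shrinks the error well below crystal tolerance.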

-- glen

Reply to
glen herrmannsfeldt
