Is there a simple formula I can use that will roughly tell me the true rise time of a low frequency (or one-off) signal given the measured rise time and the bandwidth of the 'scope I'm measuring it with please?
Here's an illustration to provide some context:
In that example, what would be a reasonable estimate of the true value of the rise time, from the measured value of 15 us with my (deceased) 50 MHz 'scope?
And what measurement would I expect from that same signal if I used a 25 MHz 'scope instead?
What attenuation does the nominally 50 MHz scope give to a 50 MHz signal? Manufacturers can be optimistic: is it 3 dB down, or almost nothing at all?
Slightly smoother curves but otherwise indistinguishable.
Even a 1 MHz scope would track that slow 15 us rise fairly well, although it might not be able to show the steepest parts of the curve correctly.
Things get hairy when the bandwidth of the scope is comparable with the rise time of the signal. If you can oversample by 10-20x then the rise time you measure will be dominated by that of the signal itself.
To a first approximation the time axis is blurred by 1/(2f[max]).
How fast a rise time do you need to be able to distinguish? What is the highest frequency waveform you are ever likely to want to capture with the new unit?
Good scopes used to have Gaussian responses, BW*Tr = 0.35. Lately almost everyone peaks their responses to claim more bandwidth, which causes step overshoot.
Based on measured Tr, my "200 MHz" DPO2024 is a 180 MHz scope.
Used to be that composite rise time was the square root of the sum of the squares of the scope and signal rise times, so one could math out the scope's contribution, within reason. That's not as accurate with a peaked scope response.
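That root-sum-square rule is easy to sanity-check numerically. A minimal Python sketch, assuming the Gaussian BW*Tr = 0.35 relation from above (the function names and the 200 MHz / 5 ns numbers are just illustrative):

```python
import math

def scope_rise_time_ns(bw_mhz):
    """Approximate 10-90% rise time of a Gaussian-response scope, in ns.
    BW * Tr = 0.35, with BW in MHz giving Tr in us, so scale to ns."""
    return 0.35 / bw_mhz * 1000.0

def measured_rise_ns(signal_ns, scope_ns):
    """Composite (displayed) rise time: root-sum-square of the two."""
    return math.hypot(signal_ns, scope_ns)

# A "200 MHz" scope has roughly a 1.75 ns rise time...
print(scope_rise_time_ns(200))
# ...so a true 5 ns edge displays as about 5.3 ns:
print(measured_rise_ns(5.0, scope_rise_time_ns(200)))
```

The quadrature sum is why a signal much slower than the scope reads essentially unchanged: the scope's contribution gets squared away.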
John Larkin Highland Technology, Inc
picosecond timing precision measurement
What you're seeing on the scope could be the unvarnished, absolute truth of what the signal actually did.
Or it could just be your scope's weak approximation of a pulse that rose "instantaneously".
If you happen to know that the signal did rise "instantaneously", but the scope didn't track it, then you're the wise one; but that belief must be based on something, such as independent knowledge of the signal's rise time.
But if you don't know the signal's rise time, then you have no basis for challenging the view that the scope is in fact presenting it exactly as is.
A lot of people over on the EEVblog forum have gotten themselves quite excited at the overly optimistic "rise" times measured with pulses rather than steps (e.g. Jim Williams' famous avalanche pulse generator).
We try to correct them, but alas, it's so very hard to convince someone when they read "rise time 1.15ns" on the screen, and The Instrument Must Be Right!...
One of the sadder days of my professional life was about 10 or 11 years ago when I had to explain to a group of *Tektronix factory engineers* that a scope lives and dies by its step response.
They were trying to sell me this $90k scope that had *seven percent* overshoot on its step response. I bought a refurb version of their previous model (TDS 7704A iirc) that had almost the same BW (7 GHz, 20 Gs/s) but a much much cleaner step response, and was a bit more than half the price. (This was IBM's money and not mine, of course.)
I interviewed at Tek in 1987, back when giants still walked the earth. They wanted me to work on an electro-optical ADC based on lithium niobate Mach-Zehnder interferometers, which was an interesting idea. Although my and my wife's families were and are in the Pacific Southwest (of Canada), I didn't take the job because their lab wasn't nearly as well equipped as the alternatives (IBM Watson and HP Labs on Page Mill Road).
They never did get their revenue per employee high enough to pay competitive salaries for people they really wanted, which was and is a pity. I still read with admiration the works of the Tek guys from their glory days, but those are unfortunately long past. :(
At a ratio of 3:1, you get 9+1 under the radical, so the scope contributes a tenth of the total. Taking the square root roughly halves that fraction, so the reading comes out around 5% high, and you'd be left wondering just how accurate the measurement is.
A 10 MHz system might be expected to have a rise time of 35 ns, so your 15 us signal would be fine. A 200 ns signal would be marginal, and 100 ns would be expected to read with some error (on the order of 106 ns).
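Those numbers follow directly from the quadrature sum; a quick check in Python (again assuming the Gaussian 0.35 rule holds for this hypothetical 10 MHz scope):

```python
import math

T_SCOPE_NS = 0.35 / 10e6 * 1e9   # 10 MHz scope -> 35 ns rise time

for true_ns in (15000.0, 200.0, 100.0):
    # Displayed rise time is the root-sum-square of signal and scope.
    shown = math.hypot(true_ns, T_SCOPE_NS)
    print(f"true {true_ns:g} ns -> reads about {shown:.1f} ns")
```

The 15 us case is unchanged to a fraction of a percent; the 100 ns case reads about 6% high, matching the ~106 ns figure above.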
Regarding accuracy: uncertainty due to unknowns in system performance (i.e., have you precisely measured its response?), in the shape of the signal itself (can you resolve all the squiggles and ringing, which will affect the measurement?), and in how they interact (does the system bandwidth follow a simple few-pole Bessel response, or is it higher order, or peaked?) means your effective number of digits is about 1 in this region. So, if you have measurement cursors that say "35.24 ns", do feel free to ignore most of those digits as erroneous...
Homework: if you measured a 42ns risetime on a 10MHz scope (assuming the usual rules apply), what's the actual signal expected to be?
We should complain more or something. I did this stepped thing with Butterworth filters first... looked like shit at certain frequencies. Then I did a Bessel (that has a nice step response*; what's better with Gaussian?)
That works if both the scope response and the signal rise are reasonably gaussian, and if you don't try to push it too hard. You can't measure a 1 ns rise time very well with a 5 ns scope.
One can apply a FIR filter to scope waveform data to increase the apparent bandwidth, or to correct for scope/cable defects like overshoot or reflections. That might get you a 2:1 bandwidth improvement until the noise explodes in your face. Determining the FIR filter coefficients is called "the deconvolution problem", a proud member of the family of "ill-posed problems."
John Larkin Highland Technology, Inc
lunatic fringe electronics
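A toy illustration of why that deconvolution is ill-posed, as a frequency-domain sketch in Python; this is not any real scope's correction filter, and the Gaussian blur, noise level, and regularizer `eps` are made-up numbers. Plain division by the scope response sharpens the edge, but without the `eps` term the out-of-band noise blows up, exactly as described:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)

# "True" signal: a clean step in mid-record.
true = (t >= n // 2).astype(float)

# Model the scope as a Gaussian low-pass blur (sigma = 8 samples),
# then add a little acquisition noise.
h = np.exp(-0.5 * ((t - n // 2) / 8.0) ** 2)
h /= h.sum()
H = np.fft.rfft(np.roll(h, -n // 2))          # zero-phase blur kernel
measured = np.fft.irfft(np.fft.rfft(true) * H, n)
measured += 0.001 * rng.standard_normal(n)

# Naive deconvolution (divide by H) explodes where H ~ 0; a Wiener-style
# regularizer eps trades sharpness against noise amplification.
eps = 1e-3
M = np.fft.rfft(measured)
recovered = np.fft.irfft(M * np.conj(H) / (np.abs(H) ** 2 + eps), n)

def rise_10_90(x):
    """10-90% rise, in samples, of a unit step near mid-record."""
    mid = x[len(x) // 4 : 3 * len(x) // 4]     # stay clear of FFT wrap-around
    c = int(np.argmax(mid > 0.5))              # edge location
    lo = c - int(np.argmax(mid[c::-1] < 0.1))  # last 10% crossing before edge
    hi = c + int(np.argmax(mid[c:] > 0.9))     # first 90% crossing after edge
    return hi - lo

print(rise_10_90(measured), "->", rise_10_90(recovered))
```

Shrinking `eps` sharpens the recovered edge further until the amplified noise swamps it; that trade-off is where the roughly 2:1 practical limit mentioned above comes from.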
Thanks Phil. I'm out of my depth with most of the replies here, so yours is the first answer to one of my specific questions that I've understood.
One point I'm fretting about though is the literal meaning of sentences like "the 'scope is 50 MHz, which equates to a rise time of less than 10 ns". Surely the word 'signal' must appear whenever rise time is mentioned? Yet as I understand it, a 'scope (DSO) doesn't have a 'signal', it's just measuring them?
I'm guessing Trr is the 'scope rise time? (Although I'm having trouble understanding how anything other than a *signal* can have a rise time? Mine stays firmly on the bench.)
So it's a parameter of a 'scope that has to be calculated rather than being explicitly specified as part of the manufacturer's spec, yes?
I can't find any mention of 'rise time' in the specs of either my obsolete Pico ADC-200/50 or my likely replacement, the Picoscope 25 MHz 2205.
But from your formula I calculate the latter would have a 'rise time' of Trr = 0.35 / 25 MHz = 0.014 us = 14 ns
Is that right?
If so, then am I also right that a 25 MHz 'scope displaying a signal's rise time (i.e. the time to go from 10% to 90% of its peak) as 1 us could be confidently taken as very close to the true value?
I think you've provided the particular formula I was seeking later in the thread, in your response to Kaz's post, namely:
t = sqrt(a^2 - s^2)
where t = true signal risetime
a = apparent rise time, seen on screen
s = scope's rise time
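As a sanity check, a few lines of Python restating that formula (the helper name is mine; the 0.35 Gaussian rule and the 25 MHz / 14 ns figures are from the thread):

```python
import math

def true_rise_us(apparent_us, scope_bw_mhz):
    """Back the signal's rise time out of the on-screen value,
    assuming a Gaussian scope response (Tr = 0.35 / BW)."""
    scope_us = 0.35 / scope_bw_mhz       # BW in MHz -> scope rise time in us
    return math.sqrt(apparent_us ** 2 - scope_us ** 2)

# A 15 us reading is essentially untouched by a 25 MHz (14 ns) scope:
print(true_rise_us(15.0, 25))
# Even a 1 us reading barely moves:
print(true_rise_us(1.0, 25))
```

Both corrections are far below anything you could read off the screen anyway.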
Whether the 'scope is 10, 25, or 50 MHz BW, plugging in the appropriate numbers confirms that all three reporting that signal's rise time of 15 us could confidently be taken at face value.
IOW, my apprehension about 'downgrading' from 50 to 25 MHz is quite unfounded. As confirmed by the first reply I immediately understood, from Phil Allison, and also by other responses after some coffee and further study ;-)
Essentially, if the scope really is capable of reliably sampling a signal at 50 MHz, as claimed by its manual (although perhaps 3 dB down from the actual amplitude), then it must take accurate independent samples at least every 10 ns, and preferably every 5 ns (2x oversampled), to catch the waveform no matter what the phase may be. The analogue performance of the front end must be at least that good.
Imagine you fed a perfect instantaneous step function into the scope. It will respond to that in a way that depends on the maximum frequency it can handle and that limits the waveform the scope will show.
Typically the trend in modern scopes is to exaggerate the rate of change of a step function at the expense of ringing on the top (to satisfy marketing requirements rather than signal fidelity).
But it does have a frequency response, which in an ideal scope would be flat up to the nominal cutoff point and then nothing.
In practice it will be flattish up to its nominal cutoff and then roll off fairly steeply. However, modern filters that roll off more steeply also tend to peak just before they drop (and the marketing men like that).
If you want to convince yourself experimentally, feed a sine wave and a square wave of the same frequency into the two scope channels and measure the amplitude* as a function of frequency as you approach the scope's nominal bandwidth. This should give you a better feel for its limitations and the artefacts in simple displayed waveforms.
*assuming here that your signal generator can hack it at 50MHz.