Suppose I have some sampled signal data in a ROM on a microcontroller or something, and I'm reading it out to a DAC. Let's say that for some reason, I'd like to begin reading this data at some random position, go for a while, and then stop at a random stopping point and jump to another random position in the ROM and continue reading, and so on, to obtain pseudorandom variations on the waveform data.
What sort of interpolation algorithm might be appropriate to use in a situation like this to smooth the discontinuity between the two jump points?
You won't be able to jump at random samples, just at the zero crossings. If you jump around randomly, the output data will no longer represent samples of a band-limited signal, and no postfiltering process can remove the influence of the jumps.
If you really have to support arbitrary branches, the best approach is probably to continue reading from both the original and destination points and blend the data gradually.
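The blend-while-reading-both idea can be sketched in C. This is a minimal illustration, not code from the thread: the ROM layout, the 16-bit sample format, and the fade length are all assumptions, and the fade length is a power of two so the fixed-point scaling is just a shift.

```c
#include <stdint.h>

/* Crossfade splice sketch: over FADE_LEN samples, keep reading from both
 * the old and new positions and blend the old stream out as the new one
 * comes in.  All names and formats here are illustrative. */
#define FADE_LEN 256

int16_t splice_sample(const int16_t *rom, uint16_t from, uint16_t to,
                      uint16_t k /* sample index into the fade, 0..FADE_LEN */)
{
    int32_t w     = ((int32_t)k * 256) / FADE_LEN;  /* 0..256: weight of new stream */
    int32_t old_s = rom[from + k];                  /* keep reading the old stream  */
    int32_t new_s = rom[to + k];                    /* ...while reading the new one */
    return (int16_t)((old_s * (256 - w) + new_s * w) >> 8);
}
```

After the fade completes, the read pointer simply continues from `to + FADE_LEN` alone.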
Yes. I could try the former, but then I guess I'd need to store metadata on where the zero crossings are. I'd like to do this on a little 8 bit microcontroller, and space is kind of limited. And the metadata would have to be regenerated if I were ever to change the original data.
Could the latter approach work on a processor with limited resources?
This is one of the topics that fascinates me most.
The Zero-Crossing cut & splice always works. But what about those cases when the waveform crosses zero between sample captures, so the actual zero-crossing instant is not in the data?

. . .
t-3 sample =  0.31
t-2 sample =  0.23
t-1 sample =  0.14
             <-- the zero crossing's zero point is not in the sample data!
t0  sample = -0.02
t1  sample = -0.25
t2  sample = -0.37
. . .
Also, how does one calculate the time for a given list of arbitrary sine wave frequencies to complete one full cycle to where all oscillators cross zero in true unison again? For example 100 Hz, SQR(2)*100 Hz, and 200 Hz starting in-phase at a zero crossing.
How many seconds does it take for all three tones to cross zero together when all start in-phase at the zero-crossing origin? I know you have to multiply all three together and take out common factors if any, but then how do you get the duration in seconds instead of the number of cycles? My brain must be fried in this area as it is coming up a total blank, sorry.
Assuming that your "position" (ie, time) variable has finer resolution than your lookup table, you can use simple linear interpolation.
Linear interpolation between two points can never result in an output bigger than either point, whereas an ideal filter can. So for better approximation of a good sample reconstruction filter, you need to account for more than two table points, and use something like a cubic or higher order spline interpolation. That may not matter in your situation.
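The simple linear case can be sketched in C, assuming an 8.8 fixed-point phase variable and a 16-bit table (all names and formats here are illustrative assumptions, not anything from the thread):

```c
#include <stdint.h>

/* Linear interpolation between adjacent table entries.  The top 8 bits of
 * "phase" index the table, the low 8 bits are the fractional position.
 * The caller guarantees idx+1 is a valid table index. */
int16_t lut_interp(const int16_t *table, uint16_t phase /* 8.8 fixed point */)
{
    uint8_t idx  = phase >> 8;      /* integer part: table index */
    uint8_t frac = phase & 0xFF;    /* fractional part, 0..255   */
    int32_t a = table[idx];
    int32_t b = table[idx + 1];
    return (int16_t)(a + (((b - a) * frac) >> 8));
}
```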
John Larkin Highland Technology, Inc
lunatic fringe electronics
Or maybe I misunderstood the problem. If you want to smooth the jump between the two waveform segments (and not between points within a waveform) a lowpass filter might work. Or a fader type algorithm, depending on your requirements. The point interpolator would still help, to better map the table output to the time variable.
This can definitely be a problem. Splicing at the zero crossing doesn't guarantee the absence of artifacts, it just minimizes some obvious ones. Ideally you'd match the slope of the waveform as well as the level, but even that doesn't guarantee that the transition will "sound" good in the frequency domain.
The problem is similar to the motivation behind windows in discrete Fourier transforms. An overlapped FFT performs a crossfade between two blocks of data, essentially, in an attempt to maintain the fiction that the individual blocks were infinite in length. It's not enough just to choose block boundaries at zero crossings.
They'll coincide at the least common multiple of their periods. E.g., 3 Hz and 6 Hz would coincide after 1/3 second, but 3 Hz and 7 Hz, being coprime, will coincide only after a full 1 second: LCM(1/3 s, 1/7 s) = 1 s, i.e. 3 cycles of one against 7 of the other. This would have to be quantized to the sample rate in your example, of course, since with the irrational SQR(2)*100 Hz tone an exact LCM doesn't exist.
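For whole-number frequencies in Hz this reduces to a gcd: the coincidence time is 1/gcd(f1, f2, ...) seconds, since that is the least common multiple of the periods. A small C sketch (function names are my own):

```c
#include <stdint.h>

/* Euclid's algorithm. */
static uint32_t gcd_u32(uint32_t a, uint32_t b)
{
    while (b != 0) { uint32_t t = b; b = a % b; a = t; }
    return a;
}

/* Returns g such that all integer frequencies (in Hz) return to an
 * in-phase zero crossing together after 1/g seconds. */
uint32_t coincidence_denominator(const uint32_t *freqs, int n)
{
    uint32_t g = freqs[0];
    for (int i = 1; i < n; i++)
        g = gcd_u32(g, freqs[i]);
    return g;   /* coincidence time = 1/g seconds */
}
```

For 3 Hz and 6 Hz this gives g = 3 (1/3 s); for 3 Hz and 7 Hz it gives g = 1 (1 s). The irrational-frequency case has no finite answer, as noted above.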
Thanks, I've been doing it backwards which means not well at all...
i'll jump on board with the crossfading thing. so rex, this is what you should do if you want the least "glitchy" waveform splices or jumps. it requires having a buffer of ancillary metadata alongside your sampled signal data. this metadata buffer would not need to be as large as the waveform sample data, but what it would contain is pitch detection data for its associated portion of the waveform data. if this is stored in ROM like the sample data, it would have to be pre-calculated, and that might not be so good if the waveform data ever changes.
pitch or period estimators can be reasonably easy to design using AMDF (Average Magnitude Difference Function), or better yet ASDF (Average Squared Difference Function), or some kinda autocorrelation function. (pitch detection algs are a whole 'nother topic.)
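As a rough illustration of the ASDF idea (not rbj's code; the buffer length, lag range, 16-bit samples, and brute-force search are all simplifying assumptions):

```c
#include <stdint.h>

/* Pick the lag with the smallest average squared difference, i.e. the lag
 * at which the waveform best matches a shifted copy of itself. */
uint16_t estimate_period_asdf(const int16_t *x, uint16_t n,
                              uint16_t min_lag, uint16_t max_lag)
{
    uint16_t best_lag   = min_lag;
    uint32_t best_score = UINT32_MAX;
    for (uint16_t lag = min_lag; lag <= max_lag; lag++) {
        uint32_t acc = 0;
        for (uint16_t i = 0; i + lag < n; i++) {
            int32_t d = (int32_t)x[i] - x[i + lag];
            acc += (uint32_t)(d * d);
        }
        acc /= (uint32_t)(n - lag);   /* normalise so long lags aren't favoured */
        if (acc < best_score) { best_score = acc; best_lag = lag; }
    }
    return best_lag;   /* estimated period in samples */
}
```

A real implementation would refine this (sub-sample accuracy, octave-error checks), which is part of why pitch detection is a whole 'nother topic.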
without the metadata, with some contemporaneous work, you can correlate the spot where you are expecting to splice from to various possible spots around the "random" place you intend to splice to.
using ancillary metadata, you are always aware of the period length (if the data isn't periodic, whatever splice displacement scores as the "best splice" by some measure, like how good the correlation is, stands in for it). then, between the splice-from spot and the proposed splice-to spot, compute the mean period of all of the waveform data between the two spots, and adjust the splice-to spot a little earlier or later to make the displacement an integer number of mean periods.
now, in all cases, whether the splice is good or bad, *crossfade* from the splice-from spot to the splice-to spot. there are issues regarding that. if the correlation between the two spots is very good, then you want a complementary-voltage splice. but if the splice is very bad (but it's the best you can do with the data in the neighborhoods of the two spots) like if it was white noise, then the splice should be a complementary-power splice. and you can go in between the two extremes. i said something about this a couple years ago on the music-dsp list. i can dig it up if you want. Olli Niemato had some similar results.
r b-j firstname.lastname@example.org
"Imagination is more important than knowledge."
It's possible to define (in digital space) a mix-and-filter by oversampling both data streams and then making weighted sums. The weights define a digital filter. Then, as in an audio CD output, only the sums go to the DAC, and from there to a brickwall analog filter, to deal with quantization noise at frequencies too high for the digital filter.
You don't say what kind of data, and it might matter.
Assuming that the full bandwidth is in use, you should be able to just switch.
For audio, there is much less energy at the higher frequencies, which makes it more noticeable when you add high frequencies with a discontinuous jump. This is also the reason why audio is fairly compressible.
To answer the question, you need to know, approximately, the power spectrum of the signal. That is, how much there is in the high frequencies to mask the discontinuity.
A key point, I think, is whether you know about the jumps in advance. If they are predictable, then you can use future jump points in your interpolation for better results. Another issue is whether or not you can use the next point from the first section during the jumps - that will make the crossfade smoother.
You (bitrex) say you want to use a small 8-bit micro - unless your frequencies are very low, consider using a small 32-bit micro instead so that you can use cubic interpolation rather than linear. Cubics mean a bit more maths to figure out, and quite a bit more calculation on the processor (and with larger data sizes - hence the recommendation for a
32-bit cpu). But they give smoother outputs and more freedom for optimisation (such as minimising different derivatives) than linear interpolation.
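One concrete cubic choice is a 4-point Catmull-Rom interpolator, sketched here in float for clarity (a fixed-point version would be used on a real micro; this is an illustration, not the poster's code):

```c
/* Catmull-Rom cubic: y0..y3 are four consecutive table samples; the result
 * lies between y1 and y2 at fractional position t in [0,1).  The curve
 * passes exactly through y1 at t=0 and y2 at t=1. */
float cubic_interp(float y0, float y1, float y2, float y3, float t)
{
    float a = -0.5f * y0 + 1.5f * y1 - 1.5f * y2 + 0.5f * y3;
    float b =         y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    float c = -0.5f * y0             + 0.5f * y2;
    float d =                     y1;
    return ((a * t + b) * t + c) * t + d;   /* Horner form: 3 mults, 3 adds */
}
```

Note it needs two extra neighbouring samples per output point, so the lookup code must handle the table edges.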
Unless the thing has a very slow clock rate, you can afford to switch streams at/near a zero and run along the new source waveform until you reach a suitable position. Provided the CPU has some spare capacity available, this shouldn't be too hard without storing metadata.
Or, as someone else suggested, fade one out and the other in over a number of samples comparable with a few wavelengths.