It's nothing related to DTMF. The frequencies I'm interested in would have overtones/harmonics. I'm counting on being able to look for just the base frequency and ignore the overtones, in the hope that the base is the strongest component.
Integers must be faster than that... a software floating-point implementation needs a lot of cycles for each operation.
-Dave
--
David Ashley http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
I replaced the floating-point equation with fixed point, without worrying about scaling and overflow. I tested it on both a Pentium and an ARM; the results agree, at approx 0.2MHz per frequency (200 samples). The algorithm is quite simple, so integers alone won't change much.
What is the frequency difference of the components to be separated? You need a long enough sample of the input signal to correctly see an appreciable part of the lowest difference frequency.
Please remember also that the analog input has to be correctly low-pass filtered below the Nyquist frequency for the sample rate.
The frequencies go from a base frequency up by a constant factor of 1.05946 in each step. That's just 2 ^ (1/12). 12 steps and you're at double the original frequency.
-Dave
--
David Ashley http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
8 bits? Each bit is 3.9% error. The ratio between adjacent frequencies is only approx 4%, so the error rate is close to 100%, not to mention the accumulated effects.
Nope! Actually, I think that application would need much higher accuracy than I'm interested in. And you'd only need to tune one frequency at a time, so you could put a lot more CPU onto the task...
-Dave
--
David Ashley http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
What is the highest fundamental frequency you are interested in? If the input signal is digital samples at 44100 Hz and the highest frequency of interest is only a few kHz, I would suggest low-pass filtering the input to 4-5 kHz, decimating it to, say, 11025 Hz, and doing the rest of the processing at that sample rate; perhaps even 5512 Hz would be enough.
One way of detecting these tones would be to use the numerically controlled oscillator (NCO) principle. For each frequency, maintain a phase accumulator incremented by a frequency-specific phase increment at the common sample rate. Use the phase accumulator to address sine and cosine tables, multiply the input sample by these coefficients in the I and Q multipliers, square the results and add them together.
If 3 dB amplitude error is allowed, just take the absolute values of the I and Q multiplications and add together. In the simplest case this would be two additions, two table lookups, two multiplications and two absolute values for each sample and frequency of interest.
Since the frequencies of interest are in harmonic relation, the 2f and 4f tones could directly cause unwanted mixing products with any harmonic distortion in the frequency-f detector. The 3rd and 5th harmonics can also beat with nearby candidate tones. Thus the multiplication waveforms should be quite clean: the phase accumulator should be at least 16 bits, preferably 24 bits, and the sin/cos coefficients should be 12-16 bits. To reduce the lookup table size, a single quadrant of the sine table could be used, but this requires branching according to the quadrant and two additions/subtractions to adjust the argument for the quadrant.
Yes, musical scale. Not for automatic transcription (asked elsewhere). Hopefully no more 20 questions! :)
Relates to a potential idea for the Circuit Cellar contest on the Luminary Micro kit. Don't know if I'll pursue it though. I got a free kit and want to play with it if I ever get time...
-Dave
--
David Ashley http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
In addition to the problems with harmonics, the frequency resolution must be very fine, so a minimum of 360 phases must be used. At such a resolution the coefficients look like 0.0174, 0.0348, ... 0.9993 and 0.9998; at the extreme end the differences are only 0.05%.
No less than 16 bits, with the exception of the input data, where you have to accept some error anyway.
Even a 1K lookup table is fine. Without the trig functions, the entire program can fit in 8K, including floating point. My benchmark was done on the same Luminary Micro LM3S811 (64K flash) the OP is using.
With a 5512 Hz (original fs/8) sample rate and a 16-bit phase accumulator, the NCO can generate frequencies 0.084 Hz apart; with 24 bits and the original sample rate, the frequency resolution is even finer. Any out-of-sync signal will generate a beat tone (the difference between the NCO frequency and the input signal frequency), and you have to take this into account when averaging.
If you mean by the 360 phases that the input waveform can have any phase relative to the NCO carrier, that is not a problem, since IQ detection is used. When the signal phase is the same as the NCO phase (or 180 degrees from it), the I (in-phase) multiplier will produce +/-1 while the Q (quadrature) multiplier produces 0. At 90 and 270 degrees, I produces 0 and Q +/-1. At 45 degrees both I and Q produce 0.71. When the magnitude is calculated as sqrt(I²+Q²), you always get the same result regardless of phase. There is no need to calculate the square root for each sample separately, since you are interested in the average power anyway.
The simplified method calculates just abs(I)+abs(Q), which produces 1 at 0 and 90 degrees but 1.41 at 45 degrees, i.e. 3 dB off the correct result; in many applications, however, this is sufficient accuracy.
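That trade-off is easy to check numerically (sketch):

```c
#include <math.h>

/* Cheap magnitude estimate |I| + |Q|: exact when the signal lines up
   with either axis (0/90/180/270 degrees), but sqrt(2), i.e. ~3 dB,
   too large at 45 degrees, where I and Q are equal. */
static double mag_approx(double i, double q)
{
    return fabs(i) + fabs(q);
}
```

20*log10(1.4142) is about 3.01 dB, which is where the 3 dB figure comes from.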
I don't see 8 bits cutting it. If the frequency of interest is much smaller than the sample frequency, the cosine coefficient will be close to 1. Also, the Goertzel algorithm uses differences of adjacent y_n values, so rounding error can easily get amplified.