I have a question about the FFT. I want to implement the Cooley-Tukey FFT algorithm, but I need to know how much RAM it requires. I have N samples (24-bit), so I need 3xN bytes for input and 3xN bytes for output. How much more do I need?
There are FFT algorithms that allow the computation to occur "in place", so a separate output memory is not required unless you are then going to process the results further. Google "FFT in place".
3xN for input, 3xN for the infamous but necessary pre-zeroed "buffer"; that space can also be used during the transform (there are a few in-place versions), plus maybe a few dozen bytes for intermediate values (real, imaginary, sine, cosine, butterfly temporaries). Otherwise double it to 6xN for a separate output.
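For the curious, the "in place" idea the posts above describe can be sketched as follows. This is a generic iterative radix-2 Cooley-Tukey transform (my own illustration, not code from the thread): the input buffer is overwritten with the output, so beyond the buffer itself only a handful of scratch variables are needed.

```python
import cmath

def fft_inplace(a):
    """In-place iterative radix-2 Cooley-Tukey FFT.
    `a` is a list of complex values whose length is a power of two;
    it is overwritten with the transform, so no separate output
    buffer is required -- only O(1) scratch variables."""
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # Bit-reversal permutation so the butterflies can run in order.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # log2(n) butterfly passes, each touching every element once.
    length = 2
    while length <= n:
        w = cmath.exp(-2j * cmath.pi / length)
        for start in range(0, n, length):
            wk = 1.0 + 0j
            for k in range(start, start + length // 2):
                u = a[k]
                t = wk * a[k + length // 2]
                a[k] = u + t
                a[k + length // 2] = u - t
                wk *= w
        length <<= 1
    return a
```

Note the power-of-two length restriction of the radix-2 form; other radices exist but the memory picture is the same.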
Now that depends. If you convert to floats, you would need sizeof(float)*N*2; the reason for the 2 is that you need the real and imaginary parts. If you are attempting a full integer-level FFT, which I have done, you would need to allocate 4*N*2: 4 bytes for each 32-bit word. Place the 24-bit sample in the upper-order bits, and use the lower byte for your fraction. That is just my opinion, of course.
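My reading of that packing scheme, as a sketch: the 24-bit sample goes in the top of a signed 32-bit word and the low byte serves as an 8-bit fraction (a Q23.8 fixed-point format). The helper names here are my own illustration, not from the post.

```python
FRAC_BITS = 8  # one byte of fractional headroom, per the scheme above

def sample_to_fixed(s24):
    """Sign-extend a raw 24-bit sample and shift it into the upper
    bits of a 32-bit word, leaving the low byte as a fraction (Q23.8)."""
    if s24 & 0x800000:       # sign bit of the 24-bit value set?
        s24 -= 0x1000000     # sign-extend to a negative integer
    return s24 << FRAC_BITS

def fixed_to_float(x):
    """Convert a Q23.8 value back to a float for inspection."""
    return x / (1 << FRAC_BITS)
```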
--
"I'm never wrong, once i thought i was, but was mistaken"
Real Programmers Do things like this.
On Sat, 24 Mar 2007 19:24:49 -0500, Jamie wrote:
I have 2.5M samples, each 24-bit, because I need high frequency resolution. What I want to do is a kind of THD meter. I think 2.5M samples/s is enough for a 20-100 kHz band (maybe I'm wrong).
But I don't understand what you mean by "little chunks". Do you mean that I should divide the 2.5M samples into smaller parts and run an FFT on each?
By the time you get through an FFT, you lose bits. You should usually figure on dropping one bit for every power of two in the length. This means that at 24 bits, your results will be poor. You may want to add some LSB headroom to the values to help prevent this.
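Rough arithmetic for the 2.5M-point case discussed in this thread, under the one-bit-per-pass rule of thumb above:

```python
import math

N = 2_500_000                       # the poster's record length
stages = math.ceil(math.log2(N))    # a radix-2 FFT needs ceil(log2(N)) passes

# Worst-case growth of one bit per pass on top of 24-bit samples:
worst_case_bits = 24 + stages       # 46 bits -- well past a 32-bit word
```

So a straight 32-bit integer FFT of that length has no headroom to spare; per-stage scaling or wider arithmetic is needed.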
Now this one I can mostly agree with. A proper FFT does require imaginary input and does produce imaginary output (sine-cosine). In most real cases the imaginary input is all zero, but it still doubles the input buffer space needed to compute the FFT, and doubles the output size as well.
There are various tradeoffs in setting up any particular implementation. You can have in-place computation (where the input buffer, the output buffer, and the intermediate work all occupy a single buffer space); the input can be in-order or "decimated" (the ordering mixed up in a particular way), and likewise the output. The coefficients may be read from a table or computed "on the fly" (a memory-versus-time tradeoff, usually with options in between the limit cases). The reordering passes can easily be done in-place and normally take little time.
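The table side of that coefficient tradeoff is simple to picture: a radix-2 FFT of length n only ever uses the n/2 factors W_n^k = exp(-2*pi*i*k/n), so precomputing them costs n/2 complex words of memory in exchange for keeping trig out of the inner butterfly loops. A minimal sketch (my illustration):

```python
import cmath

def twiddle_table(n):
    """Precompute the n/2 twiddle factors exp(-2*pi*i*k/n) used by a
    radix-2 FFT of length n: n/2 complex words of memory traded for
    not recomputing them inside the butterfly loops."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]
```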
If you need to support streaming instead of computing on a single block of samples, you may need to double the input buffer space yet again.
If you are using integer math for the FFT, you need to consider how to prevent or handle the overflow and underflow cases that will occur; this often becomes the requirement for additional bits/bytes in the buffer.
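One common way to handle that in integer FFTs (my example, not something this post prescribes) is block floating point: before each butterfly pass, check the headroom and, if any value could double past the word size, shift the whole block right by one and keep a shared exponent so the final magnitudes can be corrected. A sketch, simulating a 32-bit signed buffer:

```python
LIMIT = 1 << 30   # one guard bit below the 32-bit signed ceiling

def rescale_pass(re, im, shifts):
    """Block floating point guard: if any value could double past the
    32-bit signed range in the next butterfly pass, halve the whole
    block (arithmetic shift) and bump the shared exponent `shifts`."""
    if max(max(abs(v) for v in re), max(abs(v) for v in im)) >= LIMIT:
        for i in range(len(re)):
            re[i] >>= 1
            im[i] >>= 1
        shifts += 1
    return shifts
```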
--
JosephKK
Against stupidity the gods themselves contend in vain.
I'll take your word for it; I have only written code for sampling up to 500 kHz. The last project was a vibration monitor system, and we went a bit overboard with it, but we wanted good resolution. The transducers were a bit hard to find.
No, if you have purely real inputs (and hence half of the outputs are redundant by conjugate symmetry), there are numerous methods to save a factor of two in memory (and time) over the complex transform. These methods can also operate in-place.
So, the answer to the original poster is that he/she needs storage for the N samples (overwritten by the N outputs), plus O(1) additional memory, at minimum. Of course, various algorithms may require more memory in exchange for better speed, accuracy, and/or simplicity.
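One standard way to get that factor of two (one of the "numerous methods", not necessarily the one the poster has in mind) packs the even and odd real samples into a half-length complex FFT and then untangles the spectrum. A sketch, with a naive DFT standing in for a real FFT engine:

```python
import cmath

def dft(z):
    """Naive complex DFT, standing in for a proper FFT routine."""
    n = len(z)
    return [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def rfft_packed(x):
    """FFT of a real sequence of even length N using one complex
    transform of length N/2.  Returns X[0..N/2]; the remaining bins
    follow from the conjugate symmetry X[N-k] = conj(X[k])."""
    n = len(x)
    m = n // 2
    # Pack even samples into the real part, odd samples into the imaginary.
    z = [complex(x[2 * k], x[2 * k + 1]) for k in range(m)]
    Z = dft(z)
    X = []
    for k in range(m + 1):
        zk = Z[k % m]
        zc = Z[(m - k) % m].conjugate()
        ze = 0.5 * (zk + zc)       # spectrum of the even samples
        zo = -0.5j * (zk - zc)     # spectrum of the odd samples
        X.append(ze + cmath.exp(-2j * cmath.pi * k / n) * zo)
    return X
```

In-place variants of the same untangling step exist, which is what gets the memory down to the N samples plus O(1).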
You'll have to do some googling, but there is an FFT library for Linux. No need to reinvent the wheel.
I saw your other post regarding THD. You need to beware of edge effects if the source is not synchronous to the sampling AND the data at the edges are not continuous. You need the sample clock and sine source to be synchronous, or else the energy gets "smeared" between bins. The edge effects are hard to explain, but visualize the end of the sample stream being wrapped around and touching the beginning of the stream. If there is a hiccup here, it shows up as an artifact. This is why most analysis is done by windowing the data. To be honest, you really need to read a basic DSP book like Oppenheim and Schafer.