Power of 2

Hi, I find that the number of subcarriers in OFDM is a power of two. Is there any specific reason for such a design? Is it not possible to use an odd number of subcarriers?

Thanks in advance, Karthik Balaguru

Reply to
karthikbalaguru

Powers of two are convenient for the FFT.

Reply to
Arlet

And there's no fundamental reason that you couldn't use something else, except you'd be taking the 'fast' out of 'fast Fourier transform'.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

This is an old misconception: there are O(n log n) FFT algorithms for all n, not just powers of two, even for large prime n. (It just happens that the implementation is simplest for factors of 2 since 2 is the smallest prime, and the proportionality constant in the n log n gets worse if you have large prime factors.)
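To make that concrete, here is a toy mixed-radix Cooley-Tukey sketch in Python (illustrative only, not from any production library): it splits on the smallest prime factor of n, so it runs in O(n log n) for any smooth length, and only the prime-length base case falls back to a naive DFT, which is where Rader's or Bluestein's algorithm would take over in a real implementation.

import cmath

def fft_mixed_radix(x):
    """Cooley-Tukey split on the smallest prime factor of len(x).
    Prime lengths fall back to the naive O(n^2) DFT in this sketch."""
    n = len(x)
    if n == 1:
        return list(x)
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    if p == n:
        # Prime length: direct DFT (Rader/Bluestein would do this in O(n log n)).
        return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
                for k in range(n)]
    m = n // p
    # Transform the p interleaved subsequences of length m ...
    subs = [fft_mixed_radix(x[r::p]) for r in range(p)]
    # ... and recombine them with twiddle factors exp(-2*pi*i*r*k/n).
    return [sum(cmath.exp(-2j * cmath.pi * r * k / n) * subs[r][k % m] for r in range(p))
            for k in range(n)]

When every factor is 2 this collapses to the familiar radix-2 butterfly, which is why the power-of-two case is the one everybody implements first.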

Regards, Steven G. Johnson

Reply to
stevenj.mit

Well, yes, you _risk_ taking the 'fast' out of the FFT by choosing some arbitrary number -- if I'm not mistaken, choosing a large prime number for your number of carriers will throw a significant wrench in the works. Even with something moderately convenient, like 3^n carriers, I'd be surprised to see a canned FFT algorithm that you could just pour into your DSP chip and have fun with.

Reply to
Tim Wescott

...

I think you could count on Steven and his friends to come up with one. :-)

Jerry

--
Engineering is the art of making what you want from things you can get.
Reply to
Jerry Avins

You are mistaken. Large prime factors only slow things down by a constant factor (sometimes by as much as a factor of 10, but it is still n log n). For example, Bluestein's algorithm is pretty trivial to implement, even on a DSP chip I expect, and it will take any power-of-two FFT and bootstrap it into an n log n algorithm for any n (paying a price in the constant factor).
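Bluestein's trick is compact enough to sketch. Here is a rough illustrative Python version (not taken from any particular library; numpy's FFT stands in for whatever power-of-two routine you already have): the identity jk = (j^2 + k^2 - (k-j)^2) / 2 turns a length-n DFT into one circular convolution, which is zero-padded to a power-of-two length m >= 2n - 1.

import numpy as np

def bluestein_dft(x):
    """Length-n DFT for arbitrary n using only power-of-two FFTs (chirp-z/Bluestein)."""
    n = len(x)
    w = np.exp(-1j * np.pi * np.arange(n) ** 2 / n)   # chirp: exp(-i*pi*j^2/n)
    a = np.asarray(x, dtype=complex) * w
    m = 1
    while m < 2 * n - 1:                              # power-of-two convolution length
        m *= 2
    a_pad = np.zeros(m, dtype=complex)
    a_pad[:n] = a
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(w)                                # b[j]   = exp(+i*pi*j^2/n)
    b[m - n + 1:] = np.conj(w[1:])[::-1]              # b[m-j] = b[j] (wrap-around)
    conv = np.fft.ifft(np.fft.fft(a_pad) * np.fft.fft(b))
    return w * conv[:n]

You can sanity-check it against np.fft.fft for an awkward size such as n = 17; the two should agree to rounding error.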

If you want to argue that power-of-two FFT algorithms are easier to implement efficiently on DSP chips (and certainly that they give better constant factors than Bluestein's algorithm), I'm all with you. But implying that fast Fourier transform algorithms simply cease to work for non-power-of-two sizes is feeding into an old, old misconception, which is why I corrected you.

Regards, Steven G. Johnson

Reply to
Steven G. Johnson

Just curious: what's a typical increase in the constant factor for powers of 3 and 5 versus the more commonly known FFT implementations for powers of 2 and 4?

And how large can the radix get before you run out of architecturally visible CPU registers for the FFT butterfly?

Thanks. rhn A.T nicholson d.0.t C-o-M

Reply to
Ron N.

I shall remember this fact; it should come in handy someday, somehow.

Reply to
Tim Wescott

It depends on the exact factorization (and obviously on the FFT implementation), but about 10% to 20% is typical for FFTW. (It's hard to do a fair comparison because FFT code is usually more optimized for powers of 2; even FFTW contains more pregenerated kernels for power-of-two factors, although you can change this by editing a couple of lines in a Makefile.) The penalty is small enough that, if you have a size with factors of 2/3/5, padding up to the next power of two is usually slower than transforming the original size, assuming your FFT implementation is well optimized for non-power-of-two sizes.

For example, with FFTW on a 3 GHz Core Duo, n = 3600 and n = 3840 are both faster than n = 4096, even though the sizes are pretty close. In fact, with FFTW 3.2alpha, which adds more pregenerated kernels for non-power-of-two sizes, even n = 4000 is neck-and-neck with n = 4096.
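For anyone who wants to see how this plays out on their own machine, a quick-and-dirty comparison along these lines works (this uses numpy's FFT rather than FFTW, so the absolute numbers and the size of the gap will differ):

import timeit
import numpy as np

# Rough per-transform timing for nearby sizes; treat the numbers as illustrative only.
reps = 2000
for n in (3600, 3840, 4000, 4096):
    x = np.random.rand(n) + 1j * np.random.rand(n)
    t = timeit.timeit(lambda: np.fft.fft(x), number=reps)
    print(f"n = {n:4d}: {t / reps * 1e6:7.2f} us per transform")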

One should also realize that non-power-of-two sizes actually have some advantages these days, because cache associativity is determined by low-order address bits, which penalizes memory accesses at power-of-two strides.

In short, the old instinct that one should always choose power-of-two sizes if one cares about performance is not really true any more. (I doubt it was ever true in the multidimensional case.) You should still choose highly composite sizes unless you don't care about a factor of ~10 slowdown for large primes, though.
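In practice, 'choose a highly composite size' just means rounding the length you want up to the next 2/3/5-smooth number. A small helper along these lines does it (the name next_smooth is made up for illustration; SciPy's next_fast_len serves the same purpose):

def next_smooth(n):
    """Smallest m >= n whose only prime factors are 2, 3, and 5."""
    while True:
        m = n
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

# e.g. next_smooth(4001) == 4050 == 2 * 3**4 * 5**2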

On general-purpose CPUs, the limitation is the instruction-cache size, not the number of registers. With FFTW, we don't usually use bigger than radix 64 for this reason.

The registers are just a kind of cache, except one that is fully associative and which you can manually manage by an offline precomputed replacement strategy (except on x86 where register renaming is done in hardware). The point is not to pick a radix that exactly fits into the registers, but to maximize register reuse over a larger radix --- if you pick a radix that fits exactly into the registers, then you are not reusing registers at all from one butterfly to the next. Put another way, the name of the game is to simulate as large a register set as possible by overlapping register spills with other computations. This is especially important on x86 these days when you hardly know how many physical registers there are.

I'm talking about general-purpose CPUs, of course, not DSP chips.

Steven

Reply to
Steven G. Johnson

Interesting :):)

Wonderful explanation! Thanks.

Karthik Balaguru

Reply to
karthikbalaguru
