I have three questions about the theoretical THD+N of sine waves generated with DACs.
First question: Assuming a perfect DAC with perfect analog components, a DAC with fewer bits will obviously create a sine wave with larger steps, and thus a higher percentage of THD+N -- but how do I calculate the exact percentage?
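To make question 1 concrete, here is the numerical sketch I have been playing with (my own numpy code, not part of the existing system; it assumes an ideal mid-tread quantizer and a full-scale sine): quantize an ideal sine to N bits and compare the error power against the signal power, then check it against the textbook full-scale-sine figure of 6.02·N + 1.76 dB.

```python
# My own sketch (assumptions: ideal mid-tread quantizer, full-scale sine):
# measure quantization-error power directly and compare with the textbook
# 6.02*N + 1.76 dB full-scale-sine SNR figure.
import numpy as np

def quantized_thdn_db(bits, n=1 << 18, cycles=4097):
    # cycles is coprime to n, so the sample phases cover the cycle evenly
    t = np.arange(n) / n
    ideal = np.sin(2 * np.pi * cycles * t)
    step = 2.0 / 2 ** bits                      # LSB size over the -1..+1 range
    quantized = np.round(ideal / step) * step   # ideal mid-tread quantizer
    error = quantized - ideal
    # THD+N as error power relative to signal power, in dB
    return 10 * np.log10(np.mean(error ** 2) / np.mean(ideal ** 2))

for bits in (12, 16, 24):
    print(f"{bits:2d} bits: measured {quantized_thdn_db(bits):7.1f} dB, "
          f"formula {-(6.02 * bits + 1.76):7.1f} dB")
```

Converting dB to a percentage is just 100·10^(dB/20), so the 12-bit case (about -74 dB) works out to roughly 0.02 %.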
Second question: I know about the Nyquist limit, and it seems to me that as the Nyquist limit is approached, the sine wave will have bigger steps no matter how many bits it has, and thus a higher percentage of THD+N -- but how do I calculate the exact percentage?
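On question 2, this sketch changed my mental picture (my assumptions: ideal unquantized samples, and a zero-order-hold DAC output modeled with np.repeat). As far as I can tell, the "step" energy of a coarsely sampled sine lands at images of the sample rate (|k·fs ± f0|), not at harmonics of the tone:

```python
# Sketch of where the "step" energy goes (assumptions: ideal samples,
# no quantization, zero-order-hold DAC modeled with np.repeat).
import numpy as np

fs, f0, hold = 96_000, 20_000, 64      # 96 ksps, worst-case 20 kHz tone
n = 960                                # 960 samples = exactly 200 cycles of 20 kHz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)         # ideal, unquantized samples
stair = np.repeat(x, hold)             # the analog "staircase" out of the DAC
spec = np.abs(np.fft.rfft(stair)) / len(stair)
freqs = np.fft.rfftfreq(len(stair), d=1 / (fs * hold))

def level_db(f):
    # magnitude at the bin nearest f, in dB (tiny floor avoids log(0))
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f))] + 1e-20)

print("tone  20 kHz:", round(level_db(20_000), 1))
print("harm  40 kHz:", round(level_db(40_000), 1))   # essentially absent
print("image 76 kHz:", round(level_db(76_000), 1))   # fs - f0: what the filter removes
```

The second-harmonic bin sits at the numerical noise floor; all the staircase energy is at 76 kHz, 116 kHz, and so on. So approaching Nyquist doesn't by itself add harmonic distortion -- it moves the images closer to the passband, which makes the reconstruction filter's job harder.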
Third question: In the real world I wouldn't have perfect analog components; in fact, I would purposely introduce a lowpass filter at the output of the DAC to attenuate the switching noise. How much would that change the answers to the questions above?
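For question 3, my back-of-envelope model (an assumed n-th order Butterworth-style magnitude response, |H| = 1/sqrt(1 + (f/fc)^(2n)), with a placeholder 30 kHz cutoff -- not a measured filter) just compares how hard the filter hits the first image at fs - f0 versus the 20 kHz tone:

```python
# Back-of-envelope sketch for question 3 (assumptions: Butterworth-style
# magnitude response; fc = 30 kHz is a placeholder, not a measured filter).
import math

def butter_mag_db(f, fc, order):
    # n-th order Butterworth magnitude, |H| = 1/sqrt(1 + (f/fc)^(2n)), in dB
    return -10 * math.log10(1 + (f / fc) ** (2 * order))

fs, f0, fc = 96_000, 20_000, 30_000
for order in (1, 2, 4):
    tone = butter_mag_db(f0, fc, order)
    image = butter_mag_db(fs - f0, fc, order)
    print(f"order {order}: tone {tone:6.2f} dB, "
          f"first image at {fs - f0} Hz {image:7.2f} dB")
```

The difference between those two numbers is how much the filter buys you: the residual image energy after filtering is what shows up in a THD+N measurement.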
BACKGROUND:
We need to replace an old system that generates 20 Hz to 20 kHz sine waves with a 12-bit DAC that puts out a 4096-step sine wave -- the same number of steps whether it is putting out 20 Hz or 20 kHz. A variable oscillator changes the clock rate of a counter that reads the values from an EPROM lookup table.
We were discussing replacing the above with a modern DAC -- either 16 bits at 44.1 ksps or 24 bits at 96 ksps. The objection was raised that at 20 kHz we are putting out 4096 x 20,000 sps, or 81.92 Msps. I am guessing that 96 ksps with an added filter at the DAC output is good enough. The final power stage starts slew-rate limiting at 30-40 kHz with large signals, and the small-signal response is 3 dB down at 50 kHz and way down in the mud at 100 kHz. I just don't see how it needs over 80 megasamples per second to keep the THD+N reasonably low. Am I right?
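Just to sanity-check the arithmetic in the objection (numbers from above):

```python
# Quick arithmetic check on the rates in the objection.
old_rate = 4096 * 20_000              # legacy scheme at 20 kHz: 81.92 Msps
print(old_rate)
for new_rate in (44_100, 96_000):
    print(f"{new_rate} sps: {new_rate / 20_000:.2f} samples per 20 kHz cycle, "
          f"{old_rate / new_rate:,.0f}x less than the legacy rate")
```

Even at under five samples per cycle, the 20 kHz tone is still above-Nyquist clean in principle -- the comparison is really about filtering the images, not about step count.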