Start with the advertised 77.3 dBFS noise floor and work back from there. The data sheet should state the conditions under which that noise floor is specified (probably at the highest reference level, since that swamps the input noise best).

Most monolithic SAR ADCs that I've looked at seem to have input noise bandwidths far in excess of the achievable sampling rate. This means that the noise floor in the digital world is constant regardless of sampling rate, which in turn means that the effective noise spectral density, referred back to the input, depends on the sampling rate.

If you're planning on oversampling and averaging to get more resolution then just sample as fast as you can.

I could be wrong, but I believe all ADCs use an op-amp somewhere in the signal path. And for minimal-noise optimization, the noise curves vs. input resistance (or collector current in discrete designs) are mandatory. Spot noise figures are almost meaningless -- like the spot price of gold.

dBFS means "decibels relative to full scale". In this case the 77.3 dBFS figure means that the accumulated noise power* at the ADC output is 77.3 dB below the full-scale input to the ADC -- and full scale probably means a sine wave.

So if a full-scale excursion on the input is -2.5 V to +2.5 V, then "full scale" is a 5 V p-p sine wave. That works out to about 1.77 V RMS.

The noise, reflected back to the ADC input, is 77.3dB below that, or 240 microvolts.

That's the total noise -- but you were looking for the spectral density. If the noise is white, the spectral density is the total noise divided by the square root of the Nyquist bandwidth, and the Nyquist bandwidth is half the sampling rate.

So if you sample at 1 MHz (I have no clue what the sampling rate of the 2209 is -- stick your own number in here for the right value), then the noise spectral density would be 240 uV / sqrt(500 kHz), or about 339 nV/rtHz.
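The arithmetic above can be wrapped in a small helper (a sketch only -- the function name is mine, and it assumes the noise is white across the Nyquist band):

```python
import math

def noise_from_dbfs(noise_floor_dbfs, full_scale_vpp, sample_rate_hz):
    """Turn a dBFS noise spec into total RMS noise and a spectral density.

    Assumes "full scale" means a sine wave of full_scale_vpp, and that
    the noise is white across the Nyquist band (0 to fs/2).
    """
    fs_vrms = full_scale_vpp / 2 / math.sqrt(2)           # sine Vpp -> Vrms
    total_vrms = fs_vrms * 10 ** (noise_floor_dbfs / 20)  # noise_floor dB below FS
    density = total_vrms / math.sqrt(sample_rate_hz / 2)  # spread over Nyquist
    return total_vrms, density

# The worked example: 5 Vpp full scale, -77.3 dBFS spec, 1 MHz sampling.
total, density = noise_from_dbfs(-77.3, 5.0, 1e6)
print(total * 1e6, density * 1e9)   # ~241 uV, ~341 nV/rtHz
```

The 240 uV and ~339 nV/rtHz figures in the text are the same numbers with intermediate rounding.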

Does it now make sense, no matter how disappointing the actual numbers may be?

"power" in this case meaning ADC counts^2, not actual energy/time.

The LTC2209 samples at 160 MS/s, making the Nyquist bandwidth 80 MHz, and making the noise 240 uV / sqrt(80 MHz), approximately 27 nV/rtHz. That's a big OUCH! Pretty much unusable for me.

So, check numbers. On the first plot, with a 15 MHz carrier tone at -1 dBFS for the 2209 (where FS is 1.5 Vpp, i.e. 530 mV RMS), the noise floor looks more like -115 dBFS, which translates to a 0.94 uV RMS noise floor, or 0.1 nV/rtHz -- not believable, but at least I have a starting point.

I follow your analysis, but isn't the conclusion that the faster the sampling rate, the less noise? Yet generally more bandwidth means more noise.

This is the kind of logic that made the Nomad probe on Star Trek explode. But meditating on this a bit, the noise will always be folding back (aliasing) in a sampled data system. So a faster sample rate means less folding back.

Then the conclusion is that you should always sample as fast as possible, and if you need a slower sample rate, use a decimation filter.

I was trying to refer everything back to the input, which was kind of making _my_ brain explode.

The bottom line is that -- in my experience with SARs at least* -- the _output_ noise of the ADC is pretty constant regardless of sampling rate. So if you have the ADC and processing resources to sample fast, filter, and decimate, you'll average the noise out better, and be able to beat it down.
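A toy numpy sketch of the sample-fast, filter, and decimate idea (the decimation ratio and noise level here are made-up numbers, not from the thread): a boxcar average over R samples cuts white noise by sqrt(R).

```python
import numpy as np

rng = np.random.default_rng(0)

R = 64                                # decimation ratio (made up)
n = 1_000_000
x = 1.0 + rng.normal(0.0, 0.01, n)    # 1 V level with 10 mV RMS white noise

# Filter and decimate: average R consecutive samples per output sample.
y = x[: n - n % R].reshape(-1, R).mean(axis=1)

print(np.std(x))   # ~0.010
print(np.std(y))   # ~0.010 / sqrt(64) = ~0.00125
```

A boxcar stands in for a real decimation filter here; the sqrt(R) improvement only holds as long as the noise really is white and uncorrelated sample to sample.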

Every once in a while I beat a drum about only thinking in the frequency domain when it makes your life easier. It usually does. But sometimes -- particularly when you have a nonlinear or time-varying system -- it's easier to think in the time domain. This is a case, because of the sampling, where you can get an answer a lot quicker by thinking in time-domain terms.

Note that this sample-fast-and-filter approach carries two pitfalls: one, you're only increasing the resolution of the ADC, not its accuracy; and two, you need sufficient word depth in your filters.

The first point means that you're not going to fix up the integral nonlinearity of the ADC. If having a slightly wrong measurement that's really low noise is what you need, then this technique is a good one. If having a slightly wrong measurement kills you for any reason, then this technique is not a good one.

The second point arises because, given enough filtering, the noise will be in the neighborhood of or below the quantization noise of a 16-bit data path. When you get to that point, you need to increase your word width, or you will lose everything you gained by decimation to quantization noise.
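A quick illustration of the word-width point (the 16-bit LSB, decimation ratio, and signal level are all invented for the example): averaging in a wide accumulator recovers the value to well under one LSB, while rounding the result back to 16 bits throws most of that gain away.

```python
import numpy as np

rng = np.random.default_rng(1)

lsb = 2.0 / 2**16              # one 16-bit LSB over a +/-1 V span (made up)
R = 1024                       # decimation ratio (made up)
true_value = 0.1234
x = true_value + rng.normal(0.0, 2 * lsb, R * 1000)   # noise ~2 LSB RMS

avg = x.reshape(-1, R).mean(axis=1)      # wide (float) accumulator
avg16 = np.round(avg / lsb) * lsb        # same result squeezed back to 16 bits

print(abs(avg.mean() - true_value))      # tiny error: averaging gain kept
print(abs(avg16.mean() - true_value))    # a good fraction of an LSB: gain lost
```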

* This is not always the case with sigma-delta converters. I don't have much mileage with S-D converters, so I can't say for sure, other than that you should read the data sheet carefully and know what you're seeing.

1.5/2/sqrt(2) * 10^(-77.3/20) / sqrt(160e6/2) * 1e9 = 8.09 nV/rtHz. If I use that number and put 15.1 MHz at -1 dBFS through the thing, then FFT with 64000 samples, I pretty much get the exact plot shown on the spec sheet.
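That arithmetic, spelled out (white noise and a rectangular FFT window assumed -- a real window's equivalent noise bandwidth raises the per-bin floor by a few dB, which closes much of the gap to the -115 dBFS read off the plot):

```python
import math

fs = 160e6                                # LTC2209 sample rate
fs_vrms = 1.5 / 2 / math.sqrt(2)          # 1.5 Vpp full-scale sine, in Vrms
total = fs_vrms * 10 ** (-77.3 / 20)      # integrated noise, 77.3 dB down
density = total / math.sqrt(fs / 2)       # spread over the 80 MHz Nyquist band
print(density * 1e9)                      # ~8.09 nV/rtHz

# Per-bin floor a 64000-point FFT would show for that density:
bin_bw = fs / 64000                       # 2.5 kHz per FFT bin
floor_dbfs = 20 * math.log10(density * math.sqrt(bin_bw) / fs_vrms)
print(floor_dbfs)                         # roughly -122 dBFS before windowing
```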

So, thanks again for the help. Although 8.1 nV/rtHz seems a bit large, all the data sheet info is consistent.

The poster didn't look at the datasheet. Since the part has a sample and hold at the front end, there is nothing you can do relative to the input to improve the noise figure.

One thing to note about parts with integral sample-and-holds is that they cause a dynamic kick on the circuitry driving them. This can affect the accuracy of the ADC. One semi I worked at used a freakin' expensive Comlinear op amp to drive a cheap ADC because that provided the best performance.

It would be a good idea to sniff the inputs to the ADC and see if there is noise on them synchronous to the sampling clock.

If you meant me with "The poster didn't look at the datasheet": I did look at the data sheet, and found it was in terms that did not directly relate to what I was doing. Thus I asked for help here, hoping someone had experience with this part. Apparently no one has, but I did get help converting the data sheet into the numbers I need.

For what it's worth, the input noise density does appear to be around 8.3 nV/rtHz. When used to simulate a sampled system, the performance fairly closely matches the plot for 15.1 MHz, which used a 64k packet length and shows a noise floor of approximately -115 dBFS. Interestingly, increasing the packet length does improve that number, but only out to around 400k to 500k. When that many packets are averaged together, the aperture jitter starts to 'appear' and cannot be removed. Aperture jitter causes a sharp rise in the density function at the high end, a k*f^a function, where k and a are determined by the packet length and, of course, the aperture jitter. At 90% of Nyquist (70 MHz), the floor seems to be a constant related to aperture jitter.
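The jitter-limited floor can be sanity-checked with the standard result SNR = -20*log10(2*pi*f_in*t_j); the 0.1 ps jitter figure below is a placeholder, not the LTC2209 spec.

```python
import math

def jitter_snr_db(f_in_hz, t_jitter_s):
    """SNR limit for a full-scale sine at f_in with RMS aperture jitter
    t_jitter (standard result: SNR = -20*log10(2*pi*f_in*t_jitter))."""
    return -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_s)

# 70 MHz input (about 90% of Nyquist at 160 MS/s), hypothetical 0.1 ps jitter:
print(jitter_snr_db(70e6, 0.1e-12))   # ~87 dB
```

Because this limit falls at 20 dB per decade of input frequency, jitter dominates at the high end of the band, which is consistent with the sharp high-frequency rise described above.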
