16-bit ADC, anyone?

And there is absolutely no need for 16-bit accuracy in any of those cases, because the sensors are only accurate to somewhere around 0.1% at the very best. All that is required is a 10-bit ADC with the proper gain and offset.
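
For concreteness, here is the arithmetic behind that claim as a couple of lines of C (my own sketch, assuming the 0.1% figure is relative to full scale):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double accuracy = 0.001;  /* 0.1% of full scale */
        /* resolution that just matches the sensor's accuracy */
        printf("%.1f bits\n", log2(1.0 / accuracy));  /* prints ~10.0 */
        return 0;
    }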

If I were making 100-bit ADCs, I am sure I would recommend a 100-bit ADC.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Yes, but you might want to do some gain in the ADC to simplify the design (0.1% precision VGAs are not cheap either), and then there are repeatability and granularity to consider, so a designer might want to ensure the 'weak link' is dominated by the sensor. Combine all those, and designers can rightly choose 12-14 bit ADCs for systems with 10-bit sensor precision.
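
A minimal sketch of that sizing rule (my own numbers, not Jim's): pick the ADC so one quantization step is a few times finer than the sensor's own uncertainty, and the extra bits fall out directly:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double sensor_lsb = 1.0 / 1024.0;  /* 10-bit sensor, fraction of full scale */
        double margin = 4.0;               /* keep quantization ~4x finer */
        printf("%.0f bits\n", log2(margin / sensor_lsb));  /* prints 12 */
        return 0;
    }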

-jg

Reply to
Jim Granville

Often the requirement is simply for resolution; absolute accuracy isn't too important. This is the case for audio, for instance. Other applications can have higher accuracy requirements for unexpected reasons.

While I'm certainly not an expert, as an example I'm reasonably familiar with astronomical imaging, which is often done with 16-bit (monochrome) CCDs. It might sound like overkill considering the eye is only good for eight bits, but there is often a heck of a lot of processing after image capture. Brightness/contrast is almost invariably tweaked, with rounding errors as a result. More advanced techniques can combine hundreds or even thousands of images - rounding errors can easily accumulate in such techniques.

This can apply to many other areas where a lot of processing is performed - it is better to start off with much more than you need, so that you have enough usable bits left after all the manipulations have been done.
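
A sketch of the usual remedy in C (names are mine): accumulate in a wider type and round exactly once at the end, instead of once per operation:

    #include <stddef.h>
    #include <stdint.h>

    /* Average 'nframes' 16-bit frames of 'npix' pixels each.  A 32-bit
       accumulator holds up to 65536 full-scale 16-bit samples, so the
       only rounding error is the single divide at the end. */
    void stack_frames(const uint16_t *frames, size_t nframes,
                      size_t npix, uint16_t *out)
    {
        for (size_t p = 0; p < npix; p++) {
            uint32_t sum = 0;
            for (size_t f = 0; f < nframes; f++)
                sum += frames[f * npix + p];
            out[p] = (uint16_t)((sum + nframes / 2) / nframes);  /* round once */
        }
    }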

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

This assumes that all 65536 possible values occur with equal likelihood. However, if the signal has an exponential distribution, the interesting values are at one end of the scale.

In the case of audio, the interesting part is the values in the mid-range. For instance, to generate the telephone u-law/A-law signal with 8-bit message words, a 12-bit linear ADC should be used.
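
To illustrate why the linear word must be longer than the 8-bit companded one, here is a toy sign/exponent/mantissa compander in C. It captures the u-law/A-law idea - fine steps near zero, coarse steps at full amplitude - but not the exact G.711 bit layout:

    #include <stdint.h>

    /* Pack a signed 12-bit linear sample (sign + 11-bit magnitude)
       into 8 bits: sign(1) + exponent(3) + mantissa(4). */
    uint8_t compand(int16_t lin)               /* lin in -2048..2047 */
    {
        uint8_t sign = (lin < 0) ? 0x80 : 0x00;
        uint16_t mag = (uint16_t)(lin < 0 ? -lin : lin);
        if (mag > 2047)
            mag = 2047;                        /* clamp -2048 */
        uint8_t e = 0;
        while ((mag >> e) > 0x0F)              /* until the mantissa fits in 4 bits */
            e++;
        return (uint8_t)(sign | (e << 4) | ((mag >> e) & 0x0F));
    }

Near zero the step size is one linear LSB; at full amplitude it is 2^7 = 128 LSBs, so the small and mid-range values are where the 8-bit word spends its precision.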

You have to know something about the signal distribution in order to decide whether 16 (linear) bits are enough or not.

Paul

Reply to
Paul Keinanen

Yeah, we use 12-bit A/Ds all the time with 1% (7-bit) sensors, so we don't have to use any offset/gain stages.

Reply to
steve

There you go: digital calibration. Calculating the derivative (or even, sometimes, the 2nd derivative) of the PV is difficult when you have enormous quantization steps, far above the noise level of an analog system. So you need high resolution, not necessarily extreme accuracy.
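
A quick demonstration with throwaway numbers of mine: differentiate a slow, smooth ramp that a coarse ADC only sees as occasional one-LSB steps, and the computed derivative staircases badly:

    #include <stdio.h>

    int main(void)
    {
        double dt = 0.1;        /* sample period, seconds */
        double slope = 2.0;     /* true slope: 2 LSBs per second */
        int prev = 0;
        for (int n = 1; n <= 10; n++) {
            int q = (int)(slope * n * dt + 0.5);   /* quantized reading */
            printf("%g\n", (q - prev) / dt);       /* prints 0 or 10, never 2 */
            prev = q;
        }
        return 0;
    }

With finer resolution (or noise above one LSB plus filtering), the steps shrink and the computed derivative approaches the true 2 LSB/s.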

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Reply to
Spehro Pefhany

Vladimir, apparently you have had little contact with sensing. It is a dream come true to be able to measure an NTC, a platinum RTD or a thermocouple with the same ADC, thus offering the customer true flexibility in choosing sensors. Or to use the same design for multiple applications.

Meaning, I'm really fond of the 20-bit and up converters. And yes, my stuff is connected to an AVR; no need for something bigger.

Rene

Reply to
Rene Tschaggelar

As long as your captured image lies within the dynamic range of an 8-bit converter, it is of course ridiculous to use a 16-bit converter just to give you the dynamic range for calculations.

Meindert

Reply to
Meindert Sprang

In news:snipped-for-privacy@sdf.lonestar.org, timestamped Wed, 6 Jun 2007 20:34:07 +0000 (UTC), Andrew Smallshaw posted: "[..] Often the requirement is simply for resolution, absolute accuracy isn't too important. This is the case for audio for instance. [..]"

Hello,

I do not understand the distinction. I agree that absolute accuracy is not always important, and that the ten most significant bits of a low quality 16-bit analog to digital converter might not be as faithful as a high quality ten-bit analog to digital converter. I also agree that the least significant bits of an analog to digital converter are less likely to be as faithful as the most significant bits. But I do not believe that a sixteen-bit ADC is equivalent to nothing more than a ten-bit ADC whose output is left-shifted by six bits; that would result in a datatype which has a resolution of 16 bits but clearly no more accuracy than a reading of ten bits. I believe Andrew Smallshaw was talking about something else, but I do not understand what. Would you care to explain?

"[..] the eye is only good for eight bits [..]"

I do not know what the limit is, but I believe that it is significantly above sixteen bits and below 33 bits. I believe that much true color graphical work is done at 24 bits.

Regards, Colin Paul Gloster

Reply to
Colin Paul Gloster

I don't want to sound like a net-nanny (we have others in this group), but would you *please* learn to use your newsreader and drop that absurd quotation style?

Andrew was referring to monochrome resolution, and yes, 8 bits is a reasonable guess for the eye. The dynamic range of the eye, however, is very much larger - a monochrome CCD with no options to control the shutter speed or aperture would need close to 32 bits to get the full range an eye can work with. So a 16-bit CCD sounds like a reasonable compromise.
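
For a rough feel of those figures (my arithmetic, assuming the commonly quoted adaptive range of the eye of around 10^9:1):

    log2(10^9) ~= 30 bits   (full adaptive range, hence "close to 32 bits")
    log2(256)   =  8 bits   (levels distinguishable at one adaptation state)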

The reason high-end graphics work is done using more than 8-bit resolution is to have overhead for working with the picture without losing accuracy, and so that it can be displayed (on-screen or in print) in higher resolution, letting the viewer see full contrast no matter which part of the picture he concentrates on at the time.

Reply to
David Brown

What I was referring to was that in many circumstances, the absolute value of whatever is being measured isn't particularly important compared to the ability to finely distinguish small changes in the input. Since I used audio as an example I'll continue with it - you can get away with many subtle distortions that won't be particularly noticeable. For instance, your analog input stage may be slightly frequency dependent, with the result that low frequencies are reproduced too loudly. That isn't too noticeable. What is important is that the waveforms have more or less the right shape. That means that the gap between distinct levels must be small.

I was talking there strictly about monochrome image data, and did say as much in my original post, although I could have been more explicit about it. You're right that 24-bit colour is generally accepted as 'true colour' (although that is simplifying things slightly). That's eight bits each for red, green, and blue. If you're talking about monochrome, obviously you only need eight bits in total for black through to white. 32-bit colour is actually quite rare: what is usually meant is 24-bit colour in a 32-bit format, because many computers make it easier to deal with 32 bits than 24.
--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

A simple example. Let's say I have an image of something or other and the background sky is not true black due to the effects of skyglow caused by street lighting. I decide to improve my image by removing that and making the sky black by adjusting the contrast so that pixels below a certain value are scaled to make them darker.

Values above that threshold must now be scaled to fill in the gap in the scale. That could mean that a difference of one in the input becomes a difference of two or three in the output - effectively we have lost some bits in the processing. If we had an eight-bit sensor, we could now see the individual levels that were detected. If we have a 16-bit type, the steps are still too small to be noticed.
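
A sketch of that stretch in C (names mine), which makes the lost levels explicit:

    #include <stdint.h>

    /* Crush pixels at or below 'thresh' to black and rescale the rest
       to fill 0..maxval.  The rescale turns an input step of 1 into an
       output step of about maxval / (maxval - thresh), so coarse input
       data shows visible banding afterwards. */
    uint16_t stretch(uint16_t pix, uint16_t thresh, uint16_t maxval)
    {
        if (pix <= thresh)
            return 0;
        return (uint16_t)((uint32_t)(pix - thresh) * maxval / (maxval - thresh));
    }

With 8-bit data and thresh = 64, adjacent input codes land about 1.3 output levels apart, so some output values can never occur; with 16-bit data the same stretch leaves the gaps far below one displayed level.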

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

I do analog stuff at 16 bits all the time, where 12 or even 10 bits would have matched the sensor. It helps sell the product (which is more important than saving a buck).

Reply to
Hershel

Eight bits is a bit dubious for images, particularly if there is much manipulation to be done (for example, you'll lose detail in the highlights or dark areas that cannot be brought back again). My digiSLR allows 12 bits/color (36 bits per pixel), which is significantly better. When all the manipulation is done, it can be converted to 8 bit (24 bits/pixel) with little or no visible loss of quality.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Reply to
Spehro Pefhany

Yes, we use faster A/Ds and more memory than needed for the same reason - well, sometimes anyway.

Some products sell because they are the cheapest and barely do what needs to be done; other products sell because they are way overdesigned - the customer wants way more than he actually needs, for whatever reason.

I wish I knew how to reliably distinguish the two potential customer demands in the design phase, but it's not very predictable.

Reply to
steve

Wait a minute, you already have an image, you say. That can of course be processed in 16 bits for better results. But that has nothing to do with the dynamic range of the original video signal captured from the sensor. If that signal has an S/N ratio of less than 8 bits, it brings you zip when sampled with a 16-bit converter. But you are free to extend the word size of the already digitized image in order to give you more room for calculated results.
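
In C the distinction is simply where the widening happens - a hypothetical one-liner:

    #include <stdint.h>

    /* Widen an already-digitized 8-bit pixel to 16 bits.  This buys
       headroom for intermediate arithmetic, but the extra bits carry
       no information the sensor never delivered. */
    static inline uint16_t widen(uint8_t pix8)
    {
        return (uint16_t)pix8 << 8;
    }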

Meindert

Reply to
Meindert Sprang

You appear to have rather unusual customers. For most of us out here, saving a buck is *way* more important than giving marketing a meaningless bullet point to brag about. FWIW, you could be put out of business by a copycat who saves that buck, and then *pretends* to have a 16-bit ADC in there --- nobody could tell the difference anyway.
Reply to
Hans-Bernhard Bröker

Not everybody here is designing consumer products.

It's more of an industry preference than a customer preference. I've got a small number of competitors in a specialized area of industrial control. When I visit a customer, I generally know which of my competitors were there the week before, and (in this case) the resolution of their ADC or whatever.

The math is really simple. If you spend an extra buck on a device that sells for $2K with a 70% profit margin, and you sell 5% more, then you make more money.
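
Running those numbers as given: each unit nets $2000 x 0.70 = $1400 of margin; the extra dollar drops that to $1399, and 5% more volume gives 1.05 x $1399 = $1468.95 per original unit of sales - comfortably ahead of $1400.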

Reply to
Hershel

You haven't said what type of sensor and whether its output is ratiometric to the power supply. If it is not ratiometric, then your ADC needs a precision reference and that will bump up the ADC cost significantly.

Mark Borgerson

Reply to
Mark Borgerson

There are a lot of oceanographic variables that need something near 16-bit resolution. When I was working with sensors to measure the optical properties of seawater, our minimum standard was one part in 10,000 for sensitivity and noise levels.
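
One part in 10,000 is log2(10000) ~= 13.3 bits, so a 16-bit converter is roughly the cheapest standard width that leaves margin for noise, drift and calibration on top of that.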

Mark Borgerson

Reply to
Mark Borgerson
