what is 'data whitening'?

I was reading a protocol description today (Bluetooth IIM) and a line said "In order to reduce highly redundant data and minimise DC bias, a data whitening scheme is used to randomize the data. The data is scrambled by the sender and unscrambled by the receiver"

OK, I can see that there is some security being added here, but what about the "removal of redundant data and DC bias" (which I assume means unequal numbers of 0s and 1s)? How can this work for the small data packets being sent? Surely if random data goes in, random data is going to come out, and the resulting size and number of 0s and 1s can't be controlled in any usable way (in much the same way as lossless compression can't guarantee a smaller file).

Tim

Reply to
tim

The purpose is to "whiten" the transmit spectrum and to generate a specific response at the baseband so it is easy to decode. It is mostly implemented with feedback shift registers that generate pseudo-random bit strings. As digital logic cannot generate truly random values, you always have the opportunity to decode it at the receiving end. The file length will be the same, but the synchronization delay time will be increased (that's the drawback), and the required S/N will be slightly increased (if it is implemented correctly). The rest is math... Regards - Henry
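As an illustration of the feedback-shift-register idea Henry describes (not code from his post or from the spec), here is a minimal C sketch of a 7-bit LFSR whitener. It assumes the polynomial x^7 + x^4 + 1, which is the one the Bluetooth baseband uses for whitening, but with an arbitrary non-zero seed rather than the clock-derived seed and exact bit ordering the spec defines. Because whitening is just an XOR with a pseudo-random bit sequence, the same routine scrambles at the sender and unscrambles at the receiver.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One step of a 7-bit Fibonacci LFSR with taps for x^7 + x^4 + 1.
 * Polynomial stage k is stored in bit (k-1) of *lfsr. */
static uint8_t lfsr_next_bit(uint8_t *lfsr)
{
    uint8_t out = (*lfsr >> 6) & 1;            /* stage 7 is the output bit  */
    uint8_t fb  = out ^ ((*lfsr >> 3) & 1);    /* feedback = stage 7 XOR 4   */
    *lfsr = (uint8_t)(((*lfsr << 1) | fb) & 0x7F);
    return out;
}

/* XOR every data bit with the LFSR output. The same call whitens and
 * de-whitens, because XORing the identical sequence twice cancels out. */
static void whiten(uint8_t *buf, size_t len, uint8_t seed)
{
    uint8_t lfsr = seed & 0x7F;
    if (lfsr == 0)
        lfsr = 0x01;                           /* all-zero state would stick */

    for (size_t i = 0; i < len; i++)
        for (int b = 0; b < 8; b++)
            buf[i] ^= (uint8_t)(lfsr_next_bit(&lfsr) << b);
}

int main(void)
{
    uint8_t pkt[16] = { 0 };                   /* a "highly redundant" payload */
    int ones = 0;

    whiten(pkt, sizeof pkt, 0x53);             /* scramble at the sender       */
    for (size_t i = 0; i < sizeof pkt; i++)
        for (int b = 0; b < 8; b++)
            ones += (pkt[i] >> b) & 1;
    printf("ones after whitening an all-zero packet: %d of %d\n",
           ones, (int)(sizeof pkt * 8));

    whiten(pkt, sizeof pkt, 0x53);             /* unscramble at the receiver   */

    uint8_t zeros[16] = { 0 };
    printf("restored to all zeros: %s\n",
           memcmp(pkt, zeros, sizeof pkt) == 0 ? "yes" : "no");
    return 0;
}

The all-zero packet comes out roughly half ones and half zeros, and a second pass with the same seed restores it bit for bit.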

tim wrote in message ...

Reply to
Henry

A transmission system is usually optimized for a certain frequency range, and the transmitting electronics may be adversely affected (not operating in their optimal range) if the average signal input is not centered at a certain value. Since the transmission devices have no control over the data being transferred, conceivably a data stream could generate a modulation that has too low a frequency and does not pass as easily through the demodulation electronics. An average voltage shift could also develop, which might interfere with the signal depending on the encoding scheme.

By "randomizing" the data with a predictable set of values and then stripping the injected data at the receiving end, the signal values will be much more likely to have a normalized statistical grouping, which is closer to the center of the frequency range the transmission system is optimized for. So the main advantage is keeping the signal levels optimized for passage through the electronics, and stabilizing the location of the signal average. Depending on the scheme of signal encoding, it may or may not have a significant effect.

A very similar technique is useful in analog-to-digital applications: a digital value is sent to a DAC, which generates a known analog value that is added to the input of the ADC. The injected value is then subtracted from the converted result. This allows the visible effects of ADC non-linearity to be minimized by distributing them statistically among the samples.
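To make that loop concrete, here is a toy numerical sketch (not Garrett's actual implementation): an imaginary ADC whose every 8th code reads one count high stands in for real non-linearity, and a known pseudo-random offset is added before conversion and subtracted afterwards, so the systematic error is spread across many codes instead of repeating on every sample.

#include <stdio.h>
#include <stdlib.h>

/* Toy ADC model: ideal rounding quantizer, except every 8th code reads
 * one count high -- an invented stand-in for code-dependent non-linearity. */
static int adc_convert(double input_in_lsb)
{
    int code = (int)(input_in_lsb + 0.5);
    if (code % 8 == 0)
        code += 1;                               /* the "bad" codes */
    return code;
}

int main(void)
{
    const double input = 240.0;   /* DC input, in LSB units, hits a bad code */
    const int N = 10000;
    double plain = 0.0, dithered = 0.0;

    srand(1);
    for (int i = 0; i < N; i++) {
        /* Without the injected offset the same bad code repeats forever. */
        plain += adc_convert(input);

        /* Sliding scale: add a known pseudo-random offset before the
         * conversion (the DAC's job), subtract it from the result
         * afterwards, so many different codes are exercised. */
        int offset = rand() % 64;
        dithered += adc_convert(input + offset) - offset;
    }

    printf("true value                  : %.3f LSB\n", input);
    printf("mean without dither         : %.3f LSB\n", plain / N);
    printf("mean with subtractive dither: %.3f LSB\n", dithered / N);
    return 0;
}

Without the offset, the same wrong code is hit every sample and the full 1 LSB error never averages out; with it, the residual error shrinks to the fraction of codes that are actually bad.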

I'm not the expert on this stuff, so any flames and corrections are welcome.

Reply to
Garrett Mace

Garrett, you've got the right idea - the whitening is to get above the pink noise level (the frequency-dependent portion).

I've never had to use the A/D technique you describe with good ADCs, but we do dither (inject a small signal above Nyquist on the input signal) to get a better idea of the LSB. Got any refs on your technique, MATLAB models? Sounds good.
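A toy illustration of the dither-and-average idea (not Andrew's models; the above-Nyquist placement, which just lets the dither be filtered out digitally afterwards, is ignored and the numbers are invented):

#include <stdio.h>
#include <stdlib.h>

/* Ideal quantizer: rounds the input to the nearest LSB. */
static int quantize(double x)
{
    return (int)(x + 0.5);
}

int main(void)
{
    const double input = 10.3;    /* DC value sitting between two codes */
    const int N = 100000;
    double no_dither = 0.0, with_dither = 0.0;

    srand(1);
    for (int i = 0; i < N; i++) {
        no_dither += quantize(input);          /* always 10; the 0.3 is lost */

        /* Add about 1 LSB of uniform dither before quantizing, then
         * average: the result toggles between 10 and 11 in proportion
         * to the fractional part, so the mean recovers it. */
        double d = ((double)rand() / RAND_MAX) - 0.5;
        with_dither += quantize(input + d);
    }

    printf("true value        : %.3f LSB\n", input);
    printf("mean, no dither   : %.3f LSB\n", no_dither / N);
    printf("mean, with dither : %.3f LSB\n", with_dither / N);
    return 0;
}

The undithered quantizer returns 10 every time, while the dithered average converges on 10.3.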

Andrew

Garrett Mace wrote:

Reply to
Andrew Paule

Hello Tim,

The main purpose of this scrambling is to remove the long strings of ones or zeroes which may exist in the raw input data. Alternation of symbols is required for normal operation of the bit synchronization in the receiver, so randomizing the data is commonly used inside modem protocols. As for data reduction, they may compress the long strings of repetitive symbols with the simplest RLE algorithm. I am not sure whether they do it in BT, however it is a rather common solution as well.
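For anyone unfamiliar with the term, a minimal sketch of the simplest RLE scheme mentioned above, emitting (count, value) pairs; this is purely an illustration, not something the Bluetooth spec is claimed to do.

#include <stdint.h>
#include <stdio.h>

/* Simplest run-length encoding: emit (count, value) pairs. The output
 * buffer must hold up to 2*len bytes in the worst case. Returns the
 * encoded length. */
static size_t rle_encode(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        uint8_t value = in[i];
        size_t run = 1;
        while (i + run < len && in[i + run] == value && run < 255)
            run++;
        out[o++] = (uint8_t)run;
        out[o++] = value;
        i += run;
    }
    return o;
}

int main(void)
{
    /* A made-up "configuration" payload full of repeated bytes. */
    const uint8_t cfg[12] = { 0, 0, 0, 0, 0, 0, 0, 0, 0x5A, 0x5A, 0x01, 0x01 };
    uint8_t packed[24];
    size_t n = rle_encode(cfg, sizeof cfg, packed);

    printf("%d bytes -> %d bytes:", (int)sizeof cfg, (int)n);
    for (size_t i = 0; i < n; i++)
        printf(" %02X", packed[i]);
    printf("\n");
    return 0;
}

The 12-byte buffer of repeated values packs down to 6 bytes: 08 00 02 5A 02 01.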

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Andrew,

Not at the moment; most of the work I did was under a non-disclosure agreement for a proposal on a job that didn't end up going to me. The application was very sensitive to ADC nonlinearities. The concept itself is not too uncommon - it's basically the sliding scale method, which you should be able to find information on.

Reply to
Garrett Mace

Was this a custom ADC? I've used many Analog Devices ADCs (even for PMU-type work, measuring currents and voltages for devices under development at many silicon sites), and one of the things that most of the world seems to require is guaranteed linearity - does this get past the +/- 1/2 LSB thing the way dithering does? I'm going to take a look tomorrow; have to get the kid in bed. This might be a cool idea.

Andrew

Garrett Mace wrote:

Reply to
Andrew Paule

I told the client that current ADCs were already doing a pretty good job of maintaining linearity, but they seemed convinced that this technique was necessary and that they didn't get good samples without it. Then again, their current design was already a few years old, and the signal processing reference they were quoting was probably older than that. However, as I mentioned, I didn't get the job, so I didn't get to see exactly how much it affected the samples. I don't think they've gotten that project underway yet.

Reply to
Garrett Mace

Thanks to all who replied. Having acquired a copy of the full spec, it seems that the data whitening algorithm is intended to randomise the payload when it contains configuration data which is "highly redundant" (i.e. contains lots of don't-cares which will inevitably be zeros), and that it is turned off when the payload is 'voice' data (and hence more random).

Reply to
tim
