How can digital be more spectrum efficient than analog?

Raises hand.

Okay, you're officially troll material in my book now. Whenever rational argument appears, you resort to naysaying and character assassination rather than any substantive counterargument.

According to who? You? This is an unmoderated newsgroup, so you have no authority to tell anyone what they can or can't post here.

Eric Jacobsen Minister of Algorithms, Intel Corp. My opinions may not be Intel's opinions.


Reply to
Eric Jacobsen

Actually, it becomes quite intuitive when you get used to seeing BER vs SNR (or Eb/No) plots or bandwidth-efficiency planes comparing codes or modulations or whatever, and capacity appears on the same plot for reference. In those cases as SNR increases to the right, capacity will always (naturally) be the left-most curve. So only the area "above" capacity in an SNR sense is of interest.

The bandwidth-efficiency plane in Sklar's "Digital Communications", Fig. 7.6 (p.394), or, better yet, Figs. 7.2-7.4 all show various ways of expressing capacity curves (yes, all the same capacity) vs SNR or Eb/No, as I've been trying to explain. In all cases the "attainable" region, or region of interest is to the right of the curve in the "higher" SNR region.

So operating "above" the capacity curve in an SNR sense is pretty intuitive when you see it that way.
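To make that picture concrete, the capacity boundary on a bandwidth-efficiency plane can be computed directly from C/B = log2(1 + (Eb/No)·(C/B)): solving for Eb/No gives the minimum Eb/No at each spectral efficiency. A minimal sketch (the function name and sample values are my own, not from the thread):

```python
import math

def min_ebno_db(eta):
    """Minimum Eb/No (dB) at which capacity equals the spectral
    efficiency eta (bits/s/Hz), from C/B = log2(1 + (Eb/No)*(C/B)),
    i.e. Eb/No = (2**eta - 1) / eta."""
    ebno = (2.0 ** eta - 1.0) / eta
    return 10.0 * math.log10(ebno)

# The boundary rises with eta, so the attainable region lies to the
# right of the capacity curve, at higher Eb/No:
for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta:>3} b/s/Hz -> Eb/No >= {min_ebno_db(eta):6.2f} dB")
```

At eta = 1 b/s/Hz the boundary sits at exactly 0 dB, and as eta shrinks toward zero it approaches the ultimate Shannon limit of about -1.59 dB.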

Proakis' "Digital Communications" has some similar plots, with a bandwidth-efficiency plane including capacity in Fig. 5.2-17 (p. 282 in 4th ed.), and a curve comparing capacity and cutoff rate in Fig. 7.2-6 on p. 401.


Reply to
Eric Jacobsen

...

It seems that "capacity" is used here to mean "SNR threshold needed to achieve channel capacity", and while that may be well understood by workers in the field, it seems decidedly odd to most of the rest of us. Mind you now, that doesn't make it wrong, just possibly dangerous. For similar reasons, liquid-fuel trucks are labeled with the nonce word "flammable" instead of the established word "inflammable", which has the same meaning.

--
"I view the progress of science as ... the slow erosion of the
  tendency to dichotomize."                    Barbara Smuts, U. Mich.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

Right, and right. Shannon's formula doesn't know whether transmitted bits are FEC bits or information bits. It gives the theoretical maximum rate at which one can transmit error-free *bits*, not *information*.
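For reference, Shannon's AWGN capacity formula is C = B·log2(1 + S/N). A quick sketch (the 3 kHz / 30 dB example is illustrative, not from the thread):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel: C = B * log2(1 + S/N), bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# e.g. a 3 kHz channel at 30 dB SNR (linear SNR = 1000):
c = shannon_capacity(3000.0, 1000.0)
print(f"{c:.0f} bit/s")  # roughly 29.9 kbit/s
```

Note the formula says nothing about how the bits are produced, which is exactly the point being argued here.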

FEC is useful when the cost of re-transmission is higher than the cost of the bandwidth wasted transmitting the FEC bits, and in systems where a request for re-transmission is not an option. FEC bits (and bits resulting from re-transmissions) are redundant and must be excluded from the Shannon bit rate when estimating the information-carrying capacity of a channel.

Why use FEC? Why not just keep the bit rate below the theoretical maximum rate? Well, one reason is that channel SNR is usually a dynamic and unpredictable thing. If you hobble the channel to operate under worst-case conditions, you are wasting resources when conditions are better.

But you already knew that ;-)

Reply to
John E. Hadstate

Actually, this wasn't expressed very well. FEC is useful when the *risk* of re-transmission is higher than the cost of the bandwidth wasted transmitting the FEC bits...

This isn't quite right either. The resources are being wasted by the transmission of FEC bits when the channel SNR is high enough to support error-free communications. However, since we often don't know, or can't respond, when SNR drops below acceptable limits, we pay the additional cost of FEC during good times to avoid shutting down the channel altogether when conditions are poor.

Reply to
John E. Hadstate

If FEC is intelligently applied, as a part of channel coding, it doesn't consume useful throughput. It actually increases it. This is key to approaching the Shannon channel capacity - at least it's the only way we've succeeded in approaching it so far.

Try Googling for trellis coding, which is one of the simpler and more mature forms of channel coding. You might be surprised to find that something like upping from 16 QAM to 64 QAM, and using some of the extra bits for a well crafted form of FEC, can result in greater useful data throughput in a channel with the same noise level and maximum permitted transmission power.
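The arithmetic behind that trade is simple: useful throughput is constellation bits per symbol times code rate. A toy comparison (the rate-5/6 figure is chosen for illustration; whether the coded scheme actually wins at a given noise level depends on the code's gain):

```python
def info_bits_per_symbol(constellation_bits, code_rate):
    """Useful (information) bits per symbol = raw bits/symbol * code rate."""
    return constellation_bits * code_rate

uncoded_16qam = info_bits_per_symbol(4, 1.0)      # 16-QAM, no coding
coded_64qam   = info_bits_per_symbol(6, 5.0 / 6)  # 64-QAM, rate-5/6 FEC
print(uncoded_16qam, coded_64qam)
```

The coded 64-QAM scheme carries more information per symbol in the same bandwidth, and the redundancy additionally buys coding gain against noise - which is the counter-intuitive part of trellis-coded modulation.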

Regards, Steve

Reply to
Steve Underwood

That's not quite correct. Shannon's formula deals with "information" transmission so the FEC redundancy bits _don't_ count. This is fairly intuitive if you consider that if you can get even the parity bits through without error then you don't need them in the first place.

The beauty of Shannon's result was that it became clear that there can be a difference between the raw bit rate and the "information rate" in a channel. The difference is the code rate, and it is then up to someone to come up with a code strong enough to bridge the gap between uncoded transmission and capacity at a given rate. It only took fifty years to figure it out, and research is still continuing on capacity-approaching codes to reduce their complexity.
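A back-of-the-envelope sketch of the raw-rate vs information-rate distinction, where R = k/n is the code rate (the numbers are illustrative):

```python
def information_rate(raw_bit_rate, k, n):
    """Information rate through a channel using an (n, k) code:
    the raw bit rate scaled by the code rate R = k/n."""
    return raw_bit_rate * k / n

# e.g. a hypothetical 1 Mbit/s raw channel with a rate-1/2 code
# delivers 500 kbit/s of information:
print(information_rate(1_000_000, 1, 2))  # 500000.0
```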


Reply to
Eric Jacobsen

In order to approach the maximum capacity, you have to employ FEC codes with many states, i.e., big blocks. This implies a processing delay which is highly undesirable in many applications. It seems like this delay rises to infinity as the rate approaches the Shannon limit.

Is there a theoretical bound which defines the minimum processing delay in the channel vs how close the system is to the Shannon limit?

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Let's see - just a quick perusal of this thread alone shows that on the one hand we've got...well, you, and you alone, and on the other we've got pretty much everyone else (which includes some contributors of rather obvious credibility). So at this point, I don't see a whole lot of reason to continue this particular exchange with the "you alone" part of that. Have a great rest-of-your-whatever, OK?

Bob M.

Reply to
Bob Myers

No. "Slim (or slender) chance" is straightforward, while "fat chance" is sarcastic. How about "Yeah, sure" and "You gotta be kidding"?

Just a bit of misunderstanding.

Jerry

--
        "The rights of the best of men are secured only as the
        rights of the vilest and most abhorrent are protected."
            - Chief Justice Charles Evans Hughes, 1927
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

And has it ever bothered the rest of y'all that "slim chance" and "fat chance" mean EXACTLY THE SAME THING?

Must be some conspiracy, I tell ya...

Bob M.

Reply to
Bob Myers

Context is everything, so let's not mistake what we are talking about:

That is your definition, and it does not match. All of the various standards organizations which contributed to FS-1037C and MIL-STD-188 are not wrong, *you* *are*.

If it were ONLY me saying this, you'd have a point.

So you once again want to cite yourself as the authority. Your logic is hilarious.

You are right that there is little point in continuing this discussion!

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com
Reply to
Floyd L. Davidson

Vladimir,

It's not as bad as you think.

Practical codes with block lengths in the 10k bit range can get pretty close (within 0.5dB or so) to capacity. There was a pretty famous simulation with a code in the million-bit-length region (not practical beyond simulation at this point, but practical in that sense) that got within 0.0045dB of capacity.

About ten years ago I architected one of the early Turbo Code implementations (perhaps the first) that ran in the megabits/sec throughput range. It was demonstrated in some satellite modems but never marketed for a number of reasons not necessarily related to the technology or performance. In that particular case the full modem was operating within about 1dB of capacity at the higher code rates (e.g., R = 3/4, R = 7/8), which included all of the implementation losses, including the rf, etc. At the lower code rates (e.g., R = 1/2), the SNR was so low that for BPSK/QPSK the implementation loss got larger due to the synchronization loops having difficulty maintaining lock in that much noise.

Capacity-approaching codes in modern standards (other than DVB-S2) tend to have maximum block sizes in the 1k-5k bit regions. There's a bit of a natural knee effect (essentially diminishing return) of capacity vs coding gain above about 2k-4k bit block length. In other words, it starts taking lots more complexity to get incrementally more gain somewhere around that block length for a lot of practical codes. As gates get cheaper in the future it'll be easier to use longer blocks to get that last bit of gain economically.

And to your point about:

It's not really a "bound", and it doesn't address delay (which is implementation and system dependent to a degree), but you might look at "cutoff rate" as an additional theoretical metric.


Reply to
Eric Jacobsen

And "I couldn't care less" and "I could care less" tend to mean the same thing as well.

There are lots of similar examples.


Reply to
Eric Jacobsen

Eric, what do you mean by "implementation loss?"

By the way, cool post/good stuff.

--
% Randy Yates                  % "Maybe one day I'll feel her cold embrace,
%% Fuquay-Varina, NC           %  and kiss her interface,
%%% 919-577-9882               %  til then, I'll leave her alone."
%%%%                           % 'Yours Truly, 2095', *Time*, ELO


Reply to
Randy Yates

Implementation loss = performance loss due to implementation-related impairments like finite registers, quantization, oscillator phase noise, distortion due to amplifiers and filters, etc., etc. On single-carrier modems it's pretty easy to measure uncoded performance against the matched filter bound. The difference is the implementation loss.

We were able to get implementation loss in the digital baseband processing to be pretty much unmeasurable, but the rf sections/channel would still cause about 0.3dB-0.5dB degradation. i.e., we could run a BER vs Eb/No curve uncoded and be within about 0.3dB of the theoretical matched-filter-bound performance.
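Implementation loss can be estimated numerically as the measured Eb/No needed for a target BER minus the theoretical Eb/No at that BER. A hedged sketch for uncoded BPSK over AWGN, where the theoretical BER is Q(sqrt(2·Eb/No)) = 0.5·erfc(sqrt(Eb/No)); the 10.0 dB "measured" figure below is made up for illustration:

```python
import math

def bpsk_ber(ebno_db):
    """Theoretical uncoded BPSK BER over AWGN: 0.5 * erfc(sqrt(Eb/No))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

def implementation_loss_db(measured_ebno_db, target_ber):
    """Measured Eb/No needed for target_ber minus the theoretical
    Eb/No for that BER (found by bisection; BER falls as Eb/No rises)."""
    lo, hi = -10.0, 30.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if bpsk_ber(mid) > target_ber:
            lo = mid
        else:
            hi = mid
    return measured_ebno_db - (lo + hi) / 2.0

# Theory puts BER = 1e-5 at roughly 9.6 dB, so a modem that needed
# 10.0 dB would show about 0.4 dB of implementation loss:
print(f"{implementation_loss_db(10.0, 1e-5):.2f} dB")
```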

With the Turbo Code we knew our implementation loss from uncoded testing, and we knew where capacity is, so at the higher rates we could tell reasonably well how much of the distance from capacity was due to implementation loss and how much was due to the code not quite reaching capacity (as seen in simulations of the code). The results were consistent except at the low code rates and very low SNR, where degradation just got worse as SNR was decreased. We could tell by monitoring the loop control signals that both the symbol clock recovery loop and the carrier phase loop were having trouble tracking at those extremely low SNRs. We tweaked the carrier loop a little bit but the symbol loop was in silicon and we had less control of how much we could change.


Reply to
Eric Jacobsen

At least in current usage, but the latter one still grates on my nerves every time I hear it. Of course, we could be here all month just covering the sins of Marketingspeak, Managementspeak, and other dialects common in the lower life-forms...:-)

Bob M.

Reply to
Bob Myers

When voice codecs run in the 4 to 8kbps range, the delay for a 5k bit block might be pretty awful - assuming the block is transmitted smoothly through time, rather than in a quick burst as part of a muxed stream. Fine for data, but limiting for interactive voice.
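The buffering-delay arithmetic is straightforward (numbers illustrative):

```python
def block_latency_ms(block_bits, bit_rate_bps):
    """Time (ms) to fill one code block when bits arrive at bit_rate_bps,
    assuming the block is streamed smoothly rather than sent in a burst."""
    return 1000.0 * block_bits / bit_rate_bps

# A 5000-bit block at an 8 kb/s vocoder rate:
print(block_latency_ms(5000, 8000))  # 625.0 ms just to accumulate the block
```

Well over half a second of one-way delay before decoding even starts, which is why such block sizes are a non-starter for interactive voice.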

For most practical purposes the block size is irrelevant. It's the algorithmic delay which can be a killer. Depending on the nature of the channel the two might be directly or only vaguely related.

I think Vladimir has a really interesting question, though. I wonder if some theoretical bounding framework is possible for the tradeoffs involved in this area.

Regards, Steve

Reply to
Steve Underwood

Yes, for low bit-rate channels it's difficult since the latency gets ugly, and for voice traffic latency is important. For small packets (i.e., 40-60 bytes or smaller) it's difficult as well because there isn't enough bit diversity to make capacity-approaching codes work well enough to be worthwhile.

The block size makes a big difference for both latency (in some applications) and coding gain. Capacity-approaching codes get their gain by iteratively processing redundancy across all of the bits in the block, so block size matters in the amount of available coding gain that can be achieved. Asymptotic performance for most block codes is achieved as the block size goes to infinity.

The problem is that there is a lot of system dependence and decoding algorithm dependence on the results. Coding gain for iterative codes is not only dependent on the block size, but on the decoding algorithm used and the number of iterations performed in the decoding algorithm. How long it takes, which is relevant to the latency question, depends completely on the architecture and system clock rate of the decoder. A highly complex decoder for an LDPC, for example, could theoretically take only a few clock cycles per iteration (ideally just one or two), so it would take only the number of iterations used times the clock period times one or two to complete. This would essentially rule out latency as an issue for any system with a reasonably high transport bit rate.
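That decoder-latency estimate can be sketched as iterations × cycles per iteration × clock period (all parameter values below are hypothetical, for illustration only):

```python
def decode_time_us(iterations, cycles_per_iteration, clock_hz):
    """Rough iterative-decoder latency (microseconds):
    iterations * cycles per iteration * clock period."""
    return 1e6 * iterations * cycles_per_iteration / clock_hz

# e.g. 20 iterations at 2 cycles each on a 200 MHz clock:
print(decode_time_us(20, 2, 200e6))  # 0.2 microseconds
```

With numbers like these the decode time is negligible next to the block-buffering delay, which is the point about latency not being an issue for such an (impractically parallel) decoder.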

But that's not a practical decoder for the foreseeable future. So how does one account for that in determining a theoretical bound?

Tough to do.


Reply to
Eric Jacobsen

Don Bowey wrote: (snip)

I meant it more in the sense that a non-baseband system must have been modulated. A baseband signal may or may not have been.

I would probably only claim NRZ as not modulated, and that won't survive a transformer (unless you are very lucky).

-- glen

Reply to
glen herrmannsfeldt
