How can digital be more spectrum efficient than analog?

Capacity (a data rate) is never defined as an SNR or Eb/No, and capacity *is* an upper limit, *below* which it is possible to have essentially error free transmission.

By definition capacity, measured in binary digits, is

W log2( (P+N)/N )

bits per second at some arbitrarily small error rate, which some sufficiently complex encoding system would be capable of accomplishing.
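For anyone who wants to put numbers to it, here is a minimal Python sketch of that formula (the 3.1 kHz bandwidth and 30 dB SNR below are illustrative values, not taken from any particular system in this thread):

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = W*log2((P+N)/N) = W*log2(1 + S/N), in bits/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A nominal 3.1 kHz voice channel at 30 dB SNR:
print(f"{channel_capacity(3100, 30):.0f} bits/s")  # about 30900 bits/s
```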

By definition it is not possible to transmit at a higher rate, regardless of the efficiency of the encoding system, without a "positive frequency of errors".

System capacity, not channel capacity. That is to say that the channel has a capacity determined by the bandwidth and the SNR; but any given system may not be sufficiently involved to attain that capacity, or to make use of either the bandwidth or the SNR. Hence the system has a capacity that is something less than the channel capacity. We could refer to the ratio of the two as "system efficiency" if you like.

Consider a typical POTS line using twisted pair. If we ignore other cable pairs, the channel it provides has limited bandwidth and (relatively) unlimited SNR. Which is to say that no matter what we do, it will not transmit a 10 MHz signal very far, but if we need 1 dB more SNR all that is required is to feed a signal with 1 dB additional amplitude.

Said line is not limited by the Shannon-Hartley theorem of channel capacity because no practical system is going to use the actual maximum power that could be applied.

Therefore the limit for system capacity would be in terms of how well the modulation/encoding scheme uses the available bandwidth. SNR is not part of the equation.

Now consider exactly the opposite type of channel, where bandwidth is not limited but SNR is. Fiber optic cable is an example (and for practical purposes satellites, up to the maximum bandwidth of the transponder, are too). There is far more bandwidth available on a typical fiber than is required for any given application. Practical systems do not use anything like the available bandwidth. The actual bandwidth used depends on the data rate and the modulation scheme. The BER depends on the SNR, not on the available channel bandwidth.
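To put a number on the SNR-limited case: in the wide-bandwidth limit the Shannon bound works out to a minimum Eb/No of ln 2, about -1.59 dB. A Python sketch (the spectral-efficiency values are illustrative):

```python
import math

def ebno_required_db(spectral_eff):
    """Minimum Eb/No (dB) at capacity for spectral efficiency eta = C/W bps/Hz:
       Eb/No = (2**eta - 1) / eta."""
    ebno = (2 ** spectral_eff - 1) / spectral_eff
    return 10 * math.log10(ebno)

# As bandwidth grows relative to the data rate (eta -> 0), the
# requirement approaches 10*log10(ln 2) = -1.59 dB:
for eta in (4.0, 1.0, 0.1, 0.001):
    print(f"eta = {eta:5}: Eb/No >= {ebno_required_db(eta):6.2f} dB")
```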

Previously I posted a chart showing SNR values for several types of digital modulation. Posted responses related to "what bandwidth" type questions, which are not relevant. The bandwidth varies with the bit rate, is not limited by the channel, and does not affect the bit error rate.

As I mentioned, I took the figures from various graphs, as that is the way it is always presented and unfortunately we cannot easily post graphs in an ASCII text message. I poked around looking for a really good example graph, and think this one expresses it better than others (if for no other reason than showing Shannon limits on the graph),

formatting link

See Figure 1.

So which is it? FEC is more efficient and increases efficiency bringing the system closer to maximum capacity, or uses more bits causing the system to operate at a "reduced *information* rate".

You can't have it both ways.

(One problem is that you use "channel capacity" to describe two different things, alternating between Shannon's maximum capacity for a channel and the actual information rate of a given system.)

But *you* are telling us that he was wrong! He said it reduced the information bit rate available, and you are saying it is necessary in order for the information rate to approach channel capacity! You can't both be right...

Whatever arbitrary target values a system is designed to meet.

The bullshit meter just slammed against the peg, and bent one more time. *NOBODY* defines it in those terms. Shannon's maximum channel capacity is *defined* as a low BER, less than some arbitrary value.

See Theorems 9, 10, 11, and 20 in "A Mathematical Theory of Communication". For example,

Theorem 17: The capacity of a channel of band W perturbed by white thermal noise power N when the average transmitter power is limited to P is given by:

C = W log2( (P+N)/N ).

This means that by sufficiently involved encoding systems we can transmit binary digits at the rate W log2( (P+N)/N ) bits per second, with arbitrarily small frequency of errors. It is not possible to transmit at a higher rate by any encoding system without a definite positive frequency of errors.

At the expense of bandwidth (which is a channel resource that by your reasoning would be "wasted" when FEC is used).

The reason it is used *is* to reduce the BER under less than ideal circumstances. For satellite systems there is indeed the added advantage of power efficiency, but under normal circumstances the minimum required BER could be obtained with FEC disabled. That would *not* require increased power, and *would* release the extra bandwidth. The circuit would function quite normally most of the time. Unfortunately, for a sufficiently significant percentage of time it would become unreliable due to high BER caused by low SNR. Correcting that with added power is not as cost effective as correcting it with added bandwidth.

Sometimes... is about 95+% of the time. Which is not good enough if the specs call for 99.97% reliability.

It is *not* more efficient. It trades bandwidth for SNR. You pay with one or the other. It just happens that in most (though not all) instances the cost of bandwidth will be less on a satellite than the cost of power.

That statement is just as true if you reverse it and say that if you have FEC turned *ON* you are wasting channel resources.

FEC on uses more bandwidth; FEC off requires more power...

Get any decent book on communications link design and read it.

Here is one example:

bit rate 64 kb/s to 44.736 Mb/s

FEC encoding Rate 3/4 convolutional encoding/Viterbi decoding

Modulation Four-phase Coherent PSK

Eb/No at BER (Rate 3/4 FEC)    10^-2    10^-7    10^-8
  a. modems back to back       5.3 dB   8.3 dB   8.8 dB
  b. through satellite         5.7 dB   8.7 dB   9.2 dB

C/N (BER=10^-7) 9.7 dB

Nominal BER at operating point 1 x 10^-7

Threshold BER 1 x 10^-3

(Intelsat standards...)
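For reference, the C/N and Eb/No figures in a budget like this are related by C/N = Eb/No + 10 log10(Rb/B), where B is the noise bandwidth. A Python sketch; note the Rb/B = 1.26 below is simply back-solved to reconcile the 8.7 dB and 9.7 dB figures above, not taken from any spec:

```python
import math

def cn_from_ebno(ebno_db, bit_rate, noise_bw):
    """C/N (dB) from Eb/No: C/N = Eb/No + 10*log10(Rb/B)."""
    return ebno_db + 10 * math.log10(bit_rate / noise_bw)

# Hypothetical figures: Eb/No = 8.7 dB with Rb/B = 1.26 lands
# near the 9.7 dB C/N in the budget above.
print(f"{cn_from_ebno(8.7, 1.26e6, 1.0e6):.1f} dB")  # 9.7 dB
```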

Sure. See the massive confusion above...

All of the above, and a couple others too. "Make it work" was always at the top of my job description, and "it" was never well defined.

Whatever, I deleted the rest of your silly trivia and stupid insults.

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

Digital signals need to represent digits, and for that they pretty much need quantization in both value *and* time. [And probably some notion of protocol: digital information is not necessarily self-evident or real-time.] Consider the output of a purely analog pulse-width modulator. The output can't be considered digital, as the information content is continuous, and encoded in the transition times, even though the "values" (voltages) of the output only (nominally) take on two discrete values.

Sometimes official definitions need brushing up as use and techniques change and require finer distinctions.

--
Andrew Reilly

No, I don't have any experience with FDM or FM microwave.

A given noise floor seems necessary for any analog scheme to merely work. But assuming it works and fills the bandwidth, how does decreasing the noise floor allow FDM to always add more channel capacity?

Thanks.

--
Ron N.

Except the values are represented by *transition* *times*, not voltages.

The fact that either a voltage or a time (or phase, or whatever) is continuously variable is *not* significant. The physical medium is virtually always continuously variable. The pertinent characteristic is the *value* represented, which is discrete for digital and continuous for analog.

For example, a TTL (binary) digital signal varies from 0 volts to 5 volts. The variation is continuous, and depending on various circuit parameters may change faster or slower (depending on circuit resistance and reactance).

But there is a definition for exactly two values, TRUE and FALSE. Essentially any voltage close to 5 volts is TRUE and anything close to 0 is FALSE.

Here is a typical state chart for TTL digital logic,

TTL input:   Value 0   voltage < 0.8
             Value 1   voltage > 2.0

TTL output:  Value 0   voltage < 0.4
             Value 1   voltage > 2.4

otherwise undefined
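A sketch of that state chart as code, to make the point that the *represented* value is discrete even though the voltage is continuous (illustrative classifier, not any real library):

```python
def ttl_input_value(voltage):
    """Classify a voltage against the TTL *input* thresholds:
       below 0.8 V reads as value 0, above 2.0 V reads as value 1,
       and anything in between is undefined."""
    if voltage < 0.8:
        return 0
    if voltage > 2.0:
        return 1
    return None  # undefined region

assert ttl_input_value(0.3) == 0
assert ttl_input_value(3.8) == 1
assert ttl_input_value(1.5) is None  # mid-transition: no defined value
```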

The voltage represents three distinct values (two of which are used). When a particular signal line transitions from one value to another, the voltage itself takes time to change, and during the transition varies continuously between at least the TTL output limit for the initial logic state and the limit for the final logic state. None of that makes it an analog circuit or an analog signal.

But what needs brushing up is virtually always the way people (mis)understand digital vs. analog, not the definitions.

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

Stand back for a moment and look at that sentence. There's gotta be something wacky about terminology that says things will work as long as you are above capacity. No wonder people get confused. :-\

--
Steve Underwood

I haven't thought about analog microwave for a *long* time...

The baseband channel will have a given bandwidth and noise floor that depends on the available bandwidth and received signal strength of the microwave system. And the noise floor will go up directly with the amount of signal power applied to the baseband channel.

With no baseband signal at all, the noise floor will be way down. As the baseband power is increased, the noise (which can be measured in any of the FDM channels) will also increase. The application of power in the baseband will cause the RF signal to be modulated of course, and the RF bandwidth transmitted will increase as the power in the baseband is increased.

The transmitter and receiver are bandwidth limited by design. When the actual transmitted bandwidth of the modulated RF signal reaches the bandwidth limit, further increase in baseband power will cause a non-linear change in the demodulator output because the bandwidth cannot be increased (but the spectral distribution within the existing bandwidth will change, which creates noise). The non-linear change results in a very dramatic increase in the noise floor. Instead of being linear, it becomes geometric.

For any given microwave system, the "baseband loading" (power at the baseband input) can be calculated, and measured with a suitable level meter. And it works out just about precisely in practice as the theory says it should, too. All is fine up to the knee of the threshold, and beyond that serious degradation takes place very rapidly with increased power in the baseband.

For example, on a "narrow band" system, say with 12 FDM channels, each channel can have 1/12th of the total power in the baseband, on an average, if they are all loaded equally. For various reasons that level is usually set to be -13 dBm0 (13 dB below the maximum test tone level for a channel). Data tones are set to that level, while signaling tones that are always on are set at -20 dBm0. Active voice channels are assumed to be about the same load as if they were a -13 dBm0 tone. All of which prevents such signals from adding up to enough to overload the entire system, even if every channel is actively being used.

Now consider a wideband FDM system with 1200 channels. The power of a full level test tone is the same, but for any one channel it represents 1/100th as much of the total baseband power as did a test tone on the 12 channel system. That has significance, because it takes 100 times as many full level test tones to overload the baseband on a 1200 channel system as on a 12 channel system. Which is to say that on the 1200 channel system one or two high level signals are unlikely to have an effect, but on a 12 channel system they might well cause serious degradation of overall performance, and affect all 12 channels adversely.
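The power-sum arithmetic behind that loading comparison is simple: n equal-level channels add 10 log10(n) dB to the per-channel level. A Python sketch using the -13 dBm0 figure from above (the function name is just illustrative):

```python
import math

def composite_load_dbm0(per_channel_dbm0, n_channels):
    """Power sum of n equal channels: per-channel level plus 10*log10(n)."""
    return per_channel_dbm0 + 10 * math.log10(n_channels)

# 12 data tones at -13 dBm0 sum to a modest composite load:
print(f"{composite_load_dbm0(-13, 12):.1f} dBm0")    # about -2.2 dBm0
# 1200 such tones would be 100x (20 dB) more total power:
print(f"{composite_load_dbm0(-13, 1200):.1f} dBm0")  # about 17.8 dBm0
```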
--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

The statement is clearly not correct, so it should not cause confusion. It merely *indicates* confusion...

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

That's a definition that I can heartily agree with.

I'm afraid that I didn't get that from the supposed governmentally approved definition, which is why I was picking what I perceived to be a nit.

To put the emphasis slightly differently: the pertinent characteristic is the value *represented*: a digit (or digit sequence) or a continuous analog of something.

I'm not convinced that it's necessarily possible to tell by looking at the signal itself, in isolation from its system and purpose. (Although in practice many systems allow a pretty reasonable guess.)

Cheers,

--
Andrew

It doesn't confuse comm people who are used to talking about capacity in those terms. e.g., "capacity" for QPSK with R = 1/2 is at ~0.2dB SNR (or Eb/No, it's the same in that case). So if the system is operating at or above ~0.2dB SNR it is possible to achieve error-free transmission in the channel.

So there's no problem with the statement.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.

formatting link


With the original statement, yes. With a correctly stated case, as you have now provided, no.

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

You forgot the "C =" in front of that, which will be important in a second.

And with just a teeny bit of algebra one can rearrange that expression to provide the SNR at which a modulation and coding scheme achieving C bps/Hz can operate error-free (i.e., achieve channel capacity). This is commonly done as it is a very practical way to assess the performance of a real scheme.

For a given modulation and coding scheme the spectral efficiency (in bps/Hz) and bandwidth occupancy will be known. So the only unknown, then, is SNR, or Eb/No, however one wishes to compute it. Rearranging the above capacity equation, (well, it's an equation if you take my correction of adding the 'C ='), then provides "capacity" as an SNR for any practical system. It isn't hard at all to normalize out the W, i.e., use W = 1Hz, and then C is still bps/Hz and SNR is per Hz (which is very convenient for computing capacity in terms of Eb/No).

So, contrary to your assertion, "capacity" is very often defined as an SNR and in those cases is a lower limit, below which error-free transmission is not possible.
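To make that algebra concrete, here's a quick Python sketch (assuming ideal Nyquist signaling, i.e., symbol rate equal to bandwidth; real pulse shaping with excess bandwidth shifts the numbers slightly, which is why a practical QPSK R = 1/2 figure comes out near 0.2 dB rather than the idealized 0 dB):

```python
import math

def snr_at_capacity_db(spectral_eff):
    """SNR (dB) at which Shannon capacity equals the given spectral
       efficiency (bps/Hz): invert C/W = log2(1 + SNR)."""
    return 10 * math.log10(2 ** spectral_eff - 1)

# Ideal QPSK with a rate-1/2 code carries 1 bps/Hz:
print(f"{snr_at_capacity_db(1.0):.1f} dB")  # 0.0 dB
```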

No need to redefine, since they're the same capacity as I just explained.

So capacity doesn't apply to a twisted pair? That's pretty twisted, alright...

Sure it is. Apply the algebra to the form of the capacity equation you used above and compute capacity as an SNR for the spectral efficiency that the practical system achieves. It's the exact same capacity.

Capacity takes both into account, but when the bandwidth (or the spectral efficiency) is fixed by the modulation and coding then capacity can be easily expressed in terms of an SNR, as I explained above. There's no need to invent new definitions or try to say that they're different, as that's just confusing the issue and obscuring things way more than they need to be. For AWGN channels (which is what we've been discussing, I think), there's just one capacity formula, so there's just one concept of capacity. It can be expressed different ways (e.g., as a curve on a bandwidth-efficiency plane), but it's still the same capacity. That's the only capacity that I've been talking about.

But the coding does, and, as far as I could tell, the table you posted was for uncoded performance (aka, the matched filter bound). You could post the capacity (in SNR) for each modulation and then show how much coding gain one needs to bridge the difference between capacity and the matched filter bound.

Figure one shows up as a red-x for me, but the caption indicates that it's showing the performance of various channel codes for a given system. One could also show a line where "capacity" is (in SNR or Eb/No) for the scheme and show how much gap there is from the various coding schemes to capacity, i.e., how much channel opportunity is left to be exploited. Since none of the codes mentioned are capacity-approaching codes, the gap would be pretty significant.

Uh, yeah, you can. I'll say it again, turning off the FEC wastes channel resources.

If you have enough SNR to make a given modulation scheme work to some desired BER without coding (i.e., with the FEC off), then you have enough SNR to use a higher-order modulation scheme with the FEC turned on, and get more bits through the same channel, probably with better reliability. Again, use the FEC. You're not using the channel efficiently without the FEC turned on.

e.g., you crank the power up until you get your desired BER with uncoded QPSK. With a good code applied (I'm not going to bother to look up the common codes, but this is an easy experiment and a no-brainer with capacity-approaching codes), and without changing the symbol rate (and therefore without changing the bandwidth), you could use 8PSK with R = 2/3 or higher or 16QAM with R = 1/2 or higher. If you use 8PSK with R = 5/6, or 16QAM with R = 3/4, you'll have MORE throughput than you did with the uncoded QPSK and, with good coding, much better reliability.
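The bits-per-symbol arithmetic behind that comparison is easy to check: information bits/symbol = log2(M) x code rate, at a fixed symbol rate and hence fixed bandwidth. A quick Python sketch:

```python
import math

def info_bits_per_symbol(constellation_size, code_rate):
    """Information bits carried per channel symbol."""
    return math.log2(constellation_size) * code_rate

# Same symbol rate (hence same bandwidth) for every entry:
for name, m, r in [("uncoded QPSK", 4, 1.0),
                   ("8PSK  R=2/3", 8, 2 / 3),
                   ("8PSK  R=5/6", 8, 5 / 6),
                   ("16QAM R=1/2", 16, 0.5),
                   ("16QAM R=3/4", 16, 0.75)]:
    print(f"{name}: {info_bits_per_symbol(m, r):.2f} bits/symbol")
```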

So if you're ever using a channel with uncoded QPSK, tell me who the customer is and I'll give them more throughput with more reliability by using a higher-order modulation with FEC.

So the reduced information rate due to the FEC is much more than made up for by the increase in power efficiency with decent codes. If you play the tradeoffs right, and understand them, you'll _always_ be better off with the FEC. This is why we use FEC, and we know that the only way to achieve capacity in a channel is by using powerful FEC.

As I showed earlier, if you understand what you're doing it's all the same capacity formula, so the analysis is consistent.

Hopefully by now you've understood that it is indeed possible, and correct.

Or as an SNR, or as a spectral efficiency. You can rearrange the formula to fit the appropriate unknown. The form that you quoted above, and just below here, defines capacity, C, in terms of spectral efficiency, not BER.

Note the phrase "with arbitrarily small frequency of errors". Pick an "arbitrarily small" error rate, and I'll arbitrarily pick one lower. ergo, error-free transmission.

Already been explained above.

And still waste channel resources. What you're missing is that you should be using a higher-order modulation in that channel. i.e., the channel has a LOT more capacity than you're using if you're transmitting uncoded (i.e., with the FEC turned off). So, yeah, you're wasting the channel resources in a big way.

Or just crank the modulation order up a notch, apply coding, and get even more throughput with better reliability.

Then use a higher-order modulation.

Sigh.

No. You don't seem to understand the concept of channel capacity or how it translates to managing practical channels with practical systems.

Sure, we used to help write the Intelsat standard specs at my old company. I'm not sure how it's relevant to a discussion of capacity, though. All you've shown above is a typical budget for a QPSK, R = 3/4 system with the usual k=7 convolutional code. That is not a capacity-approaching code by any stretch of the imagination.

That's like showing the formulas for the speed of sound in certain meteorological conditions and then showing how fast your Cessna 172 flies. So?

So you're not a theory person. I'm guessing you may not even have an engineering degree and so advanced math is something to which you may not have been exposed. This would make your confusion somewhat understandable.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.

formatting link


Doh! Correction: C is bps unless bandwidth is normalized to W = 1Hz, then it's bps/Hz.

Doh! Correction: As above, C is bps unless bandwidth is normalized to W = 1Hz, then it's bps/Hz. In neither case is the definition BER, rather throughput or spectral efficiency.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.

formatting link


I've read Shannon, and I've read all of the various "standard" definitions; guess what? They often contradict one another, or are at the very least extremely fuzzy sorts of things for which numerous counter-examples readily come to mind.

If it were ONLY me saying this, you'd have a point. But as to the various standards organizations being wrong - standards organizations don't actually produce the standards they publish, the individuals who make up their various committees do. And like just about any product of a committee, standards are seldom to be considered Holy Writ in any sense other than the "you do it this way, you're in compliance with this standard" one.

As I've already mentioned, I've spent a fair amount of my career rather heavily involved with a number of these same standards organizations, and can say with confidence that I'm reasonably familiar with the processes involved and the quality of the output. In many cases I've come across, I would have to say a very fervent "God help the poor sap who actually tried to learn the basics of this technology from reading the standard!" Like much of what's written by technical specialists FOR other technical specialists (and therefore not intended as training material for someone new to the field), standards very often contain a high "you already know what we really mean here" factor.

You say toe-may-to, I say toe-mah-to....

But it's still not preferable to a rational argument based on evidence and logic, for the simple fact that we have yet to come up with an "authority" who can reliably be considered infallible. If you want to continue to argue that any given group of experts will never be wrong in any pronouncement they make, I can point to any number of counter-examples there as well.

I beg your pardon, but on what basis do you make that judgement? That you disagree with me, and apparently are either incapable or unwilling to understand the arguments I have presented? Or merely because what I have said goes against what you've read in a book that YOU consider to be "Holy Writ"? Can you make an argument in support of your position which does NOT equate to "I'm right because it says so right here in this book!"?

Again, a circular argument. You are arguing for a given definition by saying that any other perspective MUST be wrong because it doesn't agree with that definition!

--
Bob Myers

Not with my original statement, either. If you disagree please state why and make a supportable argument.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.

formatting link


OH? What about me? :-)

...

Jerry

--
"The rights of the best of men are secured only as the rights of the
vilest and most abhorrent are protected."
   - Chief Justice Charles Evans Hughes, 1927


Bob you make up a lot of really odd stories. Show me where there is a contradiction between Shannon and the FS-1037C Standard definition that I've quoted.

More bullshit. Show me where someone of any credibility is agreeing with you.

That goes for the rest of your entire article, which was illogical to the point of being nothing but a smoke screen that obfuscates rational discussion of this topic.

If you cannot post rational discussion, or show where someone with some degree of credibility supports something you want to claim, try stuffing it in a trash can, because it doesn't belong here.

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

We have been over that previously, there is no need to do it again. Your original statement is not the same as your clarification, and if you had merely excused it as an editing blunder or whatever, I think everyone would have accepted that as quite reasonable. But you are *still* insisting that it makes sense... it doesn't.

--
Floyd L. Davidson            
Ukpeagvik (Barrow, Alaska)                         floyd@apaflo.com

If you wish to apply for the job, there are forms located over there by the wall. Please simply walk across the pool and pick one up....:-)

Bob M.


I agree it's a perfectly *normal* way of expressing it. It's the "no problem" part I can't agree with. :-) Do you think terminology which in plain English says exactly the opposite of what it means is a good idea? There's quite a lot of it as you look around various technologies, especially when mathematicians get involved.

Steve


On the contrary. I've been consistent, but I think your misunderstanding of the concepts makes you think that there's a problem.

If you really think there's a problem, you should be able to point it out and explain why. I've countered all your erroneous "explanations" already, so those don't count.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.

formatting link

