One point they make, "a few millivolts represent a large portion of the nominal 0.63-0.67 volt typical value," refers, I think, to the fact that the base voltage/current relationship is exponential. For example, assume you have 1 mA of base current and the base-to-emitter voltage is 0.6 volts. If you increase the base current to 10 mA, the BE voltage increases by about 60 millivolts. If you increase the base current to 100 mA, the BE voltage increases by another 60 millivolts. So for a 100-times increase in base current, the BE voltage doesn't increase 100x the way the voltage across a simple resistor would. Simply put, a large change in base current produces a small change in base-to-emitter voltage: a tenfold increase in base current produces only about a 60 mV change. Typically, when a transistor is just beginning to conduct, the BE voltage is low, around 0.5 volts. When the transistor is conducting near maximum gain, the BE voltage is about 0.6 volts, and when the transistor is near saturation, the BE voltage is about 0.7 volts. Over this 200 mV range, the base current might have changed 1000 times or more.
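Here's a quick sketch of that 60-mV-per-decade behavior using the diode-equation relationship. Strictly speaking the equation relates V_BE to collector current, but with a roughly constant beta the base current scales the same way. The thermal voltage value (about 25.9 mV at room temperature) and the function name are my own illustration, not something from the original discussion:

```python
import math

V_T = 0.0259  # thermal voltage kT/q at room temperature, about 25.9 mV

def delta_vbe(i1, i2, vt=V_T):
    """Approximate change in base-emitter voltage when the (base or
    collector) current goes from i1 to i2, per the exponential
    diode-equation relationship: dV = V_T * ln(i2/i1)."""
    return vt * math.log(i2 / i1)

# A tenfold current increase shifts V_BE by roughly 60 mV:
print(round(delta_vbe(0.001, 0.010) * 1000))  # -> 60 (mV)
# Two decades (1 mA to 100 mA) gives roughly double that:
print(round(delta_vbe(0.001, 0.100) * 1000))  # -> 119 (mV)
```

Run the other way, this is why a few millivolts of V_BE error translates into a large percentage change in current: each 60 mV is a full decade.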
My question is: how can a variation of a few millivolts have an appreciable impact on the base-emitter voltage when the bias can be set at 2, 3, 4 or more volts to begin with? It would seem negligible compared to the amount of main bias voltage present on the base.

Don't confuse the base-to-ground voltage with the base-to-emitter voltage. You won't have 2 or 4 volts from base to emitter in a linear amplifier; you probably wouldn't ever see that unless something in the circuit failed. You will have a voltage developed across the emitter resistor of about Ic x Re, and the base will sit about 0.6 volts higher than that. When calculating the value of the base bias resistor that connects the base to Vcc, you would normally use R = E/I, where E is Vcc minus the emitter-resistor voltage minus 0.6 volts, and I is typically 10 times the actual base current that you need. BUT!! Because that 0.6 volts is only typical, the value you end up with for the bias resistor will almost always have to be tweaked later to get the desired collector current.

Second question: exactly how can the low-voltage signal (say 100 millivolts peak) affect the bias in any significant way to begin with?

It doesn't in a practical amplifier, unless the power delivered to the load causes the power supply to sag. The input signal adds to and subtracts from the bias; it has to if you expect the circuit to pass signal. If Vcc varies, and your bias is supplied from Vcc, then you have sort of an AC bias applied to the base. Generally that's not a good thing, and if it's severe enough it causes an oscillation called motorboating.

Third question: what impact is all this having on a common emitter circuit set up for linear amplification?

Assume you have a common emitter voltage amplifier with a Vcc of 12 volts. If the transistor is biased so that the emitter sits at 2 volts, that leaves enough room for about a 10 volt peak-to-peak signal at the collector.
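Using those same numbers (Vcc = 12 V, emitter at 2 V), the R = E/I bias-resistor calculation can be sketched like this. The collector current and beta are hypothetical values I picked for illustration, and the result is only a starting point, since the 0.6 V figure is only typical:

```python
def bias_resistor(vcc, v_re, i_b, vbe=0.6, current_ratio=10):
    """Starting value for the base bias resistor from Vcc to the base,
    per R = E/I with E = Vcc - V(Re) - Vbe and I = 10x the needed
    base current (the rule of thumb described above)."""
    e = vcc - v_re - vbe          # voltage the bias resistor must drop
    i = current_ratio * i_b       # design current through the resistor
    return e / i

# Hypothetical design: Vcc = 12 V, 2 V across Re, Ic = 2 mA, beta = 100
i_b = 0.002 / 100                 # 20 uA of base current
r = bias_resistor(12, 2.0, i_b)
print(f"{r:.0f} ohms")            # 9.4 V / 200 uA -> 47000 ohms
```

In practice you'd install the nearest standard value (47k here happens to be one) and then tweak it to hit the desired collector current, for exactly the reason given above.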
The collector would ideally be at 7 volts with no signal; 7 volts is the collector Q-point voltage. If the collector Q-point voltage shifts by 1 volt up or down, then the output signal peaks get 1 volt clipped off. Shifting up would clip the positive peak, and shifting down would clip the negative peak. A typical common emitter RC-coupled amplifier would need only about a 50 to 100 mV change in base bias voltage to cause this shift. If the output signal were only 2 volts peak to peak, then a 1 volt shift in the collector Q point wouldn't cause any clipping at all. Generally, if the stages are AC coupled, the bias drift only affects the stage that it's in, but if the stages are direct coupled, the error is amplified in later stages.
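That headroom arithmetic can be sketched as a little helper. This is just my own illustration of the numbers above, assuming the collector can swing between roughly the 2 V emitter voltage and the 12 V supply:

```python
def clipping(vq, v_peak, v_max=12.0, v_min=2.0):
    """How many volts get clipped off each output peak, given the
    collector Q-point voltage vq and the undistorted signal peak.
    v_max is Vcc; v_min is roughly the emitter voltage."""
    pos_clip = max(0.0, v_peak - (v_max - vq))  # headroom toward Vcc
    neg_clip = max(0.0, v_peak - (vq - v_min))  # headroom toward the emitter
    return pos_clip, neg_clip

print(clipping(7.0, 5.0))  # centered Q point, 10 V p-p: (0.0, 0.0)
print(clipping(8.0, 5.0))  # Q shifted up 1 V: positive peak loses 1 V
print(clipping(8.0, 1.0))  # only 2 V p-p out: still no clipping
```

The three calls mirror the three cases in the paragraph: a centered Q point just clears a 10 V p-p swing, a 1 V upward shift clips 1 V off the positive peak, and a small 2 V p-p signal survives the same shift untouched.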