But I mean, really. The theory as usually presented is crystal-clear: a phase detector produces a voltage proportional to the phase difference of the input signals, that voltage is filtered and fed back to a VCO, and the loop is closed.
I am trying to implement a software PLL, and my simulations show the above is not true. For zero phase difference the phase-error voltage is also zero, so holding the output frequency would require a VCO/filter with infinite memory; otherwise the VCO drifts back to its free-running frequency. Such memory can be had in digital (a phase accumulator integrates indefinitely), but for any analog VCO the control voltage *sets* the frequency, it does not *adjust* it by a given amount. Therefore such a PLL requires a non-zero phase error to stay in lock; it is this "error shaping" that keeps both frequencies in sync. The loop does it by sub-cycle inflation/deflation of the VCO waveform, i.e. by distorting the VCO signal. The high-frequency part then integrates out to zero, and the low-frequency part is what keeps the loop in lock.
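To put a number on it (standard loop algebra, symbols are mine): with a multiplying detector on unit-amplitude signals, the DC part of the detector output and the VCO law are

  vc    = (1/2) * sin(phi_e)    (detector output, double-frequency term filtered off)
  f_vco = f0 + Kv * vc          (the control voltage sets the frequency)

In lock f_vco = f_in, hence sin(phi_e) = 2*(f_in - f0)/Kv. That is zero only when the input happens to sit exactly at the free-running frequency; any other input frequency forces a non-zero static phase error.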
This is how a free-running 50 Hz PLL with a multiplying detector locks to a 55 Hz input (one-second simulation):
and several output cycles magnified:
Blue/orange are the quadrature VCO outputs, green is the input, red is the correcting voltage. The loop is in a perfect lock; the red waveform sits sufficiently below zero to make that happen, but the red oscillations are exactly what makes it work and *should not* be filtered excessively. This conclusion is again backed by simulation: for a given cutoff frequency, increasing the filter order makes the loop harder to stabilize, and with a 3rd-order RC filter at a 1.5 Hz cutoff I am unable to find a gain that makes it lock. If the order increases, it must be compensated by raising the cutoff frequency.
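For the record, here is a minimal sketch of the kind of simulation I mean (Python/NumPy; all names and parameter values are mine and purely illustrative): a 50 Hz free-running VCO, a multiplying detector, a one-pole RC loop filter at 1.5 Hz, locking to a 55 Hz input over one second.

import numpy as np

fs = 10_000.0                                  # sample rate [Hz]
dt = 1.0 / fs
t = np.arange(0.0, 1.0, dt)                    # one second of simulation

f_in = 55.0                                    # input frequency [Hz]
f0 = 50.0                                      # VCO free-running frequency [Hz]
Kv = 80.0                                      # VCO gain [Hz/V], illustrative
fc = 1.5                                       # loop-filter cutoff [Hz]
alpha = dt / (dt + 1.0 / (2.0 * np.pi * fc))   # one-pole RC smoothing factor

x = np.sin(2.0 * np.pi * f_in * t)             # input signal

phase = 0.0                                    # VCO phase accumulator [rad]
vc = 0.0                                       # filtered control voltage
vc_log = np.empty_like(t)

for n in range(t.size):
    i = np.cos(phase)                          # in-phase VCO output
    pd = x[n] * i                              # multiplying phase detector
    vc += alpha * (pd - vc)                    # one-pole RC loop filter
    # the control voltage SETS the instantaneous frequency around f0;
    # it does not accumulate a correction on its own
    phase += 2.0 * np.pi * (f0 + Kv * vc) * dt
    vc_log[n] = vc

# in lock the mean control voltage must be (f_in - f0) / Kv, i.e. non-zero,
# with the double-frequency ripple from the multiplier riding on top of it
print("mean vc over last 0.2 s:", vc_log[int(0.8 * fs):].mean())
print("required static value  :", (f_in - f0) / Kv)

In lock the printed mean settles near the predicted non-zero value (if it does not lock, Kv or fc needs adjusting), and logging the VCO output shows the same sub-cycle stretching and compressing as in the plots above.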
If I am right, then why do all the books I know simply lie? :-)
Best regards, Piotr