This is leftover from a couple of discussions a week or two ago. There is something interesting that I don't understand.
In general, if I have an input clock and I want to generate an output clock, and the output clock is (much) slower than the input clock, I can do that with an FSM.
The jitter on the output clock can be up to 1/2 of an input clock period. (If an edge is off by more than that, move it over by one input cycle.) If you are lucky and the numbers work out exactly, you get no jitter (for example, dividing by 4).
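Here is a toy model of that idea in Python (the function name and example ratios are mine, just for illustration): each ideal output edge gets placed on the nearest input-clock edge, and the jitter is the distance from the ideal edge time, measured in input periods.

```python
from fractions import Fraction

# Toy model of the FSM divider: each ideal output edge lands on the
# nearest input-clock edge, so the jitter is the distance from the
# ideal edge time, in input-clock periods.
def worst_jitter(divide_ratio, n_edges=100):
    ratio = Fraction(divide_ratio).limit_denominator(10**6)
    worst = Fraction(0)
    for k in range(1, n_edges + 1):
        ideal = k * ratio            # ideal edge time (input periods)
        nearest = round(ideal)       # FSM can only act on input edges
        worst = max(worst, abs(ideal - nearest))
    return float(worst)

print(worst_jitter(4))               # exact division: no jitter at all
print(worst_jitter(Fraction(10, 3)))  # fractional: nonzero, bounded by 0.5
```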
But how close is the frequency? The output frequency is out = in * X / Y, where Y is the number of states in the FSM.
Some combinations of X and Y give a better match to the target frequency. I'm pretty sure a math wizard would use continued fractions to explain it.
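For what it's worth, Python's fractions module does exactly that continued-fraction search: limit_denominator returns the closest X/Y to a target ratio with Y capped. A small sketch (the function name and the 50 MHz / 1.8432 MHz UART-clock example are mine):

```python
from fractions import Fraction

# Fraction.limit_denominator uses continued fractions internally:
# it finds the closest X/Y to the target ratio with Y capped at
# max_states.
def best_ratio(f_out, f_in, max_states):
    return Fraction(f_out, f_in).limit_denominator(max_states)

r = best_ratio(1_843_200, 50_000_000, 1024)       # UART clock from 50 MHz
print(r, abs(float(r) - 1_843_200 / 50_000_000))  # best X/Y and its error
```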
I know how to implement this if Y is a power of 2. That's just an adder, and it generally fits well into an FPGA. Given a minute or 3, I can work out the value of the constant to add.
It's easy to get closer to the target frequency by using more bits in the adder. If the bottom bit of the constant isn't a 1, then the adder will skip 1/2 (or 3/4 or ...) of the states. So there are sweet spots where the bottom bit of the constant is a 1. This approach is also convenient if you want to make a sine wave rather than a square wave, since you can feed the top N bits of the adder to a ROM lookup table.
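A minimal model of that power-of-2 accumulator, as I understand it (names and the 16-bit width are my own choices): the constant is round(2^N * f_out / f_in), the adder wraps modulo 2^N, and the MSB is the output square wave.

```python
# Minimal model of a 2^N-state phase accumulator divider. The output
# "clock" is the accumulator MSB; the top bits could instead index a
# sine ROM for a sine-wave output.
N = 16                                  # accumulator width (my choice)
def accumulator_msbs(f_out, f_in, cycles):
    step = round(f_out / f_in * 2**N)   # the constant to add each cycle
    acc = 0
    msbs = []
    for _ in range(cycles):
        acc = (acc + step) & (2**N - 1) # adder wraps modulo 2^N
        msbs.append(acc >> (N - 1))     # MSB = square-wave output
    return msbs

print(accumulator_msbs(1, 4, 8))        # divide by 4: step is 2^N / 4
```

Dividing by 4 lands on a sweet spot, so the MSB repeats a clean 0, 1, 1, 0 pattern with no jitter; a non-power-of-2 ratio would show occasional long or short half-cycles instead.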
But powers of 2 may not work as well as some simple pairs of X and Y. Is there a simple implementation technique for arbitrary Y that fits well into FPGAs? Is it something as simple as using an adder and resetting it back to 0 after Y steps?
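One trick I believe is common (sketch only, names mine): rather than resetting to 0, add X each input cycle and subtract Y whenever the sum reaches Y. The remainder carries the phase error forward, so over any Y input cycles you get exactly X output ticks and the long-term frequency is exact.

```python
# Modulo-Y phase accumulator: add X each input cycle; when the sum
# reaches Y, subtract Y and emit an output tick. Over Y input cycles
# this produces exactly X ticks, so f_out = f_in * X / Y with no
# long-term frequency error. Requires X <= Y.
def divider_ticks(x, y, cycles):
    acc = 0
    ticks = []
    for _ in range(cycles):
        acc += x
        if acc >= y:           # "overflow" past Y: wrap and tick
            acc -= y           # keep the remainder, don't reset to 0
            ticks.append(1)
        else:
            ticks.append(0)
    return ticks

print(sum(divider_ticks(3, 7, 7000)))  # exactly 3000 ticks in 7000 cycles
```

In hardware this is an adder plus a comparator and a subtract, so I'd expect it to map reasonably well onto FPGA carry chains, though I haven't checked how it times out against the plain power-of-2 adder.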
Is there a good web page or book that covers this area?
The next step is to understand the spectrum of the synthesised clock.
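As a first peek at that spectrum, one can take a DFT of the MSB stream from the power-of-2 adder scheme. A quick illustrative sketch (naive DFT in plain Python, not an efficient FFT; the 0.3 ratio is arbitrary): the fundamental shows up at the programmed frequency, and the jitter as spurs elsewhere.

```python
import cmath

# Naive DFT of the MSB stream from a 16-bit phase accumulator, to peek
# at the spectrum of the synthesised clock. The fundamental appears at
# the programmed frequency; the MSB jitter shows up as spurs.
N_ACC, N_FFT = 16, 256
step = round(0.3 * 2**N_ACC)            # target: 0.3 * f_in (arbitrary)
acc, samples = 0, []
for _ in range(N_FFT):
    acc = (acc + step) & (2**N_ACC - 1)
    samples.append(1.0 if acc >> (N_ACC - 1) else -1.0)

spectrum = [abs(sum(s * cmath.exp(-2j * cmath.pi * k * n / N_FFT)
                    for n, s in enumerate(samples)))
            for k in range(N_FFT // 2)]
peak = max(range(1, N_FFT // 2), key=spectrum.__getitem__)
print(peak / N_FFT)                     # close to 0.3, the programmed ratio
```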