Does it make sense to consider a power supply as a signal and deliberately inject a pre-emphasis signal to kill tones? I mean, if you can pre-distort signals to make up for losses, surely you can do the same with a power supply? What kind of architecture would you use to drive such a low-impedance node? A bias tee with some sort of strong RF driver? (Supposing the tones are in the 33 MHz to 1 GHz region.)
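To get a feel for how hard the matching requirement is, here's a toy numerical sketch. It assumes an ideal summing node (the bias tee and driver reduced to simple addition on the rail) and just asks: if I inject an anti-phase replica of a 100 MHz tone, how much residual is left when my amplitude or phase isn't perfect? All parameter names and values here are made up for illustration.

```python
import numpy as np

def residual_tone(amp_err=0.0, phase_err_rad=0.0, f=100e6, fs=10e9, n=1000):
    """Peak residual after summing a supply tone with an (imperfect)
    anti-phase replica, modeling the bias-tee injection as ideal addition."""
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * f * t)                       # unwanted tone on the rail
    anti = -(1 + amp_err) * np.sin(2 * np.pi * f * t + phase_err_rad)
    return np.max(np.abs(tone + anti))

perfect = residual_tone()                                  # exact amplitude and phase
five_deg = residual_tone(phase_err_rad=np.deg2rad(5))      # 5 degrees of phase error
print(perfect, five_deg)
```

With perfect matching the tone cancels completely, but a mere 5 degrees of phase error (about 140 ps at 100 MHz) already leaves roughly 9% of the tone, i.e. only ~21 dB of suppression. That's the catch with this idea across 33 MHz to 1 GHz: the driver's delay and amplitude flatness have to track the tone over the whole band.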
Also, would it make sense to analyze the behaviour of the guts of an FPGA and, supposing there's space, create a circuit that exactly balances out the transitions in the active logic, so that the power consumption is always the same? The average would go up, but wouldn't a constant load make such a supply easier to decouple?
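For what it's worth, this is essentially what constant-power dual-rail logic styles do for side-channel resistance, so the idea isn't crazy. A minimal sketch of the bookkeeping, assuming a hypothetical block with a known maximum of 8 toggling nodes per cycle and a "balancer" that toggles dummy nodes to make up the difference:

```python
import random

def balanced_activity(real_toggles, capacity):
    """For each cycle, toggle enough dummy nodes so that
    real + dummy toggles always equals `capacity`."""
    return [(n, capacity - n) for n in real_toggles]

random.seed(0)
real = [random.randint(0, 8) for _ in range(16)]   # toggles in the real logic per cycle
pairs = balanced_activity(real, capacity=8)
totals = [r + d for r, d in pairs]
print(totals)                                      # constant every cycle
print(sum(real) / len(real))                       # average real activity is lower
```

As the averages show, you pay for the flat consumption: the rail now always sees the worst-case load, but the AC content of the draw (at least to first order, ignoring per-node capacitance mismatch) goes away.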
Or did someone spike my green tea this morning?
Thanks.