Time *finer* than the degree to which you can synchronize "doesn't exist" -- just as time finer than the resolution of a "timer" doesn't exist. (It exists, but you can't say anything definitive about it, other than "this happened between X and Y.")
If a device reports an observation/action at time X and some other device reports an observation/action (perhaps of a different event) at time Y, I want to be able to claim X preceded Y (or vice versa).
And, be able to cause an action to occur at a specific point in time *relative* to some other action/observation.
(I know I can't do this with picosecond resolution! But, milliseconds are effectively useless for many things!)
I can already get O(200ns) without adding extra hardware for more precise timestamping *at* the PHY (effectively). But, that's only because I have control of lots of other aspects of the system -- who says what, when, etc. In a more generic deployment (e.g., COTS) this would be more like O(2us).
My goal is to see how fine I can get without dramatically increasing recurring costs -- as this allows the "solution" to be applied to a larger set of problems.
But, regardless, I need to be able to measure this OFF the workbench (i.e., deployed) to verify functionality in situ.
E.g., an EE testing a PLL can look at the reference signal and synthesized signal within *inches* of each other on a PCB. I want that sort of capability but where the distances involved are "fractions of a mile" (awful long 'scope leads!! :> )