Hi,
Modern network infrastructure uses *lots* of buffering; memory is (now) cheap enough to embed throughout the network fabric.
With that, fine-grained synchronization over (wired) networks becomes problematic -- there's no deterministic way for a processor in one node to know its packet timing relative to any other node in the network (though it's pretty obvious that a packet arrives at its destination some time *after* leaving its source! :> )
Sure, things like NTP *try* to quantify this skew. But its goals are much more long-term... if it's wrong in the short term, there's no significant consequence. (I also suspect the apparent precision and accuracy that NTP provides are largely delusional :-/ )
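For reference, the skew estimate NTP makes is just the classic four-timestamp calculation -- a minimal sketch (function name and timestamp values are mine, and real NTP layers filtering, polling, and discipline loops on top of this):

```python
# Classic NTP offset/delay estimate from four timestamps:
#   t0 = client send, t1 = server receive, t2 = server send, t3 = client receive
# All values here are hypothetical, in seconds.

def ntp_offset_delay(t0, t1, t2, t3):
    """Return (estimated clock offset, round-trip delay) per the NTP equations."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # assumes symmetric one-way delays!
    delay = (t3 - t0) - (t2 - t1)           # total time spent on the wire
    return offset, delay

# Example: client clock 0.5 s behind server, 100 ms one-way delay each direction
offset, delay = ntp_offset_delay(10.0, 10.6, 10.7, 10.3)
print(offset, delay)  # -> 0.5 0.2
```

Note the built-in assumption: the offset estimate is only exact if the path delay is symmetric. Any asymmetry (e.g., from buffering in one direction) biases the result by half the asymmetry -- which is exactly why the short-term "precision" looks better than it is.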
So, how *do* you achieve fine-grained synchronization nowadays? What is *practical*? And theoretically *achievable* (without an a priori characterization of the network infrastructure and topology)?