Fine-grained synchronization on modern networks

Hi,

Modern network infrastructure uses *lots* of buffering; memory is (now) cheap enough to embed throughout the network fabric.

With that, fine-grained synchronization over (wired) networks becomes problematic -- there's no deterministic way for a processor in a particular node to have any idea of its relative packet time wrt any other node in the network (though it is pretty obvious that a packet arrives at its destination some time *after* leaving its source! :> )

Sure, things like NTP *try* to quantify this skew. But its goals are much more long-term... if it is wrong in the short term, there is no significant consequence. (I also suspect the apparent precision and accuracy that NTP claims is largely illusory :-/ )
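
(For concreteness, the usual four-timestamp exchange that NTP/SNTP-style sync boils down to looks roughly like the sketch below -- my own variable names, not any particular implementation. Note the offset estimate is exact only when the outbound and return path delays are *equal*, which is precisely the assumption that all that buffering quietly destroys.)

/* Sketch of the classic four-timestamp offset/round-trip estimate:
 *   t1 = client transmit, t2 = server receive,
 *   t3 = server transmit, t4 = client receive (all in seconds).
 */
double est_offset(double t1, double t2, double t3, double t4)
{
    return ((t2 - t1) + (t3 - t4)) / 2.0;   /* exact only for symmetric paths */
}

double est_rtt(double t1, double t2, double t3, double t4)
{
    return (t4 - t1) - (t3 - t2);           /* total network round trip */
}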

So, how *do* you achieve fine-grained synchronization nowadays? What is *practical*? And what is theoretically *achievable* (without an a priori characterization of the network infrastructure and topology)?
Reply to
D Yuniskis

IEEE 1588.

Basically, it is about capturing transmit/receive timestamps for packets at the physical layer, as opposed to timestamping after the packet has been through the whole networking protocol stack.
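
Roughly (my notation, just to sketch the arithmetic of one Sync/Delay_Req exchange):

/* t1 = master Sync transmit   (timestamped at the PHY)
 * t2 = slave  Sync receive
 * t3 = slave  Delay_Req transmit
 * t4 = master Delay_Req receive
 * With hardware timestamping, stack and OS latencies drop out;
 * what remains is path asymmetry and switch residence time
 * (which 1588 transparent clocks can correct for).
 */
double ptp_mean_path_delay(double t1, double t2, double t3, double t4)
{
    return ((t2 - t1) + (t4 - t3)) / 2.0;
}

double ptp_offset_from_master(double t1, double t2, double t3, double t4)
{
    return ((t2 - t1) - (t4 - t3)) / 2.0;
}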

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

D Yuniskis wrote:

Reply to
Vladimir Vassilevsky

Hey, Don:-

PTP.. IEEE-1588.

Reply to
Spehro Pefhany

[%X ---- Stuff about NTP ---- X%]

As well as the IEEE 1588 standard that has already been mentioned, you should also look at LXI. It builds on IEEE 1588 and is used for instrumentation purposes across networks.

See

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

And figuring out, between the nodes, which node has the best clock, then synchronizing the other nodes to that clock with an algorithm that rejects network communication jitter but compensates for long-term drift between the clocks.
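
(To make that concrete: a minimal sketch of the sort of servo I mean, with made-up gains and a crude minimum-delay filter. The real best-master-clock selection and servos in 1588 implementations are considerably more involved.)

#include <float.h>

/* Keep only the lowest-delay sample in each window (least likely
 * to have sat in a switch buffer), then feed it to a PI loop:
 * the integral term tracks long-term drift, the windowing rejects
 * short-term jitter.  Gains are placeholders, not tuned values.  */
#define WINDOW 8

struct servo {
    double kp, ki;        /* proportional / integral gains        */
    double freq_adj;      /* accumulated frequency correction     */
    double best_delay;    /* smallest delay seen this window      */
    double best_offset;   /* offset measured with that delay      */
    int    n;             /* samples collected in current window  */
};

void servo_init(struct servo *s)
{
    s->kp = 0.1;
    s->ki = 0.01;
    s->freq_adj = 0.0;
    s->best_delay = DBL_MAX;
    s->n = 0;
}

/* Feed one (offset, delay) measurement; returns a fractional
 * frequency correction once per window, 0.0 while collecting.    */
double servo_update(struct servo *s, double offset, double delay)
{
    double adj;

    if (delay < s->best_delay) {
        s->best_delay  = delay;
        s->best_offset = offset;
    }
    if (++s->n < WINDOW)
        return 0.0;

    s->freq_adj += s->ki * s->best_offset;      /* drift (integral)   */
    adj = s->freq_adj + s->kp * s->best_offset; /* plus proportional  */

    s->best_delay = DBL_MAX;                    /* start a new window */
    s->n = 0;
    return adj;
}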

Reply to
Spehro Pefhany

Hi Spehro,

Yes, but this makes lots of assumptions (constraints) about deployment.

Let me rephrase my question in a more general way:

What level of synchronization can you expect to achieve in an unconstrained deployment environment? E.g., imagine you are trying to coexist within an *existing* network structure -- you can't tell the customer to replace his switches; or flatten his network; or stop running applications that generate lots of network traffic, etc.

I.e., you *can't* count on the user having special hardware to generate good timestamps; special switches that propagate this behaviour across the fabric; "benign" applications that let the network quiesce while synchronization activities are happening; hosts that are evenly loaded wrt network traffic; identical NICs from machine to machine; etc.

I.e., you have to coexist with what is rather than redesign what you would *like*...

Note that in days past, this was a lot easier to achieve, since broadcasts tended to *truly* be broadcasts and propagation delays were simply proportional to cable length (distance divided by the propagation speed, a fixed fraction of the speed of light), etc.

Reply to
D Yuniskis

J = packet delivery jitter (sec)
V = number of sync packets per sec
D = drift of the local clock (Hz/sec)

Time error (sec) = [ (D * J^4) / (2 * V^2) ] ^ (1/5)
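
For a rough feel (my numbers, and assuming D is interpreted as fractional frequency drift per second): with J = 1 ms, V = 1 sync packet per second and D = 1e-9 /s, the bracketed term is 1e-9 * (1e-3)^4 / 2 = 5e-22, whose fifth root is about 5.5e-5 -- i.e. on the order of 50 us of time error.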

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky
[]

Question is: what are your trying to achieve? What really are your real-time constraints?

Is it soft real time? Is the timestamped data essentially historical? (I.e., late delivery updates a database of some sort; the data is not used in immediate calculations for process control.) You need all the data, but late or out-of-order delivery does not create serious error conditions. IMO, this case could essentially be solved at the application layer (timestamps generated by the end-point devices, correlated to the destination device's clock).
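
(A rough sketch of what I mean by correlating to the destination device's clock -- names and fields are placeholders, and the per-source mapping would be refined from whatever periodic exchanges you can manage:)

/* Each source stamps its records with its own local clock; the
 * destination keeps a per-source linear mapping (one reference
 * pair plus a rate) and converts on arrival.  Late or
 * out-of-order records are still usable because they carry the
 * source's timestamp.
 */
struct clock_map {
    double src_ref;    /* source clock reading at the fit point   */
    double dest_ref;   /* destination clock reading at that point */
    double rate;       /* destination seconds per source second   */
};

double to_dest_time(const struct clock_map *m, double src_ts)
{
    return m->dest_ref + m->rate * (src_ts - m->src_ref);
}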

Is it really hard real time? IOW, are you doing some kind of process control? Sorry, but Ethernet being what it is does not provide any way to assure delivery times. That's why process control tends to be distributed: put as much intelligence as needed at the point of control.

Only if you happened to be on the same wire. Since the advent of hubs and bridges, it has never been that simple.

Ed

Reply to
Ed Prochak

I want to know what is *possible* before even considering various communication media. E.g., if the best I can achieve with "ethernet" is O(100us) then I will see how that reflects to what the *product* can realistically claim. If that ends up looking like crap, then I rule out "ethernet" and move on to other technologies.

If I can get verifiable synchronization points *reasonably* often, I can calibrate the local oscillator to track time *between* those synchronization points. Then I just have to characterize my local oscillator and controls wrt the implied accuracy and precision of the synchronization means.
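
(Something like the sketch below is what I have in mind -- field names are just for illustration. Each verified sync point yields a (local counter, network time) pair; the interval between pairs gives the oscillator's *actual* rate, and the residual error between sync points is bounded by how far the oscillator can wander -- temperature, aging -- within one interval.)

#include <stdint.h>

struct osc_cal {
    uint64_t local_ref;     /* local counter at the last sync point  */
    double   net_ref;       /* agreed network time at that point (s) */
    double   sec_per_tick;  /* measured (not nominal) tick period    */
};

/* Call once at startup with the oscillator's nominal tick period. */
void osc_cal_init(struct osc_cal *c, uint64_t local_now,
                  double net_now, double nominal_sec_per_tick)
{
    c->local_ref    = local_now;
    c->net_ref      = net_now;
    c->sec_per_tick = nominal_sec_per_tick;
}

/* Call at every verified sync point: refine the rate estimate
 * from the interval just completed, then re-anchor.               */
void osc_cal_sync(struct osc_cal *c, uint64_t local_now, double net_now)
{
    c->sec_per_tick = (net_now - c->net_ref) /
                      (double)(local_now - c->local_ref);
    c->local_ref = local_now;
    c->net_ref   = net_now;
}

/* Extrapolate network time between sync points.                   */
double osc_cal_time(const struct osc_cal *c, uint64_t local_now)
{
    return c->net_ref + c->sec_per_tick *
           (double)(local_now - c->local_ref);
}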

Hubs, for the most part, were "fixed delays". And, all ports saw the same data at the same time. (contrast with switches that store and forward with elastic stores)

Reply to
D Yuniskis

For real-time applications one could consider EtherCAT.

Reply to
Dombo

Anything is possible. Provided everything is done right, snake networks achieve picosecond-level accuracy over 100 Mbit Ethernet. However, this doesn't work over WiFi because of unpredictable delays.

BTW, I've done a project where I synchronized a network to microsecond accuracy over a voiceband wireless link.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I would not have bothered to reply if I had noticed who the OP was. All his posts seem to follow the general pattern "Let us hypothesize a universe constructed entirely of cheese. What is the possibility in such a universe of being eaten by a purple dragon between the hours of three and four on a Wednesday afternoon?"

Reply to
larwe

Was the existing network tweaked in any way? I.e., did you layer an application on top of an existing COTS network *without* any special constraints placed on it? If not, what did you have to "retrofit" in terms of equipment and/or topology? I.e., how painful/expensive was it for the customer to make this adaptation?

If the network architecture/implementation was designed intentionally for this application, what penalties/constraints were imposed? What were their costs (i.e., what was "traded" for those constraints)?

I've looked at a proprietary wireless approach using a timing beacon and hardware designed specifically for the application; it should be able to get O(50ns) by calibrating the receivers in manufacturing and just running things open loop. But I haven't looked at temperature effects on that accuracy, nor atmospheric effects (the wireless approach tends to suggest deployment over considerably larger areas -- square miles -- and I don't want to trade one set of problems for yet another :< "Gee, why can't we use this to replace our XYZ product, too?")

The wired approach silently and implicitly limits expectations (deploying miles of wire has a significant cost associated with it :> )

Reply to
D Yuniskis

Don,

I am a consultant. If you have a particular project, I can work on it. Contact info is at the web site.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

D Yuniskis wrote:

Reply to
Vladimir Vassilevsky

I'll run this by the client (after New Year's). Your location (KC) may be a problem for them (I'm sure you've dealt with that "issue" in the past... seems like most consultants have! :< ). (BTW, is "Stevenson's" still in business? I can't recall if they were in KC, MO or KC, KS.)

I may try some approaches on a personal project just to see how well they work "at no risk" (cf my "Bottom feeding..." posts elsewhere in this newsgroup). Timing constraints there aren't quite as important (since most folks have "tin ears" :> )

Thanks!

--don

Reply to
D Yuniskis

I remember enjoying Mr. Yuniskis' posts and the occasional interaction with him in the mid '90s, probably more on comp.realtime when it had a pulse. There are few folks I admire (you know who you are) and he's one of those. I don't remember when, but he went silent. Maybe I wasn't looking in the right places, but to me, he'd fallen off the map, or died, or something, based on the sudden silence. It's nice to see that he's back, if it's really him.

FWIW, I hold Don in high regard now from my perception of him then (again, the mid '90s). I understand what you're saying, but in my opinion he deserves more respect than you are giving him; then again, your respect is yours to give, and I respect that. Hey, you're a winner too!

--
Dan Henry
Reply to
Dan Henry

XMOS chips and the XLinks that they use for inter-core and inter-chip comms are completely deterministic:


formatting link

Leon

Reply to
Leon

Which has nothing to do with

"embed throughout the network fabric"

unless the network fabric is VERY small.

Obviously, you use full TCP/IP between these chips so that you can run

"things like NTP *try* to quantify this skew"

--
Paul Carpenter | snipped-for-privacy@pcserviceselectronics.co.uk
PC Services
Timing Diagram Font
GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
For those web sites you hate

Reply to
Paul Carpenter

They have Ethernet connectivity as well, with TCP/IP stacks available. They are fast enough to implement Ethernet in software.

Leon

Reply to
Leon

Whoosh...

Leon missed the point.

--
Paul Carpenter | snipped-for-privacy@pcserviceselectronics.co.uk
PC Services
Timing Diagram Font
GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
For those web sites you hate

Reply to
Paul Carpenter
