Comparing phase of physically distant signals

Hi Joerg,

Tooth settled down, yet?

You'd have to locate a piece of kit at each end of each link. I.e., three devices (assuming a common reference point) to test two links. (Or, hope nothing changes in the minutes or fractional hours that it takes you to redeploy for the "other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal with things like drift in the local oscillator]

That's what I've feared. Once you start having to support "test kit" your TCO goes up, fast! ("authorized service personnel", etc.)

Part of the appeal of legacy devices (60's) was that a "tech" could troubleshoot problems with common bits of test equipment. E.g., the original ECU's could be troubleshot (?) with a VOM... (no longer true, though that market is large enough that OBD tools are relatively inexpensive)

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless implementation)

This has to happen "nearly coincident" with "a)." to be able to ignore short term variations in the signals. E.g., if you were troubleshooting an analog PLL, you wouldn't measure the phase of the input signal against some "reference"; then, some seconds later, measure the phase of the synthesized signal against that same reference. Rather, you would measure the synthesized signal against the input signal "directly" so the measurements appear to be concurrent.

Also coincident with a & b. I.e., you want to deploy two pairs of radios and this "tick measurement" device to take/watch an instantaneous measurement free of jitter, short term uncertainty, etc.

I'm not claiming it can't be done. Rather, I'm trying to show the height at which it sets the bar for a "generic technician".

It may turn out that this becomes a specialized diagnostic procedure. I would just like to avoid that, where possible, as it makes such a system "less ubiquitous" in practical terms.

(Alternatively, *guarantee* that the system always knows when things are not in sync and have *it* report this problem)

At 256KHz sample rates, I think it is probably too slow to get the sort of resolution I would need (I'll examine the site more carefully to see if there is some "extended precision" scheme that could be employed)

--don

Reply to
Don Y

Yes, either that or you have to make it part of the standard on-board equipment (which I understand you don't want). There is no other way. The locations have to broadcast their timer ticks and that requires hardware.

I find it's even better today. Back in the 60's, just the thought of schlepping a Tektronix "portable" (as in "has a handle but take a Motrin before lifting") could make you cringe. Then you had to find a wall outlet, plug in, push the power button ... TUNGGGG ... thwock ... "Ahm, Sir, sorry to bug you but where is the breaker panel?"

Nowadays they already have laptops for reporting purposes and all you need to do is provide a little USB box and software. TI has a whole series of ready-to-go radio modules, maybe one of those can be pressed into service here. The good old days are ... today. Doing this wirelessly was totally out of the question in the 60's. The wrath of FCC alone was reason enough not to.

Wired works, as long as the infrastructure in the wiring won't get in the way too much (Hubs? Switches? Routers?).

Not really. How should an RF path change much in just a few minutes? Unless someone moves a massive metal file cabinet near the antennas, but you'd see that.

Since it seems you are not after picoseconds I don't see where the problem is. You can do that simultaneously, it's no problem, but it isn't necessary.

Take a look at SCADA software. Something like this could be pieced together and then all the tech would need to be told is "Hook all this stuff to the units, plug this gray box into a USB port on your laptop, click that icon over here, wait until a big green DONE logo appears, then retrieve all the stuff".

That would be by far the best solution and if I were to make the decision that's how it would be done.

Mostly this only samples at 44.1kHz, 48kHz or 96kHz, depending on the sound hardware you use. Unless I misunderstand your problem at hand, that isn't an issue. AFAIU all you are after is a time difference between two (or maybe some day more) events, not the exact occurrence of an event in absolute time. So if each event triggers a sine wave at a precise time you can measure the phase difference between two such sine waves transmitted by two different units. Combining it with a complex FFT of sufficient granularity you can calculate the phase difference down to milli-degrees. 1/10th of a degree at 10kHz is less than 30nsec and to measure that is a piece of cake.
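
Something like this minimal sketch shows the idea (the numbers are all illustrative: 48kHz sampling, a 10kHz tone, and a hypothetical 28ns skew between the two units):

import numpy as np

fs = 48_000    # sound-card sample rate (Hz)
f0 = 10_000    # test-tone frequency (Hz)
n = 1 << 16    # capture length; longer capture -> finer phase estimate

t = np.arange(n) / fs
skew = 28e-9   # hypothetical skew between the two units (seconds)
left = np.sin(2 * np.pi * f0 * t)            # tone from unit 1
right = np.sin(2 * np.pi * f0 * (t - skew))  # tone from unit 2, delayed

win = np.hanning(n)
peak = int(round(f0 * n / fs))               # FFT bin nearest the tone

def tone_phase(x):
    return np.angle(np.fft.rfft(x * win)[peak])

dphi = tone_phase(left) - tone_phase(right)  # phase difference, radians
print(dphi / (2 * np.pi * f0))               # recovered skew, ~28e-9 s

Because both channels carry the same frequency and see the same window, the leakage-induced phase offsets cancel in the difference.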

You can get much more accurate than that. In fact, one of the big projects I am involved in right now (totally different from yours) fully banks on that and we have our first demo behind us. Since the system aced it so well I tried to push it, measuring the phase shift in a filter when a capacitor goes from 100000pF to 100001pF. It worked.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Because that's all you can realise, in theory and in practice. If you want more and achieve it, please do publish: you'll be famous (or infamous).

I'm sorry, I don't understand the question; but I don't think that is important.

Fine.

Because the mere act of physical distribution brings constraints that are not present in non-distributed systems. Sorry; welcome to this universe.

In general it is a mistake to create a non-distributed system with the assumption that, later on, it can be split into distributed components with no changes. In my experience the /ability/ to split has to be architected into the design right from the start.

Read and understand the references I gave you.

The most relevant concept to your issues may be "logical clocks" which aren't directly coupled to wallclock time.

You asked for a means of synchronising time so that you could do X, when that is neither possible nor necessary to achieve X.

Having synchronised time is irrelevant to that.

Understanding "logical clocks" [ibid] may help you.

Even if you could still have long elastic test leads, the phase (which is equivalent to time) of the PLL will change as the elastic lead is elongated. Now consider three nodes arranged in a triangle so that each node is attached to two test leads. If only one lead is lengthened, should the PLL phase change or not?

I've spent a significant part of my professional life in metrology and distributed systems, and the test equipment and harnesses are *always* part of the system.

I strongly suspect you can achieve your requirements without having synchronised time; some form of "logical clock" should be sufficient.

Reply to
Tom Gardner

Nonsense. Do you mean there is NO WAY for two physically separated systems to have a common notion of "the passage of time"? That a "second" (pick your favorite unit) somehow differs when experienced by one system and another system "nearby" or "remote"? That a chunk of quartz magically vibrates differently because it's in one place and not another?

[Let's not be pedantic. Assume STP and non-astronomical distances]

I.e., do the hands of Big Ben revolve at a different rate than the clock in Times Square?

We rely on time (in "systems") to:

- provide a calibrated sense of elapsed time

- to note the order in which observations occurred

- to delay actions wrt "events"

- to have a common sense of "now" and "then" etc.

It's already been done, products sold with these capabilities, industries *relying* on these capabilities, etc.

See above. Do you claim that a nondistributed system *can't* have an idea of "passing time"? Delays? etc.

Sure! Every physical system imposes constraints! You can't resolve a femtosecond on a modern PC. "Oh, my! The sky is falling!!" So what? You don't apply a modern PC to a problem that would *require* that capability.

Exactly. Hence designing in this capability FROM THE START.

Node A sends a message to node B: "This is message 1. Be prepared to begin the time synchronization protocol" This happens at some "real" (physical) time as well as some *local* (wrt A) time. (It also will have happened at some local time wrt B but B might not have received it, yet.)

Node B receives the message AFTER it was sent (since the sending of the message was the causal event). Because the system was designed knowing that being able to observe the local (i.e., wrt B) clock is important IN A TIMELY, DETERMINISTIC FASHION, node B makes a note of the time at which the message arrived (i.e., B's time).

[This can be done with or without hardware assist depending on how precise your timing requirements are]

Some time later, node A sends a followup message saying "I noticed that *my* local clock indicated XXX when that message hit the wire (i.e., cleared the network protocol stack, was copied into the NIC's output buffer and eventually pushed out through the PHY onto the wire)"

[Again, this can be done with or without hardware assist]

Because node B is ready for this exchange (he has parsed the previous message in a timely fashion), when B receives the second message, it initiates an exchange with node A and, some time later, knows *when* that message "hits the wire".

Node A, like node B, knows when it received B's message (wrt A's local clock).

With these 4 timestamps, we know the transmission delay between node A and node B along with the "skew" between A's and B's local clocks. Each exchange is strictly causal.

A and B can share a common notion of "what time is it" to a resolution no worse than the round-trip time from A to B and, in practice, no worse than the one-way transit time.

[Note that this can be done in any number of equivalent ways; I've just chosen to illustrate PTP's approach]
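
In code, the four-timestamp arithmetic is just this (a minimal sketch; the names are mine, and it assumes the forward and return paths are symmetric -- any asymmetry shows up directly as an error in the computed offset):

def offset_and_delay(t1, t2, t3, t4):
    # t1: A's clock when A's message hit the wire
    # t2: B's clock when that message arrived
    # t3: B's clock when B's reply hit the wire
    # t4: A's clock when the reply arrived
    offset = ((t2 - t1) - (t4 - t3)) / 2  # B's clock minus A's clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way transit time
    return offset, delay

# Example: B runs 500ns ahead of A, one-way delay is 2us:
print(offset_and_delay(0.0, 2.5e-6, 10.0e-6, 11.5e-6))  # (5e-07, 2e-06)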

Old news.

You can relate a *physical* clock to wall time in exactly the same way.

The performance of this sort of algorithm depends on lots of implementation details:

- how stable are the local oscillators over time

- how much jitter your "timestamping mechanism" exhibits

- how stable the reference oscillator is

- how aggressively you apply corrections to each local clock (i.e., time can *never* go backwards!)

- how consistent transport delays are (do they vary over time; are they always routed the same way)

- how consistent individual message delays are (is the transport time in one direction different from the other direction; is the return path different from the forward path) etc.

But, it's relatively easy to get times (synchronization "tightness") O(1us). With care in controlling the traffic on the network/nodes *and* characterizing the NIC's involved along with other bits of network fabric, you can hit the ~200ns range without breaking a sweat.

Beyond that, hardware timestamps in NICs are necessary. Boundary clocks in switch fabric. etc.
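
To illustrate the "time can *never* go backwards" item above: corrections get applied by slewing the local rate rather than stepping the clock. A minimal sketch (the gain and clamp values are illustrative, not from any particular implementation):

class SlewedClock:
    def __init__(self, max_rate_ppm=500.0):
        self.rate = 1.0                     # local-time rate multiplier
        self.max_adj = max_rate_ppm * 1e-6  # clamp on rate corrections

    def correct(self, measured_offset, update_interval):
        # Spread the measured offset over the next update interval;
        # clamping the rate keeps local time strictly monotonic.
        adj = -measured_offset / update_interval
        adj = max(-self.max_adj, min(self.max_adj, adj))
        self.rate = 1.0 + adj

# E.g., a clock found to be 10us ahead, corrected once per second,
# runs 10ppm slow for the next second instead of jumping backwards:
c = SlewedClock()
c.correct(10e-6, 1.0)
print(c.rate)   # 0.99999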

You will note that neither NTP nor PTP violates the concepts he sets forth. Rather, they go beyond and allow you to put hard numbers on actual implementations of "time synchronization".

No. Note that I stated (original post) that I *already* do this:

"I synchronize the 'clocks' on physically distributed processors such that two or more different machines can have a very finely defined sense of 'synchronized time' between themselves."

I stated how I *measure* the goodness of my implementation:

"During development, I would measure this time skew (among other factors) by locating these devices side-by-side on a workbench interconnected by "unquantified" cable. Then, measuring the time difference between to "pulse outputs" that I artificially generate on each board."

drawing attention to the fact that my implementation cares not about how much interconnect cable is used -- where a foot of such wire could account for a skew of ~1.5ns!

And, how I exercise my implementation to certify its performance when challenged:

"So, I could introduce a disturbance to the system and watch to see how quickly -- and accurately -- the "clocks" (think FLL and PLL) come back into sync."

*Then*, I *ask*:

"How do I practically do this when the devices are *deployed* and physically distant (vs. "electrically distant" as in my test case)?"

I.e., electrically, nothing changes between deployment and the "test bench" -- the same amount of cable can be stretched out to allow the two nodes to be physically separated (by a distance not exceeding the length of that cable). But, I now can't *practically* verify the relationship of the "clocks" between the two nodes because "my arms aren't long enough" (nor are the test probes that I was using!)

How so? You are claiming node A and node B don't need a common sense of time. I am claiming they *do* -- hence the question.

E.g., the physical oscillators on each node may operate at harmonically unrelated frequencies. The timebases in each device may similarly operate at different frequencies (and resolutions) -- one might treat time as 2ms quanta while the other treats it as 3ms quanta.

But, they both think a second is a second. And, that "now" is "now" on both devices.

You don't lengthen *one* -- you lengthen *both*. I.e., both signals experience the same delay in transit... the phase relationship is retained at the "extended" tips of the cables (which COULD be different than at the *source* of leads)

How do I ensure that the audio signal from one loudspeaker is "synchronized" with the audio signal emitted by another (served by a different device)? That the audio emitted by both of these is synchronized to the *video* being displayed by a third device?

If I have a node detonate an explosive charge, how do I note the precise times at which the shock wave arrives at two *other* nodes physically separated from each other and the first node? (i.e., what if the medium encountered from detonator to first node is different than medium from detonator to second)

Getting precise time synchronization is *inexpensive* so that not taking advantage of it only makes many tasks unnecessarily harder.

Reply to
Don Y

Note that if you are properly receiving and using GPS signals to set your clock, you are correcting for the propagation delays (at least to the level they are predictable). GPS units will typically know the current time in the GPS time system to within a small number of nanoseconds, as much larger errors in time would make the position they compute very inaccurate.

The trick is getting this time out of the unit. GPS modules designed to provide ultra accurate times will have a signal that transitions at the precise rollover of the GPS time second (or a fraction thereof), which the external system can try to lock onto. (Your cheap consumer GPS won't even try to generate this.)
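
For example, capturing a free-running timer's count on each PPS edge measures the local oscillator against GPS time directly. A minimal sketch (the 10 MHz timer and the tick counts are made-up numbers):

def ppm_error(capture_prev, capture_now, nominal_hz):
    # Timer counts elapsed between two PPS edges one second apart.
    elapsed = capture_now - capture_prev
    return (elapsed - nominal_hz) / nominal_hz * 1e6

# A nominal 10 MHz timer that counted 10_000_023 ticks in one GPS
# second is running ~2.3 ppm fast:
print(ppm_error(0, 10_000_023, 10_000_000))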

The relativistic issue would be if the propagation time (of whatever is being used to observe) between the two systems exceeds the accuracy that you are trying to synchronize to; then different observers will conclude differently whether you maintained synchronization or not.

Reply to
Richard Damon


Pop science tells us there is no such thing as '"real" (physical) time'. Whether A happens before B depends on the location of the observer.

Instead of all this philosophizing, could you instead say what practical problem you are trying to solve? Does it still involve lawn sprinklers? If not, then what does it involve? Supplying this info will surely bring some helpful clarity. Thanks.

Reply to
Paul Rubin


Precisely. I've tried to give simple examples that would lead to that understanding, but have failed. (The last one in my previous message was about three nodes + three wires in a triangle, but he read it too quickly and replied as if it were two nodes + two wires).

I agree, but... his statement of his practical requirements always includes statements about the solution.

It would be more helpful if he could define /minimal/ "use cases" stating /only/ what he needs to /achieve/ by observations at specified nodes. If he did that he might be able to understand known techniques and solutions.

Reply to
Tom Gardner

You are confusing clocks and time.

There are some old expressions: "you can lead a horse to water, but you can't make it drink" and "those that don't understand history are condemned to repeat it".

Three nodes, three leads stretched tight. Now move one node at non-relativistic speed so that only one (perfectly elastic) lead changes length. Should the PLL change phase?

[I should have been more explicit about one node moving - apologies]
Reply to
Tom Gardner

It's not "philosophizing" -- it's a statement of an actual (standardized) implementation.

The problem I am trying to solve is:

*verifying* that an EXISTING, WORKING IMPLEMENTATION that synchronizes local clocks on physically (electrically) separated devices to BETTER THAN MICROSECOND LEVEL tolerances is *still* achieving that level of performance AFTER DEPLOYMENT... when the devices in question are no longer *physically* close enough together to be conveniently "probed" at the same time (as they are when examined on a test bench)

To state this, again, in a way that you can *physically* imagine:

I have two physical, tangible devices that consume power, generate heat, occupy space, etc. I have a wire from a digital output (under processor control) on each device. The software on each device will pulse this output periodically (a duty that serves no actual purpose in the operation of the device) when the local clock maintained by the software on the device thinks it is "time X" (X being an actual number). The crystal oscillators powering the two devices operate at different nominal frequencies. The two devices are interconnected by an ethernet (CAT5) cable that is >100 ft long (but of uncalibrated length).

When I turn both devices on and wait a while (depends on how I have the time constants for the capture loop set) and then connect the two digital outputs to a two channel scope, the difference in absolute time between the two pulses is O(

Reply to
Don Y

Temperature effects in the different locations will affect the common sense of timing.

You say you have implemented an LXI-like protocol on 1588. You now need extra confidence that there is a phase correlation between the two nodes, to some fine accuracy, such that you can be certain of the difference in phases of the same actions being asked of both distant nodes.

Someone mentioned GPS as a timing reference which, if you had such a receiver at each location, would assist in establishing how common a time-frame you were working to. If you are monitoring the action in each node you could record the GPS time-stamp and the node's own time-stamp as corrected by LXI in order to see how close you are to simultaneity. Other than that, equal-length cabling installed from each node would be the only other way you could determine this.

--
******************************************************************** 
Paul E. Bennett............... 
Reply to
Paul E. Bennett

Clocks measure the passing of time (in units).

To repeat the first sentence of my post: I synchronize the "clocks" on physically distributed processors such that two or more different machines can have a very finely defined sense of "synchronized time" between themselves. Was it not obvious from this that the clock was a hardware/software device that was used to *track* time? And, that I (already) had an implementation in place that caused these "mechanisms" (clocks) on the different processors to reflect a common notion of "now"?

Could you suggest a more explicit way of stating this? Without being overly pedantic??

[In some contexts, clocks are hardware devices while timers are software mechanisms]

Yes -- if the *electrical* length of the lead has changed then the propagation delay through that lead will change and, depending on the wavelength of the (periodic) signal being propagated along its length, the phase will change accordingly.

[Assuming it is a periodic signal so the idea of phase makes sense]

Hence my original comment that measuring the "relationship" (phase if considering a periodic signal; absolute time for anything aperiodic) between two physically separate signals could only be easily accomplished with equal length "test lead extensions".

If the difference in "extension lengths" is a foot, then the signal traveling down *that* extension will appear to be ~1.5ns delayed from the signal traveling down the shorter lead -- compared to the measurement that would have been obtained by comparing the actual "non-extended" pins.
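
For the record, the ~1.5ns/foot figure falls straight out of the propagation math (a minimal sketch; the 0.66 velocity factor is a typical assumption for cable dielectrics, not a measured value):

C = 299_792_458.0            # speed of light, m/s
FT = 0.3048                  # meters per foot

def cable_delay_ns(length_ft, velocity_factor=0.66):
    # Signal speed in the cable is velocity_factor * c.
    return length_ft * FT / (velocity_factor * C) * 1e9

print(cable_delay_ns(1.0))   # ~1.54 ns for one foot of cable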

Reply to
Don Y

I'm confused. I wrote:

> In general it is a mistake to create a non-distributed
> system with the assumption that, later on, it can be
> split into distributed components with no changes. In
> my experience the /ability/ to split has to be
> architected into the design right from the start.

and you replied:

> Exactly. Hence designing in this capability FROM THE START.

If you had built in the capability from the start, you wouldn't have to be asking these questions.

I think you'll find they aren't doing what you think they are doing :)

Reply to
Tom Gardner

As will changes in the cable, how it is routed, etc. That's too much detail for this question...

To be clear:

I have verified the algorithm's performance "on the bench" -- with varying interconnect lengths, various "disturbances" to the control loops, etc. I.e., "it works". It's ready to deploy.

Now, I want to stretch out the interconnect cable so that it is no longer practical to make the same *relative* measurements between the nodes. E.g., put one node on the 3rd floor of one building and another on the 5th floor of the building adjacent. I.e., no way in hell that I can *now* observe both systems concurrently! Even though I "know" they will exhibit the exact same degree of synchronization (unless something in the system *breaks*)

Yes. But that assumes I can get a signal from the GPS constellation wherever I might deploy these devices (metal buildings, away from windows, basements, etc.). If I can't, then I don't have any way of testing or troubleshooting AFTER DEPLOYMENT. (e.g., putting an antenna on the roof defeats the purpose! Now you're running a cable from the roof to each node, etc.)

Yes, but that seems impractical -- running a wire from A to B (up the stairwell, down the hall, etc.)

But, if you step back and look at the problem again, "schematically":

- you have two nodes separated by some physical, "difficult" distance

- you want to run a ("red") wire from each node to a measurement point

- you've already got a ("blue") wire somehow connecting these nodes

Why not figure out a way to use the existing (blue) wire for this purpose? I.e., the installer has already gone through the hassle of routing it through the walls, ceiling, stairwell, to get from A to B. Why duplicate that effort with a second (red) wire?

That's the direction I am pursuing currently... I'm pretty sure a "cheap" solution lies down that path!

Reply to
Don Y

I can prove that the design and implementation works. But, I can't ECONOMICALLY troubleshoot it after deployment. That doesn't mean I can't take advantage of a closely synchronized timebase over physically distributed devices. It just means that any problems suspected of being caused by dissynchrony would be more expensive to troubleshoot than I would like. E.g., embellish the existing run-time diagnostics to provide more detailed information and design a special piece of test equipment and possibly require special training to use it. (of course, if the subsystem *doesn't* fail in an invisible way, then this isn't necessary)

Great! Then you should be more than capable of telling us *why* this ISN'T the case! I can provide URLs if you'd like...

Reply to
Don Y

For example:

"WR combines IEEE1588-v2 timestamping with a synchronous physical layer based on Synchronous Ethernet. Sub-nanosecond accuracy is achieved by actively compensating the link delay using phase measurements done with Digital Dual Mixer Time Difference (DDMTD) phase detectors and on-line calibration of the gigabit transceivers' latencies. The measurements presented in the article show a long-term master-to-slave offset below 1 ns, while the precision is better than 20 ps rms for a 5 km single mode fiber connection."

Yeah, they use Gb fiber instead of my 100Mb copper but the idea of synchronizing (the devices' sense of) "time" to this level of precision is the same.

In the section entitled "Applications" (uncluttered by the technical description of the implementation):

"One field of applications of WR are distributed data acquisition systems, for example the OASIS system at CERN, which acts as a huge oscilloscope measuring thousands of signals coming from sources distant by several kilometers. WR can be used to accurately time tag the blocks of samples at the ADC cards and produce nanosecond-accurate time tags for the external trigger signals. Having the sample blocks with associated time tags, one can reconstruct the original time relations between the signals and the triggers in software and present the measurements to the operator as if it was displayed on a typical oscilloscope."

You will note the illustration (Fig 13) accompanying the text clearly shows physically distributed "ADC cards", trigger inputs, etc. *all* connected to the "White Rabbit Network" which provides the "synchronized (nanosecond level) notion of time" to all of these devices. "Thousands of signals" (no doubt from a large accelerator?) "coming from sources distant by several kilometers".

Sure *sounds* like what I'm trying to do (on a much smaller physical scale and coarser sense of time).

Time for me to get some *real* work done... Enjoy your reading!

--don

Reply to
Don Y

... as a non-distributed system.

So you didn't architect it and design it (and then implement it) as a distributed system.

But maybe distributed deployment has been added as a requirement at a relatively late stage.

Reply to
Tom Gardner

No, as a distributed system. I can insert hundreds of feet of wire between nodes and *prove* it works. I can then stretch out that wire and demonstrate that it *still* works (because the only thing that has changed is the physical locations of the individual nodes).

How does that follow from my statements? If I use an expensive piece of test equipment, I can verify it operates in its physically distributed state. I can use an *inexpensive* piece of test equipment to prove it works when electrically connected in the EXACT SAME MANNER (there are no "wireless" paths by which data are exchanged between these nodes).

I would like to be able to use a similar piece of inexpensive test equipment to troubleshoot WHILE DEPLOYED (and, in fact, I *can* use the same piece of equipment but it requires bringing a long spool of wire along)

You *claim* to have been involved in the design of distributed systems. Did you design and test by locating one device at one end of a football field and another at the other end -- running back and forth throughout the days, weeks, months that it took to develop the software and hardware? Or, did you *simulate* a deployed configuration ELECTRICALLY, on a bench, so you could sit, comfortably, and interact with every device in the system without leaving your chair? Devising tests in that simulated (though electrically and functionally equivalent) environment to prove that your design works and applying those tests to the system *before* undertaking the cost of deployment?

Sure sounds like you feel backed into a corner and are trying to avoid defending your EXPLICIT *claims* regarding the original question. That's OK. We can wait for your clarification.

Until you have something pertinent to add... (note: the email addresses of the CERN folks are probably available if you'd like to lecture them on why their idea won't work...)

Reply to
Don Y

I architected, designed, and implemented a global order fulfillment system with lax time constraints other than happens-before at a single global synchronisation location.

I architected, designed, and implemented telco systems that have soft real-time timing constraints, are completely reliant on happens-before for correct operation, and, by definition, there is no possibility of a single system-wide global time. However, time differentials were directly translated into money: that was the whole point of the system.

Great care was taken to ensure high availability, which implies remote[1] logging/diagnosis, remote debugging, remote control/recovery and remote upgrading. In addition, I insisted on building in tracing techniques that were able to prove that our system behaved correctly and that faults lay in other companies' systems that were attached to our system - which was commercially highly beneficial.

I suspect that would be more than sufficient for your requirements.

[1] Remote implies in a different country (or even continent), some of which I wouldn't visit voluntarily.
Reply to
Tom Gardner
