Comparing phase of physically distant signals

Time *finer* than the degree to which you can synchronize "doesn't exist". Just like time finer than the resolution of a "timer" doesn't exist (it exists, but you can't say anything definitive about it -- other than "this happened between X and Y").

If a device reports an observation/action at time X and some other device reports an observation/action (perhaps of a different event) at time Y, I want to be able to claim X preceded Y (or vice versa).

And, be able to cause an action to occur at a specific point in time *relative* to some other action/observation.

(I know I can't do this with picosecond resolution! But, milliseconds are effectively useless for many things!)

I can already get O(200ns) without adding extra hardware for more precise timestamping *at* the PHY (effectively). But, that's only because I have control of lots of other aspects of the system -- who says what, when, etc. In a more generic deployment (e.g., COTS) this would be more like O(2us).

My goal is to see how fine I can get without dramatically increasing recurring costs -- as this allows the "solution" to be applied to a larger set of problems.

But, regardless, I need to be able to measure this OFF the workbench (i.e., deployed) to verify functionality in situ.

E.g., an EE testing a PLL can look at the reference signal and synthesized signal within *inches* of each other on a PCB. I want that sort of capability but where the distances involved are "fractions of a mile" (awful long 'scope leads!! :> )

Reply to
Don Y

As long as pulse and response for the media test (ToF) happen via the same cable, it makes no difference whether its characteristics change, provided you repeat that ToF test before every transmitted timer tick of your measurement method.

It normally is. Unless you are next to a busy airport, freeway or high-speed rail. You also need to make sure the frequency/carrier you are using is free of interference, regardless of whether you use cables or wireless.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

That's fine provided the two devices are at the same point. If not, the concept of "before/after" is moot.

That's just fine since all observations and actions are occurring at one point.

Picoseconds matter inside ICs :) E.g. there are ICs which allow you to change the delay of digital signals in increments of, IIRC, 11ps, where 11ps is the internal delay of one gate. See the OnSemi NB6L295 for one example.

There's no conceptual problem with synchronising edges of repetitive signals to much less than the time-of-flight, but that's not synchronising the time.

Reply to
Tom Gardner

You might want to look into how NTP (the Network Time Protocol) works. It is based on having a fairly consistent delay (or making multiple measurements to average out the variations).

As I remember, the basic technique is to start with the two units A and B each having their own idea of what time it is; time advances at the same rate for both units, but there may be a skew between them.

Unit A sends a message to B, with the time stamp of when it sent the message. Unit B then receives the message, marks it with the received time stamp, and then resends it to A, again adding the time stamp of the transmission; when A gets the answer back, it adds this time as the fourth and final time stamp to the message.

From the 4 time stamps, you can calculate the time it took for a message to get between A and B, and the difference in their clocks, allowing A to adjust its clock to be in synchrony with B. The math assumes that the flight time in each direction is the same; any difference contributes directly to a residual skew between the clocks.
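
A minimal sketch of that four-timestamp math (the same arithmetic NTP and IEEE 1588 use); the names are illustrative:

    def ntp_offset_and_delay(t1, t2, t3, t4):
        # t1: A sends; t2: B receives; t3: B replies; t4: A receives.
        # Assumes the flight time is the same in both directions.
        offset = ((t2 - t1) + (t3 - t4)) / 2  # how far B's clock is ahead of A's
        delay = (t4 - t1) - (t3 - t2)         # time actually spent in flight
        return offset, delay

E.g., t1=0, t2=15, t3=16, t4=11 yields offset 10 and delay 10: B's clock is 10 units ahead and each leg took 5.
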

Reply to
Richard Damon

Bad news. Any measurement between two points is going to involve some medium in between. The trick is to not have the medium characteristics change during the measurement. That makes copper and dielectric a rather bad choice, and shoveling bits through repeaters, hubs and switches a good source of additional errors. Going through the air seems to offer the least drift, error, and jitter.

Please note that CAT5 or 6 is not much of an improvement over coax cable. The major source of drift is the elongation of the copper conductors with temperature. Whether that error is significant depends on the lengths involved, which you haven't disclosed.

I like Joerg's idea. Two reference pulses, sent by each end, measured at some known central location. You can also do it backwards. Have the known central location send a single reference pulse, and then have the two end points store and return the pulses after a stable and fixed delay.
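
A sketch of the skew computation that central-point idea implies, assuming the one-way delays from each end to the central location are already known (e.g. from a prior ToF test); names are illustrative:

    def skew_from_central(t_arrive_a, t_arrive_b, d_a, d_b):
        # Both ends emit a pulse at their local "tick". With perfectly
        # aligned clocks, the pulses reach the center d_a and d_b after
        # that tick; any residual difference is the clock skew (A minus B).
        return (t_arrive_a - d_a) - (t_arrive_b - d_b)
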

Incidentally, that works nicely for playing radio location via hyperbolic navigation (same as LORAN). I used it to create a line of position for a VHF/UHF mobile radio talking through a repeater. I used two identical scanner receivers to listen to the repeater input and the repeater output. The delay through the repeater is fairly stable and easily measured. Therefore, the mobile transmitter was somewhere along a line of position defined by a constant time difference between the repeater and my receivers. Today, I would have a computer calculate the hyperbolic line of position. In 1979(?), I used a road map, two push pins, a loop of string, and a pencil. I still can't decode what you're trying to accomplish, but this might give you some ideas.

Yep. If you figure out how to reflect the injected signal, you can probably live without having to transmit back any timing information.

What I'm not seeing are any distances involved or accuracy requirements. Also, what equipment or devices you have to work with. I'm floundering with bad guesses as to your intentions.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Hi Joerg,

Yes. I know folks who do variations of this with *hundreds* of pounds of cable! (since the cable has to be ruggedized if you want to be able to reuse it, etc.)

In some cases, yes. The third point ultimately gets tied into everything, though, as it is *the* reference (GrandMaster in PTP parlance).

But, if I can measure (test equipment, not *inside* the application) the skew between two distant devices, I can always bring the third device into the equation.

This doesn't happen during installation. The system self-calibrates. This is only an issue when you are troubleshooting something that *isn't* working.

I.e., how do you verify that "time" is correct? How do you verify that the local servo loop is tracking the current reference correctly? E.g., is the local clock currently *locked*? Is the loop filter currently configured correctly to track once locked (vs. acquisition)? What sort of jitter is the local loop experiencing? Is the reference seen as drifting? etc.

E.g., if the loops are out of sync (phase and/or frequency), the network speaker's output is out of sync with other network speakers (or network video), etc. Or, reproducing at incorrect pitch.

Internally, these manifest as buffer underruns or overruns (because the local notion of time differs from the system's notion of time). Is the problem *here*? Or, *there*?

You (I) want to be able to nudge the system and see how individual components react to those disturbances to give clues as to what's working and what isn't.

With calibrated network interconnects, the problem wouldn't exist! :> I'm trying to reduce the complexity of the measurement (verification?) system to *a* device -- preferably something non-esoteric (so folks don't have to buy a special piece of gear to verify proper operation and diagnose problems).

As I said (elsewhere), I can currently verify performance at the ~100-200ns level "with a scope". [I am trying to improve that by an order of magnitude.] But, that's with stuff sitting on a bench, within arm's reach (physically if not electrically).

So, I need a (external) measurement system that gives me at least that level of performance. Without necessitating developing yet another gizmo.

[E.g., the 'scope approach is great: set timebase accordingly; trigger off pulse from device 1; monitor pulse from device 2; count graduations on the graticule! This is something that any tech should be able to relate to...]
Reply to
Don Y

Don didn't mention any numbers with specific required tolerances, but the NTP solution may work to an extent. If Don wanted finer time-difference granularity then he should look at LXI as a time protocol method (used in finely timed instrumentation), which can, I believe, attain better than 40ns.

--
******************************************************************** 
Paul E. Bennett............... 
Forth based HIDECS Consultancy 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E. Bennett

It's way worse than that. Not only can you not "declare" that ordering ... such orderings simply may not _exist_ in the first place. In other words, you may be chasing a unicorn there.

In the full theoretical extreme, there is no such thing as "synchronicity" across _any_ spatial distance other than zero. Einstein killed that notion pretty well.

Practically, there is no such thing as "synchronous" to a tolerance of delta_t within a set-up of (signal path length) diameter d, if d/delta_t is larger than your signal propagation speed.

You've been strictly avoiding giving actual numbers, but you've mentioned "city block" and "dozens of nanoseconds". Even that puts you almost certainly beyond the limits of possibility. c can be memorized easily as one foot per nanosecond; cable-bound signalling tends to be 2/3 of that, i.e. 20 cm/ns. So unless your "city block" is much smaller than I think it would be, you're looking at a window of impossible synchronicity of around 500 nanoseconds. So no, there's no way you'll get to 100 ns or less.
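
The arithmetic behind that estimate, as a sketch (the block length is an assumption):

    c_ft_per_ns = 1.0            # light travels roughly one foot per nanosecond
    velocity_factor = 2.0 / 3.0  # typical for cable-bound signalling
    block_ft = 330.0             # assumed ~100 m "city block"
    print(block_ft / (c_ft_per_ns * velocity_factor))  # ~495 ns one-way delay
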
Reply to
Hans-Bernhard Bröker

No, they're not. Because over that distance, they cannot be. It's just flat-out impossible, because the thing you're trying to measure doesn't physically exist.

Reply to
Hans-Bernhard Bröker

What is the actual problem?

If one event occurred at 12:34:56.004.035 UTC and another at 12:34:56.004.036 UTC, there should not be a problem to determine the sequence of events (SoE), provided that the local clocks are within one microsecond.

If you have minutes or even weeks of time to synchronize the absolute time clocks in two devices, you should be able to get quite close, reliable time stamps. Of course, when the system is started for the first time, within the first few seconds of startup, the time stamps can be wildly off scale.

Reply to
upsidedown

[...]

The interconnects would calibrate themselves during the test. It doesn't matter whether it's a radio link or a wired link.

I am afraid you will have to develop a gizmo. Because there ain't enough of a market for what you are trying to do and I doubt there is off-the-shelf stuff. Except for very expensive test equipment that can probably be programmed to do this job (the stuff that costs as much as a decent car).

But you need a link to each unit. And this is also easy if a scope approach is acceptable (see the sketch after the steps below):

a. Provide a full-duplex radio link to each unit, meaning two different frequencies each. Can be purchased. Send a tone burst out, measure the phase difference when it comes back.

b. Do the same with unit B.

c. Listen for timer tick and measure delta.

d. Subtract difference found in steps a and b.
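
A sketch of steps a-d, assuming each measured phase difference has already been converted to a round-trip time; the function name is illustrative:

    def corrected_tick_delta(rtt_a, rtt_b, raw_delta):
        # rtt_a, rtt_b: tone-burst round-trip times to units A and B (steps a, b).
        # raw_delta: observed interval between A's and B's timer ticks (step c).
        # Step d: remove the difference in one-way path delays.
        return raw_delta - (rtt_a / 2.0 - rtt_b / 2.0)
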

You might want to take a look at Daqarta. I found it to be very precise when it comes to measuring phase delays:

formatting link

Not sure how far down in precision this app would go but it'll even give you a histogram of timing errors:

formatting link

The author is very responsive and knowledgeable when it comes to specialty applications you want to pursue. Speaking from experience here.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Even if they start out like that, they won't stay synchronized over any period of time.

You're putting yourself in the realm where fundamental physics denies the existence of what you're trying to achieve. The fact that one device sits two storeys higher than the other could already disrupt your synchronisation in the long run.

You said elsewhere that for a problem this small you plan on ignoring the theory of relativity. Well, here's the bad news: that won't help you, because that won't make RT ignore you.

It's a fallacy to believe that SRT only concerns very big or very fast objects. As you drive your timing requirements down, and the size of things up, sooner or later you always hit relativistic boundaries. Silicon designers learned that about a decade ago, when they found out that they just couldn't push CPU core frequencies any higher because they couldn't maintain synchronicity of clock edges across a region as small as 1 cm^2. You're trying to cross essentially the same boundary.

Having a long time to achieve synchronization only makes the problem worse. Think about it: now you're expecting those clocks to maintain a tolerance of one microsecond across several weeks. Several weeks is at least a million seconds. I.e. now you're requiring 1e-6 / 1e6 = 1e-12 relative clock speed tolerance. You would need atomic clocks or better for that, which I'm willing to bet your devices don't have, nor anywhere near it.
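
That arithmetic, spelled out (values as stated above):

    tolerance_s = 1e-6               # required clock agreement: one microsecond
    holdover_s = 1e6                 # "several weeks" is at least a million seconds
    print(tolerance_s / holdover_s)  # 1e-12 fractional frequency stability needed
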

Reply to
Hans-Bernhard Bröker

NTP is too coarse. Great for "people time" but not for "event time".

Yes, this is effectively what I have already implemented. I just cut some corners that pertain to interoperability as well as exploit other invariants that my system imposes so I don't have to add all the hardware that 1588 would "typically" require.

But, the problem I am trying to address falls more in the category of verification/troubleshooting.

I.e., system is in place. Nodes are physically separated (once deployed). Now, how do you verify that the "local clock" (sense of time) on node A is "closely synchronized" (frequency, phase and reference time) with the local clock on node B?

When the two nodes are sitting side by side on a bench, you can compare synthesized "output pulses". I.e., create some arbitrary event that happens often enough that you aren't waiting FOREVER for the next one to occur; yet not so often that confusion can arise as to whether the event/pulse you are observing from node A corresponds, wrt the reference time, to the same pulse you are observing from node B. (i.e., that the pulse sequence from one node isn't 360+ degrees out of phase wrt the other node)

You need some way to get the two "signals" to a point where they are physically close enough together to connect to *a* piece of test gear. Or, a way of comparing each, independently, to some reference (that is stable over the test period).

E.g., the naive approach is two equal lengths of cable that can be used to "extend" the signals from the two nodes to some common place. You can then trim the lengths of the cables to match their propagation delays to a tolerance exceeding that required for your measurement. It is easy to achieve a ~1.5 ft match between cable lengths and, thus, be able to ignore the differences in the cable lengths as a meaningful error source in your measurement.

OTOH, had you run *one* "extension cable" from node A over to node B, then the difference in signal path easily swamps the signal (time difference) being measured. E.g., at ~60 ft you're already up to 100ns. So, if side-by-side, node A and node B appeared to be within 100ns of each other, they now appear to be perfectly coincident *or* 200ns apart (depending on where the delay has been introduced).
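
A quick check of those cable numbers, as a sketch (the velocity factor is an assumed typical value):

    velocity_factor = 2.0 / 3.0        # signal speed in cable relative to c
    ns_per_ft = 1.0 / velocity_factor  # ~1.5 ns of delay per foot of cable
    print(1.5 * ns_per_ft)             # ~2.3 ns error from a 1.5 ft mismatch
    print(60 * ns_per_ft)              # ~90-100 ns from a single ~60 ft run
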

Using radio as the "signal extender" has similar problems regarding propagation delay (different media paths) as well as requiring in situ calibration (wire can be cut to length ahead of time).

Beyond "verification", there are issues for troubleshooting, etc. E.g., how do you know that time *is* synchronized to the degree The System thinks it is? If something is broke, it is just as likely to be the servo that does the synchronization ("I think everything is perfectly synchronized, *now*, so I will take no action to tweak the servo loop")

--don

Reply to
Don Y

This only works in a confined space [such as a lab cylinder]. In open water the bubbles cause an upwelling current which pulls in water from the sides, supporting the column and pushing low density water upward thus supporting whatever is floating in it.

Some years ago, a team attempted to test the bubble theory by sinking a full size yacht in a harbor. The more bubbles they produced, the *higher* the yacht rode on the upwelling current. The best they were able to achieve was to create a confused sea state that managed to swamp the yacht when it was heavily overladen.

OTOH, one giant bubble is likely to cause a problem - effectively opening a hole in the water into which a ship may drop. A large ship that buries its nose or breaks in the middle is unlikely to survive. The question is whether bubbles of such enormity can actually form.

More likely rogue waves and pop-up severe storms are responsible for most sinkings in the triangle. But methane releases might be the reason for some aircraft losses. It only takes about 2% methane to choke an engine. It's possible that a large release could achieve sufficient concentration over a wide enough area that a plane flying through it would stall and not be able to restart. Methane also confuses pressure altimeters, which show the plane climbing as the methane concentration increases. At night or in low visibility, it's possible this might trick a pilot into diving right into the water.

YMMV, George

Reply to
George Neuner

Interesting and relevant.

However, having talked to people that work on North Sea gas rigs, one of the things they fear is a rupture of the pipe underwater. Gas in air => no helicopter rescue (as you point out below) and gas in water => their lifeboats sink. Whether their fear is justified is outside my field of knowledge, and I don't know how relevant the Deep Water Horizon explosion would have been.

That is a good question. I suspect that it might be possible under infrequent circumstances. AFAIK undersea hydrates are "fragile" in that they do tend to flash into gaseous methane. So, what could provoke a flash?

First question: are clathrates present there? Definitely, and in large quantities. Even if they don't come from decaying organic matter, there are obviously plenty of sub-surface sources (well, for the next few decades anyway!)

Clathrates will of course form where they are only marginally stable, but then small disturbances will quickly destroy them. The deep sea environment is relatively stable and slowly changing so the clathrates could build up and not be "recycled" into methane until...

An earthquake would have sufficient power over a widespread area to cause clathrates to flash.

A mudslip ditto, with the added effect that clathrates flashing at the top of a slope could set off a small mudslip which then triggers flashing down the slope in a self-reinforcing avalanche. It is a moot point whether the avalanche causes the flash, or vice versa!

The debris from underwater avalanches is visible in several places in the "Bermuda triangle". 2+2 =?

Undoubtedly a major contender, but IIRC some mysteries occurred in relatively good weather. No, I can't remember details since I haven't looked at such things since the 70s :)

Visual height estimation when flying at night over water is notoriously error prone too.

Reply to
Tom Gardner


With GPS/GLONASS etc., as long as there is a sufficient number of satellites in view, your local clock will be updated quite frequently, so in reality the local clock's short-term stability is the issue; the long term (aging etc.) is not much of an issue, provided that satellite navigation is available. With more and more countries running their own navigation systems, there is a reduced risk that a single country could disrupt the service by unilateral decision.

OK, so you want to put some relativistic issues into the picture.

A device on the upper floor will "orbit" the earth on a path about 10 m longer than the device on the lower floor; thus the device on the upper floor has about a 0.25 ppm higher velocity than the device on the lower floor. The whole building moves at only about 1 ppm of the speed of light, so it takes quite a long time until any relativistic effects can be observed between the floors.

The gravity at upper floor is slightly lower than on the lower floor, so again, this will have some minor relativistic issues.

But again, with only daily time updates, these errors should be easily compensated.

Big parallel synchronous systems are going to fail simply because of propagation delays, long before relativistic issues are included.

There seems to be some misunderstanding here. I did not say that two measurements should be taken a week apart. I was suggesting that measurements should be taken as often as possible and the result averaged over a week or two.

Think about a geodetic surveying terminal: one that sits a week or two in the same place will get the location of the receiver (actually the antenna phase center) to within a millimeter or two. I would assume that the timing accuracy could also increase in a similar way.

My naive approach would be to get timing signals from the satellites as often as possible (say every second), calculate the difference from my local clock and use some Kalman filtering to get rid of bad samples. Assuming random timing errors, and that my local clock is synchronized with the reference, an equal amount of "early" and "late" samples should be available, i.e. the sum of differences should be zero. When the sum of differences is not zero, you should adjust your own local clock.
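
A minimal sketch of that approach, with a simple outlier rejection standing in for the Kalman filter (thresholds are illustrative):

    def clock_correction(samples, reject_sigma=3.0):
        # samples: list of (gps_time - local_time) differences, one per second.
        mean = sum(samples) / len(samples)
        std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
        kept = [s for s in samples if abs(s - mean) <= reject_sigma * std]
        kept = kept or samples        # keep everything if all were rejected
        return sum(kept) / len(kept)  # nonzero mean => steer the local clock
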

Multiple manufacturers claim timing errors in the tens of nanoseconds range even a few minutes after switch-on, so apparently they use something better than my naive approach.

Reply to
upsidedown

Since you are building a distributed system, can I suggest that it would be beneficial to you if you are aware of the theory and practice that has been developed over the past 40 years. That might save you time by avoiding going down blind alleys.

In particular you will find that you probably *don't need* *to synchronise time*, if all you are interested in is which event occurred first.

Most work stems from Lamport's seminal 1978 paper "Time, clocks, and the ordering of events in a distributed system" which is easily locatable via a google search, e.g.

formatting link
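
For reference, a minimal Lamport logical clock in the spirit of that paper; it yields a causal ordering of events without synchronizing any wall-clock time:

    class LamportClock:
        def __init__(self):
            self.time = 0

        def local_event(self):
            self.time += 1            # every local event advances the clock
            return self.time

        def send(self):
            self.time += 1            # sending is an event, too
            return self.time          # timestamp carried in the message

        def receive(self, msg_time):
            # take the later of the two clocks, then step past the send event
            self.time = max(self.time, msg_time) + 1
            return self.time
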

I've used this book, which is sufficiently well-regarded that it has been reprinted four times since 1994.

formatting link
In particular, chapter 10, "Time and Coordination", has useful info on the general problem: synchronisation, logical time and logical clocks.

No doubt other distributed system text books cover similar ground.

Reply to
Tom Gardner

And each of those updates to an individual station will _disrupt_ your carefully achieved synchronization. So that's not actually helping --- it's making things worse by introducing yet another source of variation.

And relativity is the law that tells you that those propagation delays will _never_ be smaller than distance / c. It sets a rigid lower limit on those delays worth keeping in mind.

Reply to
Hans-Bernhard Bröker

Who claimed that "all (I am) interested in is which event occurred first"?

Is that the only use you have for time in a *uniprocessor* system? Why would you *only* have that need in a *distributed* system?

My system is physically distributed because it is far more economical and practical to design it in that way. But, that doesn't mean I should not avail myself of the same sorts of capabilities that would have existed had it NOT been distributed.

How do you cause two effectors to be engaged "simultaneously" (or, with *any* known *fine* timing relationship to each other) unless you have a common sense of time?

How do you triangulate the location of an event if you don't know your positions accurately *and* have a common reference against which to operate?

Note that the issue isn't *synchronizing* the clocks. There are well-known technologies that will do this.

The question I posed was how to measure/verify/validate/troubleshoot such a system ONCE DEPLOYED.

E.g., you can measure/verify/validate/troubleshoot how well a *hardware* PLL is working with a pair of probes on a bench (choice of test equipment can vary). But, in a physically distributed system where the distances are "longer than the test leads", just connecting to the signals of interest becomes a problem in itself! When does the "verification device" become a variable as well?

Reply to
Don Y
