Comparing phase of physically distant signals

That can be done via RF or cable. With RF you can send out a pulse train modulated onto a carrier whenever unit A says it is high noon. Then send a similar pulse train when unit B thinks it's high noon, but this one has to be on another carrier frequency so they can occur simultaneously.

The simplest way to compare the transmission times would be zero-crossing detectors on the demodulated signals. A complex FFT and correlation are other methods. All you need to know is the phase difference, but you want to know it quite precisely.
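As a rough sketch of the zero-crossing approach (Python with NumPy; the sample rate, tone frequency, and 5-degree shift are made-up illustration values, not anything from the posts here):

```python
import numpy as np

FS = 48_000      # sample rate, Hz (assumed)
F_TONE = 1_000   # demodulated tone, Hz (assumed)

def first_rising_zero_crossing(x, fs):
    """Time (s) of the first negative-to-positive crossing,
    linearly interpolated between samples for sub-sample resolution."""
    s = np.sign(x)
    idx = np.where((s[:-1] <= 0) & (s[1:] > 0))[0][0]
    frac = -x[idx] / (x[idx + 1] - x[idx])   # sub-sample fraction
    return (idx + frac) / fs

t = np.arange(4096) / FS
a = np.sin(2 * np.pi * F_TONE * t)                   # unit A's tone
b = np.sin(2 * np.pi * F_TONE * t - np.radians(5))   # unit B, 5 deg late

dt = first_rising_zero_crossing(b, FS) - first_rising_zero_crossing(a, FS)
phase_deg = dt * F_TONE * 360
print(f"phase difference = {phase_deg:.2f} degrees")   # close to 5
```

Averaging over many crossings, after narrowing the bandwidth to knock the noise down, is what pushes the resolution from this toy level down toward milli-degrees.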

I have recently done something similar but using the sound card of a PC. It was possible to reliably detect phase shifts in the milli-degree range. You have to make sure that the bandwidth after processing is low, to knock the noise down far enough. No big deal, because you can let each unit transmit long enough.

I don't see why that would not work, provided that any hardware between pinger and "pingee" isn't flopping around too much in terms of phase noise.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

As long as pulse and response for the media test (ToF) happens via the same cable it makes no difference whether its characteristics change. Provided you repeat that ToF test before every transmitted timer tick of your measurement method.

It normally is. Unless you are next to a busy airport, freeway or high-speed rail. You also need to make sure the frequency/carrier you are using is free of interference, regardless of whether you use cables or wireless.

Reply to
Joerg

You might want to look into how NTP (the Network Time Protocol) works. It is based on having a fairly consistent delay (or making multiple measurements to average out the variations).

As I remember, the basic technique starts with the two units A and B each having their own idea of what time it is; time advances at the same rate for both units, but there may be a skew between them.

Unit A sends a message to B, with the time stamp of when it sent the message. Unit B receives the message, marks it with the receive time stamp, and then resends it to A, again adding the time stamp of the transmission. When A gets the answer back, it adds this time as the fourth and final time stamp to the message.

From the 4 time stamps, you can calculate the time it took for a message to get between A and B, and the difference in their clocks, allowing A to adjust its clock to be in synchrony with B. The math assumes that the flight time in each direction is the same; any difference between them shows up directly as residual skew between the clocks.
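A minimal sketch of that four-timestamp calculation (the standard NTP formulas; the numbers are invented for illustration):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """t1: A sends; t2: B receives; t3: B sends reply; t4: A receives.
    Returns (offset of B's clock relative to A's, round-trip delay),
    assuming the flight time is the same in both directions."""
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2
    return offset, delay

# B's clock reads 0.5 s ahead of A's; one-way flight time is 0.1 s:
offset, delay = ntp_offset_delay(t1=10.0, t2=10.6, t3=10.7, t4=10.3)
print(offset, delay)   # offset ~ 0.5, delay ~ 0.2
```

Note how an asymmetric path violates the assumption: all of the unmeasurable asymmetry lands in the offset estimate, just as described above.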

Reply to
Richard Damon

Bad news. Any measurement between two points is going to involve some medium in between. The trick is to not have the medium characteristics change during the measurement. That makes copper and dielectric a rather bad choice, and shoveling bits through repeaters, hubs and switches a good source of additional errors. Going through the air seems to offer the least drift, error, and jitter.

Please note that CAT5 or 6 is not much of an improvement over coax cable. The major source of drift is the elongation of the copper conductors with temperature. Whether that error is significant depends on the lengths involved, which you haven't disclosed.

I like Joerg's idea. Two reference pulses, sent by each end, measured at some known central location. You can also do it backwards. Have the known central location send a single reference pulse, and then have the two end points store and return the pulses after a stable and fixed delay.

Incidentally, that works nicely for playing radio location via hyperbolic navigation (same as LORAN). I used it to create a line of position for a VHF/UHF mobile radio talking through a repeater. I used two identical scanner receivers to listen to the repeater input and the repeater output. The delay through the repeater is fairly stable and easily measured. Therefore, the mobile transmitter was somewhere along a line of position defined by a constant time difference between the repeater and my receivers. Today, I would have a computer calculate the hyperbolic line of position. In 1979(?), I used a road map, two push pins, a loop of string, and a pencil. I still can't decode what you're trying to accomplish, but this might give you some ideas.
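For what it's worth, the string-and-pushpins construction corresponds to this arithmetic (a sketch; the receiver coordinates and tolerance are invented):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def on_line_of_position(p, rx1, rx2, tdoa_s, tol_m=1.0):
    """True if point p lies (within tol_m metres) on the hyperbolic
    line of position defined by a constant difference in arrival
    times between receivers rx1 and rx2."""
    range_diff_m = math.dist(p, rx1) - math.dist(p, rx2)
    return abs(range_diff_m - tdoa_s * C) < tol_m

# Receivers 10 km apart; a transmitter at (3000, 4000) is 5000 m from
# rx1 and about 8062 m from rx2, so its signal reaches rx1 first:
tdoa = (5000.0 - math.sqrt(65e6)) / C    # roughly -10.2 microseconds
print(on_line_of_position((3000.0, 4000.0), (0.0, 0.0), (10000.0, 0.0), tdoa))
# -> True
```

The set of all points passing this test is one branch of a hyperbola with the two receivers as foci, which is exactly the LORAN-style line of position.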

Yep. If you figure out how to reflect the injected signal, you can probably live without having to transmit back any timing information.

What I'm not seeing are any distances involved or accuracy requirements. Also, what equipment or devices you have to work with. I'm floundering with bad guesses as to your intentions.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Cisco has IEEE 1588 gear, but you need both host cards and the switch. There are easily half a dozen companies making the chips, but the hardware hasn't become mainstream yet.

For GPS timing, you need to ignore satellites at the horizon. Using a GPS for timing is a bit different than using one for position. Some of the GPS timing antennas are designed to ignore the horizon, but this isn't universal.

You can also synchronize from cellular signals, especially LTE.

Reply to
miso

Hi Joerg,

Yes. I know folks who do variations of this with *hundreds* of pounds of cable! (since the cable has to be ruggedized if you want to be able to reuse it, etc.)

In some cases, yes. The third point ultimately gets tied into everything, though, as it is *the* reference (GrandMaster in PTP parlance).

But, if I can measure (test equipment, not *inside* the application) the skew between two distant devices, I can always bring the third device into the equation.

This doesn't happen during installation. The system self-calibrates. This is only an issue when you are troubleshooting something that *isn't* working.

I.e., how do you verify that "time" is correct? How do you verify that the local servo loop is tracking the current reference correctly? E.g., is the local clock currently *locked*? Is the loop filter currently configured correctly to track once locked (vs. acquisition)? What sort of jitter is the local loop experiencing? Is the reference seen as drifting? etc.

E.g., if the loops are out of sync (phase and/or frequency), the network speaker's output is out of sync with other network speakers (or network video), etc. Or, reproducing at incorrect pitch.

Internally, these manifest as buffer underruns or overruns (because the local notion of time differs from the system's notion of time). Is the problem *here*? Or, *there*?

You (I) want to be able to nudge the system and see how individual components react to those disturbances to give clues as to what's working and what isn't.

With calibrated network interconnects, the problem wouldn't exist! :> I'm trying to reduce the complexity of the measurement (verification?) system to *a* device -- preferably something non-esoteric (so folks don't have to buy a special piece of gear to verify proper operation and diagnose problems).

As I said (elsewhere), I can currently verify performance at the ~100-200ns level "with a scope". [I am trying to improve that by an order of magnitude.] But, that's with stuff sitting on a bench, within arm's reach (physically if not electrically).

So, I need a (external) measurement system that gives me at least that level of performance. Without necessitating developing yet another gizmo.

[E.g., the 'scope approach is great: set the timebase accordingly; trigger off the pulse from device 1; monitor the pulse from device 2; count graduations on the graticule! This is something that any tech should be able to relate to...]
Reply to
Don Y

Mine isn't a "compliant" implementation. Much less expensive for the same level of performance (because I have other constraints that I can impose on The System -- that Cisco et al. can't!)

I'm not sure how well those external sources would work in an arbitrary structure, etc. E.g., I can't operate any of my GPS units indoors, here (residence) -- all of them need a clear view of the sky. I suspect they wouldn't work in a conference room deep inside some generic building -- undoubtedly fabricated out of lots of METAL!

[OTOH, a radio *designed* for this (my) purpose would have to take that into account]
Reply to
Don Y

Don, If you don't find an answer here you might try the "time nuts" forum.

George H.

Reply to
George Herold

Sure you can operate GPS indoors. Many of the current chipsets are sufficiently sensitive to work through windows.

I live in the worst case location, in the middle of a forest surrounded by 50 meter trees. My view of the sky is through a few holes in the canopy which had grown over the house and road. Yet, various GPS boards work just fine. However, I cheat. I have lots of glass in the walls and plastic skylights in the ceiling.

Google for "gps repeater".

You can also build a GPS repeater. I've built 3 so far in various buildings. Put a really good antenna on the roof, add 30dB of gain, run some coax, maybe another 10dB of power gain, an indoor antenna, and you have GPS. However, that only works for timing, not location. The problem is with such a derangement, the indicated location is the phase center of the GPS antenna and NOT the location of the receiver. Even if you put the antenna on the roof, and run coax cable directly to your receivers, you'll still get the location of the rooftop antenna, not the receiver.

Reply to
Jeff Liebermann

Don didn't mention any numbers with specific required tolerances, but the NTP solution may work to an extent. If Don wants finer time-difference granularity, he should look at LXI as a time protocol method (used in finely timed instrumentation), which can, I believe, attain better than 40ns.

--
******************************************************************** 
Paul E. Bennett............... 
Forth based HIDECS Consultancy 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E. Bennett

[...]

The interconnects would calibrate themselves during the test. It doesn't matter whether it's a radio link or a wired link.

I am afraid you will have to develop a gizmo. Because there ain't enough of a market for what you are trying to do and I doubt there is off-the-shelf stuff. Except for very expensive test equipment that can probably be programmed to do this job (the stuff that costs as much as a decent car).

But you need a link to each unit. And this is also easy if a scope approach is acceptable:

a. Provide a full-duplex radio link to each unit, meaning two different frequencies each. These can be purchased. Send a tone burst out to unit A, measure the phase difference when it comes back.

b. Do the same with unit B.

c. Listen for timer tick and measure delta.

d. Subtract difference found in steps a and b.
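Step d amounts to subtracting the one-way link delays inferred in steps a and b. A minimal sketch, assuming each one-way delay is half the measured round trip (the same symmetry assumption NTP makes):

```python
def corrected_tick_skew(delta_observed, rtt_a, rtt_b):
    """Tick skew between units A and B as seen at the measurement
    point, corrected for unequal link delays. rtt_a/rtt_b are the
    round-trip times measured in steps a and b; the links are assumed
    symmetric, so one-way delay is half the round trip."""
    return delta_observed - (rtt_b - rtt_a) / 2

# Ticks actually simultaneous; A's link is 300 ns one way, B's 500 ns,
# so the measurement point sees B's tick 200 ns after A's:
print(corrected_tick_skew(delta_observed=200, rtt_a=600, rtt_b=1000))
# -> 0.0
```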

You might want to take a look at Daqarta. I found it to be very precise when it comes to measuring phase delays:

formatting link

Not sure how far down in precision this app would go but it'll even give you a histogram of timing errors:

formatting link

The author is very responsive and knowledgeable when it comes to specialty applications you want to pursue. Speaking from experience here.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

My protocol is essentially PTP (much finer-grained control than NTP). A GPS-based time reference *could* work -- if the signal were always available inside the building (steel construction, etc.)

But, I would still need some way of interconnecting the reference with the UUT -- i.e., a low jitter PPS output that I could use as a reference against which to compare the corresponding reference from the UUT.

Then, repeat the experiment at other node and hope nothing in The System has changed in the intervening time.

Again, I'm using PTP (sort of). And, have no particular concern as to how well the "System Time" (as experienced on each of the nodes in question) relates to "Wall (Clock) Time". The point is to have a common timebase and reference throughout the system (then, worry about how this relates to "Human" time)

Reply to
Don Y

Thanks! I'll dig through their archives and see what sorts of subjects they address (along with getting a feel for the SNR, zealotry, etc.)!

--don

Reply to
Don Y

Just examine the way NIST used to do it in the POTS modem days.

They account for your machine delays and everything. It's all taken from the send/receive pings and returns and timing those delays, etc.

We used to use an app called "Timeset" on our modem, and it could consistently resolve and set your time to better than 0.1 second.

Doing it over the net is likely a bit more difficult, but the same process, nonetheless.

Reply to
TunnelRat

Hi Joerg,

Tooth settled down, yet?

You'd have to locate a piece of kit at each end of each link. I.e., three devices (assuming a common reference point) to test two links. (Or, hope nothing changes in the minutes or fractional hours that it takes you to redeploy for the "other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal with things like drift in the local oscillator]

That's what I've feared. Once you start having to support "test kit" your TCO goes up, fast! ("authorized service personnel", etc.)

Part of the appeal of legacy devices (60's) was that a "tech" could troubleshoot problems with common bits of test equipment. E.g., the original ECU's could be troubleshot (?) with a VOM... (no longer true though that market is large enough that OBD tools are relatively inexpensive)

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless implementation)

This has to happen "nearly coincident" with "a)." to be able to ignore short term variations in the signals. E.g., if you were troubleshooting an analog PLL, you wouldn't measure the phase of the input signal against some "reference"; then, some seconds later, measure the phase of the synthesized signal against that same reference. Rather, you would measure the synthesized signal against the input signal "directly" so the measurements appear to be concurrent.

Also coincident with a & b. I.e., you want to deploy two pairs of radios and this "tick measurement" device to take/watch an instantaneous measurement free of jitter, short-term uncertainty, etc.

I'm not claiming it can't be done. Rather, I'm trying to show the height at which it sets the bar for a "generic technician".

It may turn out that this becomes a specialized diagnostic procedure. I would just like to avoid that, where possible, as it makes such a system "less ubiquitous" in practical terms.

(Alternatively, *guarantee* that the system always knows when things are not in sync and have *it* report this problem)

At 256 kHz sample rates, I think it is probably too slow to get the sort of resolution I would need. (I'll examine the site more carefully to see if there is some "extended precision" scheme that could be employed.)

--don

Reply to
Don Y

Yes, either that or you have to make it part of the standard on-board equipment (which I understand you don't want). There is no other way. The locations have to broadcast their timer ticks and that requires hardware.

I find it's even better today. Back in the 60's, just the thought of schlepping a Tektronix "portable" (as in "has a handle but take a Motrin before lifting") could make you cringe. Then you had to find a wall outlet, plug in, push the power button ... TUNGGGG ... thwock ... "Ahm, Sir, sorry to bug you but where is the breaker panel?"

Nowadays they already have laptops for reporting purposes and all you need to do is provide a little USB box and software. TI has a whole series of ready-to-go radio modules, maybe one of those can be pressed into service here. The good old days are ... today. Doing this wirelessly was totally out of the question in the 60's. The wrath of FCC alone was reason enough not to.

Wired works, as long as the infrastructure in the wiring won't get in the way too much (Hubs? Switches? Routers?).

Not really. How should an RF path change much in just a few minutes? Unless someone moves a massive metal file cabinet near the antennas, but you'd see that.

Since it seems you are not after picoseconds, I don't see where the problem is. You can do that simultaneously, it's no problem, but it isn't necessary.

Take a look at SCADA software. Something like this could be pieced together, and then all the tech would need to be told is "Hook all this stuff to the units, plug this gray box into a USB port on your laptop, click that icon over here, wait until a big green DONE logo appears, then retrieve all the stuff".

That would be by far the best solution and if I were to make the decision that's how it would be done.

Mostly this only samples at 44.1kHz, 48kHz or 96kHz, depending on the sound hardware you use. Unless I misunderstand your problem at hand, that isn't an issue. AFAIU all you are after is a time difference between two (or maybe some day more) events, not the exact occurrence of an event in absolute time. So if each event triggers a sine wave at a precise time you can measure the phase difference between two such sine waves transmitted by two different units. Combining it with a complex FFT of sufficient granularity you can calculate the phase difference down to milli-degrees. 1/10th of a degree at 10kHz is less than 30nsec and to measure that is a piece of cake.
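A sketch of that complex-FFT phase comparison with simulated tones (sample rate and tone frequency are chosen so the tone lands exactly on an FFT bin; note that 0.1 degrees at 10 kHz is indeed about 27.8 ns):

```python
import numpy as np

FS = 48_000       # assumed sound-card sample rate, Hz
F_TONE = 10_000   # test tone, Hz
N = 48_000        # one second of data -> 1 Hz bins; tone sits on a bin

t = np.arange(N) / FS
a = np.sin(2 * np.pi * F_TONE * t)                      # unit A's tone
b = np.sin(2 * np.pi * F_TONE * t - np.radians(0.1))    # unit B, 0.1 deg late

k = F_TONE * N // FS                                    # FFT bin of the tone
dphi = np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])
measured_deg = -np.degrees(dphi)                        # b lags a
dt_ns = measured_deg / 360 / F_TONE * 1e9

print(f"{measured_deg:.4f} deg = {dt_ns:.1f} ns")       # ~0.1 deg, ~27.8 ns
```

With real, noisy captures you would average many such blocks (or lengthen the capture) to push the uncertainty down; with clean data the bin phase is exact to floating-point precision.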

You can get much more accurate than that. In fact, one of the big projects I am involved in right now (totally different from yours) fully banks on that and we have our first demo behind us. Since the system aced it so well I tried to push it, measuring the phase shift in a filter when a capacitor goes from 100000pF to 100001pF. It worked.

Reply to
Joerg

It's late, so pardon that I've not really thought this idea through completely... But I'm just wondering if one could cobble up a phase-locked loop (or loops) using one or more broadcast FM stations as the "timebase". FM receivers are cheap. Of course, the solution relies upon 3rd-party actions you can't control, so this might not be the optimum solution.

Reply to
mpm

I do my best work under cover of darkness.

Sure. One doesn't really need any manner of specialized signal from the FM, TV, WWV, paging, VHF WX, or whatever radio signal. All that's needed is a common reference signal (i.e. start timer) that can be heard by both ends of the measuring system. The reference doesn't even need to be accurately timed. A random pulse or carrier shift will suffice. That's the nice thing about using an independent third reference... there's no accuracy requirement. However, some care must still be paid to avoid noise-induced jitter and trigger drift.
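The bookkeeping for such a common-view measurement is trivial, which is part of the appeal. A sketch (names and numbers are illustrative; the path delays only matter if the two paths differ):

```python
def common_view_skew(stamp_a, stamp_b, path_a=0.0, path_b=0.0):
    """Clock offset of B relative to A, from both units timestamping
    the same off-air event on their local clocks. The event's unknown
    absolute time cancels out; only the difference in propagation
    paths from the transmitter matters, if it is known at all."""
    return (stamp_b - path_b) - (stamp_a - path_a)

# Event happens at unknown time T. A's clock is correct and its path
# delay is 10 us; B's clock runs 1 us fast and its path is 15 us:
T = 123.456789
skew = common_view_skew(stamp_a=T + 10e-6,
                        stamp_b=T + 15e-6 + 1e-6,
                        path_a=10e-6, path_b=15e-6)
print(skew)   # ~ 1e-6: B is 1 us ahead
```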

As you suggest, one could also phase lock to an FM broadcast carrier. With a narrow-BW PLL, that will also produce a suitable reference that can be heard indoors. However, there's no guarantee that two such receivers will have identical delay through their IF filters. It can probably be done with a narrow IF filter having a Gaussian response (minimum group delay) and an AFC to center the FM signal. Keeping FM modulation artifacts out of the IF bandpass may be a problem[1]. It's also more complicated than just a simple pulse trigger system and timer, but it can be made to work.

[1] I had this problem with a Doppler direction finder (AN/SRD-22).
Reply to
Jeff Liebermann

The analog TV signal was a good source of timing reference: it was locked to a high-quality network rubidium oscillator and had a complex structure offering easy reference points. For TV reception an outdoor antenna was often used, reducing the problem with multipath.

However, with indoor reception there are often quite deep multipath nulls, some so narrow they take out only a few of the broadcast FM sidebands, causing severe distortion in the audio, or take out a whole communication-quality FM channel (12.5-25 kHz).

At the null point, two multipath signals cancel each other at 180 degrees of path difference. A few kHz below the null one of the multipath signals is stronger, while above the null the other one, with a different path length, dominates.

A practical, but not so accurate, way of using the FM signal is to monitor the 19 kHz pilot tone, but you still need some method of finding out which cycle is which. Some data service at 57 kHz might be usable to identify cycles. Frequent phase disruptions in the pilot tone are an indication of multipath nulls, and a better antenna placement may be needed.

Reply to
upsidedown

I think that last sentence is the key. The devices already have that capability. And, already do it -- in a cryptic way (i.e., if you examined the control packets for the protocol, you could infer this information just like the "resident hardware/software" does in each node).

So, maybe that's the way to approach it!

I.e., the "pulse" output that I mentioned is a software manifestation. It's not a test point in a dedicated hardware circuit. I already *implicitly* trust that it is generated at the right temporal relationship to the "internal timepiece" for the device. [This is also true of multi-kilobuck pieces of COTS 1588-enabled network kit: the "hardware" that generates these references is implemented as a "finite state machine" (i.e., a piece of code executing on a CPU!)]

So, just treat this subsystem the same way I would treat a medical device, pharmaceutical device, gaming device, etc. -- *validate* it, formally. Then, rely on that "process" to attest to the device's ability to deliver *whatever* backchannel signals I deem important for testing!

At any time, a user can verify continued compliance (to the same specifications used in the validation). Just like testing for characteristic impedance of a cable, continuity, line/load regulation for a power supply, etc.

Then, instead of delivering a "pulse" to an "unused" output pin on the board, I can just send that "event" down the same network cable that I am using for messages! (i.e., it's a specialized message, of sorts)

These are almost always in "accessible" locations (the actual *nodes* that are being tested may be far less accessible: e.g., an RFID badge reader *protected* in a wall).

And, for high performance deployments they are "special" devices that are part of the system itself. E.g., the equivalent of "transparent switches" and "boundary switches" -- to propagate the timing information *across* the nondeterministic behavior of the switch (hubs are friendlier in this sort of environment!).

Can you be sure that you can get from point A to point B in "just a few minutes"? (This was why I was saying you have to deploy *all* the test kit simultaneously -- what if A is in one building and B in another)

What if an elevator happens to move up/down or stop in its shaft? (Will you be aware of it?)

I'd *prefer* an approach where the tech could just use the kit he's already got on hand instead of having to specially accommodate the needs of *this* system (does every vendor impose its own set of test equipment requirements on the customer? Does a vendor who *doesn't* add value to the customer?)

See above (validation). I think that's the way to go.

I.e., how do you *know* that your DSO is actually showing you what's happening on the probes into the UUT? :>

[I've got a logic analyzer here that I can configure to "trigger (unconditionally) after 100ms" -- and, come back to it 2 weeks later and see it sitting there still saying "ARMED" (yet NOT triggered!)]

Define, *specifically*, how this aspect of the device *must* work AS AN INVARIANT. Then, let people verify this performance on the unit itself (if they suspect that it isn't performing correctly)

The "pulses" are currently just that: pulses (I am only interested in the edge). Currently, these are really *infrequent* (hertz) -- though I could change that.

Yeah, I worked with a sensor array that was capable of detecting a few microliters (i.e., a very small drop) of liquid (blood) in any of 60 test tubes for a few dollars (in very low quantities). It had to handle 60 such sensors simultaneously as you never knew where the blood might "appear".

Interesting when you think of unconventional approaches to problems that would otherwise appear "difficult" and suggest "expensive" solutions!

Reply to
Don Y
