Comparing phase of physically distant signals

Presumably, these folks *can* tell whether it works or not in the context of several big piles o' gear. If there is no aggregate measure available to estimate the delta-entropy from *not* having this capability, then there's no way to estimate the gain from it in the first place.

And if you don't think those folks don't tilt at the odd windmill...

Meanwhile, the audio signal on most cable systems is quite un-synchronized from the picture.

Can't you use a GPS (or four) to provide a reference? Isn't 1588 just a way to provide GPS-like services without the GPS?

--
Les Cargill
Reply to
Les Cargill

Them, silicon manufacturers, standard developers, other entire *industries*? They're *all* smoking something??

And you like that and would like to see even more variability added to it? What about when you have a live audio+video feed? ("Hey, let's NOT take any precautions to ensure they remain synchronized...")

You only need a GPS in your *deployed* system if you want to synchronize precisely *to* "GPS time". If, instead, what you want is "consistent, synchronous 'time' WITHIN your system", then why incur the cost of a GPS-based GrandMaster Clock?

I.e., if you don't mind that all the "clocks" in your home are "off" from your neighbors *BUT* are all consistent with each other (i.e., the clock in the kitchen shows the same time as the clock in your bedroom), then as long as they stay synchronized, you can live with that discrepancy (until your neighbor invites you to dinner and wonders why you are early/late arriving! :> ) Your neighbor's house is *outside* your "system" and, presumably, of no concern to you.

Without 1588 (et al.), then you would *need* something like a GPS at each node to provide a reference time that each node could consult *knowing* it was synchronized to the other nodes. (The functionality of each of these nodes would then be conditional on being able to get a GPS signal *at* that node -- not at an "antenna location" some distance away from it.)

[This is the same problem that I have with the idea of using GPS receivers as "test equipment" in this sort of application]

I.e., 1588 addresses a real issue in real systems in the real world. Look at what the CERN folks have been playing with to see how far folks are willing to push this idea. Then, think of how you would do what they want to do *without* this notion of "synchronized time".

Doctor's appointment... (sigh)

--don

Reply to
Don Y

Absolutely. Not always, but things happen. To be clear -- I don't think 1588 is bogus, but let's just say I'm skeptical. It's almost certainly not a magic bullet that erases entropy. It may move it around.

I am saying that they don't (or can't) necessarily bother in a consistent manner.

I am saying use a GPS as part of your deployed system as a device under test. Test it locally with GPS units to make sure it works, then separate them.

You can't do this on all nodes not equipped with the time reference.

Right.

I don't really know. The stuff I have worked on does not need it, generally. The handful of times we did need it, we were able to add aggregate measures (usually related to BER) to various things to estimate clock quality.

--
Les Cargill
Reply to
Les Cargill

OK, 1 microsecond, finally an actual number.

Well, how long is it really? Is it fair to say 50 meters? I.e., several of us have asked you to state once and for all the actual numbers from your real system, so we can use them instead of speaking in hypotheticals.

Why do you want to show or verify -- determining the location of a moving target within an arena?

I don't see where 1us comes into this either.

Or in those.

But it doesn't make physical sense.

That doesn't make sense either. The physical world is not a shared memory system where there's a single register with the current time that can be read instantaneously from everywhere. If A and B are 1km apart, then saying they happened within 1 ns of each other is meaningless. The best you could say "globally" is that they were within 3 us of each other (= 1km at the speed of light). You could say that events at A and B were observed within 1 ns of each other at some specific location C (say, midway between A and B). Maybe that matches your actual requirement. There's no location-independent simultaneity.

People have suggested radios, GPS, cables, etc. Lasers or LEDs seem like another possibility. I don't see what the problem is, as long as you concretely say what you want to measure, and it seems to me you've been vague. 1us at 50m doesn't sound like it needs high-tech methods.

Reply to
Paul Rubin

I just don't believe all this energy (silicon developers, standards developers, switch manufacturers, application industry experts, etc.) is all smoke and mirrors.

And, when *I* can see 200ns with *my* 'scope that I refuse to believe has been tampered with by some "black ops" guys in the middle of the night...

Finally, when you look at the theory (of the various different schemes that address this issue), there's nothing glaringly wrong. I.e., everything "mechanically" in the signal path is deterministic in nature (software, NIC, switch, etc.). The only source of randomness comes with the introduction of the switch -- competing traffic vying for its resources.

If you put a pair of PTP-enabled NIC's back-to-back with a crossover cable, you'd have no problem believing that they could compensate for the transport delay in that cable of uncalibrated length. Right? (within the realm of clock jitter).
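For reference, that compensation is just the standard two-way timestamp exchange; here's a minimal sketch (Python; a symmetric path is assumed, and t1..t4 are the usual Sync/Delay_Req timestamps):

    # Sketch of the two-way timestamp exchange PTP (and NTP) uses to cancel
    # a symmetric path delay. The four standard timestamps:
    #   t1: master sends Sync          (read on the master clock)
    #   t2: slave receives Sync        (read on the slave clock)
    #   t3: slave sends Delay_Req      (read on the slave clock)
    #   t4: master receives Delay_Req  (read on the master clock)
    def offset_and_delay(t1, t2, t3, t4):
        """Return (slave offset from master, one-way path delay)."""
        offset = ((t2 - t1) - (t4 - t3)) / 2.0   # the cable delay cancels out here
        delay  = ((t2 - t1) + (t4 - t3)) / 2.0   # ...and is recovered here
        return offset, delay

    # Made-up example: slave clock 500 ns ahead, cable adds 400 ns each way.
    true_offset, true_delay = 500e-9, 400e-9
    t1 = 0.0
    t2 = t1 + true_delay + true_offset      # as read on the slave clock
    t3 = t2 + 1e-6                          # slave replies 1 us later
    t4 = (t3 - true_offset) + true_delay    # as read on the master clock
    print(offset_and_delay(t1, t2, t3, t4)) # -> ~(5e-07, 4e-07), regardless of cable length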

So, you introduce boundary or transparent switches that compensate for the temporal variance *across* the switch ("store and forward") and you've effectively put a "dual ported" PTP-enabled device between the devices connected to the switch. I.e., each link from the switch is now temporally deterministic and the "bit in the middle" enforces that relationship as packets transit across it.

Yeah, a deployed 1588 system could spontaneously fail. *Every* control packet could mysteriously/coincidentally be corrupted until the local oscillators drift out of lock. It's also possible that a cow could nibble on the power cord to the grandmaster clock...

(i.e., all systems are probabilistic at some level)

Perhaps because it's unimportant to them -- because consumers will tolerate this level of wander between the audio and video.

Note, however, that there is an (evolving?) standard regarding the use of 1588 for A/V applications. Obviously someone in that industry sees the need and has bought into the "smoke and mirrors" (?)

I can test units locally (on a bench) without the need for a GPS. E.g., use a 'scope.

That's what I did during development. Then challenged the system:

- withdrawing processor resources to verify the timing daemon continued to operate even as the system is stressed beyond 100%

- heating one device while cooling another (force XO's to move in different directions)

- intercepting control packets for the timing service to verify how it copes with "loss of signal" over different time intervals

- corrupting control packets to verify they can't corrupt the timing service

- shutting down a node and verifying the system notes this loss of sync

- powering up a node to determine how quickly it syncs to the timing reference etc.

So, I've assured myself that the algorithms and implementation are correct. I don't need reassurance that when I move those nodes apart from each other that they are *still* exhibiting the same degree of synchronization.

But, what do you do when there are 1,200 such devices deployed in a hospital setting? Drag any suspected devices (1? 2? 500?) into the conference room and set up a 'scope to verify they are operating properly? Does the act of removing them from their deployed locations affect their (mis)performance?

Much better if a technician ("on staff") can just use standard techniques to troubleshoot the system in situ -- they're already on site and have "test equipment" to service the instruments in use on the premises!

This gets expensive (the appeal of lots of *cheap* SoC's/SBC's). And, limits where you can deploy the nodes.

In my case, I've more often had to deal with external "reference" events/signals than not. So, the idea of syntonous and synchronous servos pervades many of my control algorithms. E.g., sampling a capacitive sensor array syntonous with the *instantaneous* AC line frequency to null out AC line noise coupled from nearby flesh, etc.

In your case, you still need some way to establish a reference -- even if you try to control accuracy and drift. And then have to consider the period of time over which your assumptions will remain valid.

We have no problem dealing with the idea that wall clocks are synchronized (by imperfect human beings) "roughly" (surely within 15-20 minutes of each other -- even accounting for those folks who set their clocks "fast") across the country. Or, within a few minutes within a single residence. That "computers" can be synchronized to a handful of milliseconds globally. I.e., it *is* possible to synchronize timepieces. The degree to which we want that synchronization varies -- along with the cost we are willing to bear for it.

How would the CERN folks deploy instrumentation over many kilometers to collect and control an accelerator if they *couldn't* distribute a sense of time and provide for *local* (distributed) use of that? Should they run thousands of multi-kilometer long cables for each sensor/actuator terminated in a single location (to avail themselves of a *single* "clock")? Then, determine the propagation delay of each cable so they can adjust the "centrally recorded time" of a particular observation to reflect its transit delay? *Anticipate* (by the transit delay to an actuator) when an actuator should be fired to ensure that the actual "activation signal" is emitted early enough to arrive at that actuator exactly when needed?

Wouldn't it be *so* much easier to put local intelligence/signal conditioning/control along that multi-kilometer path and preload commands with predetermined times RELATIVE TO THE START TIME OF THE EXPERIMENT? Just *rely* on having a stable, shared "time reference" (in terms of "frequency" and "phase") that reduces the problem of each of those nodes to something easily managed?
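Conceptually, each node then only needs its own (synchronized) clock and a pre-loaded list of (time, command) pairs. A minimal sketch of that idea (Python; the names and the monotonic-clock stand-in are mine, not anything CERN actually runs):

    import time

    EPOCH = time.monotonic()             # stand-in for the shared "start of experiment"

    def synced_now():
        # In a real node this would come from the disciplined (PTP/White Rabbit) clock.
        return time.monotonic() - EPOCH

    def run_schedule(schedule):
        """schedule: list of (seconds_after_start, action), pre-sorted by time."""
        for t_fire, action in schedule:
            while synced_now() < t_fire:  # busy-wait shown only for clarity
                pass
            action()

    run_schedule([
        (0.10, lambda: print("open gate valve")),
        (0.25, lambda: print("fire kicker magnet")),
    ])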

It seems incredibly obvious to me that 1588 (et al.) are the way things will be done in the future. Folks will just pick what degree of (time) control and resolution they are willing to "purchase" in their designs. Akin to tolerances on electronic components (e.g., resistors, capacitors, *crystals*...)

Of course, there will still be apps that fit in a single processor but I think more and more will "distribute" themselves to take advantage of these economies.

--don

Reply to
Don Y

I think that last sentence is the key. The devices already have that capability. And, already do it -- in a cryptic way (i.e., if you examined the control packets for the protocol, you could infer this information just like the "resident hardware/software" does in each node).

So, maybe that's the way to approach it!

I.e., the "pulse" output that I mentioned is a software manifestation. It's not a test point in a dedicated hardware circuit. I already

*implicitly* trust that it is generated at the right temporal relationship to the "internal timepiece" for the device. [This is also true of multi-kilobuck pieces of COTS 1588-enabled network kit: the "hardware" that generates these references is implemented as a "finite state machine" (i.e., a piece of code executing on a CPU!)]

So, just treat this subsystem the same way I would treat a medical device, pharmaceutical device, gaming device, etc. -- *validate* it, formally. Then, rely on that "process" to attest to the device's ability to deliver *whatever* backchannel signals I deem important for testing!

At any time, a user can verify continued compliance (to the same specifications used in the validation). Just like testing for characteristic impedance of a cable, continuity, line/load regulation for a power supply, etc.

Then, instead of delivering a "pulse" to an "unused" output pin on the board, I can just send that "event" down the same network cable that I am using for messages! (i.e., it's a specialized message, of sorts)

These are almost always in "accessible" locations (the actual *nodes* that are being tested may be far less accessible: e.g., an RFID badge reader *protected* in a wall).

And, for high performance deployments they are "special" devices that are part of the system itself. E.g., the equivalent of "transparent switches" and "boundary switches" -- to propagate the timing information *across* the nondeterministic behavior of the switch (hubs are friendlier in this sort of environment!).

Can you be sure that you can get from point A to point B in "just a few minutes"? (This was why I was saying you have to deploy *all* the test kit simultaneously -- what if A is in one building and B in another)

What if an elevator happens to move up/down/stop in its shaft (will you be aware of it?)

I'd *prefer* an approach where the tech could just use the kit he's already got on hand instead of having to specially accommodate the needs of *this* system (does every vendor impose its own set of test equipment requirements on the customer? Does a vendor who *doesn't* add value to the customer?)

See above (validation). I think that's the way to go.

I.e., how do you *know* that your DSO is actually showing you what's happening on the probes into the UUT? :>

[I've got a logic analyzer here that I can configure to "trigger (unconditionally) after 100ms" -- and, come back to it 2 weeks later and see it sitting there still saying "ARMED" (yet NOT triggered!)]

Define, *specifically*, how this aspect of the device *must* work AS AN INVARIANT. Then, let people verify this performance on the unit itself (if they suspect that it isn't performing correctly)

The "pulses" are currently just that: pulses (I am only interested in the edge). Currently, these are really *infrequent* (hertz) -- though I could change that.

Yeah, I worked with a sensor array that was capable of detecting a few microliters (i.e., a very small drop) of liquid (blood) in any of 60 test tubes for a few dollars (in very low quantities). It had to handle 60 such sensors simultaneously as you never knew where the blood might "appear".

Interesting when you think of unconventional approaches to problems that would otherwise appear "difficult" and suggest "expensive" solutions!

Reply to
Don Y

Why do you care? Note I claimed "tens to hundreds of nanoseconds". You *buy* the level of performance you want/need for the application you are solving.

As to "finally" implying that this has been a closely held "secret" that was only "reluctantly divulged", I remind you that my initial post was at 10:40AM on 8/3. At 1:21PM on that same day (i.e., 2.5 hrs later, 3 days *before* your message) I had already claimed:

"My software can easily achieve synchronization between nodes to a level better than 1us -- without doing anything fancy."

At 1:41PM I further qualified this:

"That depends on the level of resources I want to apply to the problem. For "free" I can get O(200ns). If I want to more carefully specify my hardware, I can drop that to O(20ns)."

Earlier, at 11:56AM, I had already declared "a city block".

Perhaps you haven't been paying close attention?

A CITY BLOCK. For my test, I chose a ~100ft cable. I could have chosen a 200 ft cable, etc. The length of the cable does not matter! That's the whole point of the algorithm! And, why I intentionally and repeatedly claim it is "of uncalibrated length" (didn't you think it unusual that I deliberately drew attention to this fact? Maybe that's *significant*???)

If you want to pick nits, you could possibly ask if that city block is planar or three-dimensional -- and, in this latter case, what limits exist on the Z axis.

(Let's make it simple. Pretend there is 1/10 - 1/8 mile separating nodes. I.e., they are deployed -- possibly *uniformly* -- over 1/100 to 1/64 sq mile of real estate)

Each device obviously has its own concept of "local time". That's the point of "synchronizing" them! I am drawing attention to the fact that a single observer, in a single location with both devices physically observable from that position, would note the pulses as coincident.

> If I lengthen the cable between the two devices and repeat the [...]

No. Two devices on a bench. Two outputs -- one from each -- connected to the *same* 'scope. Some uncalibrated (there's that word, again!) length of cable electrically connects those two devices. There is no other way they can communicate other than by this cable.

Both devices are allowed to "stabilize" (i.e., the FLL and PLL settle into lock). Then, the time difference between two pulses intended to be emitted coincidentally is measured. I.e., both devices share a common concept of "current time" -- even though they are not residing in the same "CPU" clocked by the same oscillator, etc. They are both told to emit a pulse at t=12345678 (units being irrelevant -- as long as they can be correlated between the two devices).

Now, power everything off, replace the interconnect cable with one that is longer/shorter and repeat the process. The time difference between the pulses doesn't change. I.e., the algorithm compensates for the length of the cable.

[There is some sleight of hand in this explanation but it is essentially correct -- the delay *will* vary by 1/2 the underlying timebase, maximum]

But there have been claims that we *can't* synchronize "time" at all! Just note "which comes before/after". In that context, 1ms is just as "impossible" as 1us! [Note, also, that you are actually looking to "tune" devices to a fraction of a foot as the voice coil of a tweeter is often located at a different "depth" than that of a woofer/squawker]

Put a radio in a room. Let it emit a signal of its own choosing at a time of its own choosing. Tell me where it is *without* a common sense of time. (there are actually ways to do this but they typically *add* RF to the environment)

How do you precisely control the phase of a 100KHz signal with, e.g., a 1ms timebase? Even a 20KHz (lower end of ultrasound) signal has a total *period* of only 50us. I.e., 1us gives you ~7 degrees of phase resolution.

Now, look at the echo/detect aspect. How do you determine the return angle of an echo if you can't resolve the temporal differences between the many transducers in the array?

(there are ways around this, as well)
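For a feel of the numbers on that echo/detect point, a rough sketch (Python; the 5 cm element spacing and airborne sound speed are my assumptions, purely illustrative) of how the arrival angle falls out of the inter-element time difference -- and what a 1us timing error costs in angle:

    import math

    def arrival_angle(dt, element_spacing, c=343.0):
        """Far-field, two-element approximation: sin(theta) = c * dt / d."""
        return math.degrees(math.asin(max(-1.0, min(1.0, c * dt / element_spacing))))

    d = 0.05                          # 5 cm between transducers (assumed)
    print(arrival_angle(50e-6, d))    # 50 us inter-element delay -> ~20 degrees off broadside
    print(arrival_angle(1e-6, d))     # a 1 us timing error alone -> ~0.4 degree of angular error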

Why? Is "now" somehow different for you than it is for me based solely on where we are located? If the sun goes supernova, doesn't it happen at *a* point in time? Regardless of where we are each located?

If the leading edge of an earthquake occurs at some fixed point in time, isn't it a *single* event occurring in "Time"? Despite the fact that two observers may "notice" this event at different points in "Time". Don't we explicitly rely on a synchronized sense of time to locate where the event occurred? E.g., if it happened at time=27 and I claim I noticed the event at time=blue, this would provide no useful information. And, if I said I noticed it at time=26 that would be equally useless. (i.e., having the same *units* of measurement *and* reference point in time)

No. You're getting caught up in relativity theory.

If I put two devices 1 inch apart and separate them by 10,000km of interconnect cable (e.g., fibre) and shine a light on both of them (each has a photodetector), do you agree that "now" is the same point in "Time" for both units?

Now, one of the units sends a message to the other saying "Wow! Someone JUST turned on the light!". That message arrives at the second unit ~30 ms later!

But, both units understand that this was a single point in Time that has been made to appear skewed simply because of the transmission delay in the cable.

If the second unit had a precise notion of "1ms", then it could accurately and reliably predict the *instant* the message from the first unit would arrive.

Similarly, if the second device could not *see* the light but was designed (specified!) to make a noise 50ms after the light was illuminated, it could wait for the message from the first unit, delay 20 additional ms and successfully make that noise exactly 50ms after the light was illuminated!

Now, if you move that second unit to a point in space 10,000 km distant from the first unit and repeat the experiment, the results remain the same. A blind but not deaf observer at that remote location, on hearing the noise, would *know* that the light was turned on exactly 50ms ago. 50 of *his* ms being the same unit of measure as the ms *at* the light!

How has their physical separation changed the instants at which each of these events actually occurred?

(sigh) Yet again: I want to measure the skew (phase difference if you want to think in terms of periodic signals) between two times *intended* to be coincident over a long distance to a high tolerance.

You might want to *try* it before dismissing it :> With a (metal?) wall between you and it and very few dollars to spend. Everything is always easy until it's *your* problem to solve with real constraints! E.g., try synchronizing two "clocks" (located side by side so it doesn't upset your relativistic sensibilities!) on devices interconnected by 2km of fibre (i.e., 2us propagation delay) and *no* intervening hardware (e.g., network switch) to better than 1us. Heck, you can even use a "present day" PC with all its resources available to you and all that cost that entails -- for free! :> Once you're done, figure out how you can prove this fact when I drag one of the PC's into another room down the hall...
Reply to
Don Y

Over here we don't have city blocks, so we use double-decker busses as the unit of length :) How many double-decker busses are there in a city block? :)

No, there haven't. You really don't understand.

Depends on the propagation delay between them. If 1ns then it has meaning to discuss a common time with a 1us resolution. If 1s delay, then it would be meaningless.

No. Absolutely not.

Yesterday there were reports of us seeing the first "kilonova", which happened 4bn years ago. So, yesterday = 4bn years ago, with no possibility of communicating sooner.

Yes, because they are at the same place.

Repeat the experiment with the cable being straight (so they are 10,000km apart), and the answer becomes no. Or rather it means "time" has to be qualified by "as observed at location..."

Reply to
Tom Gardner

Suppose we have two identical perfect clocks that show identical time (to an observable resolution of 1fs) when they are next to each other, at point A.

Now move one clock so that it is one light-millisecond away, to point B.

At 12:00:00.000 "foo" occurs at point A

At 12:00:00.001 "foo" is observed at point B, which causes "baz" to occur one Planck time later.

At 12:00:00.002 "baz" is observed at point A.

Q: did "foo" and "baz" occur simultaneously or not? A: yes, repeat no.

Obviously yes at B and no at A (and everywhere else).

If time /was/ common everywhere and they occurred simultaneously, it would require that 12:00:00.000 = 12:00:00.001 = 12:00:00.002, which is paradoxical.

Note that framing the events in terms of "happens-before" avoids such paradoxes. And that is why "logical clocks" are used in real systems of communicating distributed processors.
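For reference, that is Lamport's logical-clock rule; a minimal sketch (Python, illustrative only):

    class LamportClock:
        """Minimal logical clock: orders events by happens-before, not wall time."""
        def __init__(self):
            self.t = 0

        def local_event(self):
            self.t += 1
            return self.t

        def send(self):
            self.t += 1
            return self.t                    # timestamp carried in the message

        def receive(self, msg_t):
            self.t = max(self.t, msg_t) + 1
            return self.t

    # A's "foo" happens-before B's "baz": the observation of foo arrives at B
    # before baz is triggered, so the ordering holds regardless of wall-clock skew.
    a, b = LamportClock(), LamportClock()
    foo = a.local_event()           # foo occurs at A
    baz = b.receive(a.send())       # B observes foo, which triggers baz
    assert foo < baz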

Reply to
Tom Gardner

Then you may be home already, almost for free.

A way to do something like this is signature analysis. You watch each unit while it sends out a certain pattern that's always the same. The pattern has to come in response to some other pattern that gets sent to it, and the response time (turn-around time) must be well-determined and validated. Now you can correlate the heck out of it and get almost any precision you want.
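A rough numpy sketch of that correlation step (the pattern length, delay and noise level are made-up numbers, purely to show the principle):

    import numpy as np

    rng = np.random.default_rng(0)

    pattern = rng.choice([-1.0, 1.0], size=127)       # known, fixed signature
    true_delay = 41                                   # turn-around, in samples (unknown to the tester)
    received = 0.5 * rng.standard_normal(true_delay + pattern.size + 100)
    received[true_delay:true_delay + pattern.size] += pattern

    # Cross-correlate: the peak index recovers the turn-around time in samples.
    corr = np.correlate(received, pattern, mode="valid")
    print(int(np.argmax(corr)))                       # -> 41

Averaging many repetitions and interpolating around the peak is, I assume, where the "almost any precision you want" comes from.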

Yeah, but you don't want to have to temporarily re-shuffle a customer's wiring installation. It'll be labor-intensive and also interrupt their normal business.

Tricky but not impossible. That's why the test should run for a longer time, to see if anything changes. You also have another tool, RSSI. If the RSSI is markedly different from when the system was installed then something in the path has changed.

[...]

If this doesn't add value to the customer, why do it in the first place? If calibration is required then that is of value to the customer.

Yup. Didn't know that it was legally or technically possible in your case.

[...]

The method above (sine wave trains) is usually better. Lower bandwidth, more SNR, better accuracy, much cheaper hardware.

That's where engineering begins to be fun :-)

The best comment I ever got after finishing a prototype that then did exactly what the client wanted, after one of their engineers looked into the rather sparse collection of parts: "You mean, THAT's IT?"

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

It is actually easier than I thought this morning:

You can perform round-trip calibration and timing in rapid succession, for both units. Unless the elevator has a rocket motor and goes at Mach-2 it should be accurate enough.

Ok, if an F-16 roars through the path at full throttle you'd have a problem :-)
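Rough numbers on that (Python; the 10 ms gap between the calibration shot and the timing shot, and RF propagation, are my assumptions):

    # If the round-trip calibration and the timing measurement are back-to-back,
    # only path changes *between* those two shots matter.
    c = 3e8            # m/s, RF propagation
    gap = 10e-3        # s between calibration shot and timing shot (assumed)

    for name, v in [("elevator (2 m/s)", 2.0), ("Mach 2 (~680 m/s)", 680.0)]:
        delta_path = v * gap      # how far the obstruction moved in the gap
        print(f"{name}: path changes {delta_path:.3f} m -> {delta_path / c * 1e9:.2f} ns")
    # elevator: ~0.02 m -> ~0.07 ns (harmless); Mach 2: ~6.8 m -> ~23 ns (starts to matter)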

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

I did not say *all*. I said it happens.

I don't know what you mean by that. All I am trying to get to you is that there is almost certainly a way to close the timing loop for test purposes.

Oh dear. No - *NEARLY NOTHING* in that path is purely deterministic to any significant resolution.

Maybe. It all depends on the switch.

Try it. Spend some time with it. I can tell you, but you'll select different hardware or a slightly different O/S and it'll be different.

If you need uncertainty of < 1ms, you really need to do something else.

Right.

There's a book's worth of reasons. Not the least is that nobody complains.

Right. I never said it was smoke and mirrors; I am saying I doubt some of the claims for it. I have done basic research on distributed timing, and some variation on NTP, no matter how awesome, will surprise me if it provides significant accuracy.

And again, I don't know why anybody needs this. If you need a high degree of sync, colocate the equipment.

If you can't, hope your budget is up to it.

I see.

So maybe you pick a couple of victim-systems and add instrumentation to verify those nodes.

Probably; or some combination thereof.

I would not be surprised. It sounds really cool. But I have had a good look at what this entails, at least for < 19 km distributions.

Uhhhh... there's a software stack in there, too.

We'll see. But I still feel this is relatively esoteric.

--
Les Cargill
Reply to
Les Cargill
[much elided]

"Test" on a *bench* is easy. Test once in situ is a bit more difficult. E.g., kilobuck equipment manufacturers want you to run coax from the node under test back to the test set. "Um, what if the node is 100 ft away up two flights of stairs?"

I.e., they assume you remove the node, carry it to the proximity of the test set and test/validate it *there*. Which I can do... I would just like to be able to test things in situ *while* they (may be) misbehaving...

Sure it is -- if you set out with that in mind! They are all state machines (not "analog circuits subject to random noise", etc.). You minimize the variance in timing between when a packet is passed to the NIC from the software; *know* how long the packet takes to transit the NIC and hit the wire from the PHY; etc.

If you take the naive approach of picking a generic NIC, a generic network stack, etc. then you have *no* idea as to what's under the hood. I.e., taking a timestamp in "user-land" as you pass a packet to the network stack bears absolutely no relationship as to when that packet will percolate down through the stack, get transferred to the NIC and, once under the NIC's control, ultimately get dequeued from the NIC's internal queue and placed on the wire.

Similarly, timestamping a packet on the receiving end *after* it has percolated up through the NIC, network stack, and into user-land adds even more variation.

OTOH, if you select a PTP-enabled NIC (that hardware timestamps packets as they are sent and received) -- or, a "well defined" NIC -- then you have the time the packet actually hit the wire (or was peeled off the wire)... to the resolution of the oscillator in the NIC.

Then, all you have to worry about is variations in actual transport. I.e., if you had a dedicated link between *two* nodes, there is none. OTOH, if you have a switch in the middle, then the traffic on other network nodes serviced by that switch can affect how long *your* packet sits in the switch before it gets forwarded to its destination. (E.g., imagine if a packet is currently being delivered *by* that switch to your destination while your packet is waiting. Then, the next time you send a packet, there is NOTHING ahead of you in the queue -- the transit times across the switch can vary significantly!)
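A toy illustration of that queueing effect (Python; the 1 Gb/s frame time and the geometric backlog model are mine, just to show the shape of the jitter):

    import random

    random.seed(1)

    WIRE_AND_NIC_NS = 5_000      # deterministic part of the path (assumed)
    FRAME_AT_1GBPS_NS = 12_000   # one 1500-byte frame at 1 Gb/s

    def transit_ns(load=0.5):
        """One packet's transit: fixed path cost plus 0..N frames already queued ahead of it."""
        queued = 0
        while random.random() < load:    # crude geometric stand-in for competing traffic
            queued += 1
        return WIRE_AND_NIC_NS + queued * FRAME_AT_1GBPS_NS

    samples = [transit_ns() for _ in range(10_000)]
    print(min(samples), max(samples))    # min stays at 5 us; max runs past 100 us here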

Yes. Hence the use of transparent switches and boundary switches. I.e., switches that are aware of this use and *participate* in the protocol (instead of messing with it!)

That's what I've already done (though I also have a switch in the middle). But, it's *my* hardware -- not something you can select at random! And, *my* network stack (designed to offer deterministic behavior). And, my *network* (i.e., I know what traffic coexists with the protocol traffic -- I don't have to tolerate asynchronous uses at the whim of some "user")

Exactly. There's no reason to provide better service. Until someone *does* and market pressures cause folks to demand that same level of performance from others (who will undoubtedly adopt the newer technology as a matter of course -- normal upgrades, etc. E.g., there are already standards actions in the works to codify this particular application for 1588 cuz it's *so* obvious)

It's a question of cost and practicality. If you are monitoring signals in different locations, do you run "really long wires" to get all those signals to a common measurement point?

Look at the power industry. A 60 Hz signal. Slow, right? They need event resolutions about 4 times that in order to successfully do a post-mortem of a failure in the grid (e.g., like a blackout that puts hundreds of square miles in the dark).

Located within substations are DME's (disturbance measurement equipment) that record "events". These are specified to be able to record events to an absolute accuracy of better than 1ms. I.e., over thousands of miles. [Yes, you can get this with a GPSDO at each substation. But, within the substation you also have to guarantee the same accuracy -- regardless of the physical size of the substation. Do you put a GPS at each relay, event recorder, etc. *in* the substation? Run all the wires to one central "reporting and recording" station? Then, add some *other* mechanism to *communicate* with the substation?]

You can purchase a reference design from any number of manufacturers and "challenge" it yourself for a few hundred bucks. It hasn't been "cutting edge" for more than a decade!

There are just too many applications ready for this sort of facility. And, more and more SoC vendors producing cheap silicon with the hooks already in place to implement these things.

Yes. You don't buy a "generic resistor" when you need 1% at 1/4W. Similarly, you don't pick a "generic network stack" when you want deterministic performance from it (and PTP support)!

Have a look at the Freescale, TI, etc. offerings. IIRC there are several ethernet ARMs with PTP-enabled NICs already.

--don

Reply to
Don Y

Out of curiosity, are your applications using TAI or UTC for their definition of time?

Reply to
Tom Gardner

You might find this of interest and useful if you had a couple of fibres in your installed bundle. I was chatting to one of the guys visiting JET from CERN today and he stated that the timing accuracy is 1ps on phase and 1ns on time synchronicity.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Forth based HIDECS Consultancy............. 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E Bennett

Yeah, amazing what deep pockets can do, eh? :>

I'm hoping I can get 100 - 1,000 times worse performance with 100,000 - 1,000,000 times less *money*! :>

Did you, perhaps, manage to ask how they *verify* this performance while the nodes are deployed? (i.e., physically not collocated) Or, do they just rely on the system to report its own level of performance? ("Trust me...")

I'm guessing (?) their physical layout (and budget) would make it relatively easy for them to deploy a long "wire" (of known length) just to make in situ measurements easier.

What has their experience been like with the whole notion of distributed SCADA? Any surprises that they didn't expect? (pleasant or otherwise)

[sorry if I'm hammering you with questions best directed to *them*; but, "you're all I've got!" :> ]
Reply to
Don Y

If I get to chat to the person most involved in that I'll make a point of asking him that question. One of his work colleagues pointed me to the project but didn't know enough about the testing regime for verification. Knowing them they certainly made sure of the timing.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Forth based HIDECS Consultancy............. 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E Bennett

As I've said, I can see how to "prove" proper functionality "on a bench" (with a coil of cable -- in their case, fibre -- under the bench). And, that "stretching out" (uncoiling) that cable should not, in itself, alter the operation of the system -- i.e., unless you physically damaged something as you moved it from point A to point B, any followup "testing" should simply be a confidence building exercise.

What I'm trying to figure out is how to design a "troubleshooting plan" (avoiding the word "test") that would prescribe how a "technician/troubleshooter" would approach such a *deployed* system in its physically distributed state to figure out what might have "gone wrong" with a system that is *apparently* not working properly.

E.g., the analogy to troubleshooting something electronic on the bench: verify all connections are intact; verify the availability of suitably conditioned power at each input; apply prescribed inputs; observe specified outputs; etc. and, based on these actions, come to the following conclusions...

I suspect they (CERN) will have a process that does NOT immediately suspect the timing system. Or, that quickly performs some cursory tests of the device (node) in situ RELYING ON ITS EXPECTED PROPER OPERATION. E.g., inspecting performance statistics that the node itself collects/generates to see if things "look right" (from the node's perspective). Implicitly counting on whatever may be "wrong" with the system to manifest itself there, first.

For example, if the node claims that it can't obtain a frequency lock on the reference timebase, then *assume* that it really can't and go looking at those inputs to see if they are, perhaps, missing or corrupt.

Only when everything *looks* right -- yet still *isn't* -- do you (they) resort to dragging out a mile of cable...

Sort of like relying on the automation on a DSRV to tell you what *it* thinks its problems are -- instead of pulling it back to the surface just so you can take "its opinion" out of the loop.
Reply to
Don Y
