triggering things with ethernet

Of course we can get picosecond precision in a big time distribution/trigger system using fiber; I've done that.

But CAT5 cables are cheap and easy to get installed. My question is, what would the time response be like? If I send a UDP broadcast packet, the GO trigger, to a dozen boxes in my system, does a switch relay it to all of them in parallel?

It's hard to find numbers.

Reply to
John Larkin

Sure, but something like PTP adds a lot of complexity on both ends. A standard Dell PC doesn't do PTP. A Raspberry Pi Pico doesn't either.

Reply to
John Larkin

Looks like I'll have to do that. Googling, and discussing here, doesn't seem to provide numbers. There are some ethernet camera vendors who hint at hundreds-of-microseconds skew but the setup isn't clear. Maybe I can email them and pretend to need a lot of cameras.

I did once make a bunch of proto PCBs that snoop a pair of RJ45s out to coax connectors, so I can scope the ethernet at a few destination devices. I don't know how intelligible that would be.

My receivers will have to be designed to recognize the GO trigger packet fast, and suitable switches would be needed. I can imagine that some are better than others.

Reply to
John Larkin

it is not the code that is the main issue.

100 Mbit requires a switch, and it may not send broadcast packets to all the downstream ports at the same time. With 10 Mbit you can use a hub, so all ports hear everything all the time.
Reply to
Lasse Langwadt Christensen

People don't make hubs any more. It's hard to even get the chips.

Reply to
John Larkin

Sure, there are ways to do that, but they get complex on both ends.

Reply to
John Larkin

Let's PRETEND that you are *not* being "deliberately" obnoxious.

Oh WELL, I was ASKING about "sort of" QUANTIFYING *all* of THAT for "my" product.

It is harder to type in obnoxious mode.

Reply to
John Larkin

Probably RHEL.

Ten years ago, 20 microseconds of first-bit-in to last-bit-out latency was typical, because switches were store-and-forward: the switch ingested the entire incoming packet into a buffer before even trying to decode it, let alone transmit it on.

Nowadays, cut-through handling is common, and transmission begins as soon as the header part has been received and parsed, so first-bit-in to first-bit-out is more like a microsecond, maybe far less in the bigger, faster switches. These switches are designed to run at wire speed in and out, so the buffering delay corresponds to a short length of the wire in question. There is less blockage due to big packets ahead in line. It all depends.

But when compared with RHEL churn, at least 200 microseconds, the switch is not important.

The modern equivalent of a "hub" is an "unmanaged switch". They are just that, but are internally buffered. If one chooses a gigabit-capable unit, the latency will be quite small. For instance, consider a NETGEAR 5-Port Multi-Gigabit (2.5G) Ethernet Unmanaged Switch:


The datasheet specifies the latency for 64-byte packets as less than 2.5 microseconds. Again, this ought to suffice. Web price from Netgear is $150. Slower units are cheaper, with increased buffering latency.

The unspoken assumption in the above is that the ethernet network is lightly loaded, with few big packets getting underfoot.

Also unmentioned is that non-blocking switches are not required to preserve packet reception order. If the key packets are spaced far enough apart, this won't cause reordering.

The wider world is going to PTPv2.1, which provides tens of nanoseconds (random jitter) and maybe 50-100 nanoseconds average offset error (can be plus or minus, depending on details of the cable plant et al). But all this is quite complex and expensive. But in five or ten years, it'll be common and dirt cheap.

Joe Gwinn

Reply to
Joe Gwinn

That's encouraging. Thanks.

I like the idea of the switch forwarding the packet in microseconds, before the packet has fully arrived.

A short UDP packet should get through fast.

My users usually have a private network for data acquisition and control, and I can tell them what the rules are.

I don't need nanoseconds for power supplies and motors. If I were to try to phase coordinate, say, 400 Hz AC sources, 10s of usec would be nice.

The clock on the Raspberry Pi is a cheap crystal and is not tunable. It might be interesting to do a DDS sort of thing to make a variable that is a local calibrated time counter. We could occasionally send out a packet to declare the time of day, and the little boxes could both sync to that and tweak their DDS cal factors to stay pretty close until the next correction. All software.

Reply to
John Larkin

Sent 3 copies: one exact, one with your address within <> (originally sent without these as usual by my mistake), and one like the second but Cc-ed to an address of mine. I got the Cc.

Reply to
Dimiter_Popoff

There's an algo for that in the guts of NTP since before the Flood, I believe. It even dorks the cal factor to ensure phase continuity in the timer as it slews to the new offset value.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Welcome.

Yes. The shortest Ethernet frame carrying a UDP packet is ~64 bytes.

Ahh. The usual dodge is to have a "realtime" LAN (no big trucks or coal trains allowed), plus an everything-goes LAN where latency is uncontrolled. These two LANs are logical, and may both be created by partitioning one or more network switches, so long as those switches are hunky enough.

OK.

I don't know that Raspberry Pi units are all that good as clocks.

The logic clocks in computers are pretty temperature-sensitive, but one can certainly implement a kind of DDS.

Phil H mentioned the antediluvian frequency-lock-loop algorithm from NTP, which I have in the past adapted for a similar purpose.

Basically, one counts the DDS output cycles between 1PPS pips and changes the DDS tuning word to steer toward zero frequency error. But this is done like steering a sailboat: steer toward a place far ahead, and readjust far more slowly than the response time of the boat to the helm. If one gets too eager, the boat swings widely instead of proceeding steadily toward the distant objective.

Joe Gwinn

Reply to
Joe Gwinn

No, that's the point of doing a DDS sort of correction to the event timebase. The Pico has a crystal and two caps, the classic CMOS oscillator, and I'd suspect it could be off by 100 PPM maybe.

It deserves to be simulated. But if it seesaws the effective clock frequency by some tens of PPM but is long-term correct, that would do.

Reply to
John Larkin

The 64-byte minimum Ethernet frame size dates from 10Base5 vampire-tap Ethernet, so that collisions could be reliably detected.

The problem is that if another big frame has already started transmitting when the "GO" frame is received, the previous frame is transmitted in full before the GO packet. Things get catastrophic if 9-kbyte jumbo frames are allowed on the network.

IIRC the maximum IP packet size can be limited to 576 bytes, thus reducing the maximum Ethernet frame size from 1500 to under 600 bytes.

<snip>

If the crystal has reasonable short-term stability but the frequency is seriously inaccurate, some DDS principle can be applied. Assume the crystal drives a timer interrupt, say nominally every millisecond, and the ISR updates a nanosecond counter. If it has been determined that the ISR is actually activated every 1.001234 milliseconds, the ISR adds 1001234 to the nanosecond counter. Each time the millions digit changes, a new millisecond period is declared. Using two or more slightly different addends, fractional nanoseconds can be counted.

Of course using a binary counter with say 0x8000 for no frequency error would simplify things.

Reply to
upsidedown

I can tell my users: don't do that.

Yes, something like that. On a dual-core 130 MHz ARM, one of the cores could run a reasonable periodic interrupt at, say, 50 KHz. I've run non-trivial IRQs on an ARM at 100 KHz with a 70 MHz clock.

The other option is to clock the Pico externally, which can probably be done, or at least to fire an IRQ externally. That adds a VCXO and some other parts to the board, which isn't terrible. It adds a little hardware in place of a lot of thinking and software; a better path to done.

Reply to
John Larkin

Depending on the frequency that you want to generate and the jitter that you can tolerate you can sometimes get away with calibrating local slave system clock ticks per reference second (or 10 seconds) in each unit. Assuming here that you do have a good reference frequency available in the master.

I have used a free-running loop counter in some entirely software-driven low-power clock devices to allow fractional corrections every 1, 2, 4, 8, or 16 times around the loop, so that you can adjust phase by one CPU cycle every 16. Actually it had a once-per-day fiddle in the same vein.

You only need to be able to trim out about +/- 50 ppm or so (often less). It worked well enough that I never bothered to temperature-compensate it, since the lab environment was always so close to 20 C.

Reply to
Martin Brown

<snip>

Typically a timer interrupt increments a counter by one. Why not add a semi-constant value to the counter instead? It is not much slower than INCing a counter.

Windows has a 63-bit time counter which counts 100 ns time steps and is updated by the clock interrupt only 100 times a second, by adding a number close to 100 000 at each interrupt. This addend can be adjusted with system calls, making it easy to implement an NTP client.

Using two addend values, N and N+1, and applying them in alternate clock interrupts, the jitter can be further reduced.

Reply to
upsidedown

Given a periodic interrupt based on a cheap non-adjustable clock, every tick just do

Time = Time + Kcal

where Time and Kcal are both long floats, and Kcal is near 1.00000.

This is basically a DDS concept.

The progression of Time can now be tuned to parts-per-trillion at production calibration, and tweaked later on if some external correction is available.

The actual interrupt rate could be trimmed in a similar manner if some hardware counter-timer is available with enough bits, maybe in an FPGA. Hybrid tricks are possible.

If the interrupt rate is high, one could also just skip the occasional IRQ to get the apparent rate exact. Nobody will notice.

Same idea.

Reply to
John Larkin

On Saturday, April 22, 2023 at 16.26.03 UTC+2, John Larkin wrote:

floats might not be the best idea

Reply to
Lasse Langwadt Christensen

Why not? It's sure easy. Other processes can just use the int part of TIME.

Reply to
John Larkin
