Binary protocol design: TLV, LTV, or else?

It wasn't physically possible to do that in all environments unfortunately.

Consider, for example, some possible office environments from the 1990s. These days, if someone disrupts their own connection, it's only their own device which is affected, but in that timeframe you might have had a 10Base2 connection going from device to device within a region of a building.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
[Note: email address not currently working as the system is physically moving] 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Of course! Nor is it likely that you'll have a dozen or more nodes for a single individual (or, an entire subnet, for that matter)!

Being able to use a (bus) network *in* a product instead of having to run control cables to a central "electronics cabinet" (star) makes a *huge* difference in installation and maintenance costs!

E.g., a licensed electrician is required to "run cable" in most facilities. You want to run sense leads from thermocouples, dew point sensors, anemometers, etc. to a "controller" and you spend several days of that electrician's time routing each cable to the equipment cabinet. And, those costs vary depending on how easy it is to get from points A,B,C... to that cabinet. It also determines where you can *locate* that cabinet (without "optional" supplemental signal conditioning).

OTOH, if you can wire all the field devices at the manufacturing facility and just have *one* cable that the electrician has to route (besides "utilities"), then installation costs drop by several kilobucks!

Of course! But, in my case, they're *all* "my" connections. And, I'd be aware of what sort of traffic is live on the network when I opted to disconnect a host (which can be done without interrupting the rest of the segment provided you aren't *moving* that host and necessitating a "cable adjustment").

I see more issues with twisted pair wiring because it "looks innocent"; people aren't "intimidated" by it. And, the connectors are total crap. Worse yet, they *almost* work when the locking tab snaps off -- until the connector works its way loose (because someone moved the piece of equipment into which it was plugged).

Then, we have all the home-made cables to contend with (it seems much easier to build a robust BNC-terminated cable than a twisted pair... for one thing, you don't need a magnifying glass to inspect your work!)

[I received an accusatory message the other day claiming that *I* "broke the printer". I replied: "Your handyman was there drilling holes in the counters. Wanna bet he moved the printer to do that? Wanna bet there's a cable to/from the printer that is now not seated properly in its jack?" Long silence. "Um, next time you're here, could you please fix the printer cable for us?"]

(And, we'll ignore the unfortunate "compatibility" with RJ11's...)

One thing that was great about orange hose was that **nobody** messed with it! :>

Reply to
Don Y

The nasty thing about 10Base2 is that the cable shield should be grounded at _exactly_ one point, usually at one of the terminating resistors.

Thus, if a BNC connector touched a grounded metallic cable duct, the network failed. So you had to cover the connectors with some insulating material and also make sure that any T-connector disconnected from a device did not make contact with any grounded object.

Reply to
upsidedown

Since branches are not allowed in 10Base2, you have to run the bus via _all_ devices, one cable to the T-connector and another cable back, quickly extending past the 185 m segment limit.

In the 10Base5 days, the thick cable was run the shortest way around the building and long AUI cables were run from each computer to the vampire tap transceiver sitting on the RG-8 bus cable.

Later on, external 10Base2 transceivers with 15-pin AUI connectors could be placed optimally along the shortest bus path, again connecting the device to the transceiver via an AUI cable.

With the use of integrated transceivers and T-connectors, you had to route the Ethernet traffic back and forth, losing most of the benefits of a bus structure.

Reply to
upsidedown

I don't design aircraft carriers! :> 10m is more than enough to run from one end of a piece of equipment to the other -- stopping at each device along the way. 10Base2 was a win when you had lots of devices "lined up in a row" where it was intuitive to just "daisy chain" them together. E.g., imagine what a CAN bus deployment would look like if it had to adhere to a physical star topology (all those "nodes" sitting within inches of each other yet unable to take advantage of their proximity for cabling economies -- instead, having to run individual drops off to some central "hub/switch")

[As we were rolling our own hardware, no need for T's -- two BNC's on each device: upstream + downstream.]

But AUI cables were *long*, of necessity. You simply couldn't route (as in "bend") the coax to get everywhere the bus wanted to *be*!

You could create a "spoked wheel" distribution pattern -- each spoke being a network segment. E.g., when I ran 10Base2 here, I ran a cable into each room to service just the nodes within that room. No need to "return" from the (electrically) far end of the spoke... just let the segment end, there!

In a typical office environment, you don't have the same sort of "high node density" that I have (simply because I have less space to cram everything into! :< ) So, the ability to run a cable from one device to the next device SITTING RIGHT BESIDE IT was a huge win -- instead of having to run wires from each of these to a *third* point that tied everything together.

For example, I just wired a "computer lab" where the machines sit next to each other (~4 ft apart). Almost exactly 200 ft of cable despite the fact that the two machines farthest apart are less than 15 ft as the crow flies -- and could easily have been tethered together with ~40ft of coax.
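For what it's worth, a back-of-the-envelope sketch (Python) of why that happens. The machine count and the average home-run length are assumptions for illustration, not figures taken from the lab above:

    # Rough comparison of cable needed to daisy-chain a row of adjacent
    # machines (10Base2 style) versus home-running each one to a central
    # hub/patch panel (star style).  All numbers are illustrative.

    N_MACHINES = 12          # assumed: a dozen machines in a row
    SPACING_FT = 4           # ~4 ft between adjacent machines
    HUB_RUN_FT = 25          # assumed average home-run length to the hub

    daisy_chain = (N_MACHINES - 1) * SPACING_FT   # one short hop per neighbour
    star = N_MACHINES * HUB_RUN_FT                # one full run per machine

    print(f"daisy chain: {daisy_chain} ft of coax")        # 44 ft
    print(f"star/hub:    {star} ft of twisted pair")       # 300 ft

The star total also keeps growing with every machine you add, while the daisy chain only grows by the spacing between neighbours.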

Reply to
Don Y

Wiring TR was a PITA and the NICs initially were too complex to be reliable ... but that got fixed and TR's predictable timing made analyzing systems and programming reliably timed delivery - particularly across repeaters - easier even than on CAN.

FDDI rings had the same good features (and, of course, the same bad ones).

YMMV, George

Reply to
George Neuner

Connectors were expensive. But, with a centralized MAU/hub/switch, the same sort of "star topology" related issues prevail.

At one time, I did an analysis that suggested even 4Mb TR would outperform 10Mb ethernet when you were concerned with temporal guarantees.
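A quick sanity check on that sort of claim, as a sketch (Python; the ring size here is an assumption, the frame size is the 4 Mb TRN maximum quoted later in the thread):

    # Worst-case media-access delay on a 4 Mb/s token ring: you wait, at most,
    # for every other station to send one maximum-length frame before the
    # token comes back to you.  Shared 10 Mb/s CSMA/CD Ethernet has no such
    # bound at all -- after 16 consecutive collisions the frame is discarded.

    STATIONS  = 20            # assumed ring size
    MAX_FRAME = 4472          # max frame bytes on 4 Mb TRN
    BITRATE   = 4_000_000     # 4 Mb/s

    worst_case_s = (STATIONS - 1) * MAX_FRAME * 8 / BITRATE
    print(f"worst-case wait, {STATIONS} stations: {worst_case_s*1000:.0f} ms")
    # ~170 ms -- bounded and computable.  The 10 Mb Ethernet answer is "it depends".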

Of course, you can develop a token passing protocol atop ethernet. But, kind of defeats most of the reasons for *using* ethernet! (esp if you don't want to constrain the network size/topology ahead of time)

Not fond of optical "switches"? :>

Reply to
Don Y

Which was one of the touted features of TRN. Unfortunately for TRN, approximately zero users actually cared about that.

Reply to
Robert Wessel

It's too bad that "fast" has won out over "predictable" (in many things -- not just network technology).

IIRC, SMC was the only firm making TR silicon. (maybe TI had some offerings?) Not sure if they even offer any, currently.

[I think I still have some TR connectors, NICs and even a "hub" stashed... somewhere]
Reply to
Don Y

Heck, I've still got a ring running...

I'm not sure how much presence SMC had in TRN; Thomas Conrad and Madge were the big non-IBM players. IBM, of course, had its own chipsets, and they did sell them to other vendors.

Reply to
Robert Wessel

Token rings, e.g. FDDI and others, had config/management problems that largely negated predictability guarantees, e.g. dropped tokens, duplicated tokens, complexity (have a look at all the FSMs!)

CSMA/CD is much easier to manage and fault-find.

Reply to
Tom Gardner

Grrr... I misremembered! It was ARCnet that SMC supported. (cheaper)

Reply to
Don Y

The problem is you have to layer a *different* protocol onto those media if you want deterministic behavior. AND, prevent any "noncompliant" traffic from using the medium at the same time.

E.g., you could have "office equipment" and "process control equipment" sharing a token-passing network and *still* have guarantees for the process control subsystems. Not the case with things like ethernet (unless you create a special protocol stack for those devices and/or interpose some bit of kit that forces them to "behave" properly).
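To make the "layer a different protocol on top" idea concrete, here is a minimal sketch of a software token-passing layer over ordinary UDP/Ethernet. Everything in it (peer list, port, hold time) is hypothetical, and it assumes every station on the segment actually plays by the rule of transmitting only while holding the token:

    # Minimal software token-passing sketch over UDP.  Illustrative only:
    # it shows "transmit only while holding the token" with a bounded turn,
    # and assumes fixed, well-behaved membership.

    import socket, time

    PEERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # assumed, fixed membership
    MY_ADDR = "10.0.0.2"                           # this station's address
    PORT = 9000
    TOKEN = b"TOKEN"
    HOLD_TIME = 0.010                              # max transmit time per turn

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((MY_ADDR, PORT))

    def successor(addr):
        return PEERS[(PEERS.index(addr) + 1) % len(PEERS)]

    def run(send_queue):
        while True:
            data, _ = sock.recvfrom(1500)
            if data != TOKEN:
                continue                           # application traffic from others
            deadline = time.monotonic() + HOLD_TIME
            while send_queue and time.monotonic() < deadline:
                payload, dest = send_queue.pop(0)
                sock.sendto(payload, (dest, PORT)) # our bounded "turn" on the wire
            sock.sendto(TOKEN, (successor(MY_ADDR), PORT))   # pass the token on

One station still has to inject the first token, and a real deployment needs lost/duplicated-token recovery -- which, as noted elsewhere in the thread, is where most of the complexity (and the FSMs) lives.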

Reply to
Don Y

And Arcnet still exists too... (Although Arcnet was token-bus, not token-ring).

Reply to
Robert Wessel

As you are no doubt aware, what happened there was Ethernet switching and it well and truly solved the collision problem.

Most, if not all, packets live on a collision domain with exactly two NICs on it - except for 802.11x, where just about any concept smuggled from the other old standards doubtless lives on in the air link.

--
Les Cargill
Reply to
Les Cargill

And given that most 100Mb and faster (wired) Ethernet links are full duplex these days, there's effectively no collision domain at all.

OTOH, that doesn't prevent the switch from dropping packets if the destination port is sufficiently busy.

Reply to
Robert Wessel

It's not collisions that are the problem. Rather, it is timeliness guarantees. A node on an ethernet switch has no guarantee as to when -- or *if* -- its packets will be delivered.

Switches have fixed size memories. There are no guarantees that packets sent to the switch ever *go* anywhere.

By contrast, token passing networks gave assigned timeslots. You *knew* when your "turn" to use the media would come along. And, were *guaranteed* this by the basic design of the network itself (not some other protocol layered on top of it).

Ever seen any timing guarantees for a generic network switch? It's left as an exercise for the user: look at the packet buffer size in the switch; determine whether it is store-and-forward or cut-through, whether it is blocking or nonblocking, *and* the datacomm characteristics of all the other nodes on your network (will they cooperate with each other's needs? Or blindly use as much as they can get?); then try to come up with a hard-and-fast number for the expected latency of a specific packet from a specific node.

Repeat the exercise for a token passing network.
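As a sketch of what that exercise looks like on the switch side, here is a toy calculation under explicit (and generous) assumptions; every number in it is an assumption, and changing any of them changes the "answer":

    # Switch half of the exercise: store-and-forward, one congested output
    # port, a known amount of competing traffic already queued ahead of you.

    PORT_RATE     = 100_000_000   # 100 Mb/s output port
    FRAME_BYTES   = 1518          # full-size Ethernet frame
    QUEUED_FRAMES = 40            # assumed frames already buffered ahead of yours
    BUFFER_FRAMES = 64            # assumed per-port buffer depth

    if QUEUED_FRAMES >= BUFFER_FRAMES:
        print("your frame is dropped -- latency is 'never'")
    else:
        serialization = FRAME_BYTES * 8 / PORT_RATE
        latency = (QUEUED_FRAMES + 1) * serialization
        print(f"latency ~= {latency*1e3:.2f} ms, *if* the other nodes cooperate")

    # The token-passing half of the exercise is a one-liner:
    # worst case = (stations - 1) * max_frame_time, guaranteed by the MAC itself.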

[If you think that ethernet makes those guarantees, then you can elide all the acknowledgements in the protocols and still be VERY CONFIDENT that everything STILL works properly :> ]
Reply to
Don Y
[attrs elided]

Or for the actions of one node influencing the delivery of traffic from *another* node! ("Betty in accounting is printing a lengthy report -- the CNC machines have stopped as their input buffers are now empty...")

Reply to
Don Y

Did you by chance have one male and one female BNC on the device?

And what happens when someone wants to pull out the device? _All_ traffic on the net is disrupted until the person figures out how to join the two cables together, i.e. finds a female-female adapter after a 15-minute search :-)

Reply to
upsidedown

Even without errors, the constraints weren't very tight - your node could well get the next available slot after the 200 other nodes waiting to transmit a 4K* frame, even with priority reservations (admittedly that could be limited by controlling the number of nodes on the ring). That would be the better part of two seconds. And if there was any sort of token recovery action going on, several seconds of disruption were normal.

*4472 bytes for 4Mb TRN, 17800 for 16Mb
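For the record, the arithmetic behind that "better part of two seconds" figure, using the frame sizes from the footnote (a quick Python check, nothing more):

    # 200 stations each sending one maximum-length frame ahead of you.
    stations, max_frame, bitrate = 200, 4472, 4_000_000
    print(stations * max_frame * 8 / bitrate)     # ~1.79 s on 4 Mb TRN

    # Same queue depth on 16 Mb TRN with 17800-byte frames:
    print(200 * 17800 * 8 / 16_000_000)           # ~1.78 s -- bigger frames eat the speedup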
Reply to
Robert Wessel
