RS485 CSMA/CD protocol

Ethernet is used in hard real-time, vital, safety-critical systems all the time. It's a matter of system design to make sure that collisions or a network failure don't result in unsafe conditions. Of course, if a Token advocate designed a CSMA/CD system you would end up with a disaster: they would take simple, working collision detection/backoff algorithms, layer on all sorts of crap to guarantee timeslots etc., and end up with a system that doesn't work.

In fact I would be very critical of a safety-critical system where a network failure (here I'm lumping "excessive collisions" in with "network failure" which isn't too far off for Ethernet but of course the Token advocates will jump all over me for this) results in unsafe conditions.

Of course, I work in an industry where we take pride in our vital relays and vital processors having to meet more stringent failsafe standards than the detonation systems in thermonuclear weapons :-). But unlike say aerospace or road vehicle systems, my industry has the advantage that setting all signals to stop, dropping speed commands, and applying full-service brakes is the failsafe condition. It's hard to claim that shutting down an airplane's engines or applying full braking on a road vehicle is a failsafe condition...!

Tim.

Reply to
Tim Shoppa

the big transition for ethernet was adding listen before transmit (and adapting t/r cat5 hub/spoke)

my vague recollection was that early ethernet was 3mbit/sec, didn't do listen before transmit, and had big thick cables ... looked a lot like the pcnet cables (which i believe did 1mbit/sec but used a tv head-end type implementation).

when we had come up with 3-tier architecture and were out pitching it in executive presentations

formatting link

we were getting a lot of push-back from the saa and token-ring folks. some characterized the saa effort as trying to put the client/server genie back into the bottle ... aka maintain the terminal emulation operation

formatting link

and since we were pitching enet ... the token-ring people were also getting really upset. some t/r person from the dallas engineering & science center had done a report that showed enet typically only got 1mbyte/sec thruput (we conjectured that they based the numbers on the old 3mbit/sec implementation before listen before transmit).

research had done a new bldg. up on the hill ... and it was completely wired with cat5, supposedly for t/r ... but they found that they were getting higher thruput and lower latency using it for star-wired 10mbit ethernet (even compared to 16mbit t/r). adapting the t/r hub&spoke cat5 configurations to ethernet tended to reduce the worst case latency on listen before transmit. this improved further by making the hub active ... so the worst case was the longest leg to the hub rather than the latency across the hub between the two longest legs.

then a paper came out in 88 acm sigcomm showing that a typical 10mbit ethernet star-wired hub configuration with all stations doing worst case, low-level device driver loop transmitting minimum sized packets was getting aggregate effective thruput of 85 percent of media capacity.

misc. past refs:

formatting link
"Mainframe" Usage
formatting link
Ethernet efficiency (was Re: Ms employees begging for food)
formatting link
Ethernet efficiency (was Re: Ms employees begging for food)
formatting link
OT - Internet Explorer V6.0
formatting link
Buffer overflow
formatting link
Microcode? (& index searching)
formatting link
Microcode? (& index searching)
formatting link
ibm time machine in new york times?
formatting link
ibm time machine in new york times?
formatting link
Rewrite TCP/IP
formatting link
Fast TCP
formatting link
Window field in TCP header goes small
formatting link
packetloss bad for sliding window protocol ?
formatting link
were dumb terminals actually so dumb???
formatting link
were dumb terminals actually so dumb???
formatting link
FAST TCP makes dialup faster than broadband?
formatting link
IBM 3614 and 3624 ATM's
formatting link
Successful remote AES key extraction
formatting link
practical applications for synchronous and asynchronous communication
formatting link
Development as Configuration

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Reply to
Anne & Lynn Wheeler

A protocol that can be used with RS485 buffers at high bit rates is SDLC. It is closely related to HDLC. It also supports token passing (loop mode): each node retransmits, one bit time later, what it receives.
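The one-bit-delay repeating described above can be sketched as a tiny simulation (illustrative C, not actual SDLC code; the `node_t` type, ring size, and `ring_step()` stepping are assumptions):

```c
#include <assert.h>

/* One-bit-delay repeater ring (SDLC loop-mode style): each node latches
   the incoming bit and retransmits it on the next bit time, so a bit
   injected by the primary comes back after NODES bit times. */
#define NODES 4

typedef struct { int latch; } node_t;

/* Advance the ring one bit time: the primary injects `in`, each node
   outputs its latched (previous) bit and latches what it receives.
   Returns the bit the primary receives back from the last node. */
int ring_step(node_t ring[NODES], int in)
{
    int signal = in;
    for (int i = 0; i < NODES; i++) {
        int out = ring[i].latch;   /* retransmit the previous bit */
        ring[i].latch = signal;    /* latch the incoming bit */
        signal = out;
    }
    return signal;
}
```

Each node adds exactly one bit time of latency, which is what lets a secondary seize the token by flipping a bit as it passes through.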

Regards Anton Erasmus

Reply to
Anton Erasmus

When well-defined response times are needed, I would not mess with any tokens but would use a traditional fixed single-master, multiple-slave system.

For applications in which the need is for some high-priority emergency messages plus ordinary messages, I would use CAN and not mess with tokens, which would require lost-token handling etc.
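The single-master idea bounds response time by construction: the master polls each slave address round-robin, so the worst case is one full poll cycle. A minimal sketch (illustrative C; the slave count, per-poll time, and function names are assumptions):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_SLAVES 8   /* assumed bus population */
#define POLL_MS    2   /* assumed poll + reply time per slave */

/* Round-robin poll scheduler: returns the next slave address to poll
   and advances the cursor, wrapping at NUM_SLAVES. */
uint8_t next_slave(uint8_t *cursor)
{
    uint8_t addr = *cursor;
    *cursor = (uint8_t)((*cursor + 1) % NUM_SLAVES);
    return addr;
}

/* Worst-case response time for any slave: one full poll cycle. */
unsigned worst_case_response_ms(void)
{
    return NUM_SLAVES * POLL_MS;
}
```

No collisions are possible because only the polled slave may transmit; the trade-off is that idle slaves still consume their poll slot.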

Paul

Reply to
Paul Keinanen

ref:

formatting link
Ethernet, Aloha and CSMA/CD
formatting link
Ethernet, Aloha and CSMA/CD

the whole saa & terminal emulation forever

formatting link

overflowed into a number of areas.

romp/pcrt

formatting link

had done a custom 16bit 4mbit/sec t/r card ... and then the group was mandated to use the PS2 microchannel 16mbit/sec t/r card for RIOS/6000.

the problem was that the PS2 card had the SAA and terminal emulation design point, where configurations had 300 PCs per t/r lan; bridged, sharing a common theoretical 16mbit (but in actuality much less), no routers, no gateways, etc. SNA didn't have a network layer ... just a table of physical mac addresses ... modulo when APPN was introduced. we used to kid the person responsible for APPN that he should stop wasting his time trying to further kludge up SNA (the SNA group had non-concurred with even announcing APPN; there was a several-week escalation process and the APPN announcement letter was rewritten to not imply any relationship between APPN and SNA).

In any case, the pc/rt & rios market segment was supposedly high-performance workstations, client/server, and distributed environments. The custom pcrt 16bit 4mbit/sec t/r card actually had higher per card thruput than the PS2 32bit 16mbit/sec t/r card (again, the saa terminal emulation paradigm).

the pcrt/rios market segment required high per card thruput for high performance workstations and servers (in client/server environments where traffic is quite asymmetrical).

in this period, a new generation of hub/spoke enet cards were appearing (with new generation of enet controller chips like the 16bit amd lance), where each card was capable of sustaining full 10mbit (aka a server could transmit 10mbit/secs serving a client base having aggregate 10mbit/sec requirements).

by comparison, the microchannel 16mbit t/r environment actually had lower aggregate thruput and longer latencies ... AND the available cards had per card thruput designed to meet the terminal emulation market requirements (and one could say that the lack of high thruput per card also inhibited the evolution of client/server ... as well as the 3-tier middle-layer/middleware paradigm that we were out pushing).

my wife had co-authored and presented a response to a gov. request for a high-integrity, operational, campus-like distributed environment ... in which she had originally formulated a lot of the 3-tier principles.

formatting link

we then expanded on the concepts and were making 3-tier and "middle layer" presentations at customer executive seminars ... heavily laced with high-performance routers aggregating large numbers of enet segments. instead of having 300 machines bridged, sharing a single 16mbit t/r, you had 300 "clients" spread across ten or more enet segments ... with servers having dedicated connectivity to the high-speed routers. other components then were used to stage and complete the 3-tier architecture. a couple of past posts in answer to a question on the origins of middleware:
formatting link
middle layer
formatting link
middle layer

this also contributed to the work that we did coming up with the ha/cmp product

formatting link

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Reply to
Anne & Lynn Wheeler

... and the street price of the new generation of 16bit enet cards capable of sustaining 10mbit/sec/card was heading towards $49 ... while the ps2 microchannel 16mbit t/r cards (where you were lucky to get much more than 1mbit/sec/card, aka the per card sustained thruput was less than the pc/rt 16bit 4mbit/sec t/r card) were holding in at over $900.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Reply to
Anne & Lynn Wheeler

Oops, sorry officer. I was tuning my radio and I guess I caused the brakes to stop working for a while...

Reply to
Peter Flass

I have implemented a multi-master, packet-oriented framework over RS485 using HDLC-style framing with byte stuffing, checksums and positive acknowledgement.
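A minimal sketch of that kind of framing (illustrative C, not the poster's actual code; the 0x7E flag / 0x7D escape values follow HDLC-style convention, and the 8-bit additive checksum is an assumption):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FLAG 0x7E   /* frame delimiter */
#define ESC  0x7D   /* escape byte */
#define XORV 0x20   /* escaped bytes are XORed with this */

/* Encode payload into a frame: FLAG, stuffed payload, stuffed 8-bit
   additive checksum, FLAG.  Returns the encoded length. */
size_t frame_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    uint8_t sum = 0;
    out[o++] = FLAG;
    for (size_t i = 0; i <= n; i++) {
        uint8_t b = (i < n) ? in[i] : sum;   /* checksum goes last */
        if (i < n) sum = (uint8_t)(sum + in[i]);
        if (b == FLAG || b == ESC) {         /* byte-stuff specials */
            out[o++] = ESC;
            out[o++] = b ^ XORV;
        } else {
            out[o++] = b;
        }
    }
    out[o++] = FLAG;
    return o;
}

/* Decode one frame: strip flags, unstuff, verify the trailing checksum.
   Returns payload length, or -1 on a bad frame. */
int frame_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    int esc = 0;
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t b = in[i];
        if (b == FLAG) continue;             /* opening/closing flag */
        if (b == ESC) { esc = 1; continue; }
        if (esc) { b ^= XORV; esc = 0; }
        out[o++] = b;
    }
    if (o == 0) return -1;
    o--;                                     /* last byte is the checksum */
    for (size_t i = 0; i < o; i++) sum = (uint8_t)(sum + out[i]);
    return (sum == out[o]) ? (int)o : -1;
}
```

A real implementation would likely use a CRC rather than an additive checksum, but the stuffing mechanics are the same.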

It didn't use receive monitoring during transmit; instead it waited for the line to become idle (inactivity in the receiver) for a short sustained period, then transmitted into the wilderness and used timers, random backoffs and retries upon non-receipt of an ACK.
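The random-backoff side of that scheme might look something like this (illustrative C; the PRNG choice and the slot-window cap are assumptions, not the poster's implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Small 16-bit xorshift PRNG for backoff jitter (illustrative;
   seed must be nonzero). */
static uint16_t rand16(uint16_t *state)
{
    uint16_t x = *state;
    x ^= (uint16_t)(x << 7);
    x ^= (uint16_t)(x >> 9);
    x ^= (uint16_t)(x << 8);
    return *state = x;
}

/* Backoff delay before retry `attempt` (0-based): a random number of
   slot times in [0, 2^(attempt+1) - 1], with the window capped at 64
   slots so late retries don't wait unboundedly long. */
unsigned backoff_slots(unsigned attempt, uint16_t *seed)
{
    unsigned window = 1u << (attempt + 1);
    if (window > 64) window = 64;
    return rand16(seed) % window;
}
```

Doubling the window on each failed attempt (as in Ethernet's binary exponential backoff) makes repeated collisions between the same two nodes increasingly unlikely.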

Collisions were actually quite rare, and when they did happen the checksums and ACKs took care of them.

I doubt it would win any awards, but it was trivial to implement and worked well enough for the environment it was in.

Regards, Paul.

Reply to
Paul Marciano

Ok, while most of you were elaborating about token vs. CSMA/CD, you forgot the Q... and I built a prototype network with 5 nodes - and it works very well! I will post C source code soon. The aim of the project is to build a HOME AUTOMATION network, no video, no brakes, not driving planes!!! just control of the lights, HVAC and usual HA stuff. Like Tim Shoppa pointed out: "I think the original poster was looking to avoid such mistakes by asking for example code that does things the right (not wrong) way."

Please help in that direction!

THX.

Reply to
therion


It was also *tremendously* more difficult to implement in routers and such. The volume of code required to add token ring was phenomenal (and FDDI, which participated heavily in what we called "token ring brain damage").

Cheaper to buy a bit more network bandwidth and use a simpler protocol, as it turns out. Ethernet works fine under even rather heavy loads in the real world.

Using a *network interconnect* for something requiring realtime response is pretty dumb, though.

--
David Dyer-Bennet, , 
RKBA:  
Pics:  
Dragaera/Steven Brust:
Reply to
David Dyer-Bennet

The big change was, IMO, the DIX (DEC, Intel, Xerox) spec with 10Mb over coax, and the tightened-up DIX II. The original cable was a special, and was changed to RG-forget. This also got rid of the small embarrassment of the cable exceeding Tempest specs and in theory being unexportable from the US. (The fix was to change the cable spec.)

There was a 3 Mb `Xerox Wire' that preceded DIX.

--
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
Reply to
prep

As far as I remember, the only special thing about the yellow DIX cable was the markers that dictated the minimum distance where you could put your vampire taps. Quite a few installations used the standard RG-8 (currently known as RG-213), but you had to measure the minimum tap distance yourself.

Paul

Reply to
Paul Keinanen

Over what distance have you tried this? In my experience it could work fine on a short bus, but fail over longer distances. I've had buses with about 20 nodes over a distance of 2-3 meters where collisions occurred due to hardware failure. If the desired signal was input at one end and a disturbing signal at the other, the nodes close to the good transmitter would still reliably receive that signal; no collision would have been detected at that end of the bus.

[snip]

It's not a matter of using clever code; the electrical interface will not work predictably in such a setup. To compensate you would have to use a pretty complex protocol on top of all the CSMA/CD stuff.

If you use the termination to hold the "high" level and only drive the "low" level actively (or vice versa), you would probably stand a better chance. It would perhaps require a lower data rate, but in home automation I guess it could be fast enough.
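That dominant/recessive idea is what CAN's bitwise arbitration builds on: a node that sends the recessive (passively held) level but reads back the dominant one knows someone else is also transmitting and drops out, with no frame destroyed. A minimal wired-AND simulation (illustrative C; the node count and ID width are assumptions):

```c
#include <assert.h>
#include <stdint.h>

/* Wired-AND bus: the line is high only if every node lets it float.
   Any node driving low (dominant) pulls the whole bus low. */
static int bus_level(const int *driven, int n)
{
    for (int i = 0; i < n; i++)
        if (driven[i] == 0) return 0;
    return 1;
}

/* Bitwise arbitration (CAN-style), up to 8 nodes: each node sends its
   ID MSB-first and drops out as soon as it reads back a level it did
   not drive.  Returns the index of the winning (lowest-ID) node. */
int arbitrate(const uint8_t *ids, int n, int bits)
{
    int alive[8];
    for (int i = 0; i < n; i++) alive[i] = 1;
    for (int b = bits - 1; b >= 0; b--) {
        int driven[8];
        for (int i = 0; i < n; i++)
            driven[i] = alive[i] ? (ids[i] >> b) & 1 : 1; /* silent = float */
        int level = bus_level(driven, n);
        for (int i = 0; i < n; i++)
            if (alive[i] && driven[i] != level)
                alive[i] = 0;   /* sent recessive, saw dominant: lose */
    }
    for (int i = 0; i < n; i++)
        if (alive[i]) return i;
    return -1;
}
```

Note this only works if every transmitter can read the bus while driving it, and the bit time is long enough for the dominant level to propagate the full cable length, which is why CAN trades bus length against bit rate.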

Even if that direction is a dead end?

/Henrik

Reply to
Henrik Johnsson

Only there wasn't any satellite.

Peripheral stations would broadcast uncoordinated toward the central station; the central station would broadcast - on a different frequency channel - acks to the peripherals (if reception was ok) and, of course, packets for the peripherals.

On top of that, virtual serial lines were implemented that could address RJE or printers or whatever either at another station or at the central site, which included a gateway to a satellite link to ARPAnet at some point - which might be the reason for the confusion.

But yes, many of the peripheral stations would not be able to hear one another; so carrier sense was impossible, and explicit collision detection wasn't done either. The central station (as any other) would discard packets with wrong CRCs and send an ACK for good packets.

Regards, -is

--
seal your e-mail: http://www.gnupg.org/
Reply to
Ignatios Souvatzis
