RS485 CSMA/CD protocol

Hi, I need to implement CSMA/CD low-level protocol code for RS485 in my new AVR project for home automation. Any link, hint, or help finding similar source code?

Reply to
therion

My hint would be...Don't.

I've been curious about this sort of thing myself, and as far as I can tell from web research, no one out there has ever got it working satisfactorily for real, as opposed to in demonstration projects.

AFAIK it's just not feasible to reliably detect collisions on a real-world bus with "standard" UART and RS485 hardware.

This is what CANBus is for.

Reply to
richardlang

Hmmm. It is possible, but you have to enable both Tx and Rx, and monitor that Tx = Rx. If not, back off and try again later.
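For what it's worth, here is a minimal sketch of that echo-check idea, assuming an ATmega-style UART with /RE tied low so the receiver always hears the bus; the DE pin assignment is illustrative, and treat the whole thing as untested:

#include <avr/io.h>
#include <stdbool.h>

#define DE_PORT PORTD   /* illustrative: GPIO driving the driver-enable */
#define DE_PIN  PD2     /* /RE is assumed tied low, so Rx always listens */

/* Send one byte and verify that the byte seen on the bus matches what
 * we sent.  Returns false if the echo differs (i.e. a collision). */
static bool rs485_send_verified(uint8_t byte)
{
    DE_PORT |= _BV(DE_PIN);              /* enable the RS485 driver */
    while (!(UCSR0A & _BV(UDRE0)))       /* wait until Tx buffer free */
        ;
    UDR0 = byte;                         /* start transmission */
    while (!(UCSR0A & _BV(RXC0)))        /* wait for our own echo */
        ;
    bool framing_error = UCSR0A & _BV(FE0);  /* read status before UDR0 */
    uint8_t echo = UDR0;
    while (!(UCSR0A & _BV(TXC0)))        /* wait until fully shifted out */
        ;
    UCSR0A |= _BV(TXC0);                 /* clear the TXC flag */
    DE_PORT &= ~_BV(DE_PIN);             /* release the bus */
    return !framing_error && echo == byte;   /* mismatch => collision */
}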

Steve


Reply to
Steve at fivetrees

I came to the same conclusion.

Btw, what are the differences between RS485 and CAN 2.0 at the physical layer?

However, here is a thought:

  1. The sender waits until the bus is idle, then sends an SOF (say, a 0) and bit-bangs the message ID at low speed with software collision detection.

  2. If no collision is detected, the sender continues sending the content at the desired speed using the HW UART.

Even the simple mechanism of checking that the bus is idle and then pulling it active before sending the message substantially decreases the probability of a later collision, so the bit-banged priority arbitration is just a fail-safe for the rare case where multiple senders decide to pull the bus active within the same microsecond.

If the system does not need 100% of the bandwidth and can withstand the latency of an occasional 1-2 retries, you may succeed with the active start symbol and a random back-off timer.
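Not tested on real hardware, but the skeleton of that two-phase send might look like this in C; the bus_* helpers and the timing are hypothetical placeholders for whatever GPIO bit-banging your board needs (dominant = actively driven, recessive = bus released to the termination):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical low-level helpers for the arbitration phase:
 * bus_is_idle()        - true if no one is driving the bus
 * bus_drive_dominant() - actively drive the dominant level
 * bus_release()        - stop driving (bus falls back to recessive)
 * bus_sample()         - read the current bus level (true = dominant)
 * arb_bit_delay()      - one bit time at the slow arbitration rate
 */
bool bus_is_idle(void);
void bus_drive_dominant(void);
void bus_release(void);
bool bus_sample(void);
void arb_bit_delay(void);

/* Phase 1: wait for idle, send the SOF, then bit-bang the message ID
 * MSB first with software collision detection.  Returns true if we
 * won arbitration and may switch to the HW UART for the payload. */
bool arbitrate(uint8_t msg_id)
{
    while (!bus_is_idle())
        ;                                /* carrier sense */

    bus_drive_dominant();                /* SOF: claim the bus */
    arb_bit_delay();

    for (uint8_t mask = 0x80; mask; mask >>= 1) {
        bool bit = msg_id & mask;
        if (bit)
            bus_drive_dominant();
        else
            bus_release();
        arb_bit_delay();
        /* If we sent recessive but see dominant, someone with a
         * higher-priority ID is also transmitting: back off. */
        if (!bit && bus_sample()) {
            bus_release();
            return false;                /* retry after random back-off */
        }
    }
    return true;     /* phase 2: hand the pin back to the UART */
}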

Reply to
rziak

Yeah, yeah, that's how you would do it *in theory*.

In practice I'm not at all confident that "standard" RS485 drivers and UART Rx hardware will reliably detect and report the effects of another RS485 driver at the other end of a sub-optimal (long, heavily loaded) real-world bus trying to drive a conflicting signal level.

Reply to
richardlang

CANbus drivers only actively drive one signal level, relying on termination resistors to passively assert the other level in the absence of any node actively driving the bus. This is what allows CAN's non-destructive collision arbitration mechanism.
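In other words the bus behaves as a wired-AND of all transmitters (with 0 dominant), which is why the numerically lowest ID wins arbitration without the winning frame being destroyed. A toy model of the mechanism, with made-up IDs:

#include <stdint.h>
#include <stdio.h>

/* Toy model of CAN arbitration: each node transmits its 11-bit ID
 * MSB first; the bus level is the wired-AND of all transmitted bits
 * (0 = dominant).  A node that sends 1 but reads 0 drops out, so the
 * lowest ID survives with its frame intact. */
int main(void)
{
    uint16_t ids[] = {0x655, 0x654, 0x731};
    int alive[] = {1, 1, 1};
    int n = 3;

    for (int bit = 10; bit >= 0; bit--) {
        int bus = 1;                        /* recessive unless driven */
        for (int i = 0; i < n; i++)
            if (alive[i])
                bus &= (ids[i] >> bit) & 1; /* wired-AND of all nodes */
        for (int i = 0; i < n; i++)
            if (alive[i] && ((ids[i] >> bit) & 1) != bus)
                alive[i] = 0;               /* sent 1, read 0: lost */
    }
    for (int i = 0; i < n; i++)
        if (alive[i])
            printf("winner: 0x%03X\n", ids[i]);  /* prints 0x654 */
    return 0;
}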

Some early CAN systems used RS485 drivers, but with the TTL Tx signal from the higher-level controller wired to the driver chip's enable line rather than to the Tx input.

Reply to
richardlang

However, you can use CAN bus transceivers with a UART-based uC, especially if it is all your own network.

RS485 drivers have low-Z outputs, so with real cable impedances you cannot reliably sense contention.

CAN drivers are open-collector, so they wire-OR together and can detect collisions within the transit times of the cables.

So a vanilla uC can send while checking Rx to verify there are no differences; if it detects a difference, it has to go into collision mode.

UART systems will take longer to detect and respond to a collision than CAN, but in a simple network with low traffic, the lower cost of the devices might make that worthwhile.

Since most uC UARTs have 9-bit modes, you can still use the CAN transceiver and collision detection, but have a master that allocates timeslots if you need deterministic performance.
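As a sketch of that last idea: on an ATmega-class part, the 9-bit multi-processor mode (MPCM) lets slaves ignore everything except address frames, so a master can hand out timeslots by address. MY_ADDR is a made-up node address and the register usage follows the ATmega48/88/168 family, but treat it as an untested outline:

#include <avr/io.h>

#define MY_ADDR 0x05    /* hypothetical node address */

/* Configure the UART for 9-bit frames with multi-processor mode:
 * frames with the 9th bit set are addresses, all others are data. */
static void uart9_init(uint16_t ubrr)
{
    UBRR0H = ubrr >> 8;
    UBRR0L = ubrr & 0xFF;
    UCSR0B = _BV(RXEN0) | _BV(TXEN0) | _BV(UCSZ02);  /* 9-bit frames */
    UCSR0C = _BV(UCSZ01) | _BV(UCSZ00);
    UCSR0A |= _BV(MPCM0);   /* ignore data frames until addressed */
}

/* Slave side: block until the master sends our address frame, then
 * drop out of MPCM so the following data frames are received.  Set
 * MPCM0 again when the timeslot is over. */
static void wait_for_timeslot(void)
{
    for (;;) {
        while (!(UCSR0A & _BV(RXC0)))
            ;
        uint8_t ninth = UCSR0B & _BV(RXB80);  /* 9th bit: address flag */
        uint8_t addr  = UDR0;                 /* read after status bits */
        if (ninth && addr == MY_ADDR) {
            UCSR0A &= ~_BV(MPCM0);            /* our slot: receive data */
            return;
        }
    }
}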

-jg

Reply to
Jim Granville

That kind of setup doesn't work reliably in my experience. What you can do is drive only one level actively and let the termination of the bus define the other level. Then you connect the outgoing data to the output enable of the RS485 driver. But then we're not within the RS485 spec any more.

I have seen this work for synchronous communication at megabit speeds. At least some members of the QUICC/PowerQUICC family from Freescale have a mode called HDLC Bus that can be used for this. It is really designed for direct connections between MCUs, but with a few tricks it can be made to work with RS485 drivers.

/Henrik

Reply to
Henrik Johnsson

This could work quite reliably.

However, you would need an XOR gate between the TTL-level Rx and Tx signals to detect coincidence, and you would still get spikes at each bit transition due to the limited rise and fall times, and possibly due to reflections. You would have to filter out these spikes, but how do you set the threshold so as to still quickly detect two transmitters starting to send the same character, say, 1/4 bit time after each other?
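One software-side answer is to require the mismatch to persist for several consecutive samples before declaring a collision; the sample rate and threshold below are illustrative guesses, and picking them is exactly the trade-off described above (too small and edge spikes trigger it, too large and a 1/4-bit offset slips through):

#include <stdbool.h>
#include <stdint.h>

bool xor_mismatch(void);   /* hypothetical: reads the XOR gate output */

/* Sample the Tx^Rx coincidence signal at, say, 8x the bit rate and
 * flag a collision only after N consecutive mismatch samples, so that
 * short spikes from rise/fall times and reflections are ignored. */
#define COLLISION_THRESHOLD 3   /* illustrative: ~3/8 of a bit time */

bool collision_detected(void)
{
    static uint8_t run = 0;
    if (xor_mismatch()) {
        if (++run >= COLLISION_THRESHOLD)
            return true;
    } else {
        run = 0;                /* spike ended: reset the counter */
    }
    return false;
}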

If the collision detection is done purely in software after the UART, much more timing information is lost, especially if a FIFO is used on Rx and Tx. If the protocol always starts with the same character, say STX, how do you know whether the STX you got from the Rx FIFO is from your own transmission or not?

Starting the message with _your_ address (not the destination address, as in Modbus) would at least give some assurance: you either get your own ID back, get the ID of another station, or get a garbled character signifying a collision. But there would still be false triggering due to time shifts of 2, 4 or 6 bits. One way around this would be to limit the number of stations to 8 and use station addresses with only one 0 bit, i.e. EF, DF, BF, 7F, FE, FD, BF, F7 (hex).
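As a sketch, the classification of the echo after sending your own address could then look like this (the address table is the one proposed above; the framing_error flag would come from the UART status register):

#include <stdint.h>

/* Station addresses with exactly one 0 bit, as suggested above, so a
 * bit-shifted overlap of two addresses is very likely to produce a
 * pattern that is not a valid address (or a framing error). */
static const uint8_t station_addr[8] = {
    0xEF, 0xDF, 0xBF, 0x7F, 0xFE, 0xFD, 0xFB, 0xF7
};

enum tx_result { TX_OK, TX_LOST_ARBITRATION, TX_COLLISION };

/* Classify the echo received after transmitting our own address. */
enum tx_result classify_echo(uint8_t my_addr, uint8_t echo, int framing_error)
{
    if (framing_error)
        return TX_COLLISION;             /* garbled character */
    if (echo == my_addr)
        return TX_OK;                    /* we still own the bus */
    for (int i = 0; i < 8; i++)
        if (echo == station_addr[i])
            return TX_LOST_ARBITRATION;  /* another station's ID */
    return TX_COLLISION;                 /* not a valid address */
}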

With low expected collision rates, it might even be acceptable to detect collisions from failing CRCs at the end of frame.
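For reference, the usual CRC-16 as used by Modbus RTU is easy to compute on the fly, so a failed end-of-frame check can double as a crude collision indicator:

#include <stdint.h>
#include <stddef.h>

/* CRC-16 as used by Modbus RTU (polynomial 0x8005 reflected = 0xA001,
 * initial value 0xFFFF), computed bitwise. */
uint16_t crc16_modbus(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : (crc >> 1);
    }
    return crc;
}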

Paul

Reply to
Paul Keinanen

How many devices and how much comms? I have seen people implement servers which poll all devices and end up with just as good throughput.

Reply to
DarkD

This collision protocol only makes sense with multiple masters, not in a master-slave setup. And if you have multiple masters, then you'd better implement a token protocol. IMO, this collision protocol was one of the biggest mistakes in the history of IT.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Maybe, but it also may have been an unavoidable mistake - I cannot think of a better approach for Hawaii's original ALOHA radio network, on which Ethernet/CSMA/CD were partially based. (Especially when you consider the technology available at the time.) Each station in the Aloha network could not listen to the others, both because of the terrain of the island and because the antennas were directed at the "common medium", the satellite, so they had to broadcast blindly and wait for the satellite to echo the message back to all the stations + acknowledge, as an indication that there was no collision.

Of course the satellite or a dedicated ground station could have worked as a master distributing tokens, allocating time slots, etc., but that would have required them to adjust to changing network configurations and use up some of the available bandwidth (9.6 Kbits/sec!!). Again, think of the hardware available in the early 70's...

Excerpts from the paper "Ethernet: Distributed Packet Switching for Local Computer Networks", Robert M. Metcalfe and David R. Boggs, Xerox Palo Alto Research Center, CACM, July 1976 (almost 30 years ago!):

".....

1. Design Principles

Our object is to design a communication system which can grow smoothly to accommodate several buildings full of personal computers and the facilities needed for their support.

Like the computing stations to be connected, the communication system must be inexpensive. We choose to distribute control of the communications facility among the communicating computers to eliminate the reliability problems of an active central controller, to avoid creating a bottleneck in a system rich in parallelism, and to reduce the fixed costs which make small systems uneconomical. Ethernet design started with the basic idea of packet collision and retransmission developed in the Aloha Network [1]. We expected that, like the Aloha Network, Ethernets would carry bursty traffic so that conventional synchronous time-division multiplexing (STDM) would be inefficient [1, 2, 21, 26]. We saw promise in the Aloha approach to distributed control of radio channel multiplexing and hoped that it could be applied effectively with media suited to local computer communication. With several innovations of our own, the promise is realized.

Ethernet is named for the historical luminiferous ether through which electromagnetic radiations were once alleged to propagate. Like an Aloha radio transmitter, an Ethernet transmitter broadcasts completely-addressed transmitter-synchronous bit sequences called packets onto the Ether and hopes that they are heard by the intended receivers. The Ether is a logically passive medium for the propagation of digital signals and can be constructed using any number of media including coaxial cables, twisted pairs, and optical fibers.

3.1 Topology

We cannot afford the redundant connections and dynamic routing of store-and-forward packet switching to assure reliable communication, so we choose to achieve reliability through simplicity. We choose to make the shared communication facility passive so that the failure of an active element will tend to affect the communications of only a single station. The layout and changing needs of office and laboratory buildings leads us to pick a network topology with the potential for convenient incremental extension and reconfiguration with minimal service disruption. ...."

A few highlights:

"to accommodate several buildings full of personal computers..."

Several buildings, not a city, not the Internet, not the world.

"We expected that, like the Aloha Network, Ethernets would carry bursty traffic so that conventional synchronous time-division multiplexing (STDM) would be inefficient [1, 2, 21, 26]. "

Not teleconferencing, video-streaming, etc.

"we choose to achieve reliability through simplicity"

And that was indeed achieved.

(x-posted to alt.folklore.computers - I'm sure somebody will have something interesting to comment over there, before drifting off-topic for the next 3000 follow-ups... ;-) )

Roberto Waltman.

[ Please reply to the group, return address is invalid ]
Reply to
Roberto Waltman

Indeed, I once saw an Ethernet installation where the medium was the open air, via a number of stubby aerials. It was a warehouse robot control system with a number of robots and a control computer communicating by Ethernet with no wires.

--
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
                                            |    http://www.sohara.org/
Reply to
Steve O'Hara-Smith

Too bad the frames in CAN are so tiny.

Yes, I know the reason why, and in some applications it does make sense. But in applications where higher latencies can be tolerated, having an option to use 256- or 512-byte frames would cut way down on the overhead.

--
Grant Edwards                   grante             Yow!  The FALAFEL SANDWICH
                                  at               lands on my HEAD and I
                               visi.com            become a VEGETARIAN...
Reply to
Grant Edwards

Why? It works brilliantly for Ethernet. In my experience, token passing is horribly complex.

--
Grant Edwards                   grante             Yow!  My vaseline is
                                  at               RUNNING...
                               visi.com
Reply to
Grant Edwards

The token ring advocates always pointed out how certain circumstances could lead to collision detection not working nicely. They were also extremely uncomfortable with the lack of guaranteed bandwidth for each master, etc.

But the token ring vs. collision detection wars for general-purpose networking were fought, and token ring lost. In real life, the concerns that the token ring advocates had about collisions just don't happen, even on highly saturated Ethernets.

Now, you can make up some really stupid collision detection/back-off algorithms that just don't work. CSMA/CD can be stupidly implemented such that it doesn't work well. Usually these implementations were done by committees who thought too hard about a simple problem and worked hard to come up with an insane list of requirements. I think the original poster was looking to avoid such mistakes by asking for example code that does things the right (not wrong) way.

Curiously I've seen CSMA/CD done the "wrong way" most often when the committee designing it has a lot of token ring advocates on it. They add all sorts of arbitrary and unnecessary requirements about guaranteed bandwidth etc. and make the result useless.

Token ring still lives on in many special-purpose protocols, not necessarily because collision detection won't work but just because it wasn't used.

Tim.

Reply to
Tim Shoppa

No, it doesn't work brilliantly. It is crap. It doesn't have a deterministic response time. A token protocol such as the one in ARCnet is much better: each node gets a slot every 150 ms or so. At the time the two battled, Ethernet was 10 Mbit and ARCnet was 2.5 Mbit, but under load ARCnet performed much better. While ARCnet held steady at somewhat below 2.5 Mbit over the bus, independent of the number of nodes and the traffic, Ethernet went right down to zero with increasing load.

But the marketing guys just saw the 10 Mbit vs. 2.5 Mbit. Too bad. Ethernet has since improved its bandwidth, but the response time is still not deterministic unless all nodes are connected to a switch. The switch avoids collisions, of course.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

The point is the behaviour under load. A token protocol doesn't require a token ring; a bus is sufficient. ARCnet works on twisted pair and over coax.

Token Ring failed for commercial reasons. It was single-source and the price was beyond reasonable.

When you have a network that should work under heavy load, e.g. all nodes wanting to transfer huge binaries at the same time, a token protocol distributes the bandwidth of the medium over the nodes, while the collision-detect system just detects endless collisions and does endless retries.

It is less that the Token Ring advocates were uncomfortable. A realtime system requires a defined response time that suits the physical installation the system is supposed to control. A car control system requires response times in the millisecond region, and you wouldn't want your car's CPU to be retrying some bullshit while you want the car to stop. Realtime response has nothing to do with being comfortable; it has to do with lives.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Exactly, so for this type of application it may be a wise engineering choice NOT to use Enet for communication. This does not make it bad! I don't think a blanket statement about it being the worst decision ever is warranted. If you used Ethernet in a hard RT system and did not design it properly, that's your fault, not the protocol's.....

John

Reply to
John Hudak

one of the things we did in the hsdt in the 80s was design and deploy tdma earth stations ... we got a transponder on sbs-4 ... and even got to go to the launch party at the cape (complicated by the scrubs).

slightly related post (at the very bottom mentioning AT&T): Data communications over telegraph circuits

we were getting into some fancy dynamic bandwidth re-allocation on superframe boundaries. the stations were also capable of agile frequency hopping.

a little topic drift ... my wife is listed as co-author on an early token-passing patent. some related posts:

internet precedes Gore in office
IBM 3614 and 3624 ATM's
Cerf and Kahn receive Turing award
practical applications for synchronous and asynchronous communication
Development as Configuration

there is even a tie-in between the rios chip design work going on in austin and the lsm & eve out in the valley.

for even more drift:

Chip Emulators - was How does a chip get designed?
Multics hardware (was Re: "Soul of a New Machine" Computer?)
LSM, YSE, & EVE
asynchronous CPUs
Ping: Anne & Lynn Wheeler
US fiscal policy (Was: Bob Bemer, Computer Pioneer, Father of ASCII, Invento
CKD Disks?
360 longevity, was RISCs too close to hardware?
[Lit.] Buffer overruns

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Reply to
Anne & Lynn Wheeler
