Suggestions for custom application-layer protocol? - Page 3

Re: Suggestions for custom application-layer protocol?




There was a multi-drop protocol (I think that's the term, my brain is a bit frazzled today)
which was used to connect many terminals to a host controller.

ISO/ASYNC I think it was called.  Relied on a host polling each terminal
in turn, at which point they were allowed to speak.

Was used extensively with serial driven ATMs - most are TCP/IP these days
I believe.

But we digress...

Glyn
--

------------------------------------------------------------------------
Glyn Davies / snipped-for-privacy@plig.net / www.technobobbins.com / Insert quote here?
Re: Suggestions for custom application-layer protocol?


I've never had the misfortune of having to deal with such a thing.


I once saw an ATM display an MS Windows dialog box with a
DHCP-related error message.

--
Måns Rullgård
snipped-for-privacy@inprovide.com

Re: Suggestions for custom application-layer protocol?
<snip>



:-)
By the time I got to it we were moving people off them to dedicated lines.
Protocol stayed though as it worked fine.



Yeah - they obviously didn't have the clever* stuff we knocked up
to stop that kind of thing happening.

Glyn
--

------------------------------------------------------------------------
Glyn Davies / snipped-for-privacy@plig.net / www.technobobbins.com / Insert quote here?
Re: Suggestions for custom application-layer protocol?


A byte stream is a byte stream.  The serial (as in RS-232) byte
stream isn't reliable, but I can't see any difference between a
serial comm link and a TCP link when it comes to message framing.

--
Grant Edwards                   grante             Yow!  How do I get HOME?
                                  at              
Re: Suggestions for custom application-layer protocol?




Nope, there is no general difference...
The serial streams were generally 7-bit though, which would stop you
using a binary length header.

Also, as you point out, the serial stream is unreliable.
That makes a difference in the protocol you choose.  With STX/ETX
framing, if you get garbage you can resync at the next STX.  If you
are using length headers and you get some noise on the line, you are
lost and have no way to resync.
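[Editor's note: the resync property is easy to see in code. Below is a minimal Python sketch of an STX/ETX receiver; the function name is mine, and it assumes payloads never contain the STX/ETX bytes themselves (e.g. 7-bit text, as in the thread) — binary payloads would need an escaping scheme.]

```python
STX, ETX = 0x02, 0x03

def deframe(stream):
    """Extract STX...ETX payloads from a noisy byte stream.
    Garbage outside a frame is discarded, and an STX seen
    mid-frame simply restarts the receiver, so one corrupted
    frame cannot desynchronise the frames that follow."""
    frames, buf, in_frame = [], bytearray(), False
    for b in stream:
        if b == STX:                 # (re)sync on every STX
            buf.clear()
            in_frame = True
        elif b == ETX and in_frame:  # complete frame
            frames.append(bytes(buf))
            in_frame = False
        elif in_frame:               # payload byte
            buf.append(b)
    return frames

# noise, a good frame, more noise, another good frame
print(deframe(b"\xff\x99\x02AB\x03junk\x02CD\x03"))  # → [b'AB', b'CD']
```

A length-header receiver has no equivalent recovery point: once the count byte is corrupted, every subsequent "length" field is read from the wrong offset.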

Glyn

--

------------------------------------------------------------------------
Glyn Davies / snipped-for-privacy@plig.net / www.technobobbins.com / Insert quote here?
Re: Suggestions for custom application-layer protocol?


Except for Modbus RTU style framing, in which the time gaps between
bytes _are_ the actual frame delimiters. Maintaining these over a
TCP/IP link would be a bit problematic :-).
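[Editor's note: a hypothetical sketch of gap-delimited framing, assuming the receiver can timestamp each byte on arrival — names and the event-list representation are mine. It also shows why this breaks behind a hardware FIFO or a TCP link: once bytes are batched, the inter-byte timing that carries the frame boundaries is gone.]

```python
def split_by_gaps(events, char_time_s, gap_chars=3.5):
    """Split a timestamped byte stream into Modbus-RTU-style frames.
    events is a list of (timestamp_seconds, byte) pairs; a silent
    interval longer than gap_chars character times ends a frame."""
    frames, current, last_t = [], bytearray(), None
    for t, b in events:
        if last_t is not None and (t - last_t) > gap_chars * char_time_s:
            frames.append(bytes(current))  # gap seen: close current frame
            current = bytearray()
        current.append(b)
        last_t = t
    if current:
        frames.append(bytes(current))
    return frames

# 9600 baud, 8N1: 11 bit times per character
CHAR = 11 / 9600
events = [(0.0, 0x11), (0.001, 0x03), (0.002, 0x00),   # frame 1
          (0.050, 0x11), (0.051, 0x84)]                # frame 2, after a gap
print(split_by_gaps(events, CHAR))  # → [b'\x11\x03\x00', b'\x11\x84']
```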

Paul


Re: Suggestions for custom application-layer protocol?

True.  Modbus RTU's 3.5 byte time delimiter sucks.  You're
screwed even if all you want to do is use the RX FIFO in a
UART.

--
Grant Edwards                   grante             Yow!  It's a lot of fun
                                  at               being alive... I wonder if
Re: Suggestions for custom application-layer protocol?

Hardly any of the Modbus/RTU programs on PCs handle the
frame timing correctly.

Modbus framing is one of the best examples of how framing
should not be done. The only thing that competes with
it is the idea of tunneling Modbus datagrams over TCP
instead of UDP.

--

Tauno Voipio
tauno voipio (at) iki fi


Re: Suggestions for custom application-layer protocol?


As long as they're the master, or it's a full-duplex bus, they
can get away with it.  Being a slave on a half-duplex bus
(everybody sees both commands and responses) is where the
problems usually happen.

I once talked to somebody who used an interesting scheme to
detect Modbus RTU messages.  He ignored timing completely (so
he could use HW FIFOs), and decided that he would just monitor
the receive bytestream for any block of data that started with
his address and had the correct CRC at the location indicated
by the bytecount.  It meant that he had to have a receive
buffer twice as long as the max message and keep multiple
partial CRCs running, but I guess it worked.
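[Editor's note: what makes a scheme like that workable is that the Modbus CRC can be updated byte by byte, and that feeding the two appended CRC bytes (low byte first) into the running register always leaves it at zero. A Python sketch of the standard CRC-16/MODBUS update (reflected polynomial 0xA001, initial value 0xFFFF):]

```python
def crc16_update(crc, byte):
    """One step of CRC-16/MODBUS (reflected poly 0xA001, init 0xFFFF)."""
    crc ^= byte
    for _ in range(8):
        crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def crc16(data):
    crc = 0xFFFF
    for b in data:
        crc = crc16_update(crc, b)
    return crc

# Well-known check value for this CRC variant:
assert crc16(b"123456789") == 0x4B37

# The residue property: running the CRC over a frame *including* its
# appended CRC (low byte first) ends at zero. This is what lets a
# receiver recognise "a valid frame just ended here" at any position.
msg = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])
c = crc16(msg)
frame = msg + bytes([c & 0xFF, c >> 8])
assert crc16(frame) == 0
```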


I'd have to agree that Modbus RTU's framing was a horrible
mistake. ASCII mode was fine since it had unique
start-of-message and end-of-message delimiters.


I don't even want to know how they did message delimiting in
Modbus over TCP...

--
Grant Edwards                   grante             Yow!  What GOOD is a
                                  at               CARDBOARD suitcase ANYWAY?
Re: Suggestions for custom application-layer protocol?



Being a slave on a multidrop network is the problem; a half-duplex
point-to-point connection is not critical.


As long as you are not using broadcast messages, it should be
sufficient for a multidrop slave to have just one CRC running (and
check it after each byte received). This calculation should be done on
all frames, not just those addressed to you.

From time to time, a message frame may be corrupted while the master
is communicating with another slave, and after that your slave can no
longer make sense of the incoming bytes; it might as well ignore
them and set a timeout that is shorter than the master retransmission
timeout.

While the master communicates with other slaves, your slave cannot
make sense of anything that is going on. When the master addresses
your node, the first request will be lost, and the master will wait
for a response until the retransmission timeout expires.

The slave timeout will expire before this, synchronisation is
regained, and your slave is now eagerly waiting for the master's
second attempt at the request. If multiple slaves have lost
synchronisation, they will all regain it when the addressed slave
fails to respond to the first request.

If the bus is so badly corrupted that all slaves get a bad frame, the
communication will time out anyway, so all slaves regain
synchronisation immediately. Only in situations (e.g. a badly
terminated bus) where the master and the actively communicating slave
do not get a CRC error but your slave does, will your slave stay out
of sync until the master addresses it and the timeout occurs.

Thus, the only real harm is that broadcasts cannot be used, as all
out-of-sync slaves would lose them.


Since Modbus over TCP is really point to point, the situation is
similar to the serial point to point case.

Some converter boxes also convert Modbus RTU to Modbus/TCP before
sending it over the net. The Modbus/TCP protocol contains a fixed
header (including a byte count) and the variable length RTU frame
(without CRC).
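[Editor's note: roughly, that fixed header is the 7-byte MBAP header (transaction id, protocol id 0, remaining byte count, unit id). The sketch below is from memory of the Modbus/TCP framing and should be checked against the spec; the function name is mine.]

```python
import struct

def mbap_wrap(transaction_id, unit_id, pdu):
    """Prefix a Modbus PDU (function code + data, i.e. the RTU frame
    minus the address and CRC) with the MBAP header used by Modbus/TCP.
    The length field counts the unit id plus the PDU; the CRC is
    dropped because TCP already checksums the payload."""
    header = struct.pack(">HHHB",
                         transaction_id,  # echoed back by the server
                         0,               # protocol id: 0 = Modbus
                         len(pdu) + 1,    # bytes to follow (unit id + PDU)
                         unit_id)         # the old slave address
    return header + pdu

# Read 3 holding registers from 0x006B on unit 0x11:
pdu = bytes([0x03, 0x00, 0x6B, 0x00, 0x03])
print(mbap_wrap(1, 0x11, pdu).hex())  # → 0001000000061103006b0003
```

Note that the length field means a Modbus/TCP receiver is back to length-header framing, which is unproblematic here only because TCP makes the byte stream reliable.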

The strange thing is why they created Modbus/TCP for running the
Modbus protocol over Ethernet instead of Modbus/UDP.

However, googling for "Modbus/UDP" gives quite a lot of hits, so quite
a few vendors have implemented some kind of Modbus over UDP protocols
in their products in addition to Modbus/TCP.

Paul


Re: Suggestions for custom application-layer protocol?


In my experience the problems usually occur only on a
half-duplex network.  If the slave doesn't see the other slaves'
responses, it has no problem finding the start of commands from
the host.


How so?  If you don't know where the frame started because you
can't detect timing gaps, you have to have a separate CRC
running for each possible frame starting point, where each
"starting point" is any byte matching your address (or the
broadcast address).


The problem is that you don't know where the frames start, so the
phrase "on all frames" isn't useful.


Ah.


Using UDP would seem to be a lot more analogous to a typical
multidrop network.


Interesting.

--
Grant Edwards                   grante             Yow!  What a
                                  at               COINCIDENCE! I'm an
Re: Suggestions for custom application-layer protocol?



<some reordering>


If the master starts after all slaves are ready and waiting for the
first message, they all know where the first message starts. When the
correct CRC is detected, the end of the first frame is known, and it
can now be assumed that the next byte received will start the next
frame :-). This continues from frame to frame.

Everything works well as long as there are no transfer errors on the
bus and no premature "CRC matches" within the data part of a message.
If sync is lost then, unless broadcasting is used, there is no real
need to regain it until the master addresses your slave. The master
will not get an answer to the first request, but it will resend the
request after the resend timeout period. The slave only needs to be
able to detect this timeout period, which would usually be a few
hundred byte transfer times.

If the slave pops up on an already active bus, it will regain
synchronisation when the master addresses it for the first time: the
master will get no reply and will resend the command after the timeout
period. The slave just needs to detect that long resend timeout.


A shift, bit test and xor operation is needed for each data bit
received to accumulate the CRC. The calculation can either be done
over all received bytes once the end-of-frame gap is detected, or
eight bits can be calculated each time a new byte arrives (actually
calculating with the byte received two bytes before the current one).

From a computational point of view, both methods are nearly equal. The
only difference is that in the first case the CRC accumulator is
compared with the last two bytes of the frame (in the correct byte
order) only after the gap has been detected, whereas in the latter
case the updated CRC accumulator must be compared with the two most
recently received bytes each time a byte is received. Thus, the
computational difference is only two byte compares per received byte.

Thus a single CRC accumulator is sufficient, if no broadcasts are
expected.
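[Editor's note: a sketch of the single-accumulator case, once the receiver is in sync. It uses the equivalent formulation that feeding the two appended CRC bytes drives the register to zero, rather than comparing the register against the trailing bytes; the function name and the minimum-length guard are mine.]

```python
def crc16_update(crc, byte):
    """One step of CRC-16/MODBUS (reflected poly 0xA001, init 0xFFFF)."""
    crc ^= byte
    for _ in range(8):
        crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def find_frame_end(stream):
    """Given a byte stream known to start at a frame boundary, return
    the length of the first frame: the position where the running CRC
    reaches zero, i.e. where the two appended CRC bytes have just been
    consumed. A 1/65536 premature match is possible, so a real receiver
    would also sanity-check the header (see the caveat in the thread)."""
    crc, n = 0xFFFF, 0
    for b in stream:
        crc = crc16_update(crc, b)
        n += 1
        if crc == 0 and n >= 4:   # min frame: addr + func + 2 CRC bytes
            return n
    return None

# b"\x01\x02" followed by its CRC (0xE181, sent low byte first),
# then the first bytes of whatever frame comes next:
print(find_frame_end(b"\x01\x02\x81\xe1\x11\x03"))  # → 4
```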

However, if it is required to receive broadcasts, then it might be
justified to use about 260 parallel CRC accumulators (if the maximum
size frames are expected). Each accumulator starts calculating from a
different byte in the buffer. Each time a new character is received,
the last two bytes (the CRC candidates) are compared with all 260 CRC
accumulators and once a match is found, the start of that frame is
also known and synchronisation is regained. After synchronisation is
regained, one CRC accumulator is sufficient for keeping track of the
frame gaps.

Since the Modbus CRC is only 16 bits long, relying solely on the CRC
calculation can cause premature CRC detections with a likelihood of
1/65536, or about 16 ppm, so additional message header checks should
be employed to rule these out before declaring a true end of frame.
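[Editor's note: a sketch of the parallel-accumulator resync idea described above — one candidate CRC per possible frame start, bounded by the maximum frame length. Class and method names are mine, and for simplicity a candidate is opened on every byte rather than only at plausible address bytes.]

```python
def crc16_update(crc, byte):
    """One step of CRC-16/MODBUS (reflected poly 0xA001, init 0xFFFF)."""
    crc ^= byte
    for _ in range(8):
        crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

class Resync:
    """Out-of-sync Modbus RTU listener: keep one CRC accumulator per
    possible frame start and report when any of them reaches the zero
    residue, i.e. a complete frame with a valid CRC just ended there."""
    MIN_FRAME = 4                      # addr + function + 2 CRC bytes

    def __init__(self, max_frame=256):
        self.cands = []                # [crc, length] per possible start
        self.max_frame = max_frame

    def feed(self, byte):
        """Feed one byte; return the frame length if sync was regained."""
        self.cands.append([0xFFFF, 0])         # this byte may start a frame
        for c in self.cands:
            c[0] = crc16_update(c[0], byte)
            c[1] += 1
        # drop candidates that grew past the longest legal frame
        self.cands = [c for c in self.cands if c[1] <= self.max_frame]
        for c in self.cands:
            if c[0] == 0 and c[1] >= self.MIN_FRAME:
                n = c[1]               # sync regained at this boundary
                self.cands.clear()     # one accumulator suffices from here
                return n
        return None
```

As the thread notes, a 16-bit CRC alone gives roughly 16 ppm false matches, so a real implementation would also check that the candidate's first bytes look like a plausible address and function code.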

Paul


Re: Suggestions for custom application-layer protocol?


Ah.  Right.  I should have thought of that.  [I never used the
scheme in question -- I always had an interrupt for each rx
byte, and did the gap detection according to the spec.]


Yup.  And in most of the control systems I've run into,
timeouts are usually on the order of a second or two, so
detecting that isn't a problem -- even with a FIFO.


Hmm. I seem to have forgotten how broadcast messages work in
the spec [I haven't done Modbus for a few years].  In our
systems, we implemented a non-standard broadcast scheme by
reserving address 0 as a broadcast address. One only needed
to start a CRC when a byte was seen that was 0 or that
matched the slave address.


Good point.  Thanks for the detailed explanation.

--
Grant Edwards                   grante             Yow!  This ASEXUAL
                                  at               PIG really BOILS
Re: Suggestions for custom application-layer protocol?


STX/ETX over serial is used to ride out the line noise
that occurs in RS-485 communication: when a slave
switches its transmitter ON, it can generate noise on the line
as a side effect that could be misinterpreted. So it's good
practice for serial messages (binary or text) to start with several
STX chars and end with several ETX chars - the message itself should
also carry some CRC check...

best regards,
  Mario


Re: Suggestions for custom application-layer protocol?
On Mon, 30 May 2005 14:15:58 +0200, Mile Blenton


Why should turning on the transmitter cause any noise? In any
properly terminated RS-485 system, the line is pulled by resistors to
the Mark (idle) state when no transceiver is actively driving the bus.
Turning the transmitter on simply drives that Mark state at low
impedance and does not change the voltage levels. The voltages change
only when the transmitter starts to send the start bit (Space).

However, if an RS-485 line in a noisy environment is used
infrequently, i.e. there are long idle periods between messages, the
line is more prone to random errors in the high impedance idle
state than in the low impedance active Mark or Space state. The noise
will often cause false start bit triggerings (often seen as 0xFF bytes
in the UART).

When the protocol frame always starts with a known character (such as
STX), it is quite easy to ignore any noise received during the idle
period.

In fact this also applies to actively driven RS-232/422/20 mA lines in
very noisy environments if there are long pauses between messages.

However, the STX detection fails if there is a Space noise pulse less
than 10 bit times ahead of the STX (with 8N1 characters). The Space is
taken as a start bit and the UART starts counting bits. While the
UART is still counting data bits, the start bit of the true STX
character arrives, but it is interpreted as a data bit by the
UART. When the UART is finally ready to sample the stop bit, it
actually samples one of the middle bits of the STX. If that bit
happens to be in the Mark state, the UART is satisfied with the stop
bit and waits for the next Mark-to-Space transition, which is
interpreted as the next start bit. If instead a Space data bit is
sampled where the UART expects the stop bit, a framing error occurs.

In both cases, the STX character will not be detected and usually the
whole frame will be lost.

BTW, the Modbus RTU specification (which does not use STX) specifies
that the transmitter should be turned on at least 1.5 bit times before
the start of the transmission. The assumption is that while errors may
occur in the passively maintained Mark state, the actively driven low
impedance Mark state will keep the line clean for at least those 1.5
bit times. Thus there should be no false start bits too close to the
actual message, and the first byte is always decoded correctly.


Using multiple STX characters makes sense only if there are more than
10 bit times of actively driven Mark (idle) between the STX
characters. Even if the first STX is lost due to a false start bit,
the second will then be reliably detected. Sending multiple STX
characters without a time gap would just cause a few framing errors
and is unlikely to regain synchronisation. In fact, it would make more
sense to send a few 0xFF bytes: if the first is lost due to a
premature start bit, the UART would see a few bits in the Mark state,
assume that the line is idle, and correctly detect the start bit of
the next 0xFF byte.
  

A known end character (such as ETX) is used because a simple interrupt
service routine can then independently receive characters until the
end character is detected, and the whole message can be passed to a
higher level routine as a single entity.

How would multiple ETX characters help? The end of message has already
been detected at the first one.

Paul


Re: Suggestions for custom application-layer protocol?
Hi. Too many answers already, seemingly, but why not one more: take a
look at RFC 3117 and the BEEP protocol (the old BXXP), www.beepcore.org.
It is XML based, but I think it can help you. Good luck.


Re: Suggestions for custom application-layer protocol?
I didn't read all the answers, just butting in, but take a look at
NMEA 0183. That is a simple text based protocol for sending formatted
data, used by marine equipment (a GPS for instance). It is more or
less one way, but you can simply add the other direction if you like.
Use it over UDP and add some acknowledge messages, for instance.
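[Editor's note: for reference, NMEA 0183 sentences are ASCII lines of the form `$<body>*hh`, where `hh` is the XOR of all characters between `$` and `*` as two hex digits. A sketch, with a made-up sentence body for illustration:]

```python
def nmea_checksum(body):
    """XOR of the characters between '$' and '*' in an NMEA 0183
    sentence, rendered as two uppercase hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def make_sentence(body):
    """Wrap a sentence body in the $...*hh framing."""
    return f"${body}*{nmea_checksum(body)}"

# hypothetical talker/sentence id, not a real NMEA sentence type
print(make_sentence("HYXYZ,12.5,N"))  # → $HYXYZ,12.5,N*1C
```

The `$`/`*` delimiters plus the per-line checksum give it the same self-resynchronising property discussed for STX/ETX framing earlier in the thread.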

John


