Suggestions for custom application-layer protocol?

I don't know of any HTTP, SMTP, NNTP, POP3, IMAP, etc. server that does anything with IAC sequences. Generally, servers that care initiate negotiation by sending IAC commands (EOR, NAWS, TTYPE, etc.) to the client.

It's "big" because it's there already, so no one has to write a simple client to test a service with. If SMTP had been written using netstrings, then a telnetstr command would be available everywhere and this "big" thing would be worth nothing. A similar thing is happening with HTTP: it's not "good"[1], but there are a significant number of tools that understand an HTTP stream ... so people hack it into places it shouldn't be so they can leverage those tools.

Say I connect to an SMTP service and keep sending 'a'. The other end has to keep accepting data and parsing it up to whatever limit it sets for a single line. With something like a netstring, the remote end can decide within the first 10 characters whether to drop the connection/message or handle it.

It isn't just a problem of arbitrary data, but of different clients/servers parsing the same data in different ways. This obviously becomes a bigger problem the more client and server implementations you have, and the more compatible you want to be ... but even with just one client and one server it wouldn't be unusual to have silent bugs where someone typed "\n\r" or "r\n" or just "\n" instead of "\r\n" at some point in the code. At that point, a third application implementing the protocol, or a supposedly compatible change to either the server or client, can bring out bugs (often in the edge cases) ... something like netstrings is much less likely to have this kind of problem.

[1]
formatting link
--
James Antill -- james@and.org
http://www.and.org/vstr/httpd
Reply to
James Antill

Except for Modbus RTU style framing, in which the time gaps between bytes _are_ the actual frame delimiters. Maintaining these over a TCP/IP link would be a bit problematic :-).

Paul

Reply to
Paul Keinanen

I've run into a few telnet clients that aren't as well behaved as the Unix ones. The Unix ones typically won't initiate any negotiation if they're connecting to a non-telnet port. Several of the Windows clients I've tried aren't as polite.

--
Grant Edwards                   grante             Yow!  Psychoanalysis?? I
                                  at               thought this was a nude
Reply to
Grant Edwards

True. Modbus RTU's 3.5 byte time delimiter sucks. You're screwed even if all you want to do is use the RX FIFO in a UART.

--
Grant Edwards                   grante             Yow!  It's a lot of fun
                                  at               being alive... I wonder if
Reply to
Grant Edwards

Hardly any of the Modbus/RTU programs on PCs handle the frame timing correctly.

Modbus framing is one of the best examples of how framing should not be done. The only thing that competes with it is the idea of tunneling Modbus datagrams over TCP instead of UDP.

--
Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

As long as they're the master, or it's a full-duplex bus, they can get away with it. Being a slave on a half-duplex bus (everybody sees both commands and responses) is where the problems usually happen.

I once talked to somebody who used an interesting scheme to detect Modbus RTU messages. He ignored timing completely (so he could use HW FIFOs), and decided that he would just monitor the receive bytestream for any block of data that started with his address and had the correct CRC at the location indicated by the bytecount. It meant that he had to have a receive buffer twice as long as the max message and keep multiple partial CRCs running, but I guess it worked.
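For reference, the check that scheme relies on is the standard Modbus CRC-16: initial value 0xFFFF, reflected polynomial 0xA001, result transmitted low byte first. A quick Python sketch (function names are mine):

```python
def modbus_crc16(data: bytes) -> int:
    """Modbus RTU CRC-16: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def frame_ok(frame: bytes) -> bool:
    # Valid when the last two bytes equal the CRC of everything before
    # them; Modbus puts the CRC low byte first on the wire.
    if len(frame) < 4:
        return False
    crc = modbus_crc16(frame[:-2])
    return frame[-2:] == bytes([crc & 0xFF, crc >> 8])
```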

I'd have to agree that Modbus RTU's framing was a horrible mistake. ASCII mode was fine since it had unique start-of-message and end-of-message delimiters.

I don't even want to know how they did message delimiting in Modbus over TCP...

--
Grant Edwards                   grante             Yow!  What GOOD is a
                                  at               CARDBOARD suitcase ANYWAY?
Reply to
Grant Edwards

Being a slave on a multidrop network is the problem; a half-duplex point-to-point connection is not critical.

As long as you are not using broadcast messages, it should be sufficient for a multidrop slave to have just one CRC running (and check it after each byte received). This calculation should be done on all frames, not just those addressed to you.

From time to time a message frame may be corrupted while the master is communicating with another slave, and after that your slave can no longer make any sense of the incoming bytes. It might as well ignore them and set a timeout that is shorter than the master's retransmission timeout.

As long as the master communicates with other slaves, your slave cannot make sense of what is going on. When the master addresses your node, the first request will be lost and the master will wait for a response until the retransmission timeout expires.

The slave timeout will expire before this, synchronisation is regained, and your slave is now eagerly waiting for the master's second attempt at the request. If there are multiple slaves that have lost synchronisation, they will all regain it when one addressed slave fails to respond to the first request.

If the bus is so badly corrupted that all slaves get a bad frame, the communication will time out anyway, so all slaves will regain synchronisation immediately. Only in situations (e.g. a badly terminated bus) where the master and the actively communicating slave do not get a CRC error but your slave does will your slave be out of sync, until the master addresses it and the timeout occurs.

Thus, the only real harm is that broadcasts cannot be used, as all out-of-sync slaves would lose them.

Since Modbus over TCP is really point to point, the situation is similar to the serial point to point case.

Some converter boxes also convert Modbus RTU to Modbus/TCP before sending it over the net. The Modbus/TCP protocol contains a fixed header (including a byte count) and the variable length RTU frame (without CRC).
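For the curious, that fixed Modbus/TCP (MBAP) header is just a transaction id, a protocol id of zero, a byte count, and a unit id in front of the PDU. A sketch of the wrapping and unwrapping (function names are mine):

```python
import struct

def mbap_wrap(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    """Prefix a Modbus PDU with the MBAP header: transaction id,
    protocol id (always 0), byte count, unit id. The byte count
    covers the unit id plus the PDU, so the receiver can delimit
    messages with no timing gaps and no CRC (TCP handles integrity)."""
    return struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id) + pdu

def mbap_unwrap(msg: bytes):
    tid, proto, length, unit = struct.unpack(">HHHB", msg[:7])
    if proto != 0 or length != len(msg) - 6:
        raise ValueError("bad MBAP header")
    return tid, unit, msg[7:]
```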

The strange thing is why they created Modbus/TCP for running the Modbus protocol over Ethernet instead of Modbus/UDP.

However, googling for "Modbus/UDP" gives quite a lot of hits, so quite a few vendors have implemented some kind of Modbus over UDP protocols in their products in addition to Modbus/TCP.

Paul

Reply to
Paul Keinanen

In my experience the problems usually occur only on a half-duplex network. If the slave doesn't see the other slaves' responses, it has no problem finding the start of commands from the host.

How so? If you don't know where the frame started because you can't detect timing gaps, you have to have a separate CRC running for each possible frame starting point, where each "starting point" is any byte matching your address (or the broadcast address).

The problem is that you don't know where the frames start, so the phrase "on all frames" isn't useful.

Ah.

Using UDP would seem to be a lot more analogous to a typical multidrop serial link.

Interesting.

--
Grant Edwards                   grante             Yow!  What a
                                  at               COINCIDENCE! I'm an
Reply to
Grant Edwards

If the master starts after all slaves are ready and waiting for the first message, they all know where the first message starts. When the correct CRC is detected, the end of the first frame is known, and it can now be assumed that the next byte received will start the next frame :-). This continues from frame to frame.

Everything works well as long as there are no transfer errors on the bus or any premature "CRC matches" within the data part of a message. If sync is lost, is there any need to regain it before the master addresses your slave (unless broadcasting is used)? The master will not get an answer to the first request, but it will resend the request after the resend timeout period. The slave only needs to be able to detect this timeout period, which would usually be a few hundred byte transfer times.

If a slave pops up on an active bus, it will regain synchronisation when the master addresses it for the first time: the master will get no reply and resend the command after a timeout period. The slave just needs to detect the long resend timeout.

A shift, bit-test and XOR operation is needed for each received data bit to calculate the CRC into the CRC "accumulator". This calculation can be done for all received bytes once the end-of-frame gap is detected, or eight bits can be calculated each time a new byte is received (actually over the byte received two bytes before the current one, since the two newest bytes are the CRC candidates).

From a computational point of view, both methods are nearly equal. The only difference is that in the first case the CRC accumulator is compared with the last two bytes of the frame (in the correct byte order) only after the gap is detected, whereas in the latter case the updated CRC accumulator must be compared with the two most recently received bytes each time a byte arrives. Thus, the computational difference is only two byte compares for each received byte.

Thus a single CRC accumulator is sufficient, if no broadcasts are expected.

However, if it is required to receive broadcasts, then it might be justified to use about 260 parallel CRC accumulators (if the maximum size frames are expected). Each accumulator starts calculating from a different byte in the buffer. Each time a new character is received, the last two bytes (the CRC candidates) are compared with all 260 CRC accumulators and once a match is found, the start of that frame is also known and synchronisation is regained. After synchronisation is regained, one CRC accumulator is sufficient for keeping track of the frame gaps.
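A simple-but-slower equivalent of that parallel-accumulator scheme is to recompute the CRC from scratch for each candidate starting point whenever a byte arrives. The sketch below does exactly that (repeating the standard Modbus CRC-16 helper so the example is self-contained; names are mine):

```python
def modbus_crc16(data: bytes) -> int:
    # Standard Modbus CRC-16 (init 0xFFFF, reflected poly 0xA001)
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def scan_for_frame(buf: bytes, my_address: int):
    """Call after every received byte, with the recent byte history in buf.
    Treat the two newest bytes as a CRC candidate (sent low byte first)
    and try every position where a frame addressed to us could start.
    A real implementation keeps one running CRC accumulator per
    candidate start instead of recomputing from scratch each time."""
    for start in range(len(buf) - 3):   # need address + >=1 byte + 2 CRC bytes
        if buf[start] != my_address:
            continue                    # frames must start with our address
        payload = buf[start:-2]
        crc = modbus_crc16(payload)
        if buf[-2] == crc & 0xFF and buf[-1] == crc >> 8:
            return start, payload
    return None
```

This trades CPU for simplicity; the accumulator version in the post above avoids the repeated recomputation.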

Since the Modbus CRC is only 16 bits long, relying solely on the CRC calculations can cause premature CRC detections with a likelihood of 1/65536, or about 16 ppm, so additional message header checks should be employed to detect these before declaring a true CRC end of frame.

Paul

Reply to
Paul Keinanen

Ah. Right. I should have thought of that. [I never used the scheme in question -- I always had an interrupt for each rx byte, and did the gap detection according to the spec.]

Yup. And in most of the control systems I've run into, timeouts are usually on the order of a second or two, so detecting that isn't a problem -- even with a FIFO.

Hmm. I seem to have forgotten how broadcast messages work in the spec [I haven't done Modbus for a few years]. In our systems we implemented a non-standard broadcast scheme by reserving address 0 as a broadcast address. One only needed to start a CRC when a byte was seen that was 0 or that matched the slave address.

Good point. Thanks for the detailed explanation.

--
Grant Edwards                   grante             Yow!  This ASEXUAL
                                  at               PIG really BOILS
Reply to
Grant Edwards

Are you commenting on anything beyond the obvious fact that the code uses strcpy and sprintf?

None of these are inherently simpler than HTTP, and you don't get the advantages that a web browser's sophisticated support for HTML confers.

-Sean

Reply to
Sean Burke

Speaking as one who has done it, adapting HTTP instead of using a custom protocol has many advantages besides the above. Think of all the proxies and filters out there, the tools that snoop the wire, making a nice graphical display sorted into request and response sequences (e.g. HTTPlook). Consider the existence of client-side libraries ready to use in any language (libwww or libcurl for C, java.net.* or Jakarta HttpClient for Java, lots of Perl modules, etc). None of this is available to a custom protocol, however easy to implement.

--
Henry Townsend
Reply to
Henry Townsend

STX/ETX framing over serial helps cope with the line noise that occurs in RS-485 communication: when a slave switches its transmit line on, it can generate noise on the line as a side effect that could be misinterpreted. So it's good practice for serial messages (binary or text) to start with several STX characters and end with several ETX characters; the message itself should also carry some CRC check...

best regards, Mario

Reply to
Mile Blenton

I didn't read all the answers, just butting in, but take a look at NMEA 0183. That is a simple text-based protocol for sending formatted data, used by marine equipment (GPS, for instance). It is more or less one-way, but you can simply add the other direction if you like. Use it over UDP and add some acknowledge messages, for instance.
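An NMEA 0183 sentence is a '$', a comma-separated body, a '*', and two hex checksum digits, where the checksum is the XOR of all characters between the '$' and the '*'. A minimal sketch (function names are mine):

```python
def nmea_checksum(body: str) -> int:
    # XOR of every character between the leading '$' and the '*'
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return cs

def make_sentence(body: str) -> str:
    return "${}*{:02X}".format(body, nmea_checksum(body))

def check_sentence(sentence: str) -> bool:
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, tail = sentence[1:].partition("*")
    return int(tail[:2], 16) == nmea_checksum(body)
```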

John

Reply to
John Smith

Why should turning on the transmitter cause any noise? In any properly terminated RS-485 system, the line is pulled by resistors to the Mark (idle) state when no transceiver is actively driving the bus. Turning the transmitter on in the low-impedance Mark state does not change the voltage levels; the voltages change only when the transmitter starts to send the start bit (Space).

However, if the RS-485 line is in a noisy environment and is used infrequently, i.e. there are long idle periods between messages, the line is more prone to random errors in the high-impedance idle state than in the low-impedance active Mark or Space state. The noise will often cause false start-bit triggerings (often seen as 0xFF bytes in the UART).

When the protocol frame always starts with a known character (such as STX), it is quite easy to ignore any noise received during the idle period.

In fact this also applies to actively driven RS-232/422/20 mA lines in very noisy environments if there are long pauses between messages.

However, STX detection fails if there is a Space noise pulse less than 10 bit times ahead of the STX (with 8N1 characters). The Space is assumed to be a start bit and the UART starts counting bits. While the UART is still counting data bits, the start bit of the true STX character arrives, but it is interpreted as a data bit. When the UART is finally ready to receive the stop bit, it is actually sampling one of the middle bits of the STX. If that bit happens to be in the Mark state, the UART is satisfied with the stop bit and waits for the next Mark-to-Space transition, which is interpreted as the next start bit. However, if a Space data bit is received when the UART expects the stop bit, a framing error occurs.

In both cases, the STX character will not be detected and usually the whole frame will be lost.

BTW, the Modbus RTU specification (which does not use STX) specifies that the transmitter should be turned on at least 1.5 bit times before the start of the transmission. This assumes that while errors may occur in the passively maintained Mark state, the actively driven low-impedance Mark state will keep the line clean for at least those 1.5 bit times. Thus there should be no false start bits too close to the actual message, and the first byte is always decoded correctly.

Using multiple STX characters makes sense only if there is more than 10 bit times of actively driven Mark (idle) between the STX characters. Even if the first STX is lost due to a false start bit, the second will then be reliably detected. Sending multiple STX characters without a time gap would just cause a few framing errors and is unlikely to regain synchronisation. In fact, it would make more sense to send a few 0xFF bytes: if the first is lost due to a premature start bit, the UART would still see a few bits in the Mark state, assume the line is idle, and correctly detect the start bit of the next 0xFF byte.

A known end character (such as ETX) is used because a simple interrupt service routine can then independently receive characters until the end character is detected, and the whole message can be passed to a higher-level routine as a single entity.
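That receive logic (drop everything until STX, collect until ETX, hand the payload up) can be sketched as a tiny state machine. A toy Python version, assuming a single STX/ETX and no CRC check:

```python
STX, ETX = 0x02, 0x03

def frame_splitter(byte_stream):
    """Yield payloads found between STX and ETX.
    Anything before an STX (idle-line noise, a corrupted frame)
    is silently dropped; a fresh STX restarts the frame."""
    buf = None                      # None = waiting for STX
    for b in byte_stream:
        if b == STX:
            buf = bytearray()       # (re)start collecting
        elif b == ETX and buf is not None:
            yield bytes(buf)        # hand the complete payload up
            buf = None
        elif buf is not None:
            buf.append(b)
```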

How would multiple ETX characters help ? The end of message is already detected.

Paul

Reply to
Paul Keinanen

That isn't enough? See:

formatting link
Programmers _cannot_ get this right.

If, when I look outside, there is water falling from the sky I do not need to walk outside to know I'm going to get wet ... and if you are arguing that the drops of water are small and have large gaps between them, I am still not going to feel compelled to walk outside to see if I get wet.

Which web browser? As I said, in theory you can get something "simplish" that looks like an HTTP/1.0 server from the right angle ... and it might even work with Mozilla (as that client is very forgiving), but making it a real HTTP/1.0 server isn't trivial, and supporting HTTP/1.1 is very hard. Also, if you need state to cross message boundaries, you'll have to implement a lot more code on the server side.

--
James Antill -- james@and.org
http://www.and.org/vstr/httpd
Reply to
James Antill

Hi. Too many answers already, it seems, but why not one more: take a look at RFC 3117 and the BEEP protocol (the old BXXP).

formatting link
It is XML-based, but I think it can help you. Good luck!

Reply to
ivaylo.ganchev
