Could anyone shed some light on which protocol or technique is used to actually send I/O from a legacy serial port device onto a TCP/IP network? I see there are many products out there, but none of the product sites describe the actual protocol used on the IP network side. An RFC search also came up with nothing, though I could have missed it as well... I know that on the client side (such as a Windblows machine) the IP packets can then be accessed as a COM port by the software...
AFAIK there is no 'standard' for the transport of serial traffic over an IP network. Each case is application-specific. (There may be a few 'system management' standards - I see 'SoL' (Serial over LAN) bandied about - but nothing generic as far as I can tell).
Usually a generic solution would 'tunnel' serial data over raw UDP/TCP packets via a proprietary driver, optionally with a PC API for accessing the serial data at the other end of the network connection. For example, the API/driver would abstract a virtual serial port over the network.
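At its core, a generic tunnel of this kind is just a byte pump between the serial port and a socket. Here is a minimal Python sketch of that idea; socketpairs stand in for the UART so it runs anywhere without hardware (a real converter would run one pump in each direction):

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket, bufsize: int = 4096) -> None:
    """Copy raw bytes from src to dst until src closes.

    On a real device, src would be the UART and dst the TCP socket
    (with a second pump running in the opposite direction); both ends
    are plain sockets here so the sketch is self-contained.
    """
    while True:
        data = src.recv(bufsize)
        if not data:
            break
        dst.sendall(data)

# Local demonstration: one socketpair stands in for the serial line,
# the other for the TCP connection across the network.
serial_end, device_end = socket.socketpair()
net_end, client_end = socket.socketpair()
threading.Thread(target=pump, args=(serial_end, net_end), daemon=True).start()

device_end.sendall(b"HELLO\r\n")              # 'serial' data enters the tunnel...
assert client_end.recv(100) == b"HELLO\r\n"   # ...and pops out the 'network' side
```

The payload is passed through untouched, which is exactly why the wire format ends up application-specific: the tunnel itself imposes nothing.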
The actual payload of course is application-specific, and how you choose to process the data after extracting from the IP packet is completely up to you.
There are a couple of open-source projects involving serial over IP.
Other 'turn-key' solutions offer boxes that you connect either end of a network connection to transparently transport serial data. They're using their own abstraction and it's not visible to the user.
Thanks Frieder, I suspect that it is indeed simply a Telnet connection, but nowhere can I confirm that's how it's done, so I imagine that single character mode is used, and the receive end does its own message framing for the application on the other end. I'll have to look at how TCP/IP data payloads are then directed to COM-port-based applications.
As far as ... " ... Other 'turn-key' solutions offer boxes that you connect either end of a network connection to transparently transport serial data. They're using their own abstraction and it's not visible to the user. "
Telnet isn't a bad way to think of (and debug) it, but a common method is really just plain TCP to an arbitrary port (with which a telnet client is sort of backwards compatible).
If your serial link is fast, you probably don't want to put each character in its own packet; instead, wait a short amount of time and then send as many characters as you have collected from the serial line. Going the other way, when a packet comes in you push all of its characters out the serial port, optionally waiting to acknowledge the packet until you have done so.
You also need to decide what to do on the serial link if your TCP packets aren't being acknowledged. Do you just let incoming serial-to-Ethernet characters collect in a buffer? What do you do when it overflows?
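The "wait a short amount of time" policy above is usually implemented as an inter-character gap timeout. A sketch of that policy in Python, with timestamps passed in explicitly so it can be exercised without real hardware:

```python
class GapPacketizer:
    """Collect incoming serial bytes and emit one packet per idle gap.

    gap is the inter-character timeout in seconds.  Timestamps are
    passed in explicitly (rather than calling time.time() inside) so
    the policy can be tested without a serial port.
    """

    def __init__(self, gap: float):
        self.gap = gap
        self.buf = bytearray()
        self.last = None

    def feed(self, byte: int, now: float):
        """Record one received byte; return a finished packet or None."""
        packet = None
        if self.last is not None and now - self.last > self.gap and self.buf:
            packet = bytes(self.buf)      # line went idle: flush old bytes
            self.buf.clear()
        self.buf.append(byte)
        self.last = now
        return packet

    def flush(self):
        """Force out whatever is buffered (e.g. on shutdown)."""
        packet = bytes(self.buf) if self.buf else None
        self.buf.clear()
        return packet

# At 9600 bit/s a 10-bit character takes ~1.04 ms, so a "3 character
# time" gap is roughly 3 ms.
p = GapPacketizer(gap=0.003)
assert p.feed(ord("A"), 0.0000) is None       # first byte: just buffer it
assert p.feed(ord("T"), 0.0010) is None       # 1 ms gap: still one message
assert p.feed(ord("X"), 0.0200) == b"AT"      # long gap: previous message emitted
assert p.flush() == b"X"
```

The overflow question from the post still applies: in a real converter, `buf` needs a bound and a policy (drop, block via flow control, or reset) for when TCP stops draining it.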
One commercial vendor uses something in the style of the old modem "AT" command set to control the ethernet link from the serial side.
Frieder, thanks again. The reason I mention using single character mode is that if the system is to be "blind" (to the serial protocol) through the TCP transport, then you have to transmit a single character at a time, since some messages may be a single character. I suppose a good bet for better efficiency would be to send whatever data you have once something like 2-3 character times have elapsed since the last received character at the serial end. One last thought ... how does one choose an arbitrary TCP or UDP port? I'll take a look through the RFCs.
I've worked on and with quite a few different serial to Ethernet products, and they use all sorts of different protocols. Some of them don't even run on top of IP.
It is in some products, some of the time. Search for products that mention RFC 2217 in their specs.
You have a good imagination, but I'm not sure what you're talking about by "receive end" and "message framing"
In general, the vendor writes a device driver for the OS in question that presents one or more "COM ports" to the user via the normal kernel API. In some OSes there are user-space ways of pretending to be a tty/serial port, but they don't always do a very good job of emulating the behavior of a real kernel-space driver.
The only "standard" that exists is telnet plus the RFC 2217 serial port control extensions. However, a lot of devices don't use that protocol. There is at least one "COM port" driver for Win32 that implements Telnet+RFC2217, and you can use that driver with any device that implements Telnet+RFC2217.
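For reference, an RFC 2217 control message is an ordinary telnet subnegotiation carrying option 44 (COM-PORT-OPTION). A sketch of building the client-to-server SET-BAUDRATE request in Python (constants per RFC 854/2217):

```python
import struct

IAC, SB, SE = 255, 250, 240   # telnet: Interpret As Command, Subneg Begin/End
COM_PORT_OPTION = 44          # RFC 2217 telnet option number
SET_BAUDRATE = 1              # client-to-server subcommand

def set_baudrate_frame(baud: int) -> bytes:
    """Build the telnet subnegotiation asking an RFC 2217 server
    (the serial device) to change the line speed."""
    payload = bytes([COM_PORT_OPTION, SET_BAUDRATE]) + struct.pack("!I", baud)
    # Any literal 0xFF byte inside a subnegotiation must be doubled.
    payload = payload.replace(bytes([IAC]), bytes([IAC, IAC]))
    return bytes([IAC, SB]) + payload + bytes([IAC, SE])

# 115200 = 0x0001C200, sent as a 4-byte value in network byte order.
frame = set_baudrate_frame(115200)
assert frame == bytes([255, 250, 44, 1, 0x00, 0x01, 0xC2, 0x00, 255, 240])
```

Ordinary data bytes flow outside the subnegotiations, with 0xFF doubled, which is why a plain telnet client "sort of" works against such a device.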
Grant Edwards (grante)
Yow! Is my fallout shelter termite proof?
...and by doing this also ruin the serial line throughput when a half-duplex protocol is used with short message frames. Some converters wait 10-100 ms after the last serial character received before the (TCP or UDP) IP frame is sent. For instance, at 115200 bit/s this corresponds to roughly 100-1000 character times.
There's always going to be a throughput efficiency vs. latency tradeoff when doing AsyncEthernet. You should pick a product that allows you to control this tradeoff (or, if it doesn't, make sure the designer chose a tradeoff that matches your requirements).
This is a valid concern, however there's a bit of a problem with single character packets when many common embedded TCP devices try to talk to common desktop operating systems.
TCP requires that you hang onto all outgoing data until it has been acknowledged, because if it's not acknowledged you will have to resend it. Most embedded implementations handle this by sending a packet, and then being unable to send another until the first has been acknowledged. The Windows TCP stack often takes as much as 200 ms to acknowledge a packet when only one is outstanding, so single character packets can be slowed down to the rate of only five per second!
There are a number of fixes. You can hit Windows with a bogus ACK right after you send it a packet, and it will acknowledge yours promptly. That gets you down into the 10 ms per packet regime, but then you are still going to be collecting at least 10 ms worth of incoming serial characters before each packet you send (no need for a delay; just, when you get the ACK, packetise and send everything that came in while you were waiting on it).
A smarter method would be to keep better track of what has not been acknowledged, so that you can have multiple outstanding packets and not have to wait for the ACK before you can send another. But most of the otherwise-turnkey implementations - OpenTCP, for example - don't do that.
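The small-write-plus-delayed-ACK interaction described above is essentially the classic Nagle/delayed-ACK pathology. On a desktop endpoint, one mitigation you control is disabling Nagle's algorithm with `TCP_NODELAY`; this doesn't change the peer's delayed-ACK timer, but it removes one of the two interacting delays so your own small writes go out immediately:

```python
import socket

# Disable Nagle's algorithm so small writes are sent at once instead of
# being held back while an earlier packet is still unacknowledged.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

On the embedded side there is no such shortcut: the stack itself has to support more than one unacknowledged segment in flight, which is exactly the capability the post says many small implementations lack.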
OK, here's a link to an example configuration page from NetBurner for a customizable device using arbitrary TCP ports ... however, they imply that port 23 is used to sense a connection from the IP network side... so that implies telnet delivers data to the serial unit, and comms from the serial unit go to an "arbitrary" port ... in this case 1000.
I was aware of the "ack" aspect, hence single char mode.
If anybody has any other experience/wisdom ... pipe up.
If you are using a protocol that was initially written for serial line communication, with normal CRC checks and timeout controls, why bother with TCP? Just use simple UDP. If the UDP frame is lost, let the original serial line protocol's timeout mechanism handle any missing data.
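As a sketch of that UDP approach: each serial message becomes exactly one datagram, and loss is simply left to the serial protocol's own CRC/timeout machinery, as it would be after line noise. (The payload below is just an illustrative STX/ETX-framed message; any application bytes work, and a real converter would use a fixed, configured port rather than letting the OS pick one.)

```python
import socket

# Receive side: bind a UDP port (the OS picks a free one for the demo).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

# Transmit side: one serial message = one datagram.  If it is lost,
# the serial protocol's own timeout/retry logic recovers.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"\x02PING\x03", rx.getsockname())   # hypothetical STX...ETX frame

frame, peer = rx.recvfrom(2048)
assert frame == b"\x02PING\x03"   # datagram boundaries preserve the framing
```

A side benefit over TCP: UDP preserves message boundaries, so the receiver gets one serial message per `recvfrom` with no re-framing needed.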
Yes, if you are using such a protocol. But most common serial devices, be they lab test equipment, printers, modems, or whatnot do not use such a protocol all of the time, though users may be in the habit of using one (XMODEM or something) when corruption is intolerable.
In packet switched networks it is permissible to drop packets just because you "don't feel like passing them on right now". In most point-to-point serial, the practical assumption is of quite high reliability, usually interrupted only by total failure.
When that's not true - with really noisy phone lines (1200 baud dialup in Moscow circa '92) - error-correcting packetized protocols such as MNP made sense for interactive user sessions. You just had to get used to typing far ahead of the several retries necessary to get the packetized half-duplex echo through.
There's arbitrary (you pick one you like) and then there's arbitrary (it gets dynamically assigned).
Because your device may not be literally running a telnet service, you might want to pick a port other than telnet's port 23 to listen on. In some cases, client computers may not even be able to access some traditional low port numbers. For a custom function you might want to pick something slightly over 1024. If you want to use a telnet client to open an interactive test session to your device, you specify the port number in addition to the IP address to the program; you do something similar with the operating-system call to open a TCP session if you are building it into a program.
When you connect to the device, your end's stack picks an ephemeral source port for the replies to come back on... you don't have to worry much about that part unless you are building the TCP stack from scratch. Just worry about picking the device's listen port.
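The two kinds of "arbitrary" can be seen directly with sockets. In this sketch the listener's port is the one you choose (a real device might use, say, something above 1024; here the OS picks a free one so the demo runs anywhere), while the client's source port is the dynamically assigned kind:

```python
import socket

# Server side, standing in for the device: bind and listen on a port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # 0 = let the OS pick; a device would fix this
listener.listen(1)
device_port = listener.getsockname()[1]

# Client side, equivalent to: telnet <ip> <port>
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", device_port))
conn, peer = listener.accept()
# peer[1] is the client's ephemeral source port, chosen by its own stack.
```

Only the listen port needs to appear in documentation or configuration; the ephemeral port is negotiated per connection by the stacks themselves.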
I remember my first "private TCP protocol". I chose a port number easy to remember and within the above range; so I picked yymmd, which is the day I was born. So you now know I am between 40 and 56 years old...
This has at least one problem: these serial line protocols usually assume that packets arrive in order. That assumption makes sense for such protocols (a message cannot jump over the previous one and reach its destination sooner on a point-to-point line), but it does not hold for IP packets, which can be reordered in transit.