Bit-level protocol or text-based protocol? (PC to embedded)

I am developing (well, not personally) an embedded box that communicates with a PC host. They talk over Ethernet (TCP/IP). The box interfaces to some industrial equipment and sends information to the PC, and the PC controls it.

It has been suggested that we use a bit-level protocol, where a frame is composed of bytes and some bytes contain data while others carry bit-level control and status information. In my view this might suit box-to-box communication over some RS*** link. But when we have a full-fledged PC and 10 Mbps Ethernet, I think a more verbose text-based protocol would be better, where the messages use, say, XML tags or something like that. My reasons: a) it makes the messages easier to read, and b) it makes the socket programming easier (i.e. no assembling and parsing bits out of bytes). Presumably the original consideration was speed (i.e. less information means a faster protocol), but I don't think that should be a constraint in our case.

Am I wrong?

Thanks

By the way, on a related issue, the design is currently purely master (PC) / slave (box). This requires that the PC poll the box for any information and status changes. I think a mixed-mode (i.e. each can be both) asynchronous design would be better, because each device then only "speaks" when it needs to and we don't load up the network with polling. Is this a correct assumption? (I know both ways would work, but still...)

Thanks again!

Reply to
ElderUberGeek

No. Either way is fine. But remember that you'll still be dealing with a serial stream of characters, and you'll still need a parser. Couple of quick thoughts:

- Whether talking RS*** or Ethernet, I prefer ASCII-encoded data, i.e. the value 1234 gets sent as the string "1234" rather than as a binary value. It makes the data easier to read, as you say, but it also avoids issues with e.g. control characters and NULs, and with endianness and variable sizes. I'm currently working on a protocol that was defined (within an i386 environment) in terms of structures, and porting it to a different architecture is causing some avoidable data-translation work. Ints aren't too bad, but floats are a PITA.
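The endianness point above can be made concrete with a small sketch (not from the original post): the same integer produces different wire bytes depending on which byte-order convention the binary protocol picks, while the ASCII encoding is identical everywhere.

```python
import struct

value = 1234

# Binary encoding: the wire bytes depend on which byte-order
# convention you pick, and both ends must agree or the value is garbled.
little = struct.pack("<i", value)   # little-endian 32-bit int
big    = struct.pack(">i", value)   # big-endian 32-bit int
assert little != big                # same number, different bytes on the wire

# ASCII encoding: the string "1234" is the same on every architecture,
# contains no NULs or control characters, and is readable in a sniffer.
text = str(value).encode("ascii")
assert int(text) == value
```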

- Make sure your messages include a message-length field, or use distinct message start/end markers (e.g. STX/ETX). Pretty obvious really ;). Alternatively, use a fixed message size - but I'd avoid this if I were you, unless your data is trivial and won't ever need expansion.
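Since TCP delivers a byte stream rather than discrete messages, the STX/ETX approach needs a small de-framing step on the receive side. A minimal sketch (the frame layout here is illustrative, not from the original post):

```python
STX, ETX = b"\x02", b"\x03"

def frame(payload: bytes) -> bytes:
    """Wrap an ASCII payload in STX/ETX markers."""
    return STX + payload + ETX

def extract_frames(buffer: bytes):
    """Pull complete STX...ETX frames out of a receive buffer.

    Returns the list of payloads plus any unconsumed trailing bytes,
    which should be prepended to the next recv() result.
    """
    frames = []
    while True:
        start = buffer.find(STX)
        if start < 0:
            return frames, b""             # no frame start pending
        end = buffer.find(ETX, start + 1)
        if end < 0:
            return frames, buffer[start:]  # partial frame: keep for next read
        frames.append(buffer[start + 1:end])
        buffer = buffer[end + 1:]

# Two frames may arrive in one recv() call, or one frame may be split
# across several - the de-framer has to cope with both.
frames, rest = extract_frames(frame(b"TEMP 21.5") + frame(b"STATUS OK") + STX + b"PART")
```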

Without knowing more details about your application, I feel XML might be overkill... but I can't really judge. There *are* advantages in having a tagged system, though, in that one can expand the protocol without breaking the fundamental system.

Yes, both would work. But your master may need to poll periodically anyway, just to be sure the slave is present. (Or your slave could check in periodically.) I've used both ways; one advantage of "speak only when you're spoken to" is that the master controls the timing of the network traffic. In an async environment, one can occasionally get unlucky and find that all the slaves are choosing to Tx at the same moment, and have the same repeat period - in which case average traffic is low but peak traffic is high, and there's more chance of collisions.

Steve


Reply to
Steve at fivetrees

There's a standard way to do this, and while you might not follow it literally you probably should consider the spirit of it.

It's called SCPI and it's sort of "GPIB over ethernet/serial/tin cans and string/etc"

You may not want to adopt the full command structure, but human-readable commands (i.e. ones you can type into a telnet client) are very useful for debugging your device, and make it easier for users to test their programs. You might think the *IDN? command is pointless, but you'll find you get into the habit of using it just to make sure your embedded code is up and running (and that the chip has the right revision in it!)
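A SCPI-flavoured command handler can be very small. The sketch below is illustrative (the device name, model, and the STAT? query are made up for this example); the *IDN? reply follows the conventional SCPI "manufacturer,model,serial,firmware" shape:

```python
# Hypothetical device identity: a SCPI *IDN? reply is conventionally
# "<manufacturer>,<model>,<serial>,<firmware>".
IDN_REPLY = "AcmeCorp,BOX-100,12345,1.02"

def handle_command(line: str) -> str:
    """Dispatch one newline-terminated ASCII command, SCPI style."""
    cmd, _, arg = line.strip().partition(" ")
    if cmd == "*IDN?":
        return IDN_REPLY
    if cmd == "STAT?":
        return "READY"   # made-up status query for this sketch
    return 'ERR,unknown command "%s"' % cmd
```

Because the commands are plain text, the same handler serves a telnet session during debugging and the real PC application in production.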

On Ethernet, message size is not a speed factor at all. What will slow you down is the time the PC takes to acknowledge each packet from your device - 200 ms! The PC expects the link to be pipelined with multiple packets in flight, so it won't acknowledge the first packet it gets until the 200 ms timer expires, because it's hoping to get a second. An easy fix is to send an empty second packet after each real one.
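The stall described above is commonly the interaction of the sender's Nagle algorithm with the receiver's delayed-ACK timer (often around 200 ms). Besides the empty-second-packet trick, another common mitigation, shown here as a sketch rather than the poster's method, is to disable Nagle with the standard TCP_NODELAY socket option so small request/reply packets go out immediately:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, so small
# command/reply packets are sent immediately instead of being
# buffered while waiting for the peer's (possibly delayed) ACK.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect.
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```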

Polling is probably better, because for security reasons PCs expect to be clients, not servers. It's true that once the connection is open either side can send data, but still, if you poll you know your device is still alive.

If you're really constrained for memory or speed on the micro then binary comms might make sense, but otherwise, keep it readable...

Reply to
cs_posting

I start out with ASCII text that I can read and print most easily. Then, after everything is working and when I need to speed things up, I switch to binary on the transmitter and receiver.

This approach seems to give the best of both worlds.

Good Luck george

Reply to
GMM50

Thanks guys for the great replies. I have a follow-up question. Defining the protocol is not only about the syntax of the messages but also about defining the interaction between client and host (event/message flow, ACK/NAK, error handling, etc.). What should I be doing in this regard, or better yet, where can I find something written about how to properly design this type of application protocol?

Thanks.


Reply to
ElderUberGeek

To share an experience I had with TCP/IP.

The PC was a Linux box and my remote embedded system sent binary data.

The Linux system programmer insisted over and over that I was sending bad data (the data stream was too short).

After hours of troubleshooting, I found that the Linux system was dropping the 0x0a character from the serial data stream.

Every *nix programmer knows you throw away linefeeds.

Setting the Linux driver to binary mode took care of the problem.
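On a POSIX system, "binary mode" here means putting the tty driver into raw mode so it stops translating or eating bytes such as 0x0a. A minimal sketch using Python's standard termios/tty modules, with a pseudo-terminal standing in for the real serial port:

```python
import pty
import termios
import tty

# Open a pseudo-terminal pair to stand in for a real serial port.
master_fd, slave_fd = pty.openpty()

# In the default "cooked" mode the tty driver may translate or drop
# bytes such as 0x0a (LF) and 0x0d (CR). Raw mode passes every byte
# through untouched, which is what a binary protocol needs.
tty.setraw(slave_fd)

attrs = termios.tcgetattr(slave_fd)
assert not (attrs[3] & termios.ICANON)  # no line buffering
assert not (attrs[0] & termios.ICRNL)   # CR no longer mapped to LF on input
assert not (attrs[1] & termios.OPOST)   # no output post-processing
```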

*nix/Windose (PC) people are not embedded people.

Check their work.

donald

Reply to
Donald

I had a similar problem over twenty years ago with a programmer sending control sequences to a high-end video hard-disc recorder: he was sending commands, and the unit would execute them but then go off and perform some uncommanded actions.

After 15 minutes with a serial line analyser I found that, as I suspected, he had sent 255 characters and VMS (yes, this was controlled from a VAX) had inserted CR, LF because it deemed the maximum line length had been reached.

Telling VMS, from his application, to use binary mode solved the problem.

Application/hosted programmers are not embedded people.

Once it gets out of the cosy 'virtual environment' of the box most of these programmers make too many assumptions.

Agreed

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
Reply to
Paul Carpenter
