Binary protocol design: TLV, LTV, or else?

Hi all.

I'm making a protocol for communication between a PC and a peripheral device. The protocol is expected to run on raw Ethernet at first, but I am also supposed to avoid any blunders that would make it impossible to later use the exact same protocol on things like IP and friends.

Since I saw these kinds of things in many Internet protocols (DNS, DHCP, TCP options, off the top of my head - but note that these may have a different order of fields), I have decided to make it an array of type-length-value triplets encapsulated in the packet frame (no header). The commands would fill the "type" field, "length" would specify the length of the data ("value") following the length field, and "value" would contain the data for the command.
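To make that concrete, here is roughly what I have in mind. The field widths (8-bit type, 16-bit big-endian length) are just an assumption for the sake of illustration, not something I have settled on:

  #include <stdint.h>
  #include <stddef.h>

  /* Append one type-length-value record to a buffer.
     Returns the number of octets written, or 0 if it will not fit. */
  static size_t tlv_put(uint8_t *buf, size_t cap,
                        uint8_t type, const uint8_t *val, uint16_t len)
  {
      if (cap < (size_t)3 + len)
          return 0;
      buf[0] = type;                   /* the command              */
      buf[1] = (uint8_t)(len >> 8);    /* length, network order    */
      buf[2] = (uint8_t)(len & 0xff);
      for (uint16_t i = 0; i < len; i++)
          buf[3 + i] = val[i];         /* the data for the command */
      return (size_t)3 + len;
  }

A packet is then just a sequence of such records back to back, and the receiver walks them until the payload runs out.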

But I would like to hear other (read: opposing) opinions. Particularly so since I am self-taught so there may be considerations obvious to graduated engineers that I am oblivious to.

BTW, the peripheral on the other end is autonomous and rather intelligent, but very resource constrained. Really, the peripheral's resource constraints are my main problem here.

Some interesting questions: Is omitting a packet header a good idea? In the long run?

If I put a packet header, what do I put in it? Since addressing, error detection and "recovery" are supposed to be done by underlying protocols, the only thing I can think of putting into the header is the total-length field, and maybe, maybe, maybe a packet-id or transaction-id field. But I really don't need any of these.

My reasoning with packet-id and transaction-id (and protocol-version, really) is that I don't need them now, so I can omit them, and if I ever do need them, I can just add a command which implements them. In doing this, am I setting myself up for a very nasty problem in the future?
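If I did put a header in, I imagine it would be something as small as this (field names and widths are purely hypothetical, everything in network byte order):

  struct proto_hdr {
      uint8_t  version;        /* protocol version, starting at 1          */
      uint8_t  flags;          /* reserved, zero for now                   */
      uint16_t total_length;   /* header plus all TLVs, for sanity checks  */
      uint16_t transaction_id; /* echoed back in replies                   */
  };  /* serialized field by field rather than memcpy'd, to avoid padding  */

But again, I don't actually need any of it today.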

Is using flexible packets like this (as opposed to, say, the IP header, which has strictly defined fields) a good idea, or am I better off rigidifying my packets?

Is there a special preference or reason as to why some protocols do TLV and others do LTV? (Note that I am not trying to ignite a holy war, I'm just asking.)

Is it good practice to require aligning the beginning of a TLV with a boundary, say a 16-bit word boundary?

Reply to
Aleksandar Kuktin

I've been supporting a protocol like that for many years. Doing raw Ethernet on Windows hosts is becoming increasingly problematic due to attempts by Microsoft to fix security issues. We anticipate it will soon no longer be feasible and we'll be forced to switch to UDP.

I'm not the Windows guy, but as I understand it you'll have to write a Windows kernel-mode driver to support your protocol, and users will require admin privileges. Even then you'll have problems with various firewall setups and anti-virus software.

If the PC is running Linux, raw Ethernet isn't nearly as problematic as it is on Windows, but it does still require either root privileges or special security capabilities.

If you can, I'd recommend using UDP (which is fairly low overhead). The PC end can then be written as a normal user-space application that doesn't require admin privileges. You'll still have problems with some routers and NAT firewalls, but way fewer problems than trying to use raw Ethernet.
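The PC end then boils down to a few lines of ordinary socket code, something along these lines (the port number and address are obviously just placeholders):

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <string.h>
  #include <unistd.h>

  int send_msg(const void *msg, size_t len)
  {
      int s = socket(AF_INET, SOCK_DGRAM, 0);      /* no root required */
      if (s < 0)
          return -1;
      struct sockaddr_in dst;
      memset(&dst, 0, sizeof dst);
      dst.sin_family = AF_INET;
      dst.sin_port   = htons(30000);               /* pick your own port */
      inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);
      ssize_t n = sendto(s, msg, len, 0,
                         (struct sockaddr *)&dst, sizeof dst);
      close(s);
      return n < 0 ? -1 : 0;
  }

(In a real application you'd keep the socket open rather than creating one per message, of course.)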

Using TCP will allow the easiest deployment, but TCP requires quite a bit more overhead than UDP.

--
Grant Edwards               grant.b.edwards        Yow! HAIR TONICS, please!! 
                                  at                
                              gmail.com
Reply to
Grant Edwards

Here there be dragons...

Are you sure you have enough variety to merit the extra overhead (in the packet *and* in the parsing of the packet)? Can you, instead, create a single packet format whose contents are indicated by a "packet type" specified in the header? Even if this means leaving space for values/parameters that might not be required in every packet type? For example: Where certain fields may not be used in certain packet types (their contents then being "don't care").

Alternatively, a packet type that implicitly *defines* the format of the balance of the packet. For example: type1, type2, type3 (where the format of each field may vary significantly between message types).

It seems like you are headed in the direction of a format where the number of fields can vary, as can their individual formats.

So, the less "thinking" (i.e., handling of variations) the remote device has to do, the better.

Of course, this can be done in a variety of different ways! E.g., you could adopt a format where each field consists of a parameterNumber followed by a parameterValue, and the receiving device can blindly parse the parameterNumber and plug the corresponding parameterValue into a "slot" in an array of parameters that your algorithms use.
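A trivial sketch of that "slot" approach, assuming (just for the example) one-octet parameter numbers and fixed 16-bit values:

  #define NUM_PARAMS 32

  static uint16_t param[NUM_PARAMS];     /* the slots your algorithms read */

  /* msg = a run of <parameterNumber, valueHi, valueLo> triples */
  static void parse_params(const uint8_t *msg, size_t len)
  {
      while (len >= 3) {
          uint8_t  num = msg[0];
          uint16_t val = ((uint16_t)msg[1] << 8) | msg[2];
          if (num < NUM_PARAMS)
              param[num] = val;          /* blindly plug it into its slot */
          msg += 3;
          len -= 3;
      }
  }

(<stdint.h>/<stddef.h> assumed.) The device does no "thinking" at all -- it just files values away.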

Alternatively, you could write a parser that expects an entire message to have a fixed format and plug the parameters it discovers into predefined locations in your app.

Headers (and, where necessary, trailers) are intended to pass specific data (e.g., message type) in a way that is invariant of the content of the balance of the message. Like saying, "What follows is ...".

They also help to improve reliability of the message as they can carry information that helps verify that integrity. E.g., a checksum. Or, simply the definition of "What follows is..." allows the recipient to perform some tests on that which follows! So, if you are claiming that "what follows is an email address", the recipient can expect something containing an '@'. Anything that doesn't fit this template suggests something is broken -- you are claiming this is an email address yet it doesn't conform to the template for an email address!
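For example, even a quick-and-dirty 16-bit one's-complement sum (the same basic idea the IP and UDP checksums use) costs almost nothing on the embedded side. A rough sketch:

  static uint16_t cksum16(const uint8_t *p, size_t len)
  {
      uint32_t sum = 0;
      while (len > 1) {
          sum += ((uint32_t)p[0] << 8) | p[1];    /* 16-bit big-endian words */
          p += 2;
          len -= 2;
      }
      if (len)
          sum += (uint32_t)p[0] << 8;             /* odd trailing octet */
      while (sum >> 16)
          sum = (sum & 0xffff) + (sum >> 16);     /* fold the carries back in */
      return (uint16_t)~sum;
  }

Cheap insurance if the underlying transport ever stops providing that check for you.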

Will that ALWAYS be the case for you? What if you later decide to run your protocol over EIA232? Will you then require inserting another protocol *beneath* it to provide those guarantees?

Will your underlying protocol guarantee that messages are delivered IN ORDER? *Always*?

Do you expect the underlying protocol to guarantee delivery? At most once? At least once?

That depends on what you expect in the future -- in terms of additions to the protocol as well as the conveyance by which your data gets to/from the device. Simpler tends to be better.

Depends on how you are processing the byte stream. E.g., for ethernet, if you try to deal with any types bigger than single octets, you need to resolve byte ordering issues (so-called network byte order). If you design your protocol to deal exclusively with octets, then you can sidestep this (by specifying an explicit byte ordering) but then force the receiving (and sending) tasks to demangle/mangle the data types out of/into these forms.
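Concretely, the choice is between letting the compiler interpret multi-octet fields (and then worrying about the host's endianness) versus composing them from octets yourself. A sketch of the latter, which behaves identically on any host:

  /* read a 16-bit big-endian ("network order") value off the wire */
  static inline uint16_t get_be16(const uint8_t *p)
  {
      return (uint16_t)((p[0] << 8) | p[1]);
  }

  /* and write it back out the same way */
  static inline void put_be16(uint8_t *p, uint16_t v)
  {
      p[0] = (uint8_t)(v >> 8);
      p[1] = (uint8_t)(v & 0xff);
  }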

Reply to
Don Y

Read the Radius protocol RFCs and how they deal with UDP. There is a boat load of parsing code out there in the various Radius server and client implementations. If you start with UDP you can even cob together a test system using many of the scripting languages like perl, python, ruby, etc.

--
Chisolm 
Republic of Texas
Reply to
Joe Chisolm

Aleksandar Kuktin wrote in news:lakg10$kri$ snipped-for-privacy@speranza.aioe.org:

Hello,

I originated a product that used TLV packets back in the 90s and it is still in use today without any problems. It was similar to a configuration file that contained various parameters for applications that shared data. There was a root packet header. This allowed transmission across TCP, serial, queued pipes, and file storage. We enforced a 4-byte alignment on fields due to the machines being used to parse the data - we had Windows, Linux, and embedded devices reading the data. Just be sure to define the byte order. We wrote and maintained an RFC-like document.
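(For what it's worth, the alignment rule was cheap to enforce. Something along these lines, for the 4-byte alignment we used:

  /* round a field length up to the next 4-byte boundary */
  #define ALIGN4(len)  (((len) + 3u) & ~3u)

so a 5-octet value occupies ALIGN4(5) == 8 octets on the wire, the last three being padding.)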

One rule we followed that may help you is that once a tag is defined it is never redefined. That prevented issues migrating forward and backward. Tags could be removed from use, but were always supported.

One issue we had with TLV was with one of the developers taking shortcuts. The TLVs were built in a tree, so any V started with a TL until you got to the lowest-level item being communicated. Anyway, the developer in question would read the T and presume they could bypass reading the lower-level tags because the order was fixed - it was not. Upgraded protocols added fields (a low-level TLV) that caused read issues. Easy to find, but frustrating that we had to re-release one of the node devices.
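The safe way to read such a tree is to always dispatch on the tag and never on position. A rough sketch, assuming just for illustration an 8-bit tag and a 16-bit big-endian length (tag_is_container() and handle_tag() are stand-ins for application code):

  /* walk one level of TLVs in buf[0..len); recurse into container tags */
  static void walk_tlvs(const uint8_t *buf, size_t len, int depth)
  {
      while (len >= 3) {
          uint8_t  tag  = buf[0];
          uint16_t vlen = ((uint16_t)buf[1] << 8) | buf[2];
          if ((size_t)3 + vlen > len)
              break;                          /* truncated: stop, don't guess */
          if (tag_is_container(tag))
              walk_tlvs(buf + 3, vlen, depth + 1);
          else
              handle_tag(tag, buf + 3, vlen); /* order-independent dispatch */
          buf += 3 + vlen;
          len -= 3 + vlen;
      }
  }

Unknown tags can simply be skipped, which is what makes adding fields later painless.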

The only other error you are likely to get with TLVs like this is if the entire message isn't delivered: the follow-on data becomes part of the previous message. That is why some encapsulation might be wise. If you are using UDP and there is no need for multiple packets per message (ever), that might be your encapsulation method.
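Over a stream transport (TCP, serial) the simplest encapsulation is a length prefix in front of every message, so a short or garbled read never bleeds into the next message. Receive side, roughly, where read_exact() is a hypothetical helper that loops until it has read the requested number of octets:

  int read_message(int fd, uint8_t *buf, size_t cap)
  {
      uint8_t hdr[2];
      if (read_exact(fd, hdr, 2) != 0)
          return -1;
      uint16_t mlen = ((uint16_t)hdr[0] << 8) | hdr[1];
      if (mlen > cap)
          return -1;                 /* oversized: resynchronize or close */
      return read_exact(fd, buf, mlen);
  }

With UDP, as said, the datagram boundary gives you this for free.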

Good luck,

David

Reply to
David LaRue

UDP adds very little compared to raw Ethernet: some more or less stable header bytes and a small ARP implementation (much less than a page of code). There are a lot of tools to display the various IP and UDP headers, and standard socket drivers should work OK.

If you are using raw Ethernet on a big host, you most likely would have to put the Ethernet adapter into promiscuous mode, which might be a security / permission issue.

Reply to
upsidedown

I would also advocate using UDP rather than raw Ethernet. Implementing IP can be pretty simple if one does not intend (as in this case) to connect the device to the internet, fragment/defragment out-of-order datagrams, etc. UDP on top of that is almost negligible. I can't see which MCU would have an Ethernet MAC and lack the resources for such an "almost IP" implementation.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

Reply to
dp

UDP tends to hit the "sweet spot" between "bare iron" and the bloat of TCP/IP. The implementer has probably the most leeway in deciding what he *wants* to implement vs. what he *must* implement (once you climb up into TCP, most of the "options" go away).

Having said that, the OP still has a fair number of decisions to make if he chooses to layer his protocol atop UDP. MTU, ARP/RARP implementation, checksum support (I'd advocate doing this in *his* protocol if he ever intends to run it over a leaner protocol where *he* has to provide this reliability), etc.

I've (we've?) been assuming he can cram an entire message into a tiny "no-fragment" packet -- that may not be the case! (Or, it may prove to be a problem when run over protocols with smaller MTUs.)

Reply to
Don Y

Hi Don, UDP does not add any fragmentation overhead compared to his raw Ethernet anyway (that is, if he stays with UDP packets fitting in approx. 1500 bytes he will be no worse off than without UDP). IP does add fragmentation overhead - if it is a real IP. The sender may choose its MTU (likely a full-size Ethernet packet) but a receiver must be ready to get that same packet fragmented into a few pieces and out of order, and be able to defragment it. But since he is OK with raw Ethernet he does not need a true IP implementation, so he can just do it as if everybody is fine with a full-sized Ethernet MTU and get on with it as you suggest. He will lose a few bytes for encapsulation, but if losing 100 bytes out of 1500 is an issue, chances are there will be a lot of other, real problems :-).

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

Reply to
dp

I'm thinking more in terms of any other media (protocols) which he may eventually use for transport. If he doesn't want to add support for packet reassembly in *his* protocol, then he would be wise to pick a message format that fits in the smallest MTU "imaginable".

For ethernet, I think that is ~60+ octets (i.e., just bigger than the frame header). I'm a big fan of ~500 byte messages (the minimum that any node *must* be able to accommodate). I think you have to consider any other media that may get injected along the path from source to destination (i.e., if it is not purely "ethernet" from end to end). IIRC, a PPP link drops the MTU to the 200-300 range.

As above, I think if you truly want to avoid dealing with fragments, you have to be able to operate with an MTU that is little more than the header (plus 4? or 8?? octets). Even a ~500 byte message could, conceivably, appear as *100* little fragments! :-/ (and the receiving node had better be equipped to handle all 500 bytes as they trickle in!)

OP hasn't really indicated how complex/big his messages need to be. Nor what the ultimate fabric might look like.

E.g., here, I've tried really hard to keep messages *ultra* tiny by thinking about exactly what *needs* to fit in the message and how best to encode it. So, for example, I can build an ethernet-CAN bridge in a heartbeat and not have to worry about trading latency and responsiveness for packet size on the CAN bus (those nodes can have super tiny input buffers and still handle complete messages without having to worry about fragmentation, etc.)

It must have been entertaining for the folks who came up with ethernet, IP, etc. way back when to start with a clean slate and *guess* as to what would work best! :>

Reply to
Don Y

I've never found that to be the case. However, raw Ethernet access in non-promiscuous mode still requires admin/root/special privileges and causes a lot of security headaches (particularly under Windows).

--
Grant Edwards               grant.b.edwards        Yow! I'm continually AMAZED 
                                  at               at th'breathtaking effects 
                              gmail.com            of WIND EROSION!!
Reply to
Grant Edwards

I have been running raw Ethernet since the DIX days, with DECnet, LAT (similar to "telnet" terminal connections) and my own protocols forcing the network adapters into promiscuous mode on thick Ethernet cables with vampire taps on the cable.

While some X.25 based protocols might limit the frame size to 64 bytes, 576 bytes has been the norm for quite a few years. Standard Ethernet frames are above 1400 bytes, while Jumbo frames could be about 9000 bytes.

64 bytes is the minimum for proper collision detection size on coaxial Ethernet networks.
Reply to
upsidedown

Anything in the chain can set the MTU to 68 bytes and still be "playing by the rules". So, if you *rely* on 70 octets coming down the 'pike in one UNFRAGMENTED datagram and your PMTUd gives something less, you won't receive that level of service.

From RFC791:

"Every internet module must be able to forward a datagram of 68 octets without further fragmentation. This is because an internet header may be up to 60 octets, and the minimum fragment is 8 octets."

"Every internet destination must be able to receive a datagram of 576 octets either in one piece or in fragments to be reassembled."

So, a datagram could, conceivably, be fragmented into hundreds of 68-octet datagrams (which can include padding). Yet the destination must be able to reassemble these to form that original datagram. I.e., I could build a bridge that diced up incoming datagrams into itsy bitsy pieces and be strictly compliant -- as long as I could handle a 576-octet datagram (buffer size).

OTOH, reliable PMTU discovery is problematic on generic networks as many nodes don't handle (all) ICMP traffic (as originally intended).

But, nothing requires the nodes/hops to handle a ~1500 octet datagram ("Datagram Too Big")

Folks working on big(ger) iron often don't see where all the dark corners of the protocols manifest themselves. And, folks writing stacks often don't realize how much leeway they actually have in their implementation(s)! :<

[N.B. IPv6 increases these numbers]
Reply to
Don Y

Let's not forget that we're discussing UDP _as_a_substitute_for_ _raw_Ethernet_. That means the OP is willing to require that the two nodes are on the same network segment, and that we can assume that an Ethernet frame of 1500 bytes is OK.

If using UDP allows packets to be routed between two remote nodes _some_ of the time, that's still pure gravy compared to using raw Ethernet -- even if the UDP/IP implementation doesn't support fragmentation.

--
Grant Edwards               grant.b.edwards        Yow! PEGGY FLEMMING is 
                                  at               stealing BASKET BALLS to 
                              gmail.com            feed the babies in VERMONT.
Reply to
Grant Edwards

As I said in my reply to Dimiter, upthread: I'm thinking more in terms of any other media (protocols) which he may eventually use for transport. Given that the OP is in the process of designing a protocol, he may want to consider the inevitability (?) of his interconnect medium (and/or underlying protocol) changing in the future. CAN-bus, ZigBee, etc. I.e., *expecting* to be able to push a 1500 byte message "in one burst" can lead to problems down the road when/if that assumption can no longer be met.

Too often, an ignorance of the underlying protocol ends up having disproportionate costs for "tiny" bits of protocol overhead. E.g., adding a header that brings the payload to one byte beyond the MSS. "Why is everything so much slower than it (calculated) should be?"

I try to design with a mantra of "expect the least, enforce the most".

[The OP hasn't really indicated what sort of environment he expects to operate within nor the intent of the device and the relative importance (or lack thereof) of the comms therein]
Reply to
Don Y

Well, this is reassuring. It means at least someone did what I intend to do, so I should be able to do the same.

Reply to
Aleksandar Kuktin

Will give more details in a follow-up in a different sub-thread.

Pretty sure. The packet transmitted over the wire is actually expected to be an amalgamation of various commands, parameters and options.

This is explicitly what I don't want. That way, I would need to send many, many packets to transmit my message across.

It seems this is what I will end up with.

Hmmm... Not really. Availability of CPU cycles depends on other details of the device, but if need be I can make the device drown in its own CPU cycles. Memory, on the other hand, is constrained.

I now go to the other sub-thread to continue the conversation...

Reply to
Aleksandar Kuktin

TBH, I really don't expect to support Windows, at least for the time being. My reasoning is that I can always patch together a Linux LiveCD and ship it with the device.

I began honing my skills with the Linux From Scratch project, so assembling a distro should not take me more than a week.

The idea is to use one program that runs as root and relays packets and have a different program do the actual driving of the device.
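Roughly, the relay would own the raw socket and shovel frames to and from the driver over a local socket. The interface name and details below are placeholders and error handling is omitted; just a sketch of the privileged half:

  /* needs <sys/socket.h>, <linux/if_packet.h>, <linux/if_ether.h>,
     <net/if.h>, <arpa/inet.h> */
  int open_raw(const char *ifname)
  {
      int raw = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)); /* needs root */
      struct sockaddr_ll sll = {0};
      sll.sll_family   = AF_PACKET;
      sll.sll_protocol = htons(ETH_P_ALL);   /* in practice, my own EtherType */
      sll.sll_ifindex  = if_nametoindex(ifname);
      bind(raw, (struct sockaddr *)&sll, sizeof sll);
      return raw;
  }

The unprivileged driver would then talk to the relay over an AF_UNIX datagram socket, with the relay select()ing on both and copying frames back and forth.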

UDP/IP is just an extension of IP. I considered using raw IP, but decided against it on grounds that I didn't want to implement IP, simple as it may be.

Of course, I eventually *will* implement IP, so then I might end up with the whole UDP/IP, but honestly, at this moment the only benefit of UDP/IP is the ease of writing the driver. But that is a very marginal benefit.

TCP/IP is out of the question, period.

Reply to
Aleksandar Kuktin

Actually, that's not how it happened at all. :)

Just like in any evolutionary process, several possible solutions were produced and the ones that were "fittest" and most adapted to the environment were the ones that prevailed.

Reply to
Aleksandar Kuktin

The device is a CNC robot, to be used in manufacture. Because of that, I can require and assume a fairly strict, secure and "proper" setup, with or without Stuxnet and its ilk.

The protocol is supposed to support transfer of compiled G-code from the PC to a device (really a battery of devices), transfer of telemetry, configuration and perhaps a few other things I forgot to think of by now.

Since its main purpose is transfer of G-code, the protocol is expected to be able to utilize fairly small packets, small enough that fragmentation is not expected to happen (60 octets should be enough).

Reply to
Aleksandar Kuktin
