Do you use serialization formats for communication?

It is entirely robust and safe - but not directly portable to the few devices around that have CHAR_BIT > 8. By using uint8_t in the code, compilation will simply fail on a device with CHAR_BIT > 8, because the type uint8_t does not exist there. Your aim in writing code should be to make it clear, efficient on the targets that matter to you, and make it fail with a compile-time error on targets that don't match your requirements. So if your realistic targets have an 8-bit char (and most do - the exceptions are almost exclusively DSPs), and your code can be better by relying on that feature, then you /should/ rely on 8-bit chars. And for robustness the code should fail to compile if that feature or assumption does not hold.

If you want to use the same technique and make it portable to 16-bit char devices (TMS320 and so on), then you can't use uint8_t types - uint16_t is the smallest you should use. And the portable static_assert should be:

static_assert((CHAR_BIT * sizeof(format_payload)) == 8 * 4);

That is "better" if you want longer, slower, uglier code that is harder to maintain.

What you call "lazy", I call clear, neat and efficient.

It is /a/ way to do it in a portable manner - and sometimes you want extreme portability. But such portability is rarely useful, and rarely results in better code.

When you write the documentation for a project, do you assume anyone reading it is happy with English, or do you translate it into a few dozen other languages for "portability"? Do you avoid sentences with more than 10 words because some people reading it might be severely dyslexic? And do you expect your clients to pay for the extra time needed for such "robustness" and "portability"?

I am not saying that portability is a bad thing, or that your method is necessarily a poor choice. I am merely saying that "portability" is not free, and you should not pay more for it than you actually need.

Reply to
David Brown

I'm not sure you need a complex format here, if you think of the problem in two layers, keeping the protocol layer transparent to data.

A simple frame format could be:

Start of frame byte
Data length, N
Data, N bytes
Checksum or CRC
End frame byte

You then write a simple state machine to verify the checksum and extract the data and its length. Pass that to the protocol layer, which knows where to look in the data for its revision level. To maintain compatibility, any new parameters are tagged on to the end of the existing data and the data length is increased to suit.

Either that, or negotiation between ends to agree capabilities, but that's much more complex and you should be able to avoid it...

Regards,

Chris

Reply to
Chris


Sorry, typo, should have been "decode" layer.

Reply to
Chris

We use protobuf for our communication (nanopb). We're very happy with it.

We don't use dynamic memory with nanopb. However, we do have some 'scratch space', which is used as a sort of stack for unknown-length data, but it is cleared each time after the data is sent.

Vincent

Reply to
Vincent vB

Hi,

If you know that you will never need more than a certain buffer size, then it can be statically allocated at startup. Where that isn't known for sure, include instrumentation to track a high-water mark, then run the code through worst-case scenarios in a test harness to find out what it is actually using.

Chris

Reply to
Chris

Yick. And then get blind-sided when reality hoses you.

You either need a protocol that, end to end, guarantees some maximum buffer size, or you need a system that's tolerant to communications occasionally breaking down.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com 

I'm looking for work -- see my website!
Reply to
Tim Wescott

Sounds pretty meaningless to me :-).

Buffer sizes for many embedded systems are known, but worst-case values are not always predictable, so you can end up allocating far more memory than is needed, which isn't helpful on systems with limited memory. If you are building thousands of an item, you don't choose a micro with more RAM than you need, since RAM size can be one of the major price differentiators. That's fine if you are running on PC hardware, Linux or whatever, with megabytes of RAM, but most embedded systems don't have such luxury.

A typical recent example was a system that received commands via RS-423 at 9600 baud, with circular-buffer FIFOs in and out. We did some quick sums and allocated bigger queues than we thought we needed, put water marks at a couple of points, then drove the system hard to see what was actually being used. It turned out that we needed far less than we thought.

Many embedded systems need that sort of fine tuning to optimise resource usage against cost. It's also good engineering practice from an efficiency point of view...

Chris

Reply to
Chris

Yes, if you're going into things blind it's good engineering practice to do things by measurement.

But my point is that if you can design the protocol, or if you have good specifications on it, it's _far better_ engineering practice to break out your pencil and paper and do the proof from first principles.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com 

I'm looking for work -- see my website!
Reply to
Tim Wescott

Oh, how old fashioned. Surely you realise that it is possible and desirable to test quality into a product :(

Reply to
Tom Gardner

Hi Chris,

We know exactly the maximum amount of data that is written, and the scratch space is dimensioned for that case. Otherwise you'd need to test it.

Vincent

Reply to
Vincent vB
