Encryption of messages between embedded system and PC?

In a common situation such as a PC controlling some embedded unit, some form of messaging is implemented (over serial, TCP/IP, etc.). Invariably, it is simply too easy to tap into that link and reverse engineer the application protocol. So, the solution (I would think) is to have some type of encryption on the link to garble the messages and at least make them very difficult to understand. So here is the question. I have found several components (say for .NET) that will allow you to encrypt strings, files and streams easily. The thing is, that is nice for the PC side, but what about the embedded side, where I am running some MCU (that only understands C...)?

Any ideas how to solve this, what a good strategy would be or maybe even some components that exist?

Thanks

Reply to
ElderUberGeek

Build an encrypted tunnel and just pass your data through that. Don't force your application to deal with encrypting strings, etc.

Reply to
Don

And how would one do that? Would appreciate if you could expand a little...


Reply to
ElderUberGeek

Do a search on encryption source. TEA is one that is promoted as being suitable for small micros. I am sure there are others.
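
For reference, the published TEA encipher routine really is tiny. Something like this -- essentially the Wheeler/Needham reference code, assuming your micro has (or can cheaply emulate) 32-bit arithmetic:

#include <stdint.h>

/* TEA block encipher: v[] is the 64-bit block, k[] the 128-bit key. */
void tea_encipher(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9u;   /* key schedule constant */
    unsigned i;

    for (i = 0; i < 32; i++) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0;
    v[1] = v1;
}

The decipher routine is the same loop run backwards.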

--
Thad
Reply to
Thad Smith

You have *some* sort of "communication channel" that you have implemented (or were PLANNING on implementing) -- whether it is using some high level abstraction like sockets or just a low level stream of bytes passed out via a UART.

Wrap the Tx/Rx APIs into something consistent. Call them "read()" and "write()" for want of better terms.

Instead of having write() pass data to your UART (assuming this is the mechanism you are using for communication), have it pass data to the encrypting function. That function *then* passes the encrypted data to the UART (i.e. encrypt() becomes a *filter*).

Similarly, "read()" doesn't fetch incoming data directly from the UART but, instead, fetches it from "decrypt()" which, in turn, fetches from the UART. Decrypt is yet another filter.

In this way, your code sees a simple "read/write" API. The same is true in the "PC". Yet, the data itself has been encapsulated in an encrypted byte stream.
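
A rough sketch of that arrangement, assuming a byte-wide UART driver and some byte-wise cipher (uart_putc(), uart_getc(), cipher_encrypt_byte() and cipher_decrypt_byte() are placeholder names here, not any particular library):

#include <stddef.h>
#include <stdint.h>

/* hypothetical low-level driver and cipher hooks */
extern void    uart_putc(uint8_t c);
extern uint8_t uart_getc(void);
extern uint8_t cipher_encrypt_byte(uint8_t plain);
extern uint8_t cipher_decrypt_byte(uint8_t cipher);

/* the application only ever sees these two calls */
void msg_write(const uint8_t *buf, size_t len)
{
    while (len--)
        uart_putc(cipher_encrypt_byte(*buf++));    /* encrypt() as a filter */
}

void msg_read(uint8_t *buf, size_t len)
{
    while (len--)
        *buf++ = cipher_decrypt_byte(uart_getc()); /* decrypt() as a filter */
}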

Of course, actually *doing* this requires a bit more thinking! :> You have to consider the cipher that you use and how it will react to an unreliable transport medium (i.e. what if bytes get lost/dropped/corrupted in transmission... UART overruns, etc.).

It's not a trivial problem to solve. But, it frees your application from the tedium of dealing with the encryption at the application level.

Look at things like SLIP and PPP for protocols to layer atop a simple UART. These give "reliable" transport onto which you can layer the encryption protocol.
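
SLIP (RFC 1055), for instance, gives you the framing part -- packet boundaries the cipher can resynchronize on; acknowledgements/retries still go on top. A sketch of the transmit half, reusing the placeholder uart_putc() from above:

#include <stddef.h>
#include <stdint.h>

#define SLIP_END     0xC0   /* frame delimiter                */
#define SLIP_ESC     0xDB   /* escape character               */
#define SLIP_ESC_END 0xDC   /* escaped END within the payload */
#define SLIP_ESC_ESC 0xDD   /* escaped ESC within the payload */

extern void uart_putc(uint8_t c);

void slip_send(const uint8_t *p, size_t len)
{
    uart_putc(SLIP_END);            /* flush any accumulated line noise */
    while (len--) {
        uint8_t c = *p++;
        if (c == SLIP_END) {
            uart_putc(SLIP_ESC);
            uart_putc(SLIP_ESC_END);
        } else if (c == SLIP_ESC) {
            uart_putc(SLIP_ESC);
            uart_putc(SLIP_ESC_ESC);
        } else {
            uart_putc(c);
        }
    }
    uart_putc(SLIP_END);            /* mark end of packet */
}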

While this may seem like a lot of work, it is "established technology". And, if you opt for some ad hoc scheme of your own, then *you* will have to define a protocol. *You* will have to figure out how you handle dropped characters, etc. (why reinvent the wheel? *especially* if communications theory isn't your forte...)

(if you are using something like TCP/IP, then you've already got the PPP/SLIP equivalent in place so just look for a respectable tunnel implementation -- ssh, SSL, etc.)

HTH,

--don

Reply to
Don

Please don't top-post. Your answer belongs after, or intermixed with, the quoted material. I fixed this one.

The first thing you have to decide is how strong your encryption is going to need to be. No system is 100% foolproof; it can only delay eavesdropping and/or make it expensive. If you just need to discourage attacks from script kiddies, then you can use a simple system.

The thing Don has been pointing out is that you need to isolate the encryption/decryption mechanism from the rest of your application. That way you can test the basics independently, and choose whether to build your own or use existing algorithms and code. You may need nothing more than an xor with a fixed value, in which case the problem reduces to agreeing on that value without over-exposing it. That first step, of agreeing on a key, is crucial and may be worth slow and involved coding.
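
That degenerate case really is a one-liner, and the same routine serves both ends (apply it twice and you get the original back):

#include <stddef.h>
#include <stdint.h>

/* xor every byte with a fixed, previously agreed value */
void xor_crypt(uint8_t *buf, size_t len, uint8_t key)
{
    while (len--)
        *buf++ ^= key;
}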

--
Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer
[encrypt strings, files and streams]

TEA is bad if the uC has no native 32 bit operations. TEA might not be good anyway.

RC4 is good if (and only if) used properly.

_Any_ encryption is bad if used improperly.
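
For the record, RC4 itself is only a few lines of C. "Properly" means, at the very least, a fresh key for every session (e.g. mix a nonce into it), never reusing a keystream, and discarding the first few hundred keystream bytes, which are biased:

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t S[256]; uint8_t i, j; } rc4_t;

/* key schedule */
void rc4_init(rc4_t *c, const uint8_t *key, size_t keylen)
{
    unsigned i, j = 0;
    uint8_t t;

    for (i = 0; i < 256; i++)
        c->S[i] = (uint8_t)i;
    for (i = 0; i < 256; i++) {
        j = (j + c->S[i] + key[i % keylen]) & 0xFF;
        t = c->S[i]; c->S[i] = c->S[j]; c->S[j] = t;
    }
    c->i = c->j = 0;
}

/* next keystream byte: xor with plaintext to encrypt, with ciphertext to decrypt */
uint8_t rc4_byte(rc4_t *c)
{
    uint8_t t;

    c->i = (uint8_t)(c->i + 1);
    c->j = (uint8_t)(c->j + c->S[c->i]);
    t = c->S[c->i]; c->S[c->i] = c->S[c->j]; c->S[c->j] = t;
    return c->S[(uint8_t)(c->S[c->i] + c->S[c->j])];
}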

The OP should go to sci.crypt to discuss this further.

Oliver

--
Oliver Betz, Muenchen (oliverbetz.de)
Reply to
Oliver Betz

There are solutions like RSA, which can be implemented in software on some moderately spec'd processors, or there are hardware solutions like SafeNet. The latter will take most of the grunt-work out of the implementation.

See:-

formatting link
for some ideas if not solutions.

--
********************************************************************
Paul E. Bennett ....................
Reply to
Paul E. Bennett

While the embedded unit could be a single-chip microcontroller and use the option of preventing external access to the controller, how are you going to protect the PC? Unless physical access to the PC can be inhibited, how are you going to prevent the disassembly of the encryption code in the PC, and thus the reverse engineering of the protocol?

BTW, it appears that in some cases the desire to keep a protocol secret is simply that the protocol implementation is so bad that you would not want anyone else to see it :-)

Paul

Reply to
Paul Keinanen

Start with describing the threat you're protecting against.

Is your embedded system so simple that there may be others cloning it if the protocol is revealed?

Is it a viable scenario that you'll sell the embedded boxes even if somebody cracks the protocol and makes his own PC program for it?

Have you thought how much more difficult field troubleshooting will be with encrypted communication?

--
Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

It depends on what you are trying to protect against as well as the characteristics of the system in question.

It is possible that the only aspect of the system that is accessible is the *link*. I.e. uC and PC may be in secure -- or otherwise inaccessible -- locations. The "link" media may, however, not have these types of protections (e.g., wireless).

Also, there is a difference between motivations for protection:

- keep competitors/users from "stealing" your product

- keep "observers" from seeing what is happening in the "process" that you are controlling/whatever

- keep others from *interfering* with the "process" that you are controlling

The first is hard to do. Anyone sufficiently motivated ($$) can simply *buy* one of your devices and reverse engineer it at their leisure -- possibly using destructive techniques.

The second applies when you want to hide *information*. E.g., in an alarm system, preventing someone from passively determining when the building is no longer occupied by monitoring the data stream.

The third applies when you want to safeguard *control*. E.g., preventing someone from unlocking a door (in said alarm system).

Variations and combinations of all of the above exist.

Some commercial products have had laughable approaches to this. E.g., HP's "Secure (!) Web Console" should have left this "feature" out entirely -- since it was laughably naive.

There is a lot of science (math) in good crypto -- far more than I would ever care to understand! :> Schneier's book is an excellent reference if you choose to go down this path; and, an equally useful text if you just want to get a good overview of the issues involved and techniques available. I highly recommend it.

Also, beware that some technologies are not usable in certain jurisdictions. E.g., the US has laws regarding the export of certain types of technology. I believe some EU countries also have restrictions on the use of crypto technology. IANAL, so you're best advised to think/research before naively including such a "feature" in a design.

--don

Reply to
Don

Almost but not entirely. In many cases this kind of scheme is used to protect overpriced junk from being copied. Or they want to sell you a piece of bad overpriced software together with some good hardware.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Not true. If you want to be secure, and do it correctly, you can publish the algorithm, publish the source and still be secure (as long as you protect the key). Security through obscurity will only work against lamers. There is a computational price: RSA public key stuff takes major processing, 3DES and AES not so much. So depending on how you handle the keys, encryption can be done on some embedded systems. Most current encryption algorithms have not been cracked and there is no known way to crack them (in reasonable time frames) with brute force.

"Never Say Anything" may be able to crack some publicly available stuff, but who would know? Reasonable protection against reasonable foes can be attainable.

Regards, Steve

There is no "x" in my email address.

Reply to
Steve Calfee

I think you failed to read my message correctly. Publish your algorithm. I buy your *device*. Decapsulate the components, expose the die and microprobe the *executing* processor. Now I have a copy of the key, effectively (even if that may not be in a simple numeric form).

Sounds pie-in-the-sky? Can now be done at many universities on a student's budget (so, a *corporation* can easily do same!).

Talk to MS about how long their *hardware* secrets survived in the XBOX...

You are ignoring the physical attack on the device itself. Since you can't control who *buys*/acquires the devices, you can't prevent someone from reverse engineering the device itself (this happens a LOT... you are naive if you think it doesn't).

Depending on the complexity of the device itself (the OP's application suggests the field nodes are not as capable/complex as a "PC", so they are probably quite easy to reverse engineer), a competitor can simply buy one of your *employees*, clandestinely, and have the design *documents* handed to him in a manila envelope :< (given how expensive engineering *time* is, a determined adversary could gladly spend a few hundred $K to save a man-year of development effort!)
Reply to
Don

The smartcard people have put a lot of effort into defending against just this: an attacker with unlimited physical access to several copies of the product, & free to destroy them in searching. You usually can't just probe the smartcard chips.

Your second threat (suborned employees) is much more serious, indeed. Defences may include breaking the secret key/s into segments, with no one department having access to all of them. Of course, the administration cost goes up.

Reply to
David R Brooks

My experience tells me that the desire to include an incredible amount of enciphering protection in a system is usually the result of paranoia, ignorance or overvalued self-esteem. None of those conditions contributes to the quality of the product.

The occasions where the encryption is really required are special and rare.

As for the task described, there is no point in doing any strong encryption, since the PC software can be easily analyzed and hacked.

An elegant way to make a lamer-proof obscure protocol is to multiply the data stream by a polynomial in GF(2) on the transmit side and to divide by it on the receive side. No synchronization or secret keys are required.
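
Bit-serial, that is nothing more than two short shift registers. A sketch using g(x) = x^7 + x^4 + 1 (the polynomial is an arbitrary pick here; bit-at-a-time for clarity, not speed):

#include <stdint.h>

static uint8_t tx_state;   /* last 7 plaintext bits sent      */
static uint8_t rx_state;   /* last 7 plaintext bits recovered */

/* transmit: y(n) = u(n) ^ u(n-4) ^ u(n-7)  -- multiply by g(x) */
uint8_t scramble_bit(uint8_t u)
{
    uint8_t y = (uint8_t)((u ^ (tx_state >> 3) ^ (tx_state >> 6)) & 1u);
    tx_state = (uint8_t)(((tx_state << 1) | (u & 1u)) & 0x7Fu);
    return y;
}

/* receive: u(n) = y(n) ^ u(n-4) ^ u(n-7)  -- divide by g(x)    */
uint8_t descramble_bit(uint8_t y)
{
    uint8_t u = (uint8_t)((y ^ (rx_state >> 3) ^ (rx_state >> 6)) & 1u);
    rx_state = (uint8_t)(((rx_state << 1) | (u & 1u)) & 0x7Fu);
    return u;
}

A channel bit error only corrupts the recovered stream while it sits in the 7-bit register, so the receiver falls back into step by itself -- hence no synchronization needed.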

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Yes, at that level of expertise a hardware attack can be effective.

However, you can get around it if the host end is secure, such as somewhere "out there" on a network. Then you can put a public key in the device. Stealing it is irrelevant since it is public. Then on connection the embedded device can get a transaction key from the host, which is used for data transmission. If that transaction key is then used with 3DES or AES, the transmission is secure - the transaction key is used only once and exists in a hard-to-reach RAM location during use.

Doing good security is hard. Having both ends of a communication and the communication link available for attack is very difficult to secure. There are many attacks that experts can use. You have to determine what is "good enough".

Regards, Steve

There is no "x" in my email address.

Reply to
Steve Calfee

I never understood this sort of paranoia. We're talking about the *protocol*. You can't snag the schematics by listening on the serial link or sniffing the USB. You can get the firmware binary sometimes, but you can always get that by cracking the case open anyway. If the company is selling hardware, then they don't make any extra profits by trying to lock up the software and making the interface proprietary.

Why, after spending several thousand dollars on a JTAG debugging interface, do I get stuck with crappy software because the interface is locked up tight and I'm forced to use only the provided debugger? The vendor won't lose a single sale if someone manages to get GDB to work with the product. No competitor is going to make an API-compatible product that undercuts them, because it's easier to make an incompatible product that undercuts them just as well.

In a few rare cases security is important: in security devices, as indicated, or in devices that must strictly conform to regulations and keep users from easily breaking things. But most of the time this extra security seems just a matter of too much paranoia, and possibly a confusion about where the value of their intellectual property lies.

-- Darin Johnson

Reply to
Darin Johnson

One should assume your competitors would be competent :>

Yes, but you've missed the OP's point/application:

"In a common situation such as a PC controlling some embedded unit, some form of messages is implemented (using serial, TCPIP etc.). Invariably, it is simply too easy to tap in to that link and reverse engineer the application protocol. So, the solution (I would think) is to have some type of encryption on the link to garble the messages and at least make them very difficult to understand."

I.e. both ends of the link *are* available. And, a (bogus) customer could always *purchase* one of each end for use in his "factory" (or wherever said "PC controlling some embedded unit" is located) :>

Exactly. The point of my original post is: this is (probably) a wasted effort (in the OP's context) -- since it can so easily be circumvented.

Reply to
Don

Exactly. It increases *your* effort (as the developer) and steals resources (time, money) from making a better/cheaper/etc. product. A determined attacker will find a way of "breaking" (stealing?) any of these protections.

Actually, not really. Almost anything financial benefits from good security (transactions at CC readers, ATMs, slot machines, etc.). And, anything *medical* (privacy issues).

Increasingly, these two categories alone represent a *lot* of applications! :<

And, with more and more businessware being outsourced to third parties, those transactions often require protection (e.g., "corporate technology/trade secrets/processes/etc." are as valuable as cash in many industries). But, that can often be done via more sophisticated gateways than on the embedded devices themselves.

I'd take the opposite approach -- *publish* the protocol. Perhaps this will be of benefit to some of your customers. It undoubtedly isn't very sophisticated so you're not really "giving" your competitor much (in terms of saving him time/money to reverse engineer it).

And, you may get some goodwill from your customers in the process! E.g., in manufacturing environments, often the engineers on the floor have to tweak COTS boxes to do what they *really* want (instead of what they were *designed* to do). Having access to the protocol can simplify cobbling your devices together with other aspects of their "system".

Reply to
Don
