"Random" number generation (reprise)

Hi,

[yes, I understand the difference between random, pseudo-random, etc. I've chosen *random* in the subject line, deliberately.]

  0. All of the following happens without human intervention or supervision.

  1. Imagine deploying *identical* devices -- possibly differentiated from each other with just a small-ish serial number.

  2. The devices themselves are not physically secure -- an adversary could legally purchase one, dismantle it, etc. to determine its current state to any degree of exactness "for little investment". I.e., it is not possible to "hide secrets" in them at the time of manufacture.

  3. *Deployed* devices are secure (not easily tampered with without being observed) though the infrastructure is vulnerable and should be seen as providing *no* security.

  4. The devices have some small amount (i.e., tens or hundreds of bytes) of persistent store.

  5. The devices download their operating "software" as part of their bootstrap.

  6. It is practically feasible for an adversary to selectively spoof the "image server" at IPL (i.e., enter into a dialog with a particular device that is trying to boot at that time).

The challenge is to come up with a scheme by which only "approved" images can be executed on the device(s).

From (2), you have to assume the adversary effectively has the source code for your bootstrap (or can *get* it easily). I.e., you might as well *publish* it!

From (1) and (2), the "uniqueness" of a particular device is trivia that can easily be gleaned empirically.

From (5), any persistent *changes* to the device's state (e.g., (4)) can be observed *directly* (by a rogue image executing on the device)

*If* the challenge is met, then (3) suggests that (4) can become a true "secret".

So, thwarting (6) seems to be the effective goal, here...

--

If the device can come up with a "truly" (cough) random number prior to initiating the bootstrap, then a key exchange protocol could be implemented that would add security to the "image transfer".

This "random" number can't depend on -- or be significantly influenced by -- observable events *outside* the device. It can't be possible for someone with copies of the (published!) source code, schematics, etc. to determine what a particular instance of this number is likely to be!

So, how to come up with a hardware entropy source to generate/augment this random number -- cheaply, reliably and in very little space?

Note that generating the datum is a prerequisite to completing the bootstrap. However, the actual sequencing of the bootstrap might allow the image to be transferred *while* the random number is generated and then *authenticated* afterwards (e.g., a signed secure hash computed ex post facto). If the image is large (more exactly, if the transport time is *long*), then this could buy considerable time for number generation!

If (4), then this can build upon previous "accumulated entropy".
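To make "accumulated entropy" concrete, here is a minimal sketch (assuming the pool lives in the persistent store of point (4)): each new noise sample is folded into the previous pool state with a hash, so entropy accumulates across boots and an attacker must know *every* sample ever mixed in to predict the pool. The "adc:" samples are hypothetical noise readings.

```python
import hashlib

def stir(pool: bytes, sample: bytes) -> bytes:
    """Fold a raw noise sample into the 32-byte entropy pool."""
    return hashlib.sha256(pool + sample).digest()

pool = bytes(32)                        # freshly manufactured: all zeros
for s in (b"adc:1093", b"adc:1101"):    # hypothetical noise readings
    pool = stir(pool, s)
```

A rogue image can *read* the pool (point (5)), so the pool only stays secret if the challenge above is met first.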

I've been stewing over Schneier to get a better feel for just how much "new entropy" is required... in theory, a secure (4) suggests that very little *should* be required as long as the protocols don't allow replay attacks, etc. (?) But, I have yet to come up with hard and fast numbers (if that guy truly groks all this stuff, he would be amazing to share a few pitchers with!)

The ideas that come to mind (for small/cheap) are measuring thermal/avalanche noise. I can't see how an adversary could predict this sort of thing even with access to *detailed* operating environment data!
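Raw avalanche/thermal noise is usually biased, so some whitening is needed. A sketch of the classic von Neumann extractor follows; `read_adc_lsb()` is a hypothetical stand-in for sampling the LSB of an ADC across the noise source, simulated here with a deliberately *biased* software source.

```python
import random

def read_adc_lsb():
    # Placeholder for real hardware: a biased, noisy bit source.
    return 1 if random.random() < 0.7 else 0

def von_neumann_debias(nbits, read_bit=read_adc_lsb):
    """Take raw bits in pairs: emit the first bit of a (0,1) or (1,0)
    pair, discard (0,0) and (1,1). If the raw bits are independent,
    the output is unbiased -- at the cost of discarding (on average)
    at least three quarters of the raw stream."""
    out = []
    while len(out) < nbits:
        a, b = read_bit(), read_bit()
        if a != b:
            out.append(a)
    return out
```

The discard rate is one way to "tweak the amount of entropy yielded per unit time": a noisier, less biased source wastes fewer raw samples.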

A potential vulnerability would be in detecting failures of the "entropy source". While the device in question could actively monitor its output and do some statistical tests on the data it generates, I suspect that monitoring would have to take place over a longer period of time than is available *during* the bootstrap. [And, a savvy adversary could intentionally force a reboot *before* the software could indicate (to itself) that the RNG is defective to avoid it on subsequent boots!]
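For what it's worth, the standard health tests split exactly along this line. A sketch of two of them, assuming raw bits arrive as 0/1 ints: the FIPS 140-2-style monobit test needs a full 20,000-bit sample, but a repetition-count test in the spirit of NIST SP 800-90B catches a "stuck" source after only a handful of samples -- useful when the bootstrap can't wait for a long statistical run.

```python
def monobit_ok(bits):
    """FIPS 140-2 bounds: the count of ones in a 20,000-bit sample
    must fall strictly between 9725 and 10275."""
    assert len(bits) == 20000
    return 9725 < sum(bits) < 10275

def repetition_count_ok(bits, cutoff=34):
    """Fail if any value repeats 'cutoff' or more times in a row --
    a fast tripwire for a dead/stuck noise source."""
    run, prev = 0, None
    for b in bits:
        run = run + 1 if b == prev else 1
        prev = b
        if run >= cutoff:
            return False
    return True
```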

Bottom line... I'm looking for pointers to practical sources for this randomness -- especially guidelines on how to tweak the amount of entropy yielded per unit time, etc. (so I can scale the acquisition system to fit the bandwidth of that source, economically).

Thx,

--don

Reply to
Don Y
[...]

I'm a bit out of context here, but if the "key exchange" there means something like Diffie-Hellman, then there's a catch. Namely, while Diffie-Hellman, indeed, relies on an RNG, and secures the data link, it doesn't, in fact, provide /authentication/.

IOW, the device would establish a data link with the image server that cannot be feasibly eavesdropped. But it /wouldn't/ /know/ whether it's the intended image server, or a rogue server.
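A toy finite-field Diffie-Hellman exchange makes the catch concrete: both ends derive the same secret, but nothing in the math tells the device *who* is on the other end. (The 32-bit prime here is for illustration only; real systems use standardized 2048-bit+ groups, e.g. the RFC 3526 MODP groups.)

```python
import secrets

P = 4294967291      # largest prime below 2**32 -- toy-sized!
G = 2

def dh_keypair():
    """Pick a private exponent; publish G**priv mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

dev_priv, dev_pub = dh_keypair()        # the booting device
srv_priv, srv_pub = dh_keypair()        # the server -- or an impostor!

# Each side combines its own private half with the other's public half.
shared_dev = pow(srv_pub, dev_priv, P)
shared_srv = pow(dev_pub, srv_priv, P)
```

Note the RNG matters here: if the device's private exponent is predictable, the "secure" link is transparent to anyone who can predict it.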

For authentication, something like RSA or DSA is to be used, and those don't rely on an RNG once the key pair is generated.

(And sorry if I've missed the point, BTW.)

[...]
--
FSF associate member #7257
Reply to
Ivan Shmakov

I don't think DH will work -- MiM issues.

Correct. The first step is to have a means of generating a *unique* key (for each device instance) that can't be predicted.

I still have to sort out how to keep the images themselves "bonafide"...

Reply to
Don Y

Exactly.

(Unless reinforced with digital signatures, that is.)

I'd argue that it should be the /second/ step, not the first. Unless the image is itself a secret.

The device has a tiny bit of persistent storage, right? Why not use the standard digital signature approach?

Prior to deploying the devices, we're generating a key pair for the chosen asymmetric cipher. One of the keys is stored in the device's persistent memory. The other is used to encrypt the message digest (e.g., SHA-1) of the image.

The device receives the image and the encrypted digest. It then decrypts the encrypted digest with the key it has in its persistent memory and compares it to the digest computed locally. The image is only used if these are equal (and thus the image hasn't been tampered with).
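A sketch of that verify flow, using *textbook* RSA on a SHA-1 digest. The tiny hardcoded primes and lack of padding are for illustration only -- a real deployment uses 2048-bit+ keys and a proper signature scheme (e.g. RSA-PSS).

```python
import hashlib

p, q = 1000003, 999983              # toy primes -- illustration only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: server-side only

def digest_int(image: bytes) -> int:
    # Message digest of the image, reduced into the toy modulus.
    return int.from_bytes(hashlib.sha1(image).digest(), "big") % n

def sign(image: bytes) -> int:      # performed once, by the server
    return pow(digest_int(image), d, n)

def verify(image: bytes, sig: int) -> bool:   # performed by the device
    return pow(sig, e, n) == digest_int(image)
```

The device only needs (n, e) in its persistent store; the pair is generated before deployment, exactly as described above.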

--
FSF associate member #7257
Reply to
Ivan Shmakov

No -- which is also part of the problem (i.e., you can't hide any secrets *in* the image(s), either!)

I have been trying to avoid an initialization step. I.e., deploy *identical* devices and let them gather entropy to make themselves unique.

I have one system in place, now, with which I took this approach. But, it requires setting up each device before deployment. And, it means those settings *must* persist (through battery failures, crashes, etc).

I was hoping to come up with a scheme that could, effectively, "fix itself" (as if it had just been deployed!)

Reply to
Don Y

In article , Don Y wrote:
}2. The devices themselves are not physically secure -- an adversary
}could legally purchase one, dismantle it, etc. to determine its current
}state to any degree of exactness "for little investment". I.e., it is
}not possible to "hide secrets" in them at the time of manufacture.
}
}3. *Deployed* devices are secure (not easily tampered without being
}observed) though the infrastructure is vulnerable and should be seen
}as providing *no* security.
}
}4. The devices have some small amount (i.e., tens or hundreds of bytes)
}of persistent store.
}
}5. The devices download their operating "software" as part of their
}bootstrap.
}
}6. It is practically feasible for an adversary to selectively spoof
}the "image server" at IPL. (i.e., enter into a dialog with a
}particular device that is trying to boot at that time)
}
}The challenge is to come up with a scheme by which only "approved"
}images can be executed on the device(s).

The devices are made containing a public key. When the loader fetches an image, the image has been encrypted using the corresponding private key. The loader decrypts it with the public key and executes the decrypted image (preferably after checking it has decrypted into a valid image). An attacker can fetch and decrypt any images, and simulate the action of the loader as much as they like, but without breaking the underlying encryption method or possessing the private key they cannot generate their own image.

I don't see any need for random numbers.

Reply to
Charles Bryant
[...]

I deem it infeasible for an embedded device to decrypt the whole image using an asymmetric cipher -- there's just too much computation involved. Traditionally, digital signatures are made by encrypting a message digest (such as SHA-1) instead.

(And note that even with whole image encryption, it's still advisable to have a message digest transmitted along with the image, since otherwise an attacker may force the device to accept garbage.)

Right. But since the image isn't a secret anyway, it's not worth encrypting in the first place.

--
FSF associate member #7257
Reply to
Ivan Shmakov
[...]

How is this going to enable the devices to distinguish the intended image server from the attacker's?

The typical size of a public key is 2048 bits, which is 256 bytes. There are even 8-bit MCUs for which storing such a key in on-chip flash memory is not a big deal.

Moreover, the key can (and probably should) be identical for the whole set of devices used by a single "owner" (an organization or unit), which may make uploading it much less of a hassle.

I'm afraid there cannot be one. The devices have to be "labelled" somehow, so that they could know their "owner."

[...]
--
FSF associate member #7257
Reply to
Ivan Shmakov

Just a coupla wild ideas.

If there is operator input before retrieving the image from the host, then some local code that runs an RNG, prompts "Press a key to begin", and stops on a number when the key is pressed would likely yield an 'unpredictable' value to use.

If a system has a local RTC and has access to a host, internet, or other external clock, perhaps subtracting (using a local routine?) the 'drift' between the two clocks on startup could yield a value to use.

Reply to
1 Lucky Texan

There is an (invisible) step between manufacturing a set of identical devices and *deploying* those devices. In the deployment, you need to "mate" (pair?) the devices to their "system".

The naive approach to doing this is to have a procedure ("a set of steps performed by a FALLIBLE human being") that "initializes" the devices during/just_prior_to deployment. In the system I have in place currently, this requires "me" to "manually" install appropriate keys in the persistent store of each client.

Sure, this step can be automated. But, it still must be performed and "assured" prior to deployment -- you don't want to have to remove or otherwise service a device after deployment because someone forgot to perform a step -- or, performed it incorrectly.

Exactly!

E.g., human beings are "identical". Yet, we manage to find secure ways to protect secrets, communicate, etc. (i.e., we tend to choose unique passwords -- based on the "randomness" of our individual life experiences).

I am hoping a freshly manufactured device can be deployed by someone at a very low pay grade with limited knowledge of the details, risks, vulnerabilities, etc. of such a system (would you want to have EE's install the light switches in your home? Or, would you be willing to settle for an apprentice electrician who just knows "black wire here, taped_black *there*"? :> )

So, imagine a device powering up, running its BIST/POST and then starting its hunt for an executable image. *If* it can generate a unique (unpredictable) random number, it can exchange keys with a key server and/or the image server. [Hereafter, all communications can be "secure".]

The server can provide it with a key to decrypt/authenticate the image that it gets from the image server. It can also tell it what to save in its persistent store to *mate* it to this "system".

Thereafter, attempts by a third party to spoof the key server and/or image server can be thwarted because the "mating information" (i.e., key) is not available to that other party.

[There is a window of opportunity in this system just before the first contact to the server by this client. But, thereafter, it should (?) be reliable. Contrast this with the potential for a person to "forget" to perform the initialization step and leave that client exposed for an indefinite period of time.]

For a parallel, imagine booting a PC over a wireless link. The *first* boot is the critical one. If engineered appropriately, boots thereafter are "secure" (assuming the PC has some persistent store that is used in the process)

Sorry, I'm still trying to wrap my head around what can go wrong so I'm not sure of what all the potential problems are likely to be. I'm trying to formulate a tentative solution that I can then refine -- quantify its shortcomings and then consider mechanisms to address those.

Reply to
Don Y

See "0", above. :> Imagine the device controls a robotic manipulator that wants to identify itself (its "instance") to some server, retrieve *its* firmware image and then begin its operations on the assembly line, etc. You wouldn't want a "technician" to have to walk the length of the line any time power is cycled *just* to "Press a key to begin"...

Hmmm... I suspect that would have to take some time to accumulate enough entropy. Also, it leaves you exposed to someone manipulating that clock.

I was hoping that the randomness could be handled *entirely* locally -- so it would be more tamper-resistant.

Reply to
Don Y

Yes. When it comes to the image transfer, all you want to do is ensure that some bogus image hasn't been substituted for a legitimate one. "Signed executable"

How do you protect further communications between the device (and "whatever") *after* the image has been loaded? Embed keys in the images? Rely on some application- (and instance-)specific source of entropy to algorithmically *generate* random numbers?

Randomness must be worth *something* as CPU manufacturers move to support it "in hardware". I'm just trying to do similarly "on the cheap" (for CPUs that don't *have* those capabilities)

Reply to
Don Y
[...]

In this design, the first boot /will/ be insecure, with or without an RNG. All the subsequent boots /will/ be secure, with or without an RNG.

Therefore, I don't understand how RNG is a problem.

Without a trusted public key loaded, the device shouldn't make any attempts to execute an image, or even to download one.

Not at all, if the bootloader is distributed along with a trusted public key.

(Actually, it's the scheme that I have in mind for a couple of years, but had no spare time to try to implement it.)

[...]
--
FSF associate member #7257
Reply to
Ivan Shmakov

[...]

If the secure communication after the image has been loaded is an issue, then a good RNG is indeed a good thing to have.

(I'm not sure, however, that such a requirement was explicitly stated in the OP.)

[...]
--
FSF associate member #7257
Reply to
Ivan Shmakov
[attributions elided]

It *wasn't* stated. I had assumed that, if it was present for the image loading, then it would remain present thereafter :>

E.g., I didn't explicitly state that the devices would use the network for inter-device *communication* (all I mentioned was image transfer), yet I fully expect to use it for that purpose!

Reply to
Don Y

It will be *vulnerable*... that doesn't mean that it *will* be tampered with, defrauded, etc.

... so long as the persistent store remains intact! What I was trying to avoid is the possibility of something "changing" (in the system as a whole) after initial deployment and having this "vulnerability" go undetected -- either because someone failed to perform an essential step while installing a new/replacement device (i.e., the "installer" forgot to preload some "secret" into the device, incorrectly loaded the secret, *or* loaded the *wrong* secret!) or, because something has "failed" but not yet been discovered as such (i.e., the persistent store was corrupted, the device self-determined this and reinitiated the sequence of self-loading the "secret").

That requires either the presence of a trusted key server (that can't be spoofed) or the prior distribution of the public key to the device. I.e., if a rogue image server can masquerade as the key server, then it can distribute a key that fits *its* bogus images...

See above (?)

That's the approach I have in place currently. It assures me that "bonafide" images are, indeed, transferred (I sign a hash), as well as "configuration parameters" for the device in question (which are prepared "on the fly" by the server and signed prior to delivery).

But, it requires fitting the key into the device before (or during) deployment (and, hoping that the key isn't corrupted later). The only "automated" way of verifying that all devices have been successfully deployed is to simultaneously inform some central part of the system of the identities of all of its "components".

You now have two "manual" steps in the process.

If you automate the process (i.e., have the insertion of the secret/key also inform the "central part of the system" of this fact so that the system now knows to look for -- or expect definite contact from -- that new device... and can alert you if this fails to happen!), then you run the risk of *both* steps (since they are one and the same) being omitted.

It looks like I have to come up with a scheme that allows the devices to be *different* (i.e., deliberately manufactured as such) and *inform* the system of their presence -- instead of the other way around. :<
Reply to
Don Y
[...]

I believe that there's no way to avoid this requirement.

Well, yes.

However, this design requires the devices to possess a secret key of their own, so that their ACKs cannot be spoofed by a rogue entity within the network.

Yes.

--
FSF associate member #7257  [np. Opferlied, Op 121b; Philharmonia Baroque]
Reply to
Ivan Shmakov

Actually, the problem(s) with my "let the device initialize itself" approach are:

1) it is vulnerable *at* that first initialization
2) it is vulnerable if it ever becomes corrupted

The second case can be trivially mitigated, to some extent. I.e., *if* you had this "automated process" to initialize and *record* each new device instance, you could tell if a device "disappeared". Well, a self-initializing device qualifies as that "automated process" (i.e., the first time the server *sees* it, the server can record its instance).

This then leaves the issue of what to *do* (in the device) when the device determines itself to be corrupted (e.g., checksum of its persistent store is "bad" meaning it must reload the secret).
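A minimal sketch of that self-check, assuming the "tens or hundreds of bytes" of persistent store are sealed with a CRC so corruption is detected before the stored secret is trusted. (A plain CRC only catches accidental corruption; a keyed MAC would also detect deliberate *tampering*, but the key for that would itself have to live somewhere.)

```python
import zlib

def seal(store: bytes) -> bytes:
    """Append a 4-byte CRC32 to the persistent-store contents."""
    return store + zlib.crc32(store).to_bytes(4, "big")

def unseal(sealed: bytes):
    """Return the store contents, or None if the CRC doesn't match."""
    store, crc = sealed[:-4], int.from_bytes(sealed[-4:], "big")
    if zlib.crc32(store) != crc:
        return None     # corrupted: lock up, or re-self-initialize
    return store
```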

If the device tries to re-self-initialize, then it is, again, vulnerable -- as vulnerable as it was on its first initialization.

*But*, if it is hijacked (by a rogue server) in the process, the legitimate server will eventually notice its absence! If it initializes correctly, the server can notice that it has re-initialized... yet, managed NOT to be hijacked (the server can track devices that repeatedly need initialization and determine that they must have "problems" as this should be a rare event)

Another option (on detecting a corrupted persistent store) is to simply lock up -- "fail". The server will notice that the device has "disappeared" and can report the failure.

Both approaches are detectable. But, they differ in whether or not the device *can* be hijacked "after initial initialization". The first of these scenarios allows a device to be hijacked and, presumably, "do stuff" until someone comes along and deliberately "fixes" it (push the magic RESET button; replace it entirely; etc). The second prevents the hijacking but renders the device inoperable -- until someone comes along and deliberately "fixes" it.

If a hijacked device can "do harm" (e.g., if it has field connections to mechanical actuators that it can independently operate), then the first option is A Bad Idea.

OTOH, if a hijacked device merely represents a loss of capability (e.g., another *input* point for the system), then allowing it to be hijacked may not have significant consequences (assuming the hijacked data is not sensitive and that an attack on the system itself can't be facilitated from this point).

This leaves the first of my enumerated problems -- the device is vulnerable at that FIRST (self-) initialization.

But, as I mentioned previously, this would be true of any device prior to its initial "personalization" (mating to system). A "virgin" device intercepted prior to deployment could be subverted just as easily.

If your "personalization" process requires some special hardware (e.g., access to a JTAG port on a PCB), then a deployed device is difficult to re-personalize in the field (e.g., if it becomes corrupted). OTOH, a self-initializing device *could* naturally be re-self-initialized (at least in terms of the available hardware for doing so). I.e., this is an argument against some special initialization/personalization interface.

Finally, you can avoid the initial vulnerability simply by running a "secured" network drop to the "initialization station". The initialization (personalization) process would simply require the staff member to connect the device to the (tamper proof) network drop and power it on. The device performs its own self-initialization -- loading the required secret, etc. -- and can then be powered off and deployed.

[if the entire network is secure -- something I claim NOT to be the case -- then simply deploying the device gives you what you want]

Hmmm... I'm going to look at this again and see what options there are for telling the "installer" whether or not the device has been "personalized" (prior to deployment) and what mechanisms can be used to allow a corrupted, "locked up" device to be unlocked and re-self-initialized after deployment. Maybe it *isn't* as bad as I thought? :-/ [though the RNG issue seems to have been a red herring?]

Thanks!

--don

Reply to
Don Y

You should try asking on sci.crypt

Reply to
Noob

Or read about the many (often successful) attempts to root/jailbreak/open/break-into game consoles, cell phones, e-readers, very-big-and-secretive government agencies, banks etc., and reach the conclusion that it may not be possible with the limited resources guys like us have.

-- Roberto Waltman

[ Please reply to the group, return address is invalid ]
Reply to
Roberto Waltman
