"Random" number generation (reprise)

That sort of attack would be equivalent to an attack on a "deployed" device, in my scenario:

  1. *Deployed* devices are secure (not easily tampered with without being observed) though the infrastructure is vulnerable and should be seen as providing *no* security.

I.e., once a device "has its secret" (see conversation elsewhere), tampering with the device isn't easy to do -- *unless* you do it over the network, etc. (eavesdropping on "secure" communications, etc.).

The hacks you describe are like letting someone into the building "after hours" and giving them free rein over the devices... (almost impossible to protect against -- even with deep pockets!)

Reply to
Don Y

Manipulating the 'host' clock does not 'guarantee' a break-in since the local clock is part of the calculation.

Perhaps, if the local routine is more robust. Say, after performing the calculation, it rejects any '0' and it rejects the value stored from last time, then performs a few no-ops, then retries the subtraction. And, perhaps, use the result as a pointer to a preloaded local database of long 'keys' if you feel the need to further make it 'guess resistant'. Not sure that's really helpful though. This should make it very difficult for an attacker to force a re-boot and 'guess' at a result. It does of course require a battery-backed clock. (wouldn't have to be the RTC, could be a purpose-built 'noise/drift-prone' circuit)
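
Something like this rough sketch is what I have in mind -- untested, and read_local_clock(), read_host_clock(), last_seed and key_table[] are just placeholders for whatever clock sources and storage the real design uses:

    /* Rough sketch of the "robust" seed routine described above.
     * Assumptions: read_local_clock()/read_host_clock() return the two clock
     * values, last_seed lives in battery-backed/persistent RAM, and
     * key_table[] is the preloaded local database of long 'keys'. */
    #include <stdint.h>

    #define KEY_TABLE_SIZE 256
    extern const uint8_t key_table[KEY_TABLE_SIZE][16]; /* preloaded long keys */
    extern uint32_t read_local_clock(void);             /* battery-backed source */
    extern uint32_t read_host_clock(void);              /* value from the host */

    static uint32_t last_seed;                          /* value stored last time */

    const uint8_t *derive_key(void)
    {
        uint32_t seed;

        do {
            seed = read_local_clock() - read_host_clock();   /* the subtraction */
            if (seed == 0 || seed == last_seed) {
                /* reject 0 and the previous result, kill a little time, retry */
                for (volatile int i = 0; i < 16; i++)
                    ;                                        /* a few no-ops */
                continue;
            }
            break;
        } while (1);

        last_seed = seed;
        return key_table[seed % KEY_TABLE_SIZE];  /* seed as index into key table */
    }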

Reply to
1 Lucky Texan

Yes, but that can be predicted, to some degree (remember, the local device can't see changes/differences in its clock unless it has something to benchmark against -- some other reference).

You have to assume the attacker can cycle power at any time (e.g., several of the systems I am working on run off PoE so it is trivial for an attacker with access to the infrastructure to cycle power as well as controlling network traffic).

It's just A Bad Idea to have anything exposed to outside influences.

*Motivated* adversaries will find *some* way to exploit that. Witness how insecure so many systems *designed* to be secure actually are! :-/ Hacking phones, vending machines, gaming devices, etc.
Reply to
Don Y

Hi Don,

Coming very late to this discussion, but I've read through most of the thread. I think you should take a look at the Amoeba OS security model.

Amoeba's distributed trust system is too complex to describe here, but it is based on public-key plus (P)RNG. It's related to, but more secure and more scalable than, Kerberos (if you're familiar with that). The system uses randomly generated session keys and uses public key encryption to communicate security tokens -- which may include symmetric encryption keys. The whole system is probably overkill for your need, but I think the core is simple enough.

WRT hardware RNG, Linux has the ability to use an unconnected port. It just reads environmental noise and assembles a random set of bits. It's easy enough to put a small (unshielded) coil in the device connected to a 1-bit A/D converter. That will give you a source of truly random radio noise.
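
A minimal sketch of reading such a 1-bit source and whitening the output -- read_noise_bit() is hypothetical (whatever reads the comparator/converter), and the von Neumann pairing only removes bias, it doesn't create entropy that isn't there:

    #include <stdint.h>

    extern int read_noise_bit(void);  /* hypothetical: 1-bit sample of coil noise */

    /* Collect n debiased random bytes using von Neumann pairing:
     * 01 -> 0, 10 -> 1, 00/11 -> discard. */
    void gather_random(uint8_t *out, int n)
    {
        for (int i = 0; i < n; i++) {
            uint8_t byte = 0;
            for (int bits = 0; bits < 8; ) {
                int a = read_noise_bit();
                int b = read_noise_bit();
                if (a != b) {
                    byte = (uint8_t)((byte << 1) | (a & 1));
                    bits++;
                }
            }
            out[i] = byte;
        }
    }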

George

Reply to
George Neuner

Is this Tanenbaum's Amoeba (e.g., bullet server, etc.) or something newer (you tend to be more "current" on these things than I). Google turned up an "AmoebaOS"...

I'll dig through Tanenbaum's texts to see what turns up (the "AmoebaOS" web site seemed pretty uninformative).

I think my problem lies in wanting the devices to be "identical" at deployment. It seems like you really need a way to "marry" them to a particular system in order to avoid the possibility of spoofing (that vulnerability window mentioned elsewhere).

But you could (?) influence that with an RF source nearby. I was thinking of using something like bandwidth limited thermal noise, etc. I.e., something that an attacker can't influence (?).

[I've become super paranoid of just how clever attackers can be -- esp if you give them the sources, schematics, etc. :< ]
Reply to
Don Y

If a local RNG is still desired, perhaps a battery-backed, Faraday-shielded 'junction noise' device similar to:

formatting link

would work. Of course, any device should be tested with hack attempts before deployment, I suppose.

Other 'random' noise sources are listed here:

formatting link

Reply to
1 Lucky Texan
[?]

Well, should one consider that even insulin pumps are subject to cracker attacks these days?

[1]
formatting link
[Unfortunately, the article referenced above incorrectly calls such a wrongdoer a "hacker", which is contrary to The Jargon File's definition of the term [2].]
[2]
formatting link
--
FSF associate member #7257
Reply to
Ivan Shmakov

Yes, you could reduce the amount of entropy from this particular entropy source. But the attacker still can't reliably cause specific values to be obtained, especially if there are other entropy sources (which there should be). The OpenBSD source code has some useful comments on RNG:

formatting link

Sure, if the stakes are high enough...

--
Made with Opera's revolutionary e-mail program:  
http://www.opera.com/mail/
(Remove the obvious prefix to reply.)
Reply to
Boudewijn Dijkstra

I don't think it has to be battery-backed. If it is *truly* random, then you can have your code sample it for a "fixed" interval (knowledge of which doesn't help the attacker) and still be guaranteed an unguessable result (assuming the interval is long enough).

formatting link

Thanks, Google had already pointed me there. :>

Reply to
Don Y

I am no longer surprised by the vulnerabilities inherent in much of the shi^H^Htuff we take for granted.

Part of the problem is naiveté on the part of developers. As if no one would have the *knowledge* (nor motivation!) to "hack" one of *their* devices: "What could the attacker *gain*?" And, I think laziness also comes into play ("Why bother with all that extra thinking/work... who would ever want to hack one of *these* things?").

Also, people seldom anticipate how things will be used/connected in the future. E.g., I've designed all of my home automation to rely on *wired* connections -- to eliminate the potential for an attacker leisurely and unobtrusively accessing my network via a wireless link (e.g., "Let's set his thermostat to 98 degrees!" or "Let's open his garage door.")

But, at some time in the future, I may end up intentionally having "outside access" to this stuff (behind a firewall, etc.). So, the artificial barrier that wired vs. wireless affords me will silently disappear (and, force me to be *really* careful about how impervious the firewall is... an ONGOING security effort)

I think paranoia is a Good Thing. Imagine your life is at stake and someone REALLY SMART who HATES YOU is thinking about tampering with it. How comfortable will you be sleeping *tonight*? Tomorrow night??

Reply to
Don Y

Yes it is Tanenbaum's system. I have a bunch of papers written by the developers. Unfortunately they aren't in electronic form ... I'll see if I can find them online.

I haven't seen his last couple of books so I don't know to what extent they deal with it. Most of the treatment I have seen has been in papers or in case studies in surveys.

You might want to look at Sape Mullender's "Distributed Systems" which has an excellent discussion of authentication, secure and fault tolerant communication under various kinds of network failures and in the presence of deliberate interference.

[I have 2nd ed. ISBN 0-201-62427-3 ... haven't heard of a newer one. There is a bit of coverage of Amoeba in this book but there is not a lot of detail on Amoeba's security model. There is, however, a lot of good information.]

The problem is that it is not really possible to be "identical" for purposes of remote configuration unless the nodes also will be identical in operation. At minimum they must have different network addresses in order to communicate (unless all communication is by broadcast). The unique characteristic may be pseudo-randomly generated, but it is necessary.

Not necessarily. You'll get better results with an unshielded radio sink, but you could just as easily place unterminated floating sinks inside your shielded casing and read their state periodically ... The results would depend only on noise generated within your device.

George

Reply to
George Neuner

Depending on the design, you might be able to render it *moot* (recall, the stipulation in this thread was that this is an open system... the attacker is given a head start in identifying your potential weaknesses -- since closing the system just delays the inevitable)

formatting link

Yes, but you can't count on those sorts of "devices" being present in a particular deployment instance. If, OTOH, you have a true source of randomness that can't be influenced or observed externally, you wouldn't *need* anything else! And, as long as you can generate enough random data to exceed each of your (individual) needs, anything more is wasted (i.e., if you need 100 bits and can generate 100 truly random bits in 1 minute, then -- as long as you run the RNG for a full minute before each "need" -- you *only* need to run it while generating that data.)

[E.g., I once mused about making a device that (literally) flipped a coin to generate random bits -- very slow! You wouldn't want it running 24/7 since it would be noisy, waste lots of power, etc. OTOH, if all you needed was 1000 bits per IPL, then you could flip 1000 coins, store the results and shut down the RNG until *after* the next boot]

But "stakes" to one person are not the same as "stakes" to another person. And, when one of those people is the "attacker", you are vulnerable to whatever his/her idea of those stakes may be!

E.g., someone out for financial gain is motivated differently than someone who is spiteful -- or, someone who is "just up for a new challenge".

I.e., there is little financial gain available from setting someone's household thermostat to 98 degrees (unless you happen to be the entity supplying the heating fuel to that household!). OTOH, someone intending harm or simply "challenged" to see *if* he could hack said system would see this as a more attractive target.

And, the *losses* can be disproportionate to the corresponding *gains*. E.g., setting that thermostat can have a real financial cost to the household (paying for the fuel, potential damages, etc.) while resulting in no *financial* gain to the attacker.

Many years ago, the vehicle I was driving was broken into while I was at work. The vandal took my (winter) jacket. Big deal. I, OTOH, lost that jacket, had to drive home with that window open (since it had no glass :> ) on a winter evening *and* spend the better part of the next day researching how to get the window replaced -- as well as actually *getting* it replaced.

[At which point, I learned that the price you are quoted for a replacement window varies based on whether or not you have "glass coverage". If you *do* -- as I did -- the price goes *up* :< So, tell the shop that you *don't*!]

I wonder about the vandal's motivation prior to shattering my window. Was he looking for a winter coat? Was he sure *my* coat would fit him? Or, was he hoping the coat's pockets would be stuffed with $100 bills (since there was nothing else visible or present in the car)?

Or, was he just a "punk" -- doing this just to prove he *could* do it?
Reply to
Don Y

What I have of Amoeba is probably 10+ (?) years old. But, it's still on one of the servers scheduled for retirement and I haven't yet moved those files over to a "cooler" machine (that server doubles as a space heater in winter months :-/ ).

He has a brief summary in _Distributed Operating Systems_ in which he contrasts its features/design against Mach and Chorus. I'll have to review it. From what I recall, Amoeba dealt with security in userland instead of in the kernel. E.g., capabilities were no different from data (i.e., could be forged given infinite time).

I'll make a note of it. I'll also look to the Amoeba docs that I have to see what they address.

This differs from my memory of Amoeba's use of cryptography. IIRC, it was all DES-based (?)

Yes. But, if all devices are identical (with only MAC and/or serial number to differentiate them) at IPL, there is no way to ensure they are talking to a "legitimate" server -- since they have no a priori information about that server!

I.e., if the server dispenses signed images, then the keys to verify their signatures have to be available to the devices (clients) in an unforgeable way (e.g., distributed before hand or available from an un-spoofable key server).

If you can ensure *one* (initial) secure transaction for each device, then you could use that transaction to distribute such a key and rely on it, thereafter.

Reply to
Don Y

Any system can be broken given infinite resources ... that is not a valid reason to avoid something. It is prudent to avoid simple systems that can be broken easily, but Amoeba's (and Kerberos's) capabilities are not easy to circumvent (Kerberos was quite solid, it just didn't scale well).

I had forgotten that Mullender actually is one of Amoeba's contributors. "Distributed Systems" is a survey of methods for fault tolerance, authentication and secure transmissions.

The capability check field is a one-way function. Originally it used a DES encryption, but it has since evolved to be a cryptographic signature. PKE was introduced in "wide-area" Amoeba (v3 IIRC).

A proper authentication protocol can validate both sides. Amoeba's capabilities include handshaking authentication as part of their operation.

George

Reply to
George Neuner

But, Amoeba's DES based capabilities used in a system designed to run 24/7 means that an adversary *has* those "infinite resources". I.e., you would have to deliberately add an extra layer to the capability model to allow capabilities (tokens) to be "refreshed" (altered) periodically so that an attacker couldn't just grind away on *the* capability for (e.g.) "root access".

Since these were "just numbers", an attacker could bring any amount of external resources to bear on the problem. E.g., passing the value off to a NoW designed to just crack keys and then reimporting the result for its nefarious purpose(s).

[I think DES now falls in O(week) to concerted, low budget attacks. I.e., use in a process control system that runs "3 shifts" would be easy to compromise if the capabilities aren't intentionally refreshed]
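
By way of illustration (and not how Amoeba actually does it), here's the sort of "refresh" I mean -- the check field is bound to a rotating server secret, so any given token is only worth grinding on for an epoch or two. mix64() is just a placeholder for a real keyed one-way function (an HMAC, say):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t object;   /* which object */
        uint32_t rights;   /* permitted operations */
        uint32_t epoch;    /* key generation this token was minted under */
        uint64_t check;    /* one-way check field */
    } cap_t;

    static uint64_t secret[2];      /* current and previous epoch keys */
    static uint32_t current_epoch;

    static uint64_t mix64(uint64_t key, uint32_t object, uint32_t rights)
    {
        /* placeholder one-way mix -- NOT cryptographically strong */
        uint64_t h = key ^ 0xcbf29ce484222325ULL;
        h = (h ^ object) * 0x100000001b3ULL;
        h = (h ^ rights) * 0x100000001b3ULL;
        return h ^ (h >> 29);
    }

    cap_t mint_cap(uint32_t object, uint32_t rights)
    {
        cap_t c = { object, rights, current_epoch,
                    mix64(secret[0], object, rights) };
        return c;
    }

    /* Called periodically; period << expected time to brute-force a token. */
    void rotate_secret(uint64_t new_key)
    {
        secret[1] = secret[0];
        secret[0] = new_key;
        current_epoch++;
    }

    /* Accept only current/previous epoch; reissue under the current one. */
    bool present_cap(cap_t *c)
    {
        if (c->epoch > current_epoch || c->epoch + 1 < current_epoch)
            return false;                       /* stale -- grind away! */
        if (mix64(secret[current_epoch - c->epoch], c->object, c->rights)
                != c->check)
            return false;
        *c = mint_cap(c->object, c->rights);    /* refresh for next use */
        return true;
    }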

OK. I should actually dig through my "boxed library" to see if I don't already have a copy stashed away :-/

Ah, OK. I looked at Amoeba more than 10 years ago and have been unaware of its continued evolution. The DES tie was a deal-breaker for me (I think user-land capabilities are still a big vulnerability if not "refreshed" in some way -- i.e., with a refresh period much shorter than the time needed to crack them).

All of the protocols that I've examined require *some* bit of trust, somewhere. Either in the distribution of a secret key or availability of an unspoofable trusted authority.

Take N identical devices out of their factory-fresh cartons... how do they know *who* they can trust to provide their software images? What makes the legitimate image server "legitimate" while all others, aren't? I.e., what makes customer A's image server legitimate (from the standpoint of one of those N devices) as well as customer B's image server (*also* legitimate -- in customer B's eyes!) but NOT adversary X's spoofing attempts for devices intended for A or B?

(i.e., the devices have no way of differentiating between A, B and X)

Reply to
Don Y

As I said previously, the capability design depended on a one-way function - which originally was based on a non-reversible DES key generator algorithm, *not* the reversible DES encryption algorithm.

Obviously, if you know the one-way function, given enough compute power you can try every possible combination of arguments ... but the latest version of Amoeba has made that impractical for anything but a cloud by going to much larger check fields and using known strong crypto-signing functions.

The Triple-DES variant is still sanctioned for use on non-critical data - though the recommendation is to move to AES.

Kernel mode capabilities really are no safer ... they simply have another indirection in the way. The key to capabilities is to make decrypting them hard enough that most people won't try.

And as I said previously, the fact that someone could possibly bring a cloud to bear on your system is not a reason to fear using a well known and proven method. The cloud now and from here forward always will be a threat to ANY CONCEIVABLE system.

The only encryption method that even theoretically is unbreakable is the "one-time" cipher ... and in practice it relies on a secure out-of-band method for delivering the random cipher component from sender to receiver. For example, recordings of radio static used to be used for (one step of) enciphering secure radio messages. The static recordings had to be hand-delivered by courier and only then could be used to decipher received messages.

One method that doesn't require either relies only on random numbers and one-way functions (which are assumed to be publicly known).

Suppose the client and server each implement some number of one-way functions: f1 .. fn. The client connects, picks a random function and a random argument and challenges the server to return the answer. The server does the same to the client in its reply. These back and forth challenges/revalidations can be repeated in every message if desired.

See our 2010 discussion regarding "watermarking" for ideas on obfuscating and varying how the functions are applied. The relevant portion starts here:

formatting link

Using the indirection "string" technique, during handshaking each side can transmit an indirection matrix so that the other side can apply the correct function given a randomly chosen index. This kind of scheme can be extended arbitrarily and changed randomly and would be extremely difficult to figure out.
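
A toy sketch of that exchange -- f0..f3 are trivial stand-ins for whatever "well known" one-way functions get chosen, rand() stands in for the real RNG, and the permutation array plays the role of the indirection string:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdlib.h>

    typedef uint32_t (*oneway_fn)(uint32_t);

    /* Toy one-way functions -- placeholders only. */
    static uint32_t f0(uint32_t x) { return (x * 2654435761u) ^ (x >> 13); }
    static uint32_t f1(uint32_t x) { x ^= x << 7;  return x * 0x9e3779b1u; }
    static uint32_t f2(uint32_t x) { x += 0x7f4a7c15u; return x ^ (x >> 16); }
    static uint32_t f3(uint32_t x) { return (x << 11) ^ (x >> 21) ^ 0xdeadbeefu; }

    static const oneway_fn fns[4] = { f0, f1, f2, f3 };

    struct challenge { uint8_t index; uint32_t arg; };

    /* Challenger: pick a random function index and argument
     * (rand() should really be seeded from the hardware RNG). */
    struct challenge make_challenge(void)
    {
        struct challenge c = { (uint8_t)(rand() & 3), (uint32_t)rand() };
        return c;
    }

    /* Responder: 'perm' is the indirection table exchanged at handshake,
     * mapping the wire index onto the locally agreed function. */
    uint32_t answer_challenge(struct challenge c, const uint8_t perm[4])
    {
        return fns[perm[c.index]](c.arg);
    }

    /* Challenger verifies using its copy of the same permutation. */
    bool verify(struct challenge c, uint32_t reply, const uint8_t perm[4])
    {
        return reply == fns[perm[c.index]](c.arg);
    }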

George

Reply to
George Neuner

[I've elided this as I have to dig out my Amoeba documents to be able to comment more accurately on its implementation...]

I don't see that.

There are two cases to consider: self-contained and distributed systems (note that self-contained may still be multiprocessors but the communication paths aren't open/vulnerable as in a distributed system).

In a self-contained system, even if there are communication paths "to the outside" (network connection, etc.), kernel mode capabilities have no semantic value outside of the kernel and the "tasks" that own/support them. The capability is tied to the resource and its owner. The kernel knows who owns (created) the capability. And, who (task) the capability is being passed on to. References to that particular capability have no meaning out of those contexts.

E.g., if I create a memory object and assign a capability to its access, only "me", the kernel and those tasks that I pass the capability *to* have any understanding of that capability. And, thus, the memory object that it controls.

So, if the capability I create has the "handle" 0x1234, then this only makes sense to *me* (and the kernel) as meaning "the memory object __________". If I pass that capability on to *you* (via the kernel), *you* will be given a handle for that object (possibly 0x5432).

But, the capability really only has meaning in a context. I.e., {me, 0x1234} and {you, 0x5432} reference the same object. But, there is no binding for {someoneelse, ?????} that will reference that object -- because the kernel hasn't been told (by me, the owner of the capability) to create a binding *for* "someoneelse".

As long as the kernel is secure and there are no faults in the protection mechanisms, "someoneelse" simply can not access that memory object (at least, not through this mechanism!).

Since the capability has no "name" or "userland representation", "you" can freely tell "someoneelse" that *your* handle for the capability is 0x5432 -- but, that won't help that someoneelse gain access to the object because there is no mapping in the kernel *for* "someoneelse" to that object. It's like telling someoneelse that the FILE* for your stdout is 0x12345678 and wondering why he is unable to write on your output device...

I.e., in the self-contained application domain, external processing power doesn't gain you anything. Nor does *internal* processing power! So, someoneelse could try using every possible value for a "handle" and the kernel still won't grant access!
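
To make that concrete, a minimal sketch of the kernel-side bookkeeping I'm describing (the names and handle values are made up; the point is only that a handle resolves solely in the context of the task it was issued to):

    #include <stdint.h>
    #include <stddef.h>

    #define MAX_BINDINGS 64

    struct capability { uint32_t object; uint32_t rights; };

    struct binding {
        int      task;            /* which task owns this handle   */
        uint32_t handle;          /* the value that task was given */
        struct capability *cap;   /* the kernel-side capability    */
    };

    static struct binding table[MAX_BINDINGS];

    /* A handle only resolves for the task it was issued to. */
    struct capability *resolve(int task, uint32_t handle)
    {
        for (size_t i = 0; i < MAX_BINDINGS; i++)
            if (table[i].cap && table[i].task == task && table[i].handle == handle)
                return table[i].cap;
        return NULL;  /* {someoneelse, anything} -> nothing, no matter the value */
    }

    /* Passing a capability: the *owner* asks the kernel to create a binding
     * for the recipient, who gets its own, unrelated handle value. */
    uint32_t grant(int owner, uint32_t owner_handle, int recipient)
    {
        struct capability *cap = resolve(owner, owner_handle);
        if (!cap)
            return 0;
        for (size_t i = 0; i < MAX_BINDINGS; i++) {
            if (!table[i].cap) {
                table[i].task   = recipient;
                table[i].handle = (uint32_t)(0x5000 + i); /* arbitrary new handle */
                table[i].cap    = cap;
                return table[i].handle;
            }
        }
        return 0;
    }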

In the distributed case, the problem comes when the capability has to be passed to a task on another node -- over a potentially insecure medium.

Here, the kernel on node1 needs a secure means of transferring information to the kernel on node2. I.e., any task on node1 is bound by the preceding argument. And, any task on node2 *will* be bound by that same argument.

If the kernels can intercommunicate securely, then each can act as an agent for the other (i.e., kernel2 acts as an agent for kernel1 for all tasks on node2 wishing to access capabilities defined on node1 -- and vice versa).

Note that the fact that this is an active protocol (not just passing and storing a "number" for future use) means the security layered on the communication can be evaluated immediately -- it doesn't need to be "persistent" (whereas cryptographic capabilities have to remain valid "forever" since you never know when the client will choose to *use* a capability it has been given).

So, the communication channel between the two kernels can have time-sensitive parameters (i.e., keys that change often) since the keys need only be valid long enough for messages to pass along the wire and be decrypted at the other end (at which point, the receiving kernel notes the "real" capability in its internal maps).

You can do the same sort of thing with cryptographically strong capabilities *if* you impose some mechanism for "refreshing" capabilities (effectively changing the keys over time). But, since the capability is exposed to the user-land task directly, this requires the cooperation of that task (can't be done transparently to it).

See above. If you can shorten the time between key updates to something less than that required to break a particular key, then the external resources become meaningless.

[Of course, those resources can get faster... which means you have to reduce your key update period -- *or* deliberately interject a delay in communications beyond the system boundaries (that delay being longer than the time it takes to change keys)]

A one time pad has the drawback of being of fixed size. Sooner or later, you run out of pad! (remember, we're running "continuously")

I don't see how this gains anything (?)

I have a big box of these identical devices delivered to me from ACME Electronics. I take half of them and put them in a box for customer A and the other half in a box for customer B. (I'm a "System Integrator") I put the image server for customer A (an HVAC control system) in the first box and the image server for customer B (a robotic assembly line) in the second box.

I deliver and install these systems and all is well -- the devices for customer A are happily opening and closing ducts, powering fans, compressors, etc. while those for customer B are stamping out sheets of aluminum and heliarc'ing them together to build the "spaceship of the future".

Now, imagine that just before I shipped these boxes, I arbitrarily pulled 5 of the devices from the first box and swapped them with 5 from the second box. Since they are identical, the results will be unchanged!

Likewise, imagine I swapped the "image servers" just prior to shipment. Ditto.

This is A Good Thing -- it is a design goal! :>

But, try putting the image server for customer B in the same box *with* the image server for customer A (shipping both to customer A). Some of the devices will power up and think they are working in an HVAC system while others will think they are building spaceships! It will just depend on which image server happened to answer their initial request!

(Right? Because we already noticed that each of the devices was equally capable of becoming an HVAC component *or* a robotic controller based solely on where it was ultimately installed!)

Think about a rogue image server at customer A's site. It knows everything about these devices (see my original post). So, it can spoof *any* protocol that you see fit to implement. Because it could, theoretically, be the legitimate image server for yet another customer -- "C"!

Regardless of which "random" challenges a device makes of the *rogue* image server, that image server knows what the *right* answer is!

The problem lies in my desire to keep the devices identical. This is in direct conflict with the need to *mate* them to a specific, legitimate image server.

As I had discussed with Ivan, you need a way of telling the devices who their "mate" will be. This can either be a special initialization procedure/mechanism/device/fixture or some inherent aspect of the normal runtime protocol. Once *one* "secure" communication can take place between device and server, a secret can be exchanged that will *not* be known to any other potential image server.

[see my other comments regarding how I intend to do this]

formatting link

Understood. But, that only applies after a device has begun a "conversation" with a server. It doesn't guarantee that the server it is communicating with is the *legitimate* server!

My initial impression (when formulating the question) was that there was some *trick* that I wasn't aware of that would let me create information (ownership) from nothingness. These crypto protocols are clever things so I imagined I was just not seeing that "trick". Unfortunately, my initial conditions ruled out some of the prerequisites that those tricks rely upon :-/

Reply to
Don Y

If capabilities are purely local and the system relies on passing some form of abstract reference, then it has no more functionality than any other client-server system. The whole point of capabilities is to allow for establishment of transitive trust chains, for example, permitting the owner of an object (or a trusted agent) to delegate to workers that are only partly trusted. To do this, agents must be able to create subset capabilities independently of the object owner (though not necessarily locally ... creating a subset capability can be a function of the server that manages the object).

Your particular system may not need transitive trust, but by building the system on capabilities you could easily add it if necessary in the future.

The simple fact of having to pass them around is why kernel capabilities are no safer than user space capabilities ... which also could be renewed with any desired frequency. Since a valid capability has to be presented every time the object is accessed (to verify that the access is permitted), you can make them session specific by retiring the presented capability and issuing a new equivalent one every time the object is accessed. Of course, this means the managing server would have to track capabilities by session instead of by object.
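
A minimal sketch of that retire-and-reissue bookkeeping, tracked by session as noted (fresh_token() is a placeholder for the server's RNG/one-way machinery):

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_SESSIONS 32

    extern uint64_t fresh_token(void);   /* placeholder: unpredictable value */

    struct session {
        bool     live;
        uint64_t token;   /* the one currently valid capability for this session */
        uint32_t rights;
    };

    static struct session sessions[MAX_SESSIONS];

    /* Returns the replacement token on success, 0 on rejection. */
    uint64_t access_object(int sess, uint64_t presented, uint32_t op)
    {
        struct session *s = &sessions[sess];

        if (!s->live || presented != s->token || (op & ~s->rights))
            return 0;                 /* stale token or operation not allowed */

        s->token = fresh_token();     /* retire the old token immediately */
        /* ... perform the requested operation on the object here ... */
        return s->token;              /* client must present this next time */
    }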

If an attacker has the ability to both intercept communications and decode/encode capabilities then it doesn't matter where they originated or how often they are changed. This is no different from an attacker intercepting, e.g., NFS traffic and discovering that "0x5432" is some client's session handle for an important file.

Not at all. The functions that encrypt and check/decrypt capabilities can be in the kernel. In userland the capabilities need only ever exist as blocks of encrypted data.

Any state required by a userland server process to manage its local objects is internal, protected by VMM. It need never be transmitted anywhere and, if necessary, can be encrypted for persistent storage.

Not necessarily. Each node can generate one or more continuous streams of random noise and transmit them to nodes it needs to securely communicate with. By using a causal delivery protocol and varying delay, the noise streams can be used as one time pads for other communication. Combined with a conventional secure channel, e.g., SSL, this type of scheme has the additional feature that all secure packets appear to have a double encrypted payload when in fact 1/2 of the secure packets transmitted from any node really are meaningless garbage that will simply waste the resources of an attacker trying to decipher them.
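
Roughly what I mean, as a sketch -- the framing, the decoy packets and the causal-delivery protocol are all omitted; this only shows pad bytes being consumed exactly once:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define PAD_RING 4096

    static uint8_t pad[PAD_RING];
    static size_t pad_head, pad_tail;    /* ring of not-yet-used pad bytes */

    /* Called as noise-stream packets arrive from the peer. */
    void pad_feed(const uint8_t *noise, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            size_t next = (pad_head + 1) % PAD_RING;
            if (next == pad_tail)
                break;                   /* ring full -- drop the extra noise */
            pad[pad_head] = noise[i];
            pad_head = next;
        }
    }

    /* XOR the payload with fresh pad bytes; each byte is used once and
     * discarded.  Fails if the pad has run dry. */
    bool pad_apply(uint8_t *buf, size_t n)
    {
        size_t avail = (pad_head + PAD_RING - pad_tail) % PAD_RING;
        if (avail < n)
            return false;
        for (size_t i = 0; i < n; i++) {
            buf[i] ^= pad[pad_tail];
            pad_tail = (pad_tail + 1) % PAD_RING;
        }
        return true;
    }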

It requires neither a trusted authority nor a secret key ... it relies only on the knowledge that a legitimate node's program will understand how to respond to the challenge. The challenge itself may be made arbitrarily complex by invoking multiple algorithms and arguments.

The devices can have identical hardware and software, but they can't have identical configuration data.

However, per the watermarking discussion, the "customizing" required to match a node with its server could be just a small watermark string that could be keyed into the node and stored in internal flash memory. Likewise all server system software could be identical but the server intended for a particular set of devices would need to be configured with the matching watermark string.

In a LAN setting it is acceptable for client nodes to find servers dynamically by broadcast or multicast - finding them on a WAN is an entirely different matter. The client would include its watermark in its connection request and only a server with a matching watermark would answer.
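
As a sketch of the server-side filter (the struct layout and lengths are just assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    #define WM_LEN 16

    struct connect_req {
        uint8_t  watermark[WM_LEN];   /* keyed into the node at install time */
        uint32_t node_id;
    };

    static uint8_t server_watermark[WM_LEN];  /* this server's configured mark */

    /* Compare without early exit so timing doesn't hint how much matched. */
    static bool wm_equal(const uint8_t *a, const uint8_t *b)
    {
        uint8_t diff = 0;
        for (int i = 0; i < WM_LEN; i++)
            diff |= (uint8_t)(a[i] ^ b[i]);
        return diff == 0;
    }

    /* Server side: silently ignore requests from nodes that aren't "ours". */
    bool should_answer(const struct connect_req *req)
    {
        return wm_equal(req->watermark, server_watermark);
    }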

Are you dead set against any required setup? A watermark string is about as minimal as you can make it.

You can't ever prevent deliberate spoofing, but you can detect it using cryptographic signatures.

The real questions are 1) how did it get your server system software, and 2) assuming watermarks encode both usage and client (which I don't think is too hard to arrange), how did it get the proper watermark to spoof somebody's particular set of devices?

There is no way to prevent abuse of something that was legally purchased other than by contractual agreement (and I don't mean shrink wrap licenses) or by whatever protection is afforded by existing IP law in the governing jurisdiction. The best you can do is make it hard to figure out how.

formatting link

Sorry. You can't get something for nothing. There are dynamic protocols available that can be used to minimize the amount of unique configuration required, but there always has to be some configuration.

George

Reply to
George Neuner
[Amoeba, userland cryptographically encoded capabilities]

Correct. The only difference is how that (trust) information is stored and passed.

In Amoeba's (at least, the "old" version I saw) approach, the information resides in the "token" that is passed and the "trust" comes from the cryptographic "shell" wrapped around it.

What I'm saying is the trust is embodied in the relationship between kernel and "task(s)" (whether they are clients or servers) and the information resides in the *server* (for that particular capability).

E.g., a client connects to a server (because, somehow, it is allowed to do so) and creates an object, access to which it wants constrained. The client asks for a capability to be created. Then, indicates what "permissions" are to be associated with that capability. The server (for that capability) tracks the permissions and binds them to the capability -- which "resides" in the kernel but, for which, the server has a "handle".

Now, the client passes that capability (set of permissions) off to another task by telling the server (for that capability) to pass the capability to "somebody".

"Somebody" receives a handle to that capability (which still resides in the kernel). When somebody goes to access/modify the object "guarded" by that capability, it presents the capability to the server for that object. The server knows what permissions are associated with that particular capability and, if the operations requested by "somebody" fall within its scope, the operations are allowed.

In my approach, this is a *requirement* (because there is no information stored in the "token").

It's a "win" because it lets arbitrarily complex "capabilities" (permitted actions) be created. It's a *lose* because it involves the kernel in each of those actions *and* requires the server for each such object to get involved in the "authentication" process.

Both of these are bad because they shift the cost of the transaction away from the task making the "request". I.e., the task doesn't "pay" (out of its time/space budget) for this "work" done on its behalf. In theory, a rogue task could pepper the servers with bogus requests passing invalid credentials along with each and waste lots of resources for which *it* isn't "charged" (I prefer mechanisms where your foolishness or malevolence costs *you*, not "others")

If you implement the equivalent of an "open()" (i.e., "begin_session") for each accessed object, then the capability and its verification are only needed at that time. The "connection" thus created can track the "permissions" associated with that capability.
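
I.e., something along these lines -- validate_cap() stands in for whatever capability check the server actually performs; the capability is only ever presented at "open" time:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct connection {
        uint32_t object;
        uint32_t rights;   /* snapshot of what the capability permitted */
    };

    extern bool validate_cap(uint64_t cap, uint32_t object, uint32_t *rights_out);

    /* begin_session(): the only point where the capability is verified. */
    struct connection *begin_session(uint64_t cap, uint32_t object)
    {
        uint32_t rights;
        if (!validate_cap(cap, object, &rights))
            return NULL;
        struct connection *c = malloc(sizeof *c);
        if (c) {
            c->object = object;
            c->rights = rights;
        }
        return c;
    }

    /* Later operations consult the connection's cached permissions only. */
    bool can_do(const struct connection *c, uint32_t op)
    {
        return c && (op & ~c->rights) == 0;
    }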

[in the devices I make, things tend to get "wired together" and stay that way for long periods of time -- e.g., boot-to-boot -- so the important thing is ensuring the *right* things get wired together and nothing else interferes after the fact]

That, IMO, is where the kernel mode is a win -- because the interface between task (client or server) and kernel is "secure" (unless you deliberately interject an agent -- in which case, you have implicitly said "this agent acts AS IF it was me!")

Yes, but that "how to respond to the challenge" amounts to a secret! I.e., if the devices are identical, then they all know the same things!

Correct. I'm saying the "configuration data" amounts to a "secret". Then, the issue is how to get that secret to the "right devices".

Yes, see above. :>

Here's a scenario (that really isn't far-fetched -- but, doesn't disclose any application specific details :> ).

Or, better yet, let me use my audio/video clients as context (since I don't have to worry about what I disclose regarding their design).

Say you buy 30 of these things and have them installed in your home/office/etc. It's an electrician or a cable puller or some other guy who actually does the work (because most folks don't know which end of a screwdriver to hold). Conceptually, it's an easy job -- attach network cable (punchdown blocks) and "set in place". But, there may be a fair bit of prep work involved (cutting holes in ceiling for speaker placements, installing jbox's in walls, stringing wire to each of these places, etc.).

You go to turn the system on and discover the installer forgot to "configure" the devices prior to installing them (in the ceiling, walls, etc.)!

If the act of configuration requires physical access to the devices (e.g., to push the "configure me now" button or access to a JTAG connector), then you have to drag the devices out of their hiding places in order to perform this configuration step.

I.e., failing to configure before deployment can be costly (imagine "you" are a corporation that has hired electricians to do this work on an hourly basis :-/ )

So, you really would like to be able to configure using the existing communications mechanism.

But, how do you ensure that some *rogue* element doesn't spoof the configuration process (because all this stuff is openly known!) and hijack the devices? In effect, saying "you belong to me" and "here is the code I want you to hereafter execute"...

(i.e., how do you *reset* these devices once hijacked? do you have to remove them from their installations and "press the reset button"?)

The conclusion from earlier discussions (with Ivan) was to *have* this "configuration process" but have it as a normal part of the operation of the devices. I.e., their FIRST BOOT causes them to gather this "secret" (configuration data) and stash it safely in their persistent stores.

[i.e., absence of a "secret" in that store tells the device that it needs to be configured. Presence of a secret effectively prohibits "adversaries" from corrupting the device's operation.]
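
As a sketch of that first-boot rule -- nv_read()/nv_write() and the raw-secret comparison are stand-ins for the real persistent store and whatever key-based handshake is actually used:

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define SECRET_LEN 32

    extern bool nv_read(uint8_t secret[SECRET_LEN]);  /* false if store is empty */
    extern bool nv_write(const uint8_t secret[SECRET_LEN]);

    enum boot_result { BOOT_CONFIGURED, BOOT_OK, BOOT_REFUSE };

    enum boot_result first_contact(const uint8_t offered[SECRET_LEN])
    {
        uint8_t stored[SECRET_LEN];

        if (!nv_read(stored)) {
            /* Virgin device: this is the vulnerability window -- whoever
             * answers first becomes "our" server. */
            if (!nv_write(offered))
                return BOOT_REFUSE;      /* store broken -- stop, don't limp */
            return BOOT_CONFIGURED;
        }

        if (memcmp(stored, offered, SECRET_LEN) == 0)
            return BOOT_OK;              /* talking to our legitimate server */

        /* Mismatch (or corruption): lock up and demand human attention
         * rather than quietly re-configuring. */
        return BOOT_REFUSE;
    }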

Once a device has connected to an image server, that server can remember having seen it. If a device later *fails* for some reason (e.g., the persistent store becomes corrupted), the server will notice its absence.

I contend that the "safe" option in the case of corruption is to have the device lock up and refuse to re-self-configure. "Something is broken. Figure out *why* this happened instead of letting the problem potentially repeat itself" (things that stop working tend to get attention; things that keep limping along tend to get *ignored*)

With these rules, someone deploying these devices in a KNOWN SECURE installation (i.e., *my* house!) can simply install the virgin devices, power up the system, wait for everything to self-configure and -- voila!

OTOH, someone in a KNOWN unSECURE installation can run a dedicated, *armored* network cable to a locked room with CCTV monitoring, etc. (how paranoid *are* you? :> ) and plug the device in for its "initial power-up" in that location -- where it is known that the communication link can't be tampered with. After that first connection, the device will have its "configuration secret" and can then be deployed "pre-configured" (and, thus, invulnerable).

[you're still screwed if someone installs a virgin device *and* there is an adversary "on the wire"]

But that's contrary to my initial stipulation -- that the devices' designs are "open".

formatting link

Correct. So, the issue became: How do I get the configuration data into the device while acknowledging: the failings of human beings (installers who forget a step), the failures of electronics (persistent stores being corrupted) and the presence of adversaries (with unexpected motivation).

I *think* the above approach covers most of the practical issues. The worst possible case is the *addition* of a device after initial deployment (*replacement* of a device is something that the server can watch for since it "saw" the old device; but, a "new" device is not something it is aware of a priori). I don't see any way of protecting against these threats/failures/attacks in that case *other* than relying on "good procedures" :-/
Reply to
Don Y

Understood.

You've introduced an interesting twist ... but it has some issues I don't think you've considered.

Your take on kernel capabilities unquestionably would work among processes within a single host (although there is the issue of persistent storage of capabilities), but it creates logistical problems in a multiple host environment. By binding and confining capabilities to the server host, you introduce the need for session aware proxies on both sides - i.e. a remote client needs a local server proxy and likewise a remote server needs a local client proxy.

Are there to be proxies for all possibly needed services included within the kernel? That means a new kernel version and update of all nodes every time a new userland service is introduced ... for any node. [I realize that for your (current) purpose the nodes are intended to run identical peer software and so any update likely would be confined to the DHC boot image ... unless there are to be external services apart from DHC that you have not mentioned. However, in a more complex heterogeneous node system the logistics of such a network-wide update requirement could be quite problematic.]

Then you want a server which holds the handle to a local kernel capability to send that handle (or equivalent) to client B on behalf of client A?

- how does the server locate B? name service? if A provides B's address (or whatever), why should the server trust that A can vouch for B? you're worried about security ... what if A or B is a rogue client?
- what if the service is new or updated and B's host hasn't yet been updated? B may have no proxy for the service.
- is the handle communicated to B or the kernel of B's host? if the kernel, how does it tell B it has a new resource?

These are off the top of my head ... were I to think about for a while I'd probably have more - and harder - questions.

In contrast, using encrypted capabilities, A just sends the capability directly to B and B presents it to the server for object access (presuming methods by which A can find B and B can find the server). Logistically much simpler and no proxies needed, but the tradeoff is that you do have to rely on encryption to protect your access controls.

Agreed. I think responsibility for capability generation lies with the object's managing server.

Yes. However by using a stateful protocol you lose some flexibility and increase the server's footprint. A server (or proxy) for a stateless protocol can be smaller and simpler. You can improve performance by keeping state, i.e. trending toward a virtual session, but because the stateless protocol doesn't require it, state kept solely for performance enhancement is dispensable if memory is tight.

There's a statistical technique called "zero knowledge proof" which uses a series of challenges to converge on whether or not some initial proposition is true - in this case "the server is authentic". The scheme using one-way functions and random numbers is a *very* simplistic version of ZKP.

I'm not an expert in ZKP, but I don't think it can be implemented without some use of secret algorithm or data. Your aversion to any secrets and desire for zero configuration makes virtually every known authentication technique unusable.

RFID. NFC. Still might be a problem in a really tough location.

Makes sense.

1) Much of this thread has been about authenticating the image server.
2) Unless the device remains connected or connects at regular intervals, the server can't know it is offline.
3) If the server crashes and the connection logs are lost, how does it know the devices connecting to it are legitimate?

Agreed. Hopefully corruption can be detected.

This violates your "no setup" steps condition.

Yup.

I think we've reached the end ... unless you care to continue with some interesting tidbit. There is quite a lot that can be done to simplify end use, but few things can be made as simple as a doorknob. [Yeah, I know many people can't install a doorknob, but most can figure out how to use one without instructions.]

George

Reply to
George Neuner
