"Random" number generation (reprise)

No. The fact that the kernels on the N processors can securely (?) talk with each other [this is the biggest problem, IMO] means "capabilities" can be passed from taskXy (task X on cpu y) to taskAb (my previous point).

The kernel can likewise participate in the regular IPC (in this case, RPC!) between those tasks. I.e., the "communication channel" associated with a particular object's server on cpu N can be accessed *via* the kernel on cpu M.

I.e., the kernels have to conspire to implement a "virtual kernel" that spans the many processors.

This puts a lot of work into the kernel. But, since it is just virtualizing a communication channel(s), the kernel can pass untyped data without having to understand what the object/service in question might be.

Again, this sucks (using the criteria I mentioned before) because "someone else" pays for the work done on behalf of a task instead of the task itself. E.g., a malicious task (or a misbehaving task!) wastes resources that "morally" belong to all of the tasks who are "playing by the rules".

Argh! Sorry, I've been discussing capabilities in the general sense. I don't use anything this complex in, for example, the audio clients; they are tiny and "not complex" (i.e., they "fit in one human brain").

I name every system object -- services, etc. (I posted a question about this some time ago).

In a multiprocessor, this means you need some way of globally naming things. This can get expensive if you have lots of objects being created/deleted. And, some objects really only need local awareness (e.g., IPC channels that never leave the node). So, there is value to adding a local name service *agent* that can maintain a local namespace *and* interface to the global namespace as needed (on behalf of local clients).

86 the proxies. As long as the client(s) on B can locate the service (on A?), the kernel can move the required data between them.

A capability can have lots of "names" (handles). E.g., task1A might be the server for an object. It's handle for a capability might be 0x1234. When it passes that capability to task2A, it might be known as 0x5432 to task2A. Meanwhile, kernelA might know it as 0xA5A5.

When the capability is passed to task6B, the kernelB reference might be known as 0xB7B7 while task6B knows it as 0x8642.
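The per-task handle idea can be sketched as one translation table per task (and per kernel), each mapping a task-local handle to the same underlying capability object. A minimal sketch -- the class names and the base values (which just mimic the arbitrary handle numbers above) are hypothetical:

```python
# Sketch: each task (and each kernel) keeps its own handle table that
# maps a task-local handle onto a shared capability object.
class Capability:
    def __init__(self, description):
        self.description = description

class HandleTable:
    def __init__(self, base):
        self._table = {}
        self._next = base   # arbitrary per-task starting handle

    def install(self, cap):
        """Install a capability and return a task-local handle for it."""
        handle = self._next
        self._next += 1
        self._table[handle] = cap
        return handle

    def lookup(self, handle):
        return self._table[handle]

motor = Capability("lets me turn on the motor")

task1A  = HandleTable(0x1234)
task2A  = HandleTable(0x5432)
kernelA = HandleTable(0xA5A5)

h1 = task1A.install(motor)    # known as 0x1234 inside task1A
h2 = task2A.install(motor)    # known as 0x5432 inside task2A
hk = kernelA.install(motor)   # the kernel's own reference

# Three different handles, one underlying object:
assert len({h1, h2, hk}) == 3
assert task1A.lookup(h1) is task2A.lookup(h2) is kernelA.lookup(hk)
```

This is exactly the file-descriptor pattern described next: the handle is meaningless outside the table that issued it.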

Think of how two processes reference a single file in a filesystem. Each has a separate handle to it yet it is still the same "physical" (inasmuch as any file is "physical") file.

If that file is served via NFS, then the NFS server has a handle to it -- which differs from either of the other two processes, above. And, when the NFS *client* (other host) accesses the file, it has still another handle!

Dealing with capabilities is one place where I have to work really hard NOT to think in terms of actual "values" but, rather, think of them in an abstract sense: "This lets me turn on the motor", "This lets me erase the hard disk", etc.

The (big) win with that approach is that the capabilities can be "examined" without the active involvement of the server. I.e., the server can publish (to something in the kernel?) the "right" capabilit(ies) for a particular RPC/IPC and the grunt work of decrypting and checking against the "right" template can be offloaded to some common library function.

E.g., you can embed a bitmap in the capability where each bit corresponds with a particular action and let the library do the test for you.
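The bitmap test is trivial to centralize in a library. A minimal sketch (the right names and bit assignments are made up for illustration):

```python
# Sketch: rights encoded as a bitmap inside the capability, tested by
# a common library routine -- no server involvement needed per check.
RIGHT_READ  = 1 << 0
RIGHT_WRITE = 1 << 1
RIGHT_MOTOR = 1 << 2   # "this lets me turn on the motor"
RIGHT_ERASE = 1 << 3   # "this lets me erase the hard disk"

def has_rights(cap_bits, required):
    """True iff every required bit is present in the capability."""
    return (cap_bits & required) == required

cap = RIGHT_READ | RIGHT_MOTOR

assert has_rights(cap, RIGHT_MOTOR)                 # allowed
assert not has_rights(cap, RIGHT_ERASE)             # denied
assert not has_rights(cap, RIGHT_READ | RIGHT_WRITE)  # partial match fails
```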

It would be nice (see above) if the "validation" could be removed from the "managing server". Think about it... it's a mundane task that *should* be offloadable (but I can't see how to do it in "my" approach :-/ )

Agreed! There's no free lunch. I try to fast-path operations *within* a session to hide the costs of processing the capabilities. But, this makes the connection "fatter".

For the audio clients, I have to implement mechanisms to prevent rogue clients/servers from corrupting the data passed to these devices. But, I can't rely on simple connection-oriented protocols (because they can be spoofed/intercepted).

So, I'll have to do something like implementing an encrypted tunnel. Or, maybe just "sign" each packet and have the devices fail-secure if they detect attempts at passing corrupted data to them.

[this is a crappy solution because it means any attack denies service. OTOH, with the resources available in these boxes, I don't have many options... :< How do you protect the devices in your PC on the memory bus from a rogue bus master??]
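The per-packet "signing" idea can be sketched with an HMAC over each payload, sketched here with Python's stdlib (the shared secret and payload format are hypothetical; the key distribution problem discussed later in the thread remains, and a real design would also add a sequence number to block replays):

```python
import hmac
import hashlib

SECRET = b"per-device shared secret"   # hypothetical; must be provisioned!
TAG_LEN = 32                           # SHA-256 digest size

def sign_packet(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to the payload."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return payload + tag

def verify_packet(packet: bytes) -> bytes:
    """Return the payload, or raise if the tag doesn't check out."""
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # fail-secure: any corruption/tampering denies service
        raise ValueError("bad signature")
    return payload

pkt = sign_packet(b"set motor speed = 100")
assert verify_packet(pkt) == b"set motor speed = 100"
```

Note this illustrates exactly the trade-off described above: a flipped bit anywhere in the packet (accidental or malicious) shuts the device down.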

Yes. That was my unfortunate conclusion :>

Yes. You can come up with solutions. My point was that these are realistic problems that can turn up and have to be addressed. If you just take a naive approach ("Let's make a 'configurator' that we can sell to customers so *they* can configure their devices prior to deployment") you can overlook bigger usage problems.

E.g., many years ago, I worked on a (physical) access control system. The system was *really* clever! Its simplicity was pure elegance!

But, the implementation of the "key maker" had serious flaws. Within minutes of my first exposure to the system, I demonstrated how a (malicious) user could make an infinite number of "grand master keys" completely undetected -- simply by unplugging one cable in the process! Ooops!

"But you're not supposed to *do* that!"

"Then why did your system *let* me??" (and, why didn't it notice that I had done this?!)

I haven't tried protecting against rogue *clients*. I.e., if you want to buy some of these boxes and attach them to the system... I'm assuming trust flows one way (i.e., I don't "trust" anything coming from the devices).

I'm more concerned with, for example, the devices being wired to the field and some rogue application runs on them and does bad things (runs a motorized mechanism "into the stops"; sets the temperature of a tempering oven too high/low; discards every product coming off the assembly line; etc.)

*If* you can get a secret into each legitimate device, then they can routinely communicate with "something" -- as part of the normal application.
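That routine communication can be as simple as challenge-response: the server issues a nonce, and only a device holding the provisioned secret can answer it. A minimal sketch, assuming an HMAC construction (the secret and function names are illustrative, not the scheme actually deployed):

```python
import hmac
import hashlib
import os

SECRET = b"provisioned in The Back Office"   # hypothetical; see text

def make_challenge() -> bytes:
    """Server side: a fresh random nonce per exchange."""
    return os.urandom(16)

def device_response(secret: bytes, challenge: bytes) -> bytes:
    """Device side: prove possession of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = make_challenge()
assert server_verify(c, device_response(SECRET, c))        # legit device
assert not server_verify(c, device_response(b"rogue", c))  # no secret, no dice
```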

With the self-configure scheme I outlined, if you set up that KNOWN SECURE connection in The Back Office *and* only served up an image that stored the secret in the devices' persistent store from that connection (e.g., a different network I/F), then you could be sure only *your* devices got *your* secret.

So, if a rogue device was deployed on the "public" interface for the server, the image that gets served up by the *legitimate* server would not include the code fragment that sets the secret. As such, the device would never be able to *run*.

Correct. I am convinced there is no way to avoid some mechanism to put the secret into the device -- at least initially.

Toilets come in a close second! :>

Ditto for toilets! ;-)

Reply to
Don Y

You can use a source of partially random data (from voltage or timing of external events) and the Advanced Multilevel Strategy to distill a more pure random bit sequence. One article I read several years ago used the phase noise in PLL tracking of a PC clock.

See

formatting link
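One classic distillation step (not necessarily the one in the linked article) is von Neumann debiasing: consume the raw bits in pairs, emit one bit for 01/10, and discard 00/11. It removes bias -- though not correlation -- from an independent-but-biased source:

```python
def von_neumann(bits):
    """Debias a biased (but independent) bit stream.

    Pairs of raw bits map as: (0,1) -> 0, (1,0) -> 1,
    (0,0) and (1,1) are discarded.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily 1-biased input still yields output pairs with equal
# probability -- at the cost of throwing most of the input away:
raw = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0]
assert von_neumann(raw) == [1, 0, 1]
```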

--
Thad
Reply to
Thad Smith

Since the only "known I/O" available to all such devices is the network interface -- and, since that can be "influenced" by an attacker -- I was looking to add some sort of hardware to fill the "entropy pool" :>

Oooo! That might be worth looking into. I suspect it is also influenced by the "goings on" in the processor. But, can't imagine any way that an attacker could come up with any sort of controllable means of manipulating *that*!

Yes, I'm familiar with the technique (a good link though). I'm at the stage where the (uninfluenceable [sic]) entropy *source* is the issue. (though the original use for this seems unattainable)

Reply to
Don Y

Dozens of posts, yet nobody has mentioned using a TPM chip.

For a few dollars, you can get a true random number generator in a small surface mount package. The interface will be LPC or I2C. I don't think they're inherently cheap, but the huge volumes (most name-brand laptops have them) push the price down.

The TPM device also supports various crypto functions (key storage, signing, etc.).

BTW, the TPM spec allows the use of a PRNG rather than a TRNG, however all the ones I've looked at claim to have TRNG.

formatting link

Oh, and be prepared to sign an NDA just to get a datasheet.

Regards, Allan

Reply to
Allan Herriman

But, I can do those "in software".

It seems like you're buying far more than you (I) *need*. I.e., probably wonderful as a "bolt on" to something that wasn't explicitly designed to maintain an entropy pool.

A source of "randomness" and some code -- running on a processor that's already *present* in the system -- seems like it does all of that.

And, shipping products with it "uninitialized" (empty keyring) leaves me in the same boat as my original post.

Alternatively, relying on cryptosecrets (keys) hidden inside by the manufacturer violates my constraint that it be "open" (nothing hidden).

That, in itself, is a deal-breaker. I want my systems to be "publishable" (OSS). Having a chunk of code elided just to honor an NDA is contrary to that goal.

Reply to
Don Y

Why do you care? The huge volumes make it cheap. You don't have to use all the features. Rolling your own TRNG hardware may or may not be as cost effective but you won't know until you actually go through the design.

Who has validated this source of "randomness" ?

Anything you make or design yourself has to go through some very expensive qualification before you can use it for anything serious.

Of course, this may not matter to you depending on your target market.

I shouldn't need to tell you about the many security breaks in systems due to lack of entropy.

I'm not sure that you need to do that if you just want the random numbers.

The API is a freely available document and all the TPMs from the different manufacturers should support the same API. I believe that the register set (for the LPC versions) is also standardised. You shouldn't have any problems with FOSS or even schematics.

The datasheet will (hopefully) tell you what's inside the chip and how it generates the RNGs. You don't have to publish that in order to be able to write a FOSS driver.

Regards, Allan

Reply to
Allan Herriman

If I release an OS{S,H} design, will folks be able to purchase Q1 (or Q10) of them "locally" and *without* an NDA? TCO is a more complex number than "what it costs laptop manufacturers"...

You have the *code* perform tests while the entropy pool is being stirred. If/when the code determines sufficient entropy exists in the pool, then you extract some amount of it.

We're not talking about generating lots and lots of random numbers. Rather, "enough" to keep *a* link secure.
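The stir-then-gate idea can be sketched as a hash-based pool that tracks conservatively credited entropy and refuses extraction until enough has accumulated. A minimal sketch, assuming a SHA-256 "mixer" (the crediting here is a placeholder -- a real design would derive it from the source's measured statistics):

```python
import hashlib

class EntropyPool:
    """Stir raw samples into a hash state; only allow extraction
    once enough entropy has been (conservatively) credited."""

    def __init__(self):
        self._state = hashlib.sha256()
        self._credited_bits = 0

    def stir(self, sample: bytes, credited_bits: int):
        """Mix a raw sample in, crediting it with some entropy."""
        self._state.update(sample)
        self._credited_bits += credited_bits

    def extract(self, nbits: int = 128) -> bytes:
        if self._credited_bits < nbits:
            raise RuntimeError("insufficient entropy -- keep stirring")
        self._credited_bits -= nbits
        out = self._state.digest()
        self._state.update(out)   # ratchet the state forward
        return out[:nbits // 8]

pool = EntropyPool()
try:
    pool.extract()               # too early: refused
except RuntimeError:
    pass

for i in range(64):
    pool.stir(bytes([i]), credited_bits=2)   # credit 2 bits per sample
key = pool.extract(128)
assert len(key) == 16            # "enough" for one link key
```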

The point was trying to validate the netboot "securely". It seems the consensus is that this can *truly* only be done if you exchange secrets prior to deployment. Hence, the "value" of the TPM's ability to store those secrets "securely".

If there is nothing *in* the datasheet that is required to write a driver, then why bother signing an NDA?

(Conversely, if there *is* something in there that dictates the need for an NDA, then publishing the source for the driver is likely to disclose secrets covered by the NDA)

So, perhaps the *best* approach is a library that has the same (freely available) API wrapped around some "open" hardware... (?) I.e., you don't like my solution? Buy a TPM and replace the library with stubs.

Reply to
Don Y

We've been able to purchase prototype quantities as well as reels, but the average hobbyist wouldn't even get a reply from the manufacturers, and the only practical way to get small quantities for things like "kits" would be for a kit manufacturer to buy a reel and then on-sell them in ones and twos.

This problem isn't unique to TPMs though.

The pinout, the electrical parameters, etc.

I've not written a driver for one of these, so I don't know how true they are to the standard API.

This might be a problem for an open source hardware design, but I don't think it would be a problem for open source software such as a driver, due to the common API. (IIRC, there's a skeleton of a LPC TPM driver in the Linux kernel that you can inspect if you want.)

Looking at a schematic, I see that one particular TPM has power, ground, clock input, I2C lines and a reset. That's it. Publishing a schematic doesn't give anything away apart from the pinout, but I am not a lawyer so I can't say whether it would violate the NDA.

Regards, Allan

Reply to
Allan Herriman

Yeah, *I* don't want to be the one holding a reel of devices just to enable others to buy them "one off". :< I'd rather find parts that can be purchased "mainstream".

E.g., one of my PDAs uses some funky battery that isn't sold locally (I think I'd have to cross the pond for a supplier). While this might be acceptable for the manufacturer, as a *consumer*, I find it damned annoying!

Agreed.

So, I would have to rely on that information "becoming known through sources other than myself" before I could *myself* disclose it. :<

A schematic leaks *some* information (in addition to the pinout). E.g., (assuming the design represented by the schematic has no design inaccuracies) I can learn much about timing, voltage, etc. parameters that *should* work. Maybe not worst-case or typical *published* numbers, but I can look at published specs for the devices that talk to/control the device in question and characterize *their* behavior. So, I know that the device will function at that "operating point" -- regardless of any other likely details that I can't inspect from a datasheet.
Reply to
Don Y

Haven't seen you on sci.crypt. Don't feel like leaving your comfort zone? ;-)

This list might be of interest.

formatting link

Intel now provides a "Digital Random Number Generator" (code-name Bull Mountain) which /might/ trickle down to Atom platforms in the future.

formatting link

The technical document is at

formatting link

Regards.

Reply to
Noob

Sci.crypt seemed to be more "pie in the sky"/theoretical, political and/or naive (i.e., "If I multiply each byte by 0x17, will that give me a secure code?"). I also disliked the relatively high percentage of "anonymous" posters :-/

Since I am looking for something *practical*, I posted here where I expect people to have some familiarity with the sorts of issues involved. And, perhaps, alternatives to the scheme I was pursuing.

Yes, mainly links (many of which I had already stumbled across directly from Google)

formatting link

I'll keep it in mind -- if I ever opt to use an Atom! :>

I think my best bet is to hack together a cheap/crude noise source and just implement a daemon that routinely checks randomness and gives me a hint as to how much entropy exists in the pool at any given time (i.e., "when" it's appropriate to sample the elixir)

As long as I don't expect *much* randomness (low bit rate), I think this should be adequate.
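The "hint" such a daemon could give is a naive per-symbol Shannon estimate over recent samples. A small sketch (note this is an *upper bound* on true entropy -- fine as a hint, useless as a guarantee, since a PRNG would score perfectly):

```python
import math
from collections import Counter

def shannon_entropy_per_byte(samples: bytes) -> float:
    """Naive Shannon estimate, in bits per byte (0.0 .. 8.0).

    An upper bound on the source's real entropy: it only catches
    gross bias, not structure or predictability.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A stuck-at source scores 0 bits/byte; a well-spread one scores high:
assert shannon_entropy_per_byte(b"\x00" * 100) == 0.0
assert shannon_entropy_per_byte(bytes(range(256))) == 8.0
```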

(and the original problem has already been "solved", here)

Reply to
Don Y

Right.

Like Microsoft "solved" the problem for the Xbox.

formatting link

Like Sony "solved" the problem for the PS3.

formatting link

Reply to
Noob

formatting link

No. If you reread my original post, you will see these are entirely different "problems".

Reply to
Don Y

I actually wish semiconductor manufacturers would realise they could cut a lot of shipping costs and get better product acceptance by using reels, with caveats.

Here is an example: I was once working on a new device and, after dealings with the manufacturer, they shipped a second batch of samples to me. This was partly because buying through distribution in the UK meant an 8 week wait -- oh, and buying a tray of 490 parts.

Yes, that is right: 490 parts, for a 6 x 6 mm QFN, in a tray that holds 14 rows of 35 devices. Which actually would mean an outlay of 1400 GBP plus shipping and taxes for two prototypes!

Anyway, the manufacturer agreed to ship 6 samples, which were supplied as:

- 6 devices in one tray
- Packed into a tray box with fillers
- Packed with THREE empty tray boxes
- Into a four-tray box
- Packed into a shipping container

This was then air freighted from Taiwan to the UK! Air freight is a volumetric charge (weight and size).

Most manufacturers actually make devices in batches of wafers, normally the minimum is between 10 and 20 wafers, for this device probably 15k devices per wafer.

These devices get bonded out and package up usually into expensive trays.

Now if a manufacturer was to make an approx 3000-part batch into reels it would HELP THEIR OWN BOTTOM LINE. Obviously this becomes less possible for larger devices (84 PLCC or 100 TQFP upwards). When ICs are put on tape and reel it is mainly done from tray batches.

If at least one reel is HELD by the manufacturer, it can become:
a/ Sample stock, which can be shipped in small plastic/foam IC boxes (remember these), very cheap to ship.

b/ Small qty distribution stock (MOQ 50+ depending on value/size) Distributors can then easily ship small quantities onwards

This then makes parts easier for the likes of Digikey/Mouser/Farnell/RS to stock and ship.

This makes parts acceptance easier, which leads to further orders often for larger production or larger distribution quantities.

Don't even get me started on every model having a different battery (like LCDs), which means that in 6 months the batteries become 'unobtainium', so your product becomes landfill quickly.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
 Timing Diagram Font
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
 For those web sites you hate
Reply to
Paul
