Certificate compromises?

Sorry for the late reply, but how does a basic signature scheme not solve this problem? Either the firmware refuses any update that has a bad sig, or the non-replaceable boot code refuses to finish starting up if the image in flash doesn't have a valid sig.

Any modified binary will not have a correct signature, since a valid signature can only be produced by someone holding the signing half of the key pair. OTOH, if you can get an insider at the manufacturer to sign a modified binary for you, or you can hack into the manufacturer's systems and steal the signing key, you're off to the races.

For that not to work, you'd have to assume almost all current cryptography is horribly broken. Flawed implementations aside, of course.
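For concreteness, here's a minimal sketch of that verify-before-boot idea (my illustration, not anyone's actual boot ROM), using Ed25519 via Python's 'cryptography' package; read_flash_image(), read_flash_signature() and PUBLIC_KEY_BYTES are hypothetical stand-ins:

# Sketch only: the boot code holds just the *public* key; the signing key
# never leaves the vendor. Any modified byte in the image makes verify() fail.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def image_is_authentic(image: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    pub = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        pub.verify(signature, image)      # raises InvalidSignature on tampering
        return True
    except InvalidSignature:
        return False

# Hypothetical boot ROM logic: refuse to start if the flash image doesn't verify.
# if not image_is_authentic(read_flash_image(), read_flash_signature(), PUBLIC_KEY_BYTES):
#     halt()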

Reply to
Robert Wessel

Yes, but it deliberately targeted those devices! And, was deployed in a manner that would make it "likely" that it would/could find its way into them.

Imagine, instead, writing a compiler "kit" from which you *hope* other compilers will be derived (surreptitiously propagating your payload). Then, hoping one of those derivative works *eventually* gets used to compile some code that will run on one of those centrifuges -- and never be detected by any of the other applications for which it generates code.

When you know your target and have an explicit attack vector, it's a lot easier to craft an exploit. *And*, make it "undetectable" (or, difficult to detect).

E.g., in the early 80's, the video (arcade) game industry was at its peak. A common problem, there, was product lifetime -- you had lots of "90 day wonders" (games that no one wanted to play 90 days after their release). So, manufacturers were faced with a dilemma: invest in a game's development and *hope* you can saturate the market before the 90 days expired.

OTOH, you didn't want to find yourself with lots of "unsold stock" if even the nominal market didn't materialize!

A counterfeit market materialized whereby "offshore" manufacturers would BLATANTLY copy existing games -- investing practically no effort to copy the software and just change things like the name of the game, copyright notice, color schemes, etc. As such, within weeks of a game's release, they could have a clone on the market for considerably less money -- they didn't have any NRE to pay off!

A sh*tload of effort went into "protecting" these devices from the counterfeiters. Full custom chips, potting subassemblies, embedding metal fibers in those potted assemblies to hinder X-ray analysis, etc.

And, of course, software hacks to detect changes to the executable's image!

Initial approaches were go/nogo tests: a bad checksum would cause the game to refuse to run. (Sort of like how desktop software licensing works, today.)

Ah, but as anyone who's ever patched a ROM knows, it's trivial to find and fix a checksum so the modified image *appears* correct.
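To illustrate with a toy example (invented ROM contents and a simple 8-bit additive checksum; real games used fancier sums, but the principle is the same):

# After patching the ROM, just tweak one spare byte so the sum still matches.
def checksum8(rom: bytearray) -> int:
    return sum(rom) & 0xFF

rom = bytearray(b"ORIGINAL GAME CODE" + bytes(14))   # 14 spare/padding bytes
expected = checksum8(rom)                            # what the go/nogo test checks

rom[0:8] = b"CLONED!!"                               # the counterfeit patch
delta = (expected - checksum8(rom)) & 0xFF
rom[-1] = (rom[-1] + delta) & 0xFF                   # "fix up" a pad byte

assert checksum8(rom) == expected                    # go/nogo test still passes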

So, more surreptitious schemes crept in. Like expecting the state of the machine "after foo() has executed" to contain known artifacts as side-effects. (In one case, the "cursor" had to be left at a certain -- apparently arbitrary -- position after the splash screen was drawn.)

These were similarly discovered and worked-around (e.g., draw your *modified* splash screen and then explicitly force the cursor to the "required" position before "returning").
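A toy sketch of that cat-and-mouse, with made-up names and values:

# The hidden check: some later code verifies an "arbitrary" artifact that only
# the genuine splash-screen routine would leave behind as a side effect.
EXPECTED_CURSOR = (17, 3)

def genuine_splash(screen: dict) -> None:
    screen["cursor"] = EXPECTED_CURSOR       # side effect of the real drawing code

def cloned_splash(screen: dict) -> None:
    screen["cursor"] = (0, 0)                # counterfeiter's modified splash screen...
    screen["cursor"] = EXPECTED_CURSOR       # ...then force the "required" state anyway

def later_integrity_check(screen: dict) -> bool:
    return screen["cursor"] == EXPECTED_CURSOR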

The only *effective* way to work around the problem was to defer and disguise your knowledge of the "altered executable" in a way that your adversary would find difficult to isolate.

Yet, that would royally piss off the folks PLAYING those games and, thus, penalize the operators who *purchased* them!

So, you'd fire shrapnel into memory, "randomly". To the player, this might appear as his score suddenly becoming *negative*. Or, his "avatar" running left instead of right. Or, his "weapon" no longer firing. etc.

The point was, you couldn't predict how the game would misbehave from moment to moment. It just "looked broken" and "unplayable".
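Something like this, as a sketch (the probability, checksum, and state layout here are all invented):

import random

def image_tampered(rom: bytes, expected_sum: int) -> bool:
    return (sum(rom) & 0xFFFF) != expected_sum

def per_frame_shrapnel(game_state: dict, tampered: bool) -> None:
    # Fire rarely, and long after the check, so the cause is hard to isolate.
    if not tampered or random.random() > 0.001:
        return
    victim = random.choice(list(game_state))          # score, direction, weapon, ...
    game_state[victim] = random.randint(-32768, 32767)

# A cloned ROM passes the visible go/nogo tests, yet scores go negative,
# avatars run the wrong way, weapons stop firing... "randomly".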

But, you can only mount this sort of retaliation if you are keenly aware of what you have at your disposal. E.g., Stuxnet would be ineffective at crippling an air-defense system. Or, a financial institution. Or...

Reply to
Don Y

One helluva New Year's party, eh?? :>

The issue I'm questioning is how secure these things are IN PRACTICE.

We've all seen what the math *claims*. Yet, we also routinely hear of problems with technology that had previously been thought to be "secure" (MD5, anyone? Rainbow tables?). As computational facilities become cheaper, what was previously possible only at the resource level of "a small nation-state" suddenly becomes possible for "a large corporation". From there, it's only a matter of time before it's "any corporation" (or, a university whose resources are used "after hours", unbeknownst to the powers that be).

And, there are also *policies* that are required for the technology to work (Snowden, anyone?).

E.g., I suspect it is impossible for a single person to cause the launch of a missile (regardless of payload) without the complicity of others. Because even "special, physical, nonduplicable" keys aren't enough to prevent someone possessing one of them from *using* it.

Or, if someone stumbles on a flaw in the technology behind the key! (Again, MD5, SHA-1.) IIRC, there have been malware attacks that successfully impersonated legitimate signatures (MS?).

You're making the assumption that implementations are NOT flawed! :>

Or, assuming that anything that hasn't *yet* been determined to be flawed BY DESIGN doesn't count...

Reply to
Don Y

While MD5 collisions have been generated, no SHA-1 collisions have been (although there are more than enough worries to suggest using something stronger).

In practice, there's no evidence that anyone has actually managed to forge an SHA-1 based sig (assuming all the other stuff, like the public key encryption of the hash are done well).

In any event, MD5 has long been understood to have an impractically short output, resulting in only about 64 bits of resistance to a collision attack; SHA-1 is somewhat better (80 bits). What is puzzling is how long these have lasted in common use (like the similarly weak DES), despite everyone knowing their weaknesses and having better alternatives available.
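(Those figures are just the birthday bound: an ideal n-bit hash offers only about 2**(n/2) work against a collision attack. A quick illustration, my arithmetic:)

# Collision resistance of an ideal hash is roughly half its output size.
for name, n in (("MD5", 128), ("SHA-1", 160), ("SHA-256", 256)):
    print(f"{name}: {n}-bit digest -> ~2**{n // 2} hash ops to find a collision")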

The availability and cost of computing has only a limited impact here. AES-256 is beyond brute-forcing, no matter what. Arguably even AES-128* is. Unless some algorithmic break is discovered. And yes, that's always a possibility.

So sure, there's always a chance that things cryptographic will go pear shaped in a big way, but if that happens, no one is going to care about your little device, since most of modern commerce will have broken down.

*The Landauer limit implies that even *counting* through the 2**128 states (much less actually attempting to check the keys) would take some 10**18 Joules (about three days of the world's entire energy output, or about the output of a 200 megaton nuclear device**). And that's the theoretical limit -- real hardware operates ~100 million times less efficiently.

**IOW, even if someone did this, it *would* be noticed.
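(A rough check of that footnote's arithmetic; my assumptions: T = 300 K, and "world energy output" read as annual electricity generation of roughly 2.3e4 TWh:)

import math

k_B = 1.380649e-23                     # Boltzmann constant, J/K
e_bit = k_B * 300.0 * math.log(2)      # Landauer limit per bit operation, ~2.9e-21 J

total = (2**128) * e_bit               # just *counting* through the keyspace
print(f"{total:.2e} J")                # ~1e18 J, as the footnote says
print(f"~{total / 4.184e15:.0f} Mt of TNT equivalent")    # a couple hundred megatons
world_elec_per_day = 2.3e4 * 3.6e15 / 365                 # J/day, assumed figure
print(f"~{total / world_elec_per_day:.1f} days of world electricity output")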

Since we can't guarantee perfection, we should do what? Give up?

All reasonable threat models are probabilistic.

Reply to
Robert Wessel

And if my sums are right, just counting to 2**256 would require about one ten-trillionth of the entire mass/energy content of the universe.

Reply to
Robert Wessel

But how do you know that you, as "compiler ancestor", are going to *ever* be applied to generating code that will run on a Windows host? How do you know what compilers will be *compiled* using your compiler and then find their way to application on Windows binaries? That's where we started the "ancestry" argument -- from a reference to Kernighan's description of how to "infect" a compiler such that its infection is not noticeable until it happens to be tasked with compiling a certain application ("UNIX").

If that compiler is, in turn, used to create *another* compiler (say, one that is intended to run on Windows machines), then the "infestation" will never be fruitful (unless someone tries to compile "UNIX" on a Windows host with that compiler -- or any of the compilers having a similar ancestry).

I think you missed the point of Boudewijn's and my conversation.

It's *easy* to bug a compiler so that it corrupts a given target. Just like it's easy to write a virus that watches for "Password" prompts and captures keystrokes "up to the next newline".

I contend that it is *very difficult* to infect a compiler such that any compilers *created* using it (Kernighan's argument) -- and any created by *those* -- will propagate this attack scenario in a manner that is fruitful and not discoverable (i.e., the infestation is present in all of the compilers compiled *by* the original -- even if they don't execute or target the intended victim "system").

E.g., the original attack scenario was to write a compiler that recognizes when "login.c" is being compiled. Then, insert some additional code (a backdoor) so that the *executable* generated would allow the "saboteur" to gain access to any UNIX machine whose binaries (at least, "login") were compiled with that compiler.

*Then*, further modify the compiler so that when it is used to compile itself -- or, future enhancements of itself -- it *propagates* this "infestation" thereby ensuring the exploit remains even after the compiler has been "updated".
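As a toy sketch of those two stages (purely illustrative: the markers, payload, and "compiler" below are stand-ins, and the real trick lives in the generated *binary*, not in visible source):

LOGIN_MARKER = "verify_password("        # hypothetical way to "recognize" login.c
SELF_MARKER = "def compile_source("      # hypothetical way to recognize the compiler itself
BACKDOOR = 'if password == "magic": return True\n'

def compile_source(source: str) -> str:
    if LOGIN_MARKER in source:
        # stage 1: any login built with this compiler accepts the magic password
        source = source.replace(LOGIN_MARKER, BACKDOOR + LOGIN_MARKER, 1)
    elif SELF_MARKER in source:
        # stage 2: any compiler built with this compiler re-inserts both stages,
        # even after the visible source of the trick has been removed
        source = source.replace(SELF_MARKER, "# (trick re-inserted here)\n" + SELF_MARKER, 1)
    return source                        # ...then generate code as usual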

Boudewijn's comment, upthread, in response to Dimiter's assertion:

DP>> Well I am past that particular example, I have written the
DP>> compiler :) .

BD> Did you also write the compiler that compiled your compiler?

was (IMO) a reference to this attack scenario. The conversation since then has been an examination of how viable that would be without explicit knowledge of a particular compiler's pedigree.

E.g., imagine ALL compilers in existence today were derived from that compiler! Then, compiling "login.c" using *any* of them should result in the surreptitious insertion of code that implements the back door as designed. EVEN IF YOU ARE INTENDING TO RUN THE CODE ON A MICROWAVE OVEN (i.e., this "looks" like login.c -- based on whatever criteria the compiler used to "recognize" its attack opportunity -- yet the target obviously doesn't have a concept of a "user id", file system, fork(), etc.)

Now, generalize the approach. Imagine you wrote that original compiler. BY WHICH ALL OTHER COMPILERS WILL BE COMPILED! So, you can theoretically inject any code you want into any executable for any target and any application. With this ability, what could you do -- that wouldn't quickly bring attention to you as a possible source of the "problem" that manifests in any/all of these applications/binaries? How do you effectively target a system that you don't know anything about?? And, not get "caught" doing so?

Reply to
Don Y

Crap! I've been attributing this to Kernighan when, in fact, it was Thompson. Easy (?) mistake. Mea culpa. See Thompson's "Reflections on Trusting Trust" for a detailed description of the scenario we have been discussing.

As I said in an earlier reply, upthread: "You can't trust code that you did not totally create yourself."

Reply to
Don Y

That was true of MD5 until recently, as well. And there's a second attack vector.

How long before we see other hashes fall away?

So, to answer my original question: any "news"/"developments" regarding how robust "signed binaries" (et al.) have proven to be IN PRACTICE? I.e., any known exploits? Any "social engineering" exploits?

You're saying, despite the theoretical strength of these approaches, they are less than desirable IN PRACTICE. Just like telling folks to choose a strong password is, in theory, great -- but, IN PRACTICE, falls flat even when they are forced to adhere to strict guidelines regarding their choice.

Because folks don't adopt -- or implement correctly -- the strongest technology available. Or, a disgruntled employee (martyr?) can render useless all the protections afforded by technology...

I suspect folks would be more concerned with lack of water or electricity than whether or not their bank was "attacked". So, those "little devices" that control the chlorination of *my* water supply and monitor it for toxic hazards -- along with the devices that ensure power makes its way to my home -- are far more "real" than codes that let Impersonal Bank #1 transact with Impersonal Bank #2. You *know* that will get fixed -- because The Folks In Charge worry a LOT about *money*. And, even if it takes time, chances are, no money will "evaporate" in the process.

OTOH, in a matter of days (hours if on a national scale) without power or water for drinking/sanitation, where the money went will be the *last* thing on folks' mind -- even the bankers!

No. My question didn't ask if the approach was perfect. It pointedly asked for feedback on how well the approach works IN PRACTICE. I.e., so a metric can be applied to its efficacy -- instead of blindly assuming "all is well because there's lots of math behind this".

In a private dialog, I've been discussing the recent exploits at e.g., Target. And, pondering how it could have been avoided *regardless* of what the eventual vulnerability turns out to be.

But, doing so requires questioning ALL the assumptions in your security model -- not just relying on what you *hope* to be true (signed binaries GUARANTEE no foreign code can execute; employees will never compromise our security; folks with SECRET clearances will never smuggle documents out of the facility; etc.) Sooner or later, one (or more) of those assumptions will be proven false.

Exactly. And, you need to understand the likely attack vectors to be able to make a probabilistic assessment of how they apply to *you*! E.g., the spooksquad would have had far less trouble had Snowden not been allowed in (or out!) of the facility... despite the technological protections they had on their "data". OTOH, a 3 year old would be wise to ensure he doesn't leave his partially eaten burger on the sofa where his *dog* can get at it!

Which threat is more significant? :>

Off to meetings. Always distracting to have to attend to others' needs!

Reply to
Don Y

Hard to say, of course. But my point was that MD5, and to a lesser extent SHA-1, have very little margin of safety (the former being equivalent to about a 64-bit key, the latter an 80-bit one). All of the replacements will have considerably more margin. Still, even cracking MD5 is pretty impractical as a general sort of attack, although it certainly can be done.

It's not nearly as hopeless as trying to get people to use strong passwords.

I'm not really sure what your point is.

Done correctly, and it certainly appears that people *have* at times managed to do it correctly, a cryptographic signature will provide the protection you want. And then what's the "better" alternative? You could distribute your updates on masked ROMs? Making my own masked ROM would probably be easier than breaking even a marginally implemented signature scheme.

Which is an argument to trust the notion of digital signatures. The people with actual money do.

Well, fine, but I thought we were talking about a particular, and relatively focused, application. But still, you have to trust the vendor at some point. And frankly, it's almost certain that they're far more likely to deliver a badly buggy update with a good signature than that someone will forge the signature. Still, where is your evidence that this has frequently failed?

Reply to
Robert Wessel

You'd be wrong. A couple of years ago, some TV news magazine did a story on nuclear missile silos. One of the operators demonstrated how he could -- if he wanted -- circumvent the "always 2" operator safeguards and launch the missile by himself. What he showed them wasn't broadcast, but the report prompted an Air Force review of all personnel responsible for nuclear ordnance... and that review resulted in a large number of reassignments.

George

Reply to
George Neuner
