Certificate compromises?

And receive energy from same. Apparently there now is a research virus that can communicate across network air gaps using sound. Next will be (subliminally invisible) video transmissions from screen to web-cam.

It's been said (only semi-facetiously) that the only "safe" computer is one that is lying at the bottom of the ocean encased in depleted uranium.

George

Reply to
George Neuner

Yes, that is reasonable enough - and will mean the cost of a signature check is less significant.

Encryption is to keep the code secret; signing is to ensure that the code has not been changed or tampered with. The two concepts are orthogonal (although an appropriate encryption scheme can also be an effective signing scheme).
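To make the orthogonality concrete, something like this (a rough Python sketch using the third-party "cryptography" package; the image bytes and key handling are placeholders, not a recommendation):

# Illustrative sketch only -- signing and encryption are independent steps.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.fernet import Fernet

image = b"firmware image bytes"             # placeholder

# Signing: the image stays readable; the signature lets anyone holding the
# public key detect modification or substitution.
signer = Ed25519PrivateKey.generate()
signature = signer.sign(image)
public_key = signer.public_key()            # what a device would ship with

# Encryption: the image is hidden, but a shared key by itself proves nothing
# to a third party about who produced the bytes.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(image)
assert Fernet(key).decrypt(ciphertext) == image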

Absolutely true - it's all guesswork. But you can usually make some rough judgements.

Reply to
David Brown

But, it also means the object only has to "get by me" *once* to be considered "OK". This changes the nature of any "attack".

Yes. But encryption typically involves more resources -- even if only *delivered* in encrypted form (decrypted on installation instead of decrypting at each load time).

If someone wants to protect their object from prying eyes, then they can bear the cost of that "protection". I am only concerned with implementing the "admittance" function: you are/aren't allowed entry, here.
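A minimal sketch of that admittance check, under my own assumptions about the key handling (Python, Ed25519; nothing here describes any particular device):

# Sketch: admit an object only if it carries a valid signature from a
# trusted key.  Key management and object format are invented for illustration.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def admit(trusted_key, obj: bytes, sig: bytes) -> bool:
    """You are/aren't allowed entry: True only for an untampered, signed object."""
    try:
        trusted_key.verify(sig, obj)
        return True
    except InvalidSignature:
        return False

# Illustration: a legitimate object is admitted, a patched one is refused.
vendor = Ed25519PrivateKey.generate()       # held by whoever certifies objects
device_trusts = vendor.public_key()         # installed in the device
obj = b"certified binary"                   # placeholder
sig = vendor.sign(obj)

assert admit(device_trusts, obj, sig)
assert not admit(device_trusts, obj + b"!", sig)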

For some obvious things, I think that is true (e.g., electronic money). OTOH, I have been surprised at some of the "cracked" devices that I've encountered: "Who the heck would want to crack *that*??" (i.e., the "value" must have been one of ego gratification as the devices weren't particularly "valuable" to have access to at that level...)

So, do we agree that the real "vulnerabilities" in certificate systems are:

- social engineering at the CA (even if only a single authority)

- "probing" the target for hidden keys

I.e., the "no cost" option of examining signed objects IN THE ABSENCE OF THE TARGET to try to extract keys is completely impractical -- regardless of the number of such "examples" the attacker may have at their disposal? (e.g., imagine you could examine every signed object freely... you *still* can't gain any appreciable advantage)

Happy Holiday!

--don

Reply to
Don Y

It may be classic, but it could well have caught me by surprise; certainly not an obvious thing to think of. It does take some level of access to check the free space, which will be a limit on some occasions but not on others...

Dimiter

Reply to
dp

The output of the standard unix "df" command was the only "hook" necessary. It is difficult to imagine a user being prevented from executing "df"!
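For the record, the channel needs nothing more exotic than polling that number (a rough Python sketch; the 100 MB signalling file and 5 s bit period are invented):

# Sketch of the free-space covert channel: the sender toggles a large file,
# the receiver watches free space (what "df" reports).  Sizes/periods invented.
import os, shutil, time

SIGNAL_FILE = "/tmp/payload.bin"     # any path the sender may write to
CHUNK = b"\0" * (100 * 1024 * 1024)  # 100 MB: big enough to show up in free space
BIT_PERIOD = 5.0                     # seconds per bit

def send_bit(bit: int) -> None:
    if bit:
        with open(SIGNAL_FILE, "wb") as f:   # consume space to signal a 1
            f.write(CHUNK)
    elif os.path.exists(SIGNAL_FILE):
        os.remove(SIGNAL_FILE)               # release space to signal a 0
    time.sleep(BIT_PERIOD)

def receive_bit(baseline_free: int) -> int:
    # baseline_free is measured while the channel is known to be idle
    free = shutil.disk_usage("/tmp").free    # the receiver's "df"
    return 1 if baseline_free - free > 50 * 1024 * 1024 else 0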

Reply to
Tom Gardner

Assigning each user a quota far enough below the available disk space that their usage makes no noticeable difference in the free space the receiver sees would fix that.

But of course, that's far from the only possible covert channel. Timing attacks can be particularly difficult to stop. For example, the receiver could measure how busy the CPU is by timing the execution of a loop; the sender then just executes or sleeps for appropriate amounts of time. Or the receiver could time how long it takes to accomplish a set of actions on a disk drive - the sender either keeps the drive busy, or not, to send a bit.
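A crude sketch of the CPU-load variant (Python; the period and threshold are invented and would need calibration, and it assumes sender and receiver contend for the same core):

# Sketch of a CPU-load timing covert channel (numbers are illustrative only).
import time

BIT_PERIOD = 0.5    # seconds per transmitted bit
THRESHOLD = 0.05    # receiver's "busy vs. idle" cutoff, calibrated in practice

def send_bit(bit: int) -> None:
    end = time.monotonic() + BIT_PERIOD
    if bit:
        while time.monotonic() < end:   # burn CPU to signal a 1
            pass
    else:
        time.sleep(BIT_PERIOD)          # stay idle to signal a 0

def receive_bit() -> int:
    start = time.monotonic()
    for _ in range(200_000):            # fixed chunk of work
        pass
    return 1 if time.monotonic() - start > THRESHOLD else 0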

There are many other possibilities. And it's simple enough to use even a very noisy channel for communications.

There are real-life analogues as well. Reporters used to track night-time pizza deliveries at the Pentagon, as an indication of unusual activity, for example.

Reply to
Robert Wessel

AFAIK df doesn't reflect the users' quota, only the total disk space.

Just so.

That's why I referred to the modulation scheme; I had spread spectrum techniques in mind!

Reply to
Tom Gardner

At the end of the day, you simply can't guarantee devices that users have physical access to can remain secure. You can make it pretty darn hard, with signatures, tamper detection, etc., but not impossible. And, of course, the degree of hardness comes with a monetary* cost, which must be balanced against the business requirements (for example, Sony doesn't really care if a few thousand PS4s get rooted, so long as that remains an impractical option for the vast majority of their users).

Of course that's no more a qualitative problem than the possibility that a device might be physically altered by someone. For example, I assume there's some sort of amplifier in the output stages of an AED; modifying that to deliver ten times the desired amperage (or a tenth) would likely be very difficult to defend against.

So critical devices must also have *physical* security.

As to devices like game consoles - if you're willing to sacrifice one or two, you can pull the ROMs and dump them, then dump the hard disk, then start disassembling.

*And often other costs as well - tamper detection tends to decrease the reliability of systems via false alarms.
Reply to
Robert Wessel

To be clear... my question, here, is not about *protecting* a device from being rooted. If it's *your* device, you can drill holes in it, smash it to bits, etc.

Rather, what I am concerned with is ensuring that only "certified" binaries execute on the devices of *other* folks (those "playing by the rules") -- that those devices are, in effect, tamper proof. I.e., that a third party can't alter a legitimate binary and have that fact go undetected. Or generate an "independent" binary that these "other" folks would be able to (unsuspectingly) load.

To return to the AED example: if you want to modify your AED so that it plays MP3's instead of delivering shocks to the patient in V-fib, so be it. But, the code you develop to do that shouldn't be able to find its way into a "normal" AED (intentionally or accidentally).

Exactly.

My concern is what an attacker could learn that would enable him/her to defeat/bypass the tests put in place for other devices that have *not* been cannibalized and are not expecting to be anything other than an "AED 2000". So, someone who religiously follows the process for updating the firmware in their AED 2000 (e.g., a hospital technician) never worries that something other than the *intended* code is running on the hardware.

("WTF? Why is the defibrilator playing XMAS carols??")

Have a Happy Holiday!

--don

Reply to
Don Y

I'm pretty sure that's what df is supposed to do per the POSIX spec, but I'm not sure that all implementations ignore quotas. And some definitely do not simply reflect the available disk space. For example, on most FS's, HP-UX reserves some amount of the volume for root (20%, by default, IIRC), and df reports the percentage of the non-reserved space used by users. If the admin changes the reserved percentage to more than is actually free, df can report a negative amount of disk space free.

I've also seen some very odd things out of df and friends on virtualized volumes.

And to be honest, I wasn't really thinking specifically in terms of Unix and df, but more generally, and many systems only report the available quota for the user on a volume by default.

Reply to
Robert Wessel

On Fri, 20 Dec 2013 22:50:06 +0100, dp wrote:

Did you also write the compiler that compiled your compiler?

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

That was the nature of Thompson's defense against the obvious workaround.

However, given the number of different *independent* ancestries of the compilers in use, today, it would be relatively easy to work around his threat -- just compile your compiler with a variety of different compilers and examine the critical portions of the generated code (in Thompson's case, the loophole was explicit). Any that generate "suspicious" code can be considered "infected compilers" and removed from service.
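A rough sketch of how that cross-check might be automated (Python; compiler names, flags and file names are hypothetical -- this is essentially the "diverse double-compiling" check described by David A. Wheeler):

# Sketch: build the compiler-under-test with several independently descended
# compilers, then let each result rebuild the same source.  With a
# deterministic build and no Thompson-style trojan, the second-stage binaries
# should be bit-identical; an outlier is a compiler worth retiring.
import hashlib, subprocess

BOOTSTRAP_COMPILERS = ["gcc", "clang", "tcc"]   # assumed independent ancestries
COMPILER_SOURCE = "mycc.c"                      # source of the compiler under test

def second_stage_digest(cc: str) -> str:
    subprocess.run([cc, "-o", "stage1", COMPILER_SOURCE], check=True)
    subprocess.run(["./stage1", "-o", "stage2", COMPILER_SOURCE], check=True)
    with open("stage2", "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

digests = {cc: second_stage_digest(cc) for cc in BOOTSTRAP_COMPILERS}
if len(set(digests.values())) > 1:
    print("second-stage builds differ -- inspect the generated code:", digests)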

Reply to
Don Y

Yes. And the one before that. The first in the chain was the 6809 assembler which ran on MDOS09 machines. (I suppose I can still read binary 6809 code pretty well after all the decades :-) ).

Then they were not compilers but assemblers (68k, the cpu32 flavour). So there is no bit within my code not under my control.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
dp

On Wed, 08 Jan 2014 21:34:31 +0100, Don Y wrote:

Nowadays one would write a new compiler in C and use a cross-compiler to generate the first self-hosted version. I imagine that very early compilers were bootstrapped using an initial assembler version, but how do you know which ancestries are really independent?

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

As in Dimiter's case -- bootstrap the compiler yourself! Write a *crude* compiler (that generates really crappy code... or even an "interpreter", if you want). Once this is running, use it to compile an optimizing compiler (for which you can inspect the sources). Once *that* is running, use it to compile itself!
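As a sketch of that chain (hypothetical program names, driven from Python purely for illustration):

# Sketch of a self-bootstrap, with every stage traceable to code you wrote
# or can inspect.  Program names and flags are invented.
import filecmp, subprocess

# Stage 0: the crude, hand-written compiler (or interpreter) builds the
# optimizing compiler from source you can read.
subprocess.run(["./crudecc", "-o", "optcc.stage1", "optcc.c"], check=True)

# Stage 1: the optimizing compiler compiles itself.
subprocess.run(["./optcc.stage1", "-o", "optcc.stage2", "optcc.c"], check=True)

# Stage 2 (sanity check): the self-compiled compiler rebuilds itself again;
# a deterministic, honest toolchain reaches a fixed point here.
subprocess.run(["./optcc.stage2", "-o", "optcc.stage3", "optcc.c"], check=True)
assert filecmp.cmp("optcc.stage2", "optcc.stage3", shallow=False)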

In Thompson's case, you know where the offending code wants to be. In the more generic case, you can never be sure!

Reply to
Don Y

Let me rephrase my question: how do you know which compilers are self-bootstrapped and which compilers used an existing compiler?

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

You don't. Unless you have first-hand knowledge of the pedigree (i.e., wrote it yourself).

But, for a "generic" compiler (for any particular language) that can be applied to any application domain, its hard to conceive of an exploit that the compiler writer could embed in its code (open *or* closed) that could apply in all cases.

I.e., you could hope to target a Linux deployment environment and hope your compiler is eventually used to compile something that runs "with privilege". Or, a Windows environment, etc.

But, what are you going to "exploit" if I use your compiler to compile the code for a microwave oven controller? Or, a rice cooker?

As you (compiler saboteur) don't have any a priori knowledge of the semantics of any given application, you can't reliably figure out how to *use* the communication media, data, sensors and actuators at your disposal. Even if you had natural language capabilities and could parse the commentary associated with the code! :>

About the "best" you can do is cause the device to fail, deliberately. And, even doing this is problematic.

[Assume you also wrote the linkage editor/loader so you have global knowledge of the program's execution flow and can "arrange" for your injected code to be executed wherever you'd like -- vs. waiting for the application to stumble across it]

Any efforts that happen too early during execution are easily found (unless you also control the code executing in the debuggers and logic analyzers used by the development staff :> ). Failures that manifest later (long after POR) are a bit more problematic. But, one thing the arcade game heyday taught us was that any *hard* failure is easy to identify and work around. I.e., if there is a go/no-go point in the code, it is easy to locate and sort out why the "no-go" branch is being taken. And, "fix" this.

[Remember, all we would need to do here is become suspicious of the compiler -- enough to start examining the actual code it is generating (which, once generated, it can't later *hide*!). If we determine the compiler to be "buggy", we retire it.]

OTOH, if you don't "break" the code but, instead, cause it to "misbehave", slightly -- AND UNPREDICTABLY -- it becomes very difficult to locate the source of the problem. E.g., set (or clear) a "random" LSb in memory (which may have been set/clear already). Done infrequently enough, the "near failures" that it causes are hard to track down. I.e., sometimes, your microwave oven cooks for an extra second. Other times, the power level is incorrect. Still other times, it resets in the middle of the day while sitting "idle".

But, even doing this requires intimate *understanding* of the code that the developer has compiled. How do you ensure your exploit gets invoked "randomly" and *infrequently*? (if it happens 100 times in the first second after POR, you can bet your efforts will quickly be uncovered!)

So, the *elegant* solution is to have the compiler generate a VM and run the application *in* that VM. So that it (the VM that it builds) can do what it wants, *when* it wants, regardless of program flow in the application.

And, hope no one ever drags out a tool that lets them see what opcodes are *actually* being executed...

A similarly interesting challenge is embedding hidden algorithms "in plain sight". I.e., cases where folks have access to the sources yet might not be able to perceive some "hidden computation channel" that isn't formally documented in the sources.

Reply to
Don Y

So, assuming that you didn't stand at the cradle of many of today's compilers, how can you claim to have any idea about "the number of different *independent* ancestries of the compilers in use, today"? Note that I'm not trying to discredit you, just curious what knowledge led you to make the claim.

Why go for "all cases" when you can focus your energy on something worthwhile, like uranium centrifuges?

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

Because I know many people who have *written* compilers :>

Also, "older" compilers tended to be written in more of an ad hoc manner -- instead of splitting out the code generator in the back end so a single "compiler" could target many different processors)

How do you recognize the application from an examination of the sources that you are compiling? Do you expect the developer to put a comment to the effect of:

// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// >>>>>>>> APPLICATION: U centrifuge control

Reply to
Don Y

Presumably, as a Compiler Saboteur, I have some goals other than causing random crashes of microwave ovens. Goals much more likely to be achieved by inserting code in a Windows or Linux device driver. And detecting when one of those is being compiled is going to be pretty trivial (for example, Windows DDs will pretty much *all* include ntddk.h, which almost *no* applications will). Disguising the code somewhat isn't going to be all that hard either, especially if you insert it only in release builds. Sure, withstanding determined scrutiny is hard, but how much of that actually happens?

As for doing only occasional executions, both Linux and Windows make timers available to kernel mode code (and user mode code as well, at least in Windows). You can read the equivalent of GetTickCount on Win32 by reading 0x7ffe0320. So my first order of business is to not do anything until the machine has been up for three days...

Reply to
Robert Wessel
