Managing "capabilities" for security


I'm not sure exactly how to ask this question; i.e., how best to differentiate the examples where X should be allowed vs. where X should be prohibited.

I have a capabilities based security model. Each capability has "authorizations" associated with it (trying to avoid using the word "capability", again :< ).

These authorizations are defined by the entity that creates the capability based on the "authorizations" that *it* has available to it!

I.e., if I own a resource, I have all of the authorizations conceivable for that resource. I can give all or part of those authorizations to entities (actors) of my choosing.

E.g., if the resource is a file, I could elect to give A, B and C read access to that file and write access only to B and D.

Similarly, if the resource is a mechanism, I might give the ability to move it RIGHT to A, B and D; the ability to move it LEFT to B and C; the ability to power it OFF to only A; etc.

It's important to be able to give subsets of your "authorizations" to others -- that you presumably trust (whatever that means). This allows them to act on your behalf.

E.g., if I have read & write access to a file, I might want to give *read* access to that file (principle of least privilege) to someone who will encrypt its contents for me (returning an encrypted copy of the file but not altering the original; I trust him enough to *see* the file's contents but not enough to allow him to *alter* them -- to, for example, replace the file with its encrypted form... *I* can do that with my write authorization).

Similarly, I might want to give subsets of my authorizations to several different actors concurrently -- so each can do "whatever" to the resource without requiring me to serialize their accesses to it (multiprocessing)
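
A minimal sketch of that subsetting in Python (the `Capability` class and `grant` method are invented names for illustration, not any real API):

```python
# Hypothetical sketch: an owner delegates subsets of its authorizations.
# A derived grant can never exceed what the grantor itself holds.

class Capability:
    def __init__(self, resource, auths):
        self.resource = resource
        self.auths = frozenset(auths)

    def grant(self, subset):
        """Derive a new capability carrying a subset of our authorizations."""
        subset = frozenset(subset)
        if not subset <= self.auths:
            raise PermissionError("cannot grant what you do not hold")
        return Capability(self.resource, subset)

owner = Capability("file:report.txt", {"read", "write"})
reader = owner.grant({"read"})          # least privilege: read only
assert reader.auths == frozenset({"read"})
try:
    owner.grant({"read", "delete"})     # can't exceed what you hold
except PermissionError:
    print("grant refused")
```

Several such subsets can be handed out concurrently, each holder acting on the resource independently.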

And, I may also want to *forfeit* my authorizations -- possibly after passing them on to someone else.


Some of the trickier issues I'm trying to address include:

- "revoking" an authorization that I have previously given to another actor (do I do this asynchronously? synchronously with the other actor's consent/participation? etc.)

- handling intermediaries whose roles are strictly as "pipes" (e.g., imagine transparently imposing an actor between A and B -- call it D -- to allow the transactions between those two actors to be Debugged). D should have no need to invoke any of the authorities associated with any capability passed from A to B. It should be restricted to solely *propagating* the capability. I.e., D can't *hold* that capability but can pass it along.

- as a followup to the above, handling cases where the capability can be held or propagated -- but not *duplicated*. I.e., *you* can access this file; *or*, have someone else do it on your behalf; but it's one option or the other... the capability can't multiply!
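
The "hold or propagate, but never duplicate" case is essentially move semantics. A toy illustration (names invented):

```python
# Illustrative sketch: transferring the capability consumes the sender's
# handle, so the capability can move but never multiply.

class MoveOnlyCap:
    def __init__(self, resource):
        self._resource = resource   # becomes None once transferred away

    def use(self):
        if self._resource is None:
            raise PermissionError("capability was transferred away")
        return f"accessed {self._resource}"

    def transfer(self):
        """Hand the capability to someone else; we keep only a dead husk."""
        if self._resource is None:
            raise PermissionError("nothing left to transfer")
        moved, self._resource = MoveOnlyCap(self._resource), None
        return moved

a = MoveOnlyCap("file:secrets")
b = a.transfer()            # B can now act on A's behalf...
print(b.use())
try:
    a.use()                 # ...but A has forfeited access
except PermissionError:
    print("A no longer holds the capability")
```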

The goal here is to allow *most* actors to be untrusted and still minimize the risk they pose to the system, the data and operations it implements, etc.

E.g., I could create a resource called an "email address". I could define operations on that like "send to", "forward to", etc. And, I can choose to make the actual address itself, *opaque*!

So, I can create a capability for this resource that has authorizations for "send to". Perhaps there's even a "send exactly once"! I can now give that capability to an actor and it would be able to send email *to* that address -- yet never *see* the address itself! So, if it was a rogue agent trying to harvest email addresses from every device that it was running on, this ability would be thwarted.

To protect against it generating neverending quantities of spam, I might opt to give it the "send exactly once" authorization, knowing that its damage/annoyance factor would be thusly limited.

If this actor can pass subset(s) of its capability to others, then it could just spawn another copy of itself, send a copy of the capability to that second instance, generate a piece of spam and *die* -- knowing its clone has a valid copy of the "send exactly once" authorization!
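
A toy version of the opaque, use-limited email capability (all names invented; Python name mangling merely *illustrates* opacity -- real opacity needs a kernel or service boundary):

```python
# Sketch of an opaque "send exactly once" capability. The holder can send
# to the address but can never read it.

class OpaqueMailCap:
    def __init__(self, address, uses=1):
        self.__address = address        # never exposed through the API
        self._uses = uses               # the "send exactly once" budget

    def send(self, body):
        if self._uses <= 0:
            raise PermissionError("'send' authorization exhausted")
        self._uses -= 1
        return f"queued {len(body)} bytes"   # the address is never returned

cap = OpaqueMailCap("don@example.com", uses=1)
print(cap.send("party invitation"))      # works exactly once
try:
    cap.send("spam")                     # the spam bot is thwarted
except PermissionError:
    print("second send refused")
```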

See where this is going? And, how powerful it can be in providing fine-grained control of resources??

It *seems* like what I really want to do is create a "service" (for want of a better word) that implements these "capability authorizations". I.e., if you want to pass a copy of a capability to another actor, you hand the capability to that service along with the desired target actor and that service examines the capability to see the authorizations that you have been granted for *it*!

"Ah, sorry but I can't perform this action for you because you don't have 'propagate a copy' authorization for this capability!"

[ This is really confusing due mainly to the fact that I'm trying to fabricate terms to address concepts that are very similar but different -- capability, authorization, etc.]

Hopefully this makes *some* sense... I'll try to work on a better lexicon.


Reply to
Don Y

I can't see that you've been asking a question at all - it looks more like you have some ideas about what you think "capabilities" are and are trying to get a clearer picture. But I don't think your post here is ready for direct comments - you'll have to read a bit more, think a bit more, and figure out what you are trying to say, trying to ask, and trying to do.

In the meantime, read up a bit on "posix capabilities" and their implementation in Linux:

(and of course, google is your friend :-)

I don't think this is the kind of stuff you want to do yourself - there are a great many things to get right for tasks to have enough access to what they need without opening security holes.



Reply to
David Brown

Hi Don,

Revoking always should be asynchronous because it is solely at the discretion of the giver. If a capability can be delegated transitively, the originating authority may neither know nor be able to communicate directly with all of the current holders of the capability.

An agent can't simply hand out a copy of an original capability given to it - it needs to pass on a derived capability that is separate from but linked to the original. The derived capability has to be revocable both independently (by the agent itself) and in conjunction with revocation of the original capability (by the originating authority).
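
A sketch of that derivation structure (invented names): each delegated capability is a separate object linked to its parent, revocable on its own or as part of revoking the original.

```python
# Illustrative: derived capabilities form a tree. Revoking any node
# transitively revokes everything derived from it.

class DerivedCap:
    def __init__(self, resource, parent=None):
        self.resource = resource
        self.parent = parent
        self.children = []
        self.live = True
        if parent is not None:
            parent.children.append(self)

    def derive(self):
        if not self.live:
            raise PermissionError("cannot delegate a revoked capability")
        return DerivedCap(self.resource, parent=self)

    def revoke(self):
        """Revoke this grant and, transitively, everything derived from it."""
        self.live = False
        for child in self.children:
            child.revoke()

original = DerivedCap("printer")
agent = original.derive()       # agent's separate but linked capability
sub = agent.derive()            # further delegation
agent.revoke()                  # the agent revokes its own grants...
assert not sub.live and original.live
original.revoke()               # ...and the originator can revoke everything
```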

How to handle transitive delegation is *the* major issue in designing a capability system.

Yes. A communication channel should not make information observable to any entity that is not a participant in the communication.

A debugger monitoring a channel, and possibly injecting traffic into it, is a special case of a "silent" participant.

Just a particular case of transitive delegation.

Nope and Nope 8-)

The problem is that such flexible capabilities effectively become stores of arbitrary key-value pairs. That makes them difficult to manage and difficult to propagate (or migrate) to remote hosts.

I know you are using a (more or less) centralized DBMS - which solves the migration problem - but you still have the issue of how to organize the DB so that capabilities are easy to modify, copy or subset, *and* can be rapidly searched.

Haven't given it a lot of thought, but I don't immediately see a good answer. You know I'm not a big fan of non-TEXT extensible fields because (usually) they can't be indexed. Once you start down the road of using arbitrary capabilities, like ACLs, very quickly you find that you have many thousands of {capability,name,setting} triples in use.

It seems like what you want to do is have a communication system that can open a channel *and* deliver the access capability for it in a single action. Once a channel is open, you can communicate other capabilities directly.

E.g., on the client side, connect() creates both a new connection and a capability for it, and transmits the capability to the accept() on the server side. Depending on your programming model, both may also return the capability to their respective callers - e.g., for delegate communication.

Within a host you can start a child task and transfer capabilities automagically during the fork(). You can do similar using a process server on a remote host, communicating the capabilities to the server and letting it fork and install them before exec()ing the new process. [Naturally, after providing proof of _your_ capability to remotely launch the process.]
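
A rough sketch of the "open a channel and deliver its capability in one action" idea (all names illustrative, in-process only):

```python
import queue

class Channel:
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg):
        self._q.put(msg)
    def recv(self):
        return self._q.get_nowait()

def connect():
    """One action: create the connection AND mint the capability for it."""
    ch = Channel()
    cap = ("channel", id(ch), frozenset({"send", "recv"}))
    return ch, cap              # both endpoints receive the capability

ch, cap = connect()
# once the channel is open, other capabilities travel over it directly:
ch.send(("delegate", ("file", "report.txt", frozenset({"read"}))))
kind, delegated = ch.recv()
assert kind == "delegate" and "read" in delegated[2]
```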

ISTM that a system which rejects this particular use case is not realistic. If you can't start a task which isn't running, or open communication with a task that is, then the system is useless.

YMMV, George

Reply to
George Neuner

Hi George,

[Apologies if more typos than usual -- pen]

Yes, of course [re: giver]. What I was trying to draw attention to is how the "holder" is effectively "made aware" of his loss of some/all of the "authorizations" that he had previously. (more below)

Unlike, e.g., Amoeba (apologies if I am misremembering references/ implementations, here) -- where a capability is "just a (magic) number" (which can obviously be copied FREELY and indefinitely) -- I implement capabilities as kernel based/maintained objects. What a task sees as a "capability" is actually a *handle* to a capability (that the kernel tracks).

[E.g., Amoeba's implementation doesn't require the kernel to be aware of where a capability "is", currently. It is only aware when operations *on* the capability need to be performed (reminder: Amoeba makes the capabilities UNFORGEABLE and little more).]

As my kernel knows where every capability is located at the moment, it can deliver an asynchronous notification ("signal") to the holder of the capability (holder == task; so no guarantee which of the task's threads will "see" that notification -- unless an exception handler thread has been nominated).

So, I *can* notify the holder. Or, wait for him to try to use the capability and throw an error at that time.
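
Both paths can be sketched side by side (invented names, not a real kernel API): the kernel signals the holder on revocation, *and* any later use fails anyway.

```python
# Sketch: because the kernel tracks every capability handle, it can signal
# the holder on revocation (async notify) -- or simply let the next use fail.

class Kernel:
    def __init__(self):
        self._caps = {}     # handle -> {"live": bool, "signal": callable|None}
        self._next = 0

    def grant(self, on_revoke=None):
        self._next += 1
        self._caps[self._next] = {"live": True, "signal": on_revoke}
        return self._next   # the task only ever sees this handle

    def revoke(self, handle, notify=True):
        entry = self._caps[handle]
        entry["live"] = False
        if notify and entry["signal"]:
            entry["signal"](handle)     # asynchronous notification path

    def use(self, handle):
        if not self._caps[handle]["live"]:
            raise PermissionError("capability revoked")  # deferred-error path
        return "ok"

events = []
k = Kernel()
h = k.grant(on_revoke=lambda hh: events.append(("revoked", hh)))
assert k.use(h) == "ok"
k.revoke(h)                  # the holder is signalled...
assert events == [("revoked", h)]
try:
    k.use(h)                 # ...and any later use fails anyway
except PermissionError:
    pass
```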

Of course, either approach can work. What I'm trying to decide is the relative merits of each -- on both sides of that notification fence!

As holders only have handles to capabilities, I actually *can* "move" the original capability. If a holder elects to create a *new* capability imbuing it with some formal subset of the "authorizations" that are present in the original capability, it can (potentially!) do so. [that's one of my conceived restrictions...]

Hence the reason for my questions! They (below) pertain to the sorts of operations you can perform *on* the capability.

E.g., the Amoeba approach allows the holder to freely copy and distribute *a* capability that it holds. It has to trust EVERY recipient of those capability-copies. And, implicitly, any one that *they* may conceivably pass yet other copies to!

[In my case, I can implement mechanisms that allow you to create copies *or* restrict you to passing *the* original along. I.e., the valet can drive my car while he has the keys -- but, once he gives them to a *thief*, he loses that ability!]

In the example I cited, should I have to trust D as much as I do B? If I allow it to create a copy (for its own use) of the capability, then I do. OTOH, if I create a "pass all or nothing" attribute for the capability, then the only way that D can use the capability is by denying it to B.

(see where I'm going, here?)

But, IMO, the ability to do this affects the security that the capability system provides. If you can always duplicate a capability (or portions thereof), then you have to always trust everyone you give it to.

While I may trust you not to abuse a capability, do I also know that you can't be tricked (bug) into passing a copy of that capability on to an adversary?

I can move them in much the same way that I can move all my "objects" -- kernel(i) and kernel(j) conspire to track any particular capability (cuz it is implemented *in* the kernel)

It's just not *efficient* to do so (vs. the "capability is a magic integer" approach of, e.g., Amoeba).

[From our discussions, you should already know that I'm wasting LOTS of resources on mechanisms that I think will enhance the development/execution environment]

Ah, no. This is at a different level. I haven't even begun to sort out how to make capabilities "persistent" (i.e., so they could be "stored" in the RDBMS). Instead, they are transient objects that only exist while the owner/holder is alive.

E.g., a file handle is not, in itself, persistent -- even if the file that it *references* is (or *isn't*!)

*Accessing* the RDBMS can be controlled by a capability, though.

Exactly. Though mine don't have a textual way of being expressed (as an ACL would).

Yes, in a sense.

Each capability is actually an (object, authorizations) tuple (I *really* need a better word than "authorizations" :< ). Think of a file handle. It allows you (the "Holder") to access some particular file (the name of which is not deducible from the handle itself!) in some particular way. If thread A creates a handle and gives it to thread B, B must *implicitly* know what "authorizations" are embedded in that handle (i.e., if the file was opened for read-only access, thread B can't decide to *write* to it!).

In my case, "somehow" (easy once you think about it), a task is given a reference (handle) to an object (on which it would like to operate *or* on which it is being *asked* to operate!) along with the authorizations as to *how* it can operate on that object.

In a crude sense, it is being given a reference to an "object" and a list of the verbs/methods that it can invoke for that object (it's actually much finer-grained than this but this should give you an idea).

For a traditional "file", those authorizations might include read, write, seek, etc. For a motor, they might include, CW, CCW, FAST, SLOW, BRAKE, etc.

Yes. Though connect() implies that you have previously been given the authorization to connect to that object! :> I.e., if you should have no need to talk to the file system, then you can't even *connect* to that service!

Being careful about terms, here...

In my case, I can create more *threads* within the current *task* (task == container of resources -- threads being execution resources). Any thread can use any of the resources (e.g., capabilities) that are present in that *task*.

Tasks can also create additional tasks (the UNIX "process" model). In *those* cases, the spawned task does not automatically inherit the resources of the "parent" (whereas a thread does!). Instead, the resources that it should have are explicitly passed to it.

So, any threads created by threads *in* a particular task have the same resources that all threads in that task have. If you want to restrict (or change in any way) the resources available to an "execution unit", you have to create a different task.
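
That containment model can be sketched in a few lines (names invented):

```python
class Task:
    """Task == container of resources; threads inside it share everything."""
    def __init__(self, caps):
        self.caps = frozenset(caps)

    def spawn_task(self, explicit):
        """A child *task* inherits nothing; it gets only what we pass."""
        explicit = frozenset(explicit)
        if not explicit <= self.caps:
            raise PermissionError("cannot pass resources you do not hold")
        return Task(explicit)

parent = Task({"disk", "net", "motor"})
child = parent.spawn_task({"motor"})      # a restricted execution container
assert child.caps == frozenset({"motor"})
```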

Yes. Though the process server is actually the kernel running thereon.

The fact that I *did* launch the process is preconditioned on *having* the capability to do so!

That's just a bootstrap problem. I.e., *something* starts "init". Once the first "process" (task) is running, it creates the remaining tasks in the system.

As each task is an "object", there is an implicit capability associated with it! Using this "handle", that first process can push an executable image into the task, give it capabilities for other resources, etc.

And, each of these initial tasks (processes) can create their own resources with associated capabilities that can be handed out to still other tasks, etc.

I.e., "init" builds the initial set of servers as part of the application itself. Any service that wants tight reign over its clients can hand out capabilities that it refuses to let be propagated. That doesn't prevent the recipients of those capabilities from ACTING as AGENTS for the service! So, *they* can create their own INDEPENDANT capabilities (i.e., not subsets of the original capabilities granted to themselves).

So, "you" might not be able to talk to a root DNS server. But, you can talk to *me* (using an API of my own creation) and *I* can elect to contact the root DNS servers if I think that is warranted.

(inefficient, because I am now in the middle. But it now allows me to actively control access to that other resource, cache results, etc.)

Morning tea...


Reply to
Don Y

To put this in more concrete terms:

I can create an object called an "email_address".

Unlike the intuitive way you would think of an email address (i.e., a set of characters with an '@' in the middle), the object manager for email addresses (email_address_handler) can make these *opaque* to applications. I.e., an application can *send* something to an email address but can't tell where it is going! (i.e., you can't harvest the email addresses in my address book and pass them on to your friend The Nigerian Prince!).

So, I can give you a capability that allows you to send messages to a particular recipient: email_address_t recipient; without knowing who that is.

I might opt to give you the authority to do this exactly once (if, for example, the email_address_handler allowed me to create a number_of_uses argument for a particular capability instance).

"Hey, you're just supposed to be notifying these people of my upcoming party. Why do you need to be able to send more than *one* email to each recipient?"

If I prohibit you from passing this along, then I am, in effect, forcing *you* to do the work (send out the invitations).

If I allow you to pass it along BUT NOT DUPLICATE IT, then you can delegate a third party to perform this activity on your behalf -- but you also forfeit the ability to use that resource thereafter. I.e., you can ask your friend The Nigerian Prince to send out the invitations for you (of course, he won't be able to "see" the actual email addresses any more than *you* would). He might opt to send them all some sort of junk mail. But, he'll only be able to send one message to each recipient (because that was the constraint placed on the capability when it was originally granted to *you*!)

The question then becomes: do you create some set of "operations" that can be applied to all capabilities; or, allow each capability to have a specific handler (so *it* operates on the capabilities while the capabilities operate on the objects they reference)

[another indirection]
Reply to
Don Y

Different beast entirely -- in much the same way that "file permission bits" differ from full-fledged ACLs.

From your first reference, below: "A capability (known in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights."

AFAICT, Linux uses the term just to reference a finer grained set of "permissions" afforded to processes (beyond "root == God").

[IMO, you can't *effectively* ADD capabilities to an existing "system" except in very narrow, fortuitous places]

For more info, you might want to look at Amoeba, Chorus, EROS/KeyKOS, etc. (each to differing degrees).

Reply to
Don Y

I would disagree with this. If the use of a capability without authorization causes the requester to malfunction, then having a protocol that doesn't begin revocation with notification can make the authorization nearly worthless, as the actor using it can't be sure it is safe.

You also need to make sure that you don't have "transactions" that might be corrupted with a revocation in their midst.

Yes, sometimes just doing an asynchronous revocation may make sense, and in many cases having it as a fall back if the cooperation method fails to complete in a needed time is needed, but that doesn't mean that asynchronous is generally preferred.

As to the transitively granting, the same method could be used to relay the request to revoke.

Reply to
Richard Damon

The problem with a "cooperative" approach is as you allude to below: how long do you wait for the "holder" to relinquish it? Do you wait in units of wall-clock time? (if so, how do you know the holder isn't blocked, preempted, etc and, as such, not even aware of your request through no fault of its own?) Even if you can be assured the holder has received your request and is currently executing, how long might it conceivably take for him to comply (in an orderly fashion)?

So, as you acknowledge below, your app design must be able to handle this case -- which is essentially the asynchronous case.

I currently manage *physical* resources asynchronously (though with notification after the fact) -- because they *can* disappear even without my explicit control (e.g., power failure, drop in water pressure, etc.). So, this same sort of reasoning would at least be *consistent*.

I.e., do an operation and *check* to see if it completed as expected (just like checking return value of malloc).

This is a tougher call (though I think I have a solution that addresses these issues). Who does the relaying? The actor who delegated the capability? (what if he is now a zombie?) Or, does the kernel track "derived capabilities" and treat them as part of the original capability?

As I began my original post: "... i.e., how best to differentiate the examples where X should be allowed vs X should be prohibited." you can come up with examples where /each/ approach is "right" and the others *wrong*. :<

Engineering: finding the least wrong solution to a problem.

But, at least it's interesting! :>

Reply to
Don Y

Come to think of it, any capability-based system in which the kernel does NOT maintain/implement the actual capabilities *must* rely on deferred errors that the holder(s) would have to detect and handle.

E.g., In Amoeba and Chorus, the capability is just an integer (albeit one with magical properties). The kernel doesn't know how many "instances" of that number exist in the system! So, no way for it to notify all holders in advance of revocation or downgrading.

Instead, the holder(s) eventually *use* the capability -- and find that "it doesn't work".

I.e., if I change the permissions on a file after telling you the name of the file, you may not be able to access it as my initial intent might have suggested. So, if you *expect* the fopen() to succeed (and haven't coded for the !SUCCESS possibility), you've got a bug.
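
The deferred-error pattern can be sketched like this (an Amoeba-flavored illustration with invented names; the "capability" is just a number the server validates when presented):

```python
# The kernel can't find the outstanding copies of a magic-number ticket,
# so revocation surfaces only when the ticket is next presented.

valid_tickets = {0x5EC2E7}          # server-side set of live tickets

def read_file(ticket):
    if ticket not in valid_tickets:
        raise PermissionError("ticket no longer honored")
    return "contents"

t = 0x5EC2E7
assert read_file(t) == "contents"
valid_tickets.discard(t)            # revocation: invisible to every holder
try:
    read_file(t)                    # the holder discovers it only *now*
except PermissionError:
    print("it doesn't work")
```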

Reply to
Don Y

All questions to be decided at the design phase, with no "generic answer". If there is a deadline for when the acknowledgement can be given, then presumably that spec is applied when designing such a real time system.

Some operations do not make checking at each operation so easy. What if the resource is access to some memory -- do you check for an "error" after every access? This presumes that the system even gives you an application-level ability to continue past this sort of error. What do you do about cooperative "authorization" to access parts of structures for things like synchronization, where there isn't a hardware/OS capability to stop you? In your case, since the operations do have the capability of suddenly starting to fail, an asynchronous revocation likely doesn't cause problems that you didn't need to handle anyway, as long as the system is structured to allow it.

I would generally say that the actor who was given a permission is responsible for relaying the revocations to those it relayed to. If it has shared a right that it might have revoked from it, it needs to maintain a way to do that.

This is why I object to the statement that it SHOULD ALWAYS be asynchronous. The only real answer is that "it depends", and lists can be made of what it depends on. Some examples include:

Is the authorization even remotely revocable? (Sometimes it isn't.)

What is the effect on the requesting task if the authorization goes away unexpectedly?

What is the effect of delaying the revocation?

Reply to
Richard Damon

In all the capability based systems I am aware of, the capability "ticket" has to be presented for *every* operation involving a protected resource: not just when "opening" the resource [whatever that happens to mean].

"No tickey, no laundry." Other than being told explicitly, the ticket holder finds out his capability has been revoked when some operation involving the protected resource fails.

That's perfectly reasonable.

Amoeba chose to place permissions directly into the user-space "ticket" because its set of permissions largely was predefined [there were some user definable bits available but most were reserved.]

When the scope of "permissions" is more or less arbitrary, you really do need some kind of server implementation [minimally] maintaining a key-value store DB. But you still can make use of cryptographic signing to make tickets that identify the authorized user.

Yes and no. Both kernel and user space capabilities existed in Amoeba.

Amoeba took the position that each service was responsible for administering its own capabilities. That included kernel services such as starting new processes, creating new ports, mapping process address space, etc.

In Amoeba every service - filesystem, network, etc. - either was a resource owner itself or was a managing agent having delegated access granting authority.

E.g., the filesystem service didn't "own" the files it managed (human users did), but it owned the means to access the files. So the filesystem was an agent with delegated authority to grant access to files based on owner/creator supplied rules [which in the case of Amoeba was simply Unix group membership].

Recall that Amoeba was built around a pretty straightforward delegate and trust chain model needed for the distributed filesystem.

Much more complex scenarios involving agented agents, subcontractors, etc. and arbitrary degree trust chains technically were possible, but the administration of them was left as an exercise.

Yes. However revoking a master capability must also revoke any other capabilities derived from it [even if located on another host]. If you (the user) suddenly decide to make a file read-only, any existing ticket granting write permission for that file, anywhere in the system, has to be revoked.

Of course, that could be done lazily when the ticket eventually is presented for use ... however, if you (the user) again make the file writable, is it still the same file? Should the old tickets be honored if presented or must a new ticket be obtained?
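
One possible answer, sketched with a per-file "generation" counter (purely illustrative): each permission change bumps the generation, so old tickets are rejected lazily when presented, and re-enabling writes does *not* resurrect them.

```python
class File:
    def __init__(self):
        self.generation = 0

    def change_permissions(self):
        self.generation += 1            # invalidates all outstanding tickets

    def mint_write_ticket(self):
        return ("write", self.generation)

    def write(self, ticket):
        kind, gen = ticket
        if kind != "write" or gen != self.generation:
            raise PermissionError("stale ticket; obtain a new one")
        return "written"

f = File()
old = f.mint_write_ticket()
assert f.write(old) == "written"
f.change_permissions()                  # user makes the file read-only
f.change_permissions()                  # ...and later writable again
try:
    f.write(old)                        # the old ticket is *not* honored
except PermissionError:
    print("new ticket required")
```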

These are things you eventually will have to think about for a distributed capability system.

Yes. But note that Amoeba also permitted creating new distinct capabilities - having the same or reduced permissions - that were separately revocable. That was part of the agent support.

I.e. the decision of how to extend the trust chain was left to the application.

Not really. A debugger isn't necessarily bound by the same rules as is a normal application.

Turning from debuggers to a more generic discussion of "pipes thru filters" applications, then the scenario is only a problem if you permit anonymous "bearer" tickets.

Consider that a ticket may incorporate the identity of the authorized process (see below), and that the system can positively identify the process presenting a ticket for service [at least within the system]. Under these conditions, a ticket might be "stolen", but it can't be used unless the thief also can successfully impersonate the authorized user.

You can uniquely identify programs by cryptographically hashing the executable, particular instances of running programs by host and process ids, and also user/group ids, etc. These can be combined to create tickets that identify both the service granting access and the exact client (or clients) authorized by the ticket.
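
A sketch of such a "name" ticket: the granting service binds the client identity into the ticket with an HMAC (the key, ids, and fields are all illustrative):

```python
import hmac, hashlib

SERVICE_KEY = b"known-only-to-the-service"   # hypothetical signing key

def mint(resource, client_id, rights):
    msg = "|".join((resource, client_id, rights)).encode()
    tag = hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest()
    return (resource, client_id, rights, tag)

def check(ticket, presenter_id):
    resource, client_id, rights, tag = ticket
    msg = "|".join((resource, client_id, rights)).encode()
    good = hmac.compare_digest(
        tag, hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest())
    return good and presenter_id == client_id   # a stolen ticket fails here

t = mint("file:log", "proc-42@hostA", "read")
assert check(t, "proc-42@hostA")        # the authorized client succeeds
assert not check(t, "thief@hostB")      # a thief holding the ticket fails
```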

Actual tickets (and their associated permissions) can be stored securely as in your model; they don't need to be user-space entities. But for a multihost system the user-space "handle" has to encode host as well as ticket selector.


I think I've shown that you don't.

You can have both "bearer" tickets (useable by anyone) and "name" tickets (limited to particular users) together in the same system. The only limitation is to what extent you can reliably determine the identity of users - maybe only imperfectly in an open system, maybe perfectly in a closed one.

It, at least, implies that you have a capability to use the relevant communication service ... the endpoint service is free to reject your connection attempt [always, but particularly if your identity can be established and it knows a priori to whom it should respond - you can look at it from either direction (or both)].

Bootstrap problem. There is a basic set of capabilities that must be given to every process, a somewhat larger set of capabilities that must be given to most processes, and an even larger set that must be given to network aware processes.

You can "spawn" tasks, "fork" threads, "launch" processes, "poke" servers, etc. ad nauseam. The actual entities and terms involved don't matter so much as the programming model.

YMMV, George

Reply to
George Neuner

Fair enough. This is not a topic I know a lot about - I was just trying to give you some pointers that /might/ have helped, since no one else had replied to your post.

Since you mention "file permission bits" vs. ACL's, I'd like you to be /very/ sure that you actually /need/ the complications of the system you are proposing.

I've administered file servers with ACL's, and file servers with just Linux permission bits and groups. There is no doubt that ACL's give finer control - but I also have no doubt that with careful use of Linux group membership, group ownership of files and directories, and group "sticky" bits on directories, it is vastly easier to get good security where the right people have the right access to the files. Groups and permissions are quick and easy to work with, easy to understand, and easy to check. With the old ACL-based setup we had before there were endless battles - and these were often solved by simply making whole directories read/write for everyone (everyone with a valid user and password, and only on the local network, of course). That in turn often led to battles about not having permission to change the ACL's despite being an administrator - and thus having to recursively take ownership of the directories first.

Before anyone starts to tell me how to handle ACL's "correctly", the point is that when you want to make something secure, having a clear, logical, obvious system is normally more important than having a very flexible one with control of the smallest details. It is better to have a simple system that can be used correctly and /is/ used correctly, than a complex system that is used incorrectly because it is too difficult.

And of course, the simple system is much easier to implement correctly, and test and verify correctly, and has far less chance of unexpected and unplanned holes.

Maybe you've thought through this already. But a security idea that leads to the type of discussion in this group strikes me as one that is too complex to get 100% right - and if it is not 100% bulletproof, then it is worthless.



Reply to
David Brown

You may, or may not, find some inspiration from the information pointed to in this current comp.arch thread.

If nothing else it may save you from blind alleys.

Re: Bounded Pointers

On 11/4/13, 1:57 AM, Ivan Godard wrote:
> On 11/4/2013 12:58 AM, Michael S wrote:
>> What are those "capabilities" that you are mentioning so often?
>> In particular, how do they help to augment the chain of trust for "safe"
>> languages? Lacking imagination, I can't see how the "Quis custodiet
>> ipsos custodes?" puzzle could possibly be solved.
>
> Start here:

> formatting link
> formatting link
>
> After that: Google is your friend

Prof. Hank Levy's excellent but out of print book is now available as a set of PDFs from his webpage:

formatting link

Definitely worth reading as it describes a bunch of cap. systems in detail.

Reply to
Tom Gardner

Remember, this is c.a.e -- chances are, we aren't dealing with "files" but, rather, specific I/O's, mechanisms, etc.

In a *closed* system, it's (relatively) easy to get "permissions" right: if task A has no business talking to the motor driver, then task A shouldn't contain any code that *talks* to the motor driver! Verify that this is, indeed, the case -- then release the codebase to production.

OTOH, in an *open* system, you can't predict what tomorrow's application will do -- or *try* to do. How do you ensure it can't muck with things that it shouldn't? Typically, that's done by pushing "special" things into a protection domain (most often, the kernel). Then, hoping the application hasn't come up with a clever way to screw this up!

Files have fixed operations. It's easy to come up with "gates" on those operations as they are few in number and tend to have static permissions. But, when your resources/IO's get to be more esoteric (which can mean "run-of-the-mill"!), you can end up with lots of different operations and a desire to separate which agents can invoke each.

With a capabilities-based model, you can delegate who can do what *dynamically* and with finer precision. E.g., "you can turn the motor off but you can't turn it *on*" (i.e., you can be a monitoring process that prevents the mechanism from running away... and, I have no fear that *you* will TELL the mechanism to run away! even if you fail to tell it to STOP!)
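A minimal sketch of that "off but not on" idea -- per-operation rights carried in the capability itself, with derivation that can only narrow, never widen. The operation names and bitmask encoding are invented for illustration:

```c
#include <stdbool.h>

enum motor_op { MOTOR_ON = 1 << 0, MOTOR_OFF = 1 << 1,
                MOTOR_LEFT = 1 << 2, MOTOR_RIGHT = 1 << 3 };

struct capability {
    int object_id;      /* which motor */
    unsigned rights;    /* OR of motor_op bits the holder may invoke */
};

/* Derive a weaker capability: rights can only be narrowed, never widened. */
struct capability derive(struct capability parent, unsigned subset)
{
    struct capability child = parent;
    child.rights = parent.rights & subset;   /* cannot exceed parent's rights */
    return child;
}

bool may_invoke(const struct capability *c, enum motor_op op)
{
    return (c->rights & op) != 0;
}
```

The monitoring process from the example holds a capability derived with MOTOR_OFF only -- it can stop a runaway, but there is no encoding of it that lets it start one.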

Even filesystems often want finer-grained control IN A SINGLE FILE! E.g., parts of passwd(5) should be visible to all processes while other parts should be *hidden*. And, even different "versions" of passwd(5) for certain applications. (e.g., ~ftp/etc/passwd vs /etc/passwd vs master password)

If "passwd" is treated as an object in a capabilities based system, then the capability that each "process" is given can cause the handler at the other end of that capability to provide the image of passwd that is most appropriate to that process (instead of exposing one of three files to that process).
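A sketch of that handler-selects-the-view idea. The view names, capability struct, and the sample lines are all invented for illustration -- the point is only that the holder presents a capability and the handler decides what image of "passwd" comes back, rather than the holder naming one of three files:

```c
#include <string.h>

enum pw_view { PW_PUBLIC, PW_FTP, PW_MASTER };

struct pw_capability { enum pw_view view; };

/* The handler decides what the holder sees; the holder never names a path. */
const char *pw_read(const struct pw_capability *cap)
{
    switch (cap->view) {
    case PW_MASTER: return "root:$6$hash$...:0:0:root:/root:/bin/sh";
    case PW_FTP:    return "ftp:*:99:99:anonymous:/srv/ftp:/bin/false";
    default:        return "root:x:0:0:root:/root:/bin/sh";  /* hash hidden */
    }
}
```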

The "simple system" is big monolithic kernel and hope everything is coded correctly (cuz the guy who is implementing the device driver for foo is operating in the same privileged space as the device driver for the disk system, scheduler, etc.). There is no fine-grained permission possible -- and definitely nothing "expandable" and consistent across the entire system (e.g., how would you implement the email_address_t I mentioned elsewhere with similar "security"?)

See above. Unplanned holes can affect unrelated subsystems. An application (or a subsystem) can't create its own concept of how *its* objects should be managed EXCEPT in complete isolation. Each comes up with their own notion of object, permissions, security, etc. and hopes the others are somehow compatible (or, remain separate islands)

How do ACLs deal with a user asynchronously opting to change the permissions on a resource/file? What does the application do in that case? ("undefined behavior"?) The point of these discussions is to figure out what makes sense for that sort of situation because the "users" are applications:

"Gee, you should avoid reading this file now because some other process is busy writing it. I'll just arrange for cron to run you 5 minutes later than him -- and HOPE he's finished by then..."

Better to have each process *expect* to be (temporarily) denied access to a resource (file) and actively try to recover than to have them choke when they encounter "/* CAN'T HAPPEN */". Expect your capability to be revoked from time to time. Should you request it again? Or, should you blindly retry? Or...

"Why is my request to move the motor being denied? That's not supposed to happen..."


"Hmmm... for some reason, I am not being allowed to move the motor right now. How should I react in this EXPECTED situation?"
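The difference between those two reactions can be made explicit in code: denial is a normal return value with a planned recovery path, not a CAN'T-HAPPEN branch. The status codes and policy function below are hypothetical, and the retry budget is deliberately the *caller's* policy, not something the capability system dictates:

```c
#include <stdbool.h>

enum req_status { REQ_OK, REQ_NO_PRIVILEGE, REQ_BUSY };

enum recovery { PROCEED, REACQUIRE, GIVE_UP };

/* Decide how to react to a denied request in this EXPECTED situation. */
enum recovery on_status(enum req_status s, int attempts_left)
{
    switch (s) {
    case REQ_OK:           return PROCEED;
    case REQ_BUSY:         return attempts_left > 0 ? REACQUIRE : GIVE_UP;
    case REQ_NO_PRIVILEGE: return GIVE_UP;  /* revoked: re-plan, don't spin */
    }
    return GIVE_UP;
}
```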

Reply to
Don Y

Hi George,

[snips throughout for sole purpose of trimming message length]

Yes. In my case, the "capability" (I call them Handles -- for reasons that should become apparent) also indicates the object in question. So, the "authorizations" come along with the "reference".

Exactly. Though the holder can defer learning this (indefinitely), sooner or later he *will* learn. Presumably, you will code to account for return(NO_PRIVILEGE) so why not just let that *existing* coding handle the revocation case? If you need to know sooner, I can just as easily send you an asynchronous notification (signal) after the fact as I could send you an asynchronous notification *before*!

(Yeah, it's nice to know the power is GOING to fail... but, you have to be able to deal with it HAVING FAILED, regardless!)

(It just seems like giving advanced warning means MORE coding. And, a false sense of security: "Hey! You didn't TELL me that you were going to do that!" "Um, yes I did. Perhaps the message just hasn't been delivered, yet...")

Amoeba's "ticket" is far more efficient than my approach. It can be copied, moved, etc. "for the cost of a long long" (IIRC). In my case, a trap to the kernel is required for each operation on a "Handle" -- because it's a kernel structure that is being manipulated (or referenced).

I can still give user-land services the final say in what a Handle *means* (along with the "authorities" that it conveys to its bearer). But, you have to go *through* the kernel to get back to userland.

A subtle difference: if "task" (again, forgetting lexicon differences) A decides to manipulate object H backed by service B, in Amoeba's case, B does all the work for each attempt A makes. EVEN IF THE ATTEMPT IS DISALLOWED by H's authorizations. B's resources are consumed even though A has no authority to use B's object (H)!

If A is an Adversary, then B is brought to its knees by A's hostile actions. There is nothing B can do to prevent A from continuously trying to use object H! And it's all done on B's dime!

In my case, if A tries to use one of B's resources (H), it first must truly *be* one of B's resources (not just a long long that A *claims* is managed by B). If not, the kernel disallows the transaction.

If H truly *is* backed ("handled") by B, then the kernel allows the transaction -- calling on B to enforce any finer grained authorities (besides "access"). I.e., B knows which authorities are available *in* H and can verify that the action requested is one of those allowed.
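A sketch of that kernel-side gate. The per-task handle table and field names are invented, but the logic is the contrast being drawn: the request reaches B's handler only if the Handle is live in A's resource container *and* really backed by B -- otherwise the kernel rejects it and B never spends a cycle:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_HANDLES 16

struct handle {
    int service_id;      /* which handler backs this object */
    unsigned auths;      /* authorizations granted to the holder */
    bool valid;
};

struct task {
    struct handle table[MAX_HANDLES];   /* per-task resource container */
};

/* Returns the handle only when the slot exists, is live, and is really
 * backed by the claimed service; otherwise the transaction is disallowed
 * before the backing service is ever involved. */
const struct handle *kernel_lookup(const struct task *t, int slot,
                                   int claimed_service)
{
    if (slot < 0 || slot >= MAX_HANDLES)
        return NULL;
    const struct handle *h = &t->table[slot];
    if (!h->valid || h->service_id != claimed_service)
        return NULL;
    return h;
}
```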

Finally, if A persists in being a pain in the ass (Adversarial DoS behavior), B can tell the kernel to revoke his capabilities. And, thereafter, A can't even *talk* to B! Any attempts happen on *A's* dime!

Exactly. Every entity for me is an Object. Every Object has a Handler. Every reference to an Object includes a set of "authorizations" that apply to *that* reference and are granted to the "Holder" of that "Handle".

In my case, each file (currently being referenced) is done so by the use of a Handle. There can be multiple Handles to the same "physical" file. These can be Held by multiple tasks -- or the same task! Operations performed on that file are done through a specific Handle and must meet the authorities associated with that Handle (i.e., you might hold write access to a particular file but if the Handle that you use to access it doesn't include that authorization, then your write attempt will be disallowed).

The File Handler (there may be different ones for different types of files) is responsible for "backing" (handling) the File Objects. When you want to read a file's (referenced by a particular Handle) contents, the File Handler for that file provides the data to you (possibly by accessing different services associated with the various media supported in your system).

So, .../timeofday could actually be a "file" that gets handled by a service that returns the current time-of-day (i.e., it isn't a file in the sense of other "storage" files). Having write access to that Handle would effectively allow you to set the time-of-day!

Furthermore, attempting to set the time to "HH34kdiss" can throw a "write error" (for obvious reasons).
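A sketch of that validating write handler. The HH:MM:SS format is assumed here just for illustration -- the point is that because a handler backs the "file", a nonsense write like "HH34kdiss" comes back as an ordinary write error instead of corrupting the clock:

```c
#include <stdbool.h>
#include <ctype.h>
#include <string.h>

/* Write handler for a hypothetical .../timeofday "file".
 * Accepts only well-formed "HH:MM:SS"; anything else is a write error. */
bool timeofday_write(const char *buf)
{
    if (strlen(buf) != 8 || buf[2] != ':' || buf[5] != ':')
        return false;
    for (int i = 0; i < 8; i++) {
        if (i == 2 || i == 5)
            continue;
        if (!isdigit((unsigned char)buf[i]))
            return false;
    }
    int h = (buf[0] - '0') * 10 + (buf[1] - '0');
    int m = (buf[3] - '0') * 10 + (buf[4] - '0');
    int s = (buf[6] - '0') * 10 + (buf[7] - '0');
    return h < 24 && m < 60 && s < 60;
}
```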

(File systems are bad examples because they are so commonly used to implement namespaces and not just "files")

This means "something" must track history/relationships. It also says nothing about *when* the revocation takes place (effectively) and when notification of that event occurs.

I.e., in Amoeba's case, the kernel never knows who is holding which (copies!) of a particular ticket (derived from some other ticket, etc.). So, there is no way for it to know who to notify AT THE TIME OF REVOCATION. Instead, it has to rely on the Holder(s) noticing that fact when they *eventually* try to use their capabilities (tickets/keys).

And, you are never sure when every ticket has been "discovered" to be voided -- a task can have a copy of a ticket (you can hold multiple copies of any ticket!) that he just hasn't got around to trying!

Sort of like finding a bunch of keys in a desk drawer and not discarding them because you're not quite sure you *want* to discard them (maybe they still FIT something!)

Exactly. You need to force "issuers" to go back to the well to create "new" tickets. And, this process must implicitly randomize or serialize the identifiers embedded in the tickets to prevent reuse. If you only allow *downgrading* a capability, then any lingering tickets are safe from being reused as "full fledged" tickets once they have been downgraded/revoked *if* new ones always have new ID's!
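A sketch of the Amoeba-style ticket scheme being described: the server keeps a secret per object, and a ticket's check field must match a one-way function of (secret, rights). Revocation is just picking a new secret, which voids every outstanding copy at once. The mixer below is a toy, NOT a real one-way function -- a real system needs a cryptographic MAC:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy mixer standing in for a cryptographic MAC (illustration only). */
static uint64_t mix(uint64_t secret, uint32_t rights)
{
    uint64_t x = secret ^ ((uint64_t)rights * 0x9E3779B97F4A7C15ULL);
    x ^= x >> 33;
    x *= 0xFF51AFD7ED558CCDULL;
    x ^= x >> 33;
    return x;
}

struct ticket { uint32_t object; uint32_t rights; uint64_t check; };

/* Server-side: mint a ticket for an object under the server's secret. */
struct ticket issue(uint32_t object, uint32_t rights, uint64_t secret)
{
    struct ticket t = { object, rights, mix(secret, rights) };
    return t;
}

/* Server-side: a ticket is valid only if its check matches its rights. */
bool verify(const struct ticket *t, uint64_t secret)
{
    return t->check == mix(secret, t->rights);
}
```

Note what falls out: a holder can't widen a ticket's rights (the check no longer matches), and the server revoking-by-new-secret needs no record of who holds copies -- exactly the "discover it when you try to use it" behavior described above.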

In my case, kernels are the only things that *hold* capabilities. So, all kernels can be notified that a particular capability has been revoked and they all *are* revoked. Just like if your kernel chooses to delete a file descriptor (remembering that it is now a zombie), any future references by you (the task) to that fd can throw an error (assuming you ignore the signal sent to notify you that it has been destroyed).

Yes. My "factory" publishes Handles for key services that tasks may want to avail themselves of. These are accessed by a single "Service Locator" Handle that is given to each task (task == process == resource container) as the task is created. [Conceivably, the Handle for this service given to Task A can differ from Task B if the authorizations between A and B are to be different!].

Tasks locate the services that they want using this Service Locator. It provides a generic Handle that allows the service in question to be contacted. (i.e., this is all part of the bootstrap of the initial access to a service).

The task can then contact the Handler behind that Handle -- i.e., the service in question -- and make whatever requests it is authorized to make (based on its Handle).

More importantly, the creating task can do all of this for the "child" cramming the appropriate Handles for the Objects (incl Services) that the child will need AND THEN DELETING THAT INSTANCE OF THE SERVICE LOCATOR handle to effectively sandbox the child. I.e., these are the resources you can use and operate on -- nothing more!
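The sandboxing step above can be sketched in a few lines -- the parent installs exactly the Handles the child will need, then deletes the child's Service Locator so the set is closed forever. The fixed-size table and sentinel are invented for illustration:

```c
#include <stdbool.h>

#define SLOTS 8
#define NO_HANDLE (-1)

struct child_env {
    int handles[SLOTS];   /* object ids the child may use */
    int locator;          /* Service Locator handle, or NO_HANDLE */
};

/* Parent grants a fixed set, then severs the child's path to new grants. */
void sandbox(struct child_env *c, const int *grant, int n)
{
    for (int i = 0; i < SLOTS; i++)
        c->handles[i] = (i < n) ? grant[i] : NO_HANDLE;
    c->locator = NO_HANDLE;   /* delete the locator: the set is now closed */
}

bool can_use(const struct child_env *c, int object)
{
    for (int i = 0; i < SLOTS; i++)
        if (c->handles[i] == object)
            return true;
    return false;
}

bool can_discover_more(const struct child_env *c)
{
    return c->locator != NO_HANDLE;
}
```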

You can't "steal" Handles in my system because they are in the kernel. If you can trick the Holder to GIVE it to you, then it's yours (just like if you trick me to give you the keys to my house).

The current Holder of a Handle is implicitly known to the kernel (it's in A's resource container so A Holds it!)

If I were to tag Handles with "rightful owners", then proxies would be more apparent. But, how do you validate a proxy's request for a Handle on behalf of another? ("Please give me Bob's door keys...")

That information need only be made known to the (local) kernel. Any attempt to use the resource referenced by the Handle goes through the kernel so *it* is the only agency that needs this information.

It also means it is easier for a service (handler) to move, "physically", as only the kernels holding references to the objects that a service backs need be notified. And, the *tasks* holding them can remain ignorant of a service's physical location!

So, I can bring a spare processor on-line to handle times of heavy load and I don't have to run around telling all existing clients that the service has been migrated to that new processor. Similarly, if the load decreases, I can migrate that service back to a "less heavily overused" (?) processor and power down the surplus processor.

Again, the "name" is always implicit in my case. Just like *your* stdin is not *my* stdin. If you want to be able to have a proxy *use* "your" stdin (presumably on your behalf), *I* require that the proxy *hold* that Handle. *You* had to give it to him. But, I don't keep track of where it came from (what happens if he wants someone else to act as a proxy for him? ad infinitum?)

In my case, if you haven't got a Handle for a service, you can't use it. Having a handle means you can *connect* to it -- long enough for *it* to decide if what you are asking of it is consistent with your "authorizations"

See above.

Yes. I was trying to draw attention to the fact that people often think of "processes" in a legacy UNIX context: one thread in one resource container and new processes inherit their parent's environment/privilege/etc.

In my case, only threads share resources implicitly. Tasks need to have their environments (resource sets) explicitly created. You don't just "inherit" whatever your creator happened to have.


Reply to
Don Y

Thanks! Always willing to add texts to my collection -- especially if they don't take up any PHYSICAL space!

(first glance, this looks like "early" material on the subject... when hardware was often part of a solution :< )


Reply to
Don Y

Hi Richard,

[attrs elided] [revoking an authorization]

But that's the problem. When is the design phase "over" for an open system? Someone (third party) adds a "feature" a year after product release. Does he get to claim the design phase extended to a period MONTHS after "initial release" -- because that was when *he* was working on the design of *his* feature?

[of course not]

At some point, you say, "this is the environment for which you have to design". Every mechanism that you make available is a mechanism that has to be maintained and utilized. And, also acts as a *constraint* on the system and its evolution: "Crap! I have to notify each Holder of a pending capability revocation 100ms before revocation. But, my satellite transmission path is twice that! I guess I just can't use satellites (or, can't revoke capabilities)"

E.g., I handle physical resource revocation asynchronously BECAUSE I HAVE NO CONTROL OVER EXTERNAL EVENTS. If I wrap the resources in a capability, now I suddenly have to provide different semantics? ("Hey, you can't revoke the 'sunlight' capability!")

Life isn't guaranteed to be easy! :>

If "backing store" could go away while it was being used, then your "system" would obviously need a way of detecting that and informing the "holder" of that resource that this has, in fact, happened. The holder would also need to be aware of what resources could "disappear" and code to accommodate those possibilities.

If I am driving a motor, power to the motor driver/translator could fail while I am in the middle of an operation. Even if I have a backup power supply, the motor driver itself could fail. Even if I have a redundant motor driver, the *motor* could fail. Or, a gearbox, mechanism, *sensor*, etc.

Shit Happens.

If you don't plan to accommodate the (likely/consequential) failures, you have a bug.

That's the point! You (developer) know shit CAN happen. Anything that you are "holding" can be revoked. Plan on it. (Heck, I can "kill -9" *you* without giving any advanced warning! Gee, *then* what?)

The actor may be gone! BY DESIGN! I.e., he has done what *he* needed to do (with "greater privilege") and is now leaving *you* to clean up (with some reduced capability).

E.g., he can turn motor on, set direction and turn off. He starts motor in right direction, then delegates the "off" capability to you (your role being to watch a limit switch and turn off the motor at that time -- or, when some timeout is exceeded) and exits. (no need for him to hang around consuming ALL the resources that he originally needed to determine how the motor should be operated)

However, since my capabilities reside in the kernel, I can opt to have the kernel track derivations and cascade revocations. But, this means all derived capabilities must come from a single "parent".
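A sketch of that kernel-side derivation tracking: every capability records its parent, so revoking one kills its whole subtree while siblings survive. The fixed-size table and integer ids are invented for illustration (and a real kernel would also guard against stale parent ids when slots are reused):

```c
#include <stdbool.h>

#define MAX_CAPS 32

struct cap_entry { int parent; bool live; };   /* parent == -1: root */

static struct cap_entry caps[MAX_CAPS];

/* Create a capability derived from 'parent'; returns its id, or -1. */
int cap_derive(int parent)
{
    for (int i = 0; i < MAX_CAPS; i++) {
        if (!caps[i].live) {
            caps[i].parent = parent;
            caps[i].live = true;
            return i;
        }
    }
    return -1;
}

/* Revoke id and, transitively, everything derived from it. */
void cap_revoke(int id)
{
    caps[id].live = false;
    for (int i = 0; i < MAX_CAPS; i++)
        if (caps[i].live && caps[i].parent == id)
            cap_revoke(i);
}

bool cap_live(int id) { return caps[id].live; }
```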

You obviously can't revoke authorization for a fait accompli. But, what other authorizations, once granted, can't be rescinded? Some may leave you in a predicament (e.g., never being able to turn off the power) but expecting the capability system to know about these sorts of dependencies is, I think, too much.

The designer of the holding task would have to consider that in how the tasks actions and recoveries are structured. What would it have done had the authorization not been granted in the first place?

The big problem with "being considerate" is that it encourages others to be exploitative. There is no downside to their "selfishness" so, "why not?"! "Heads, I win; tails, you lose"

OTOH, if you take a heavy-handed approach (unilaterally revoking capabilities) then sloppy coders pay a price -- by having their code *crash* (presumably, users will then opt to avoid applications from those "developers") [There's no other pressure I can bring to bear on them to "do the right thing"]
Reply to
Don Y

You're welcome.

I recommend you look at that whole thread.

comp.arch is particularly interesting at the moment since there is a radically new processor architecture being slowly described - as patents are filed. The protagonists in the discussion (principally Glew and Godard) would both like to make a caps architecture machine but don't know how to sell it.

The new processor architecture will, it is claimed, work well with existing code, with roughly an order of magnitude speedup. They've managed to get DSP performance!

I haven't followed all the discussions in detail, but they have serious previous form and haven't been shot down yet.

Reply to
Tom Gardner

Ah, sorry. I thought you were only pointing out the book...

Ah, so pertinent to this (my) thread!

The problem, as I see it, is that it's hard to take advantage of what capabilities have to offer "retroactively". Like trying to apply OOP to a procedural implementation.

I can't see how this speedup is a consequence of the capabilities themselves -- "with existing code". But, I've learned that software folks can be incredibly creative when they opt to look at something from a different -- nontraditional -- viewpoint.

I will go a-hunting... thanks!

Reply to
Don Y

Correct, it isn't. CAP is a topic that came up as part of the non-objectives of the new architecture.

Example of just how different the architecture is: it doesn't have registers and isn't a stack machine. Internal micro-ops work with a use-it-or-lose-it "belt", where a "register" address is of the form "the fifth-to-last arithmetic result".

Reply to
Tom Gardner
