Managing "capabilities" for security

Not sure exactly how I want to ask this question;
i.e., how best to differentiate the examples where
X should be allowed vs X should be prohibited.

I have a capabilities based security model.  Each
capability has "authorizations" associated with it
(trying to avoid using the word "capability", again  :< ).

These authorizations are defined by the entity that
creates the capability based on the "authorizations"
that *it* has available to it!

I.e., if I own a resource, I have all of the authorizations
conceivable for that resource.  I can give all or part of
those authorizations to entities (actors) of my choosing.

E.g., if the resource is a file, I could elect to give
A, B and C read access to that file and write access
only to B and D.

Similarly, if the resource is a mechanism, I might give the
ability to move it RIGHT to A, B and D; the ability to move
it LEFT to B and C; the ability to power it OFF to only A;
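
To make this concrete, here is a minimal sketch in C of "authorizations" as a subset-only bitmask. All of the names, bit assignments, and the cap_derive() API below are invented for illustration; this is not a description of any real system:

```c
#include <stdint.h>

/* Illustrative authorization bits for two kinds of resources. */
#define AUTH_READ   (1u << 0)
#define AUTH_WRITE  (1u << 1)
#define AUTH_LEFT   (1u << 2)   /* move mechanism left  */
#define AUTH_RIGHT  (1u << 3)   /* move mechanism right */
#define AUTH_OFF    (1u << 4)   /* power mechanism off  */

typedef struct {
    uint32_t object;   /* which resource this capability refers to */
    uint32_t auths;    /* what the holder may do to it             */
} cap_t;

/* Derive a weaker capability: requested bits are masked against what
   the grantor actually holds, so authority can shrink but never grow. */
cap_t cap_derive(cap_t parent, uint32_t requested)
{
    cap_t child = { parent.object, parent.auths & requested };
    return child;
}

int cap_allows(cap_t c, uint32_t auth)
{
    return (c.auths & auth) == auth;
}
```

The invariant being illustrated: derivation can only mask bits off, so a delegate can never manufacture authority its grantor lacked.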

It's important to be able to give subsets of your "authorizations"
to others -- that you presumably trust (whatever that means).
This allows them to act on your behalf.

E.g., if I have read & write access to a file, I might want to
give *read* access to that file (principle of least privilege)
to someone who will encrypt its contents for me (returning
an encrypted copy of the file but not altering the original;
I trust him enough to *see* the file's contents but not enough
to allow him to *alter* them -- to, for example, replace the
file with its encrypted form... *I* can do that with my write
access!)

Similarly, I might want to give subsets of my authorizations
to several different actors concurrently -- so each can do
"whatever" to the resource without requiring me to serialize
their accesses to it (multiprocessing)

And, I may also want to *forfeit* my authorizations -- possibly
after passing them on to someone else.


Some of the trickier issues I'm trying to address include:

- "revoking" an authorization that I have previously given
   to another actor (do I do this asynchronously?  synchronously
   with the other actor's consent/participation?  etc.)
- handling intermediaries whose roles are strictly as "pipes"
   (e.g., imagine transparently imposing an actor between A
   and B -- call it D -- to allow the transactions between those
   two actors to be Debugged).  D should have no need to invoke
   any of the authorities associated with any capability passed
   from A to B.  It should be restricted to solely *propagating*
   the capability.  I.e., D can't *hold* that capability but
   can pass it along.
- as a followup to the above, handling cases where the capability
   can be held or propagated -- but not *duplicated*.  I.e., *you*
   can access this file; *or*, have someone else do it on your behalf;
   but it's one option or the other... the capability can't multiply!
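
That "propagate but don't duplicate" case amounts to *move* semantics on the handle. A toy sketch (handle_t and its fields are hypothetical; a real version would have to be kernel-enforced, since user code could otherwise skip the invalidation):

```c
#include <stdint.h>

/* A "propagate-only" handle: an intermediary may pass it along, but
   handing it over invalidates its own copy -- the capability moves,
   it never multiplies. */
typedef struct {
    uint32_t slot;   /* index into a (notional) kernel capability table */
    int      valid;  /* cleared once the handle has been passed on      */
} handle_t;

/* Move the capability from 'from' to a fresh handle for the recipient.
   The source is invalidated in the same step, so at any instant exactly
   one live handle exists. */
handle_t handle_transfer(handle_t *from)
{
    handle_t to = { 0, 0 };
    if (from->valid) {
        to = *from;        /* recipient gets the live reference */
        from->valid = 0;   /* sender's copy is now dead         */
    }
    return to;
}
```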

The goal here is to allow *most* actors to be untrusted and still
minimize the risk they pose to the system, the data and operations
it implements, etc.

E.g., I could create a resource called an "email address".  I could
define operations on that like "send to", "forward to", etc.  And,
I can choose to make the actual address itself, *opaque*!

So, I can create a capability for this resource that has authorizations
for "send to".  Perhaps there's even a "send exactly once"!  I can
now give that capability to an actor and it would be able to send
email *to* that address -- yet, never *see* the address itself!
So, if it was a rogue agent trying to harvest email addresses from
every device that it was running on, this ability would be thwarted.

To protect against it generating neverending quantities of spam, I
might opt to give it the "send exactly once" authorization knowing
that its damage/annoyance factor would be thusly limited.

If this actor can pass subset(s) of its capability to others, then
it could just spawn another copy of itself, send a copy of the
capability to that second instance, generate a piece of spam and
*die* -- knowing its clone has a valid copy of the "send exactly
once" authorization!

See where this is going?  And, how powerful it can be in providing
fine-grained control of resources??

It *seems* like what I really want to do is create a "service" (for want
of a better word) that implements these "capability authorizations".
I.e., if you want to pass a copy of a capability to another actor,
you hand the capability to that service along with the desired target
actor and that service examines the capability to see the authorizations
that you have been granted for *it*!

"Ah, sorry but I can't perform this action for you because you
don't have 'propagate a copy' authorization for this capability!"

[<frown>  This is really confusing due mainly to the fact that I'm
trying to fabricate terms to address concepts that are very similar
but different -- capability, authorization, etc.]

Hopefully this makes *some* sense...  I'll try to work on a better
description.


Re: Managing "capabilities" for security
On 01/11/13 21:34, Don Y wrote:
[quoted text elided]

I can't see that you've been asking a question at all - it looks more
like you have some ideas about what you think "capabilities" are and are
trying to get a clearer picture.  But I don't think your post here is
ready for direct comments - you'll have to read a bit more, think a bit
more, and figure out what you are trying to say, trying to ask, and
trying to do.

In the meantime, read up a bit on "posix capabilities" and their
implementation in Linux:


(and of course, google is your friend :-)

I don't think this is the kind of stuff you want to do yourself - there
are a great deal of things to get right for tasks to have enough access
to what they need without opening security holes.



[quoted text elided]

Re: Managing "capabilities" for security
Hi David,

On 11/4/2013 12:49 AM, David Brown wrote:
[quoted text elided]

Different beast entirely -- in much the same way that "file permission
bits" differ from full-fledged ACLs.

 From your first reference, below:
    "A capability (known in some systems as a key) is a communicable,
    unforgeable token of authority. It refers to a value that references
    an object along with an associated set of access rights."

AFAICT, Linux uses the term just to reference a finer grained set of
"permissions" afforded to processes (beyond "root == God").

[IMO, you can't *effectively* ADD capabilities to an existing "system"
except in very narrow, fortuitous places]

For more info, you might want to look at Amoeba, Chorus, EROS/KeyKOS,  
etc. (each to differing degrees).

[quoted text elided]

Re: Managing "capabilities" for security
On 05/11/13 00:06, Don Y wrote:
[quoted text elided]

Fair enough.  This is not a topic I know a lot about - I was just trying
to give you some pointers that /might/ have helped, since no one else
had replied to your post.

Since you mention "file permission bits" vs. ACL's, I'd like you to be
/very/ sure that you actually /need/ the complications of the system you
are proposing.

I've administered file servers with ACL's, and file servers with just
Linux permission bits and groups.  There is no doubt that ACL's give
finer control - but I also have no doubt that with careful use of Linux
group membership, group ownership of files and directories, and group
"sticky" bits on directories, it is vastly easier to get good security
where the right people have the right access to the files.  Groups and
permissions are quick and easy to work with, easy to understand, and
easy to check.  With the old ACL-based setup we had before there were
endless battles - and these were often solved by simply making whole
directories read/write for everyone (everyone with a valid user and
password, and only on the local network, of course).  That in turn often
led to battles about not having permission to change the ACL's despite
being an administrator - and thus having to recursively take ownership
of the directories first.

Before anyone starts to tell me how to handle ACL's "correctly", the
point is that when you want to make something secure, having a clear,
logical, obvious system is normally more important than having a very
flexible system with control of the smallest details.  It is better to have a
simple system that can be used correctly and /is/ used correctly, than a
complex system that is used incorrectly because it is too difficult.

And of course, the simple system is much easier to implement correctly,
and test and verify correctly, and has far less chance of unexpected and
unplanned holes.

Maybe you've thought through this already.  But a security idea that
leads to the type of discussion in this group strikes me as one that is
too complex to get 100% right - and if it is not 100% bulletproof, then
it is worthless.



[quoted text elided]

Re: Managing "capabilities" for security
Hi David,

On 11/5/2013 1:20 AM, David Brown wrote:

[quoted text elided]

Remember, this is c.a.e -- chances are, we aren't dealing with "files"
but, rather, specific I/O's, mechanisms, etc.

In a *closed* system, it's (relatively) easy to get "permissions"
right:  if task A has no business talking to the motor driver, then
task A shouldn't contain any code that *talks* to the motor driver!
Verify that this is, indeed, the case -- then release the codebase
to production.

OTOH, in an *open* system, you can't predict what tomorrow's application
will do -- or *try* to do.  How do you ensure it can't muck with things
that it shouldn't?  Typically, that's done by pushing "special" things
into a protection domain (most often, the kernel).  Then, hoping the
application hasn't come up with a clever way to screw this up!

Files have fixed operations.  It's easy to come up with "gates" on
those operations as they are few in number and tend to have static
permissions.  But, when your resources/IO's get to be more esoteric
(which can mean "run-of-the-mill"!), you can end up with lots of
different operations and a desire to separate which agents can invoke
which of them.

With a capabilities-based model, you can delegate who can do what
*dynamically* and with finer precision.  E.g., "you can turn the motor
off but you can't turn it *on*"  (i.e., you can be a monitoring process
that prevents the mechanism from running away... and, I have no fear
that *you* will TELL the mechanism to run away!  even if you fail to
tell it to STOP!)

Even filesystems often want finer-grained control IN A SINGLE FILE!
E.g., parts of passwd(5) should be visible to all processes while
other parts should be *hidden*.  And, even different "versions"
of passwd(5) for certain applications.  (e.g., ~ftp/etc/passwd
vs /etc/passwd vs master password)

If "passwd" is treated as an object in a capabilities based system,
then the capability that each "process" is given can cause the handler
at the other end of that capability to provide the image of passwd that
is most appropriate to that process (instead of exposing one of
three files to that process).
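
As a toy illustration of that (the view names and passwd images below are placeholders, not real formats), the handler -- not the filesystem -- selects which image a caller sees, based on the rights carried by its capability:

```c
#include <string.h>

enum { VIEW_PUBLIC = 1, VIEW_FULL = 2 };

/* One "passwd" object, several views.  The handler picks the image to
   expose from the caller's capability rights, instead of publishing
   separate files for separate audiences. */
const char *passwd_read(int view_right)
{
    switch (view_right) {
    case VIEW_FULL:
        return "root:$6$hash...:0:0:root:/root:/bin/sh";   /* master image */
    case VIEW_PUBLIC:
    default:
        return "root:x:0:0:root:/root:/bin/sh";            /* hash hidden  */
    }
}
```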

[quoted text elided]

The "simple system" is big monolithic kernel and hope everything is
coded correctly (cuz the guy who is implementing the device driver for
foo is operating in the same privileged space as the device driver for
the disk system, scheduler, etc.).  There is no fine-grained permission
possible -- and definitely nothing "expandable" and consistent across
the entire system  (e.g., how would you implement the email_address_t
I mentioned elsewhere with similar "security"?)

[quoted text elided]

See above.  Unplanned holes can affect unrelated subsystems.  An
application (or a subsystem) can't create its own concept of how *its*
objects should be managed EXCEPT in complete isolation.  Each comes
up with their own notion of object, permissions, security, etc. and
hopes the others are somehow compatible (or, remain separate islands)

[quoted text elided]

How do ACLs deal with a user asynchronously opting to change the
permissions on a resource/file?  What does the application do in
that case?  ("undefined behavior"?)  The point of these discussions
is to figure out what makes sense for that sort of situation because
the "users" are applications:

"Gee, you should avoid reading this file now because some other process
is busy writing it.  I'll just arrange for cron to run you 5 minutes
later than him -- and HOPE he's finished by then..."

Better to have each process *expect* to be (temporarily) denied access
to a resource (file) and actively try to recover than to have them
choke when they encounter "/* CAN'T HAPPEN */".  Expect your capability
to be revoked from time to time.  Should you request it again?  Or,
should you blindly retry?  Or...

"Why is my request to move the motor being denied?  That's not supposed
to happen..."


"Hmmm... for some reason, I am not being allowed to move the motor
right now.  How should I react in this EXPECTED situation?"
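
That attitude, as a control-flow sketch (flaky_op() is an invented stand-in for something like a motor-move request, injected as a parameter so the retry logic can be shown by itself):

```c
#define E_OK     0
#define E_NOPRIV 1   /* capability revoked or insufficient */

typedef int (*op_fn)(int *state);

/* Example operation: denies with E_NOPRIV until *state reaches 0. */
int flaky_op(int *state)
{
    if (*state > 0) {
        (*state)--;
        return E_NOPRIV;
    }
    return E_OK;
}

/* Try an operation up to 'attempts' times, treating E_NOPRIV as an
   EXPECTED, possibly transient condition rather than a fatal surprise.
   A real client might re-request the capability or back off between
   attempts; here we just loop. */
int try_with_retry(op_fn op, int *state, int attempts)
{
    for (int i = 0; i < attempts; i++) {
        if (op(state) == E_OK)
            return E_OK;
    }
    return E_NOPRIV;   /* report failure instead of choking */
}
```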

Re: Managing "capabilities" for security
Hi Don,

[quoted text elided]

Revoking always should be asynchronous because it is solely at the
discretion of the giver.  If a capability can be delegated
transitively, the originating authority may neither know nor be able
to communicate directly with all of the current holders of the
capability.

An agent can't simply hand out a copy of an original capability given
to it - it needs to pass on a derived capability that is separate from
but linked to the original.  The derived capability has to be
revocable both independently (by the agent itself) and in conjunction
with revocation of the original capability (by the originating
authority).

How to handle transitive delegation is *the* major issue in designing
a capability system.
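
One common shape for transitive revocation -- sketched here with a toy fixed-size table, not taken from any particular system -- is to have each derived capability record its parent, with liveness requiring every ancestor to be unrevoked:

```c
#define MAXCAPS 16

typedef struct {
    int parent;    /* index of the cap this was derived from, -1 = root */
    int revoked;
} dcap_t;

dcap_t table[MAXCAPS];
int ncaps = 0;

/* Derive a new capability from 'parent'; returns its index. */
int dcap_new(int parent)
{
    table[ncaps].parent = parent;
    table[ncaps].revoked = 0;
    return ncaps++;
}

/* A cap is live only if it and every ancestor are unrevoked, so one
   revocation at any level silently kills the whole subtree -- even
   holders the originator has never met. */
int dcap_live(int c)
{
    for (; c != -1; c = table[c].parent)
        if (table[c].revoked)
            return 0;
    return 1;
}

void dcap_revoke(int c) { table[c].revoked = 1; }
```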

[quoted text elided]

Yes.  A communication channel should not make information observable
to any entity that is not a participant in the communication.

A debugger monitoring a channel, and possibly injecting traffic into
it, is a special case of a "silent" participant.

[quoted text elided]

Just a particular case of transitive delegation.

[quoted text elided]

Nope and Nope 8-)

The problem is that such flexible capabilities effectively become
stores of arbitrary key-value pairs.  That makes them difficult to
manage and difficult to propagate (or migrate) to remote hosts.

I know you are using a (more or less) centralized DBMS - which solves
the migration problem - but you still have the issue of how to
organize the DB so that capabilities are easy to modify, copy or
subset, *and* can be rapidly searched.

Haven't given it a lot of thought, but I don't immediately see a good
answer.  You know I'm not a big fan of non-TEXT extensible fields
because (usually) they can't be indexed.  Once you start down the road
of using arbitrary capabilities, like ACLs, very quickly you find that
you have many thousands of triples in use.

[quoted text elided]

It seems like what you want to do is have a communication system that
can open a channel *and* deliver the access capability for it in a
single action.  Once a channel is open, you can communicate other
capabilities directly.

E.g., on the client side, connect() creates both a new connection and
a capability for it, and transmits the capability to the accept() on
the server side.  Depending on your programming model, both may also
return the capability to their respective callers - e.g., for
delegation.

Within a host you can start a child task and transfer capabilities
automagically during the fork().  You can do similar using a process
server on a remote host, communicating the capabilities to the server
and letting it fork and install them before exec()ing the new process.
[Naturally, after providing proof of _your_ capability to remotely
launch the process.]

[quoted text elided]

ISTM that a system which rejects this particular use case is not
realistic.  If you can't start a task which isn't running, or open
communication with a task that is, then the system is useless.

[quoted text elided]


Re: Managing "capabilities" for security
Hi George,

[Apologies if more typos than usual -- pen interface :( ]

On 11/4/2013 1:20 AM, George Neuner wrote:
[quoted text elided]

Yes, of course [re: giver]. What I was trying to draw attention to is  
how The "holder" is effectively "made aware" of his loss of some/all of  
the "authorizations" that he had previously.  (more below)

[quoted text elided]

Unlike, (e.g.) Amoeba (apologies if I am misremembering references/
implementations, here) -- where a capability is just a (magic) number
(which can obviously be copied FREELY and indefinitely) -- I implement
capabilities as kernel based/maintained objects.  What a task sees as a
"capability" is actually a *handle* to a capability (that the kernel
maintains).

[E.g., Amoeba's implementation doesn't require the kernel to be aware
of where a capability "is", currently.  It is only aware when operations
*on* the capability need to be performed (reminder:  Amoeba makes the
capabilities UNFORGEABLE and little more).]

As my kernel knows where every capability is located, at the moment,
it can deliver an asynchronous notification ("signal") to the holder
of the capability (holder == task; so no guarantee which of the
task's threads will "see" that notification -- unless an exception
handler thread has been nominated).

So, I *can* notify the holder.  Or, wait for him to try to use the
capability and throw an error at that time.
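
A toy sketch of that handle-table indirection (all names invented): because the kernel owns the table, a revocation can both invalidate the slot and report who held it, so the kernel can either signal the holder immediately or simply let the next use fail:

```c
#include <stdint.h>

#define NSLOTS 8

/* holder == 0 means the slot is free; task ids start at 1. */
typedef struct {
    int      holder;
    uint32_t auths;
} kslot_t;

kslot_t ktable[NSLOTS];   /* zero-initialized: all slots free */

/* Grant a capability to 'task'; what the task gets back is only an
   index into this table, never the entry itself. */
int k_grant(int task, uint32_t auths)
{
    for (int i = 0; i < NSLOTS; i++) {
        if (ktable[i].holder == 0) {
            ktable[i].holder = task;
            ktable[i].auths  = auths;
            return i;
        }
    }
    return -1;
}

/* Revoke slot h: invalidate it and return the holder's task id, so the
   kernel knows whom to deliver an asynchronous notification to. */
int k_revoke(int h)
{
    int victim = ktable[h].holder;
    ktable[h].holder = 0;
    ktable[h].auths  = 0;
    return victim;
}

/* Every use traps into the kernel and is checked against the slot. */
int k_use(int task, int h, uint32_t auth)
{
    return ktable[h].holder == task && (ktable[h].auths & auth) != 0;
}
```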

Of course, either approach can work.  What I'm trying to decide is
the relative merits of each -- on both sides of that notification.

[quoted text elided]

As holders only have handles to capabilities,  I actually *can* "move"
the original capability.  If a holder elects to create a *new*
capability, imbuing it with some formal subset of the "authorizations"
that are present in the original capability, it can (potentially!)
do so.  [that's one of my conceived restrictions...]

[quoted text elided]

<grin>  Hence the reason for my questions!  They (below) pertain to the
sorts of operations you can perform *on* the capability.

E.g., the Amoeba approach allows the holder to freely copy and  
distribute *a* capability that it holds.  It has to trust EVERY
recipient of those capability-copies.  And, implicitly, any
one that *they* may conceivably pass yet other copies to!

[in my case, I can implement mechanisms that allow you to create copies
*or* restrict you to passing *the* original along.  I.e., the valet can
drive my car while he has the keys -- but, once he gives them to a
*thief*, he loses that ability!]

[quoted text elided]

In the example I cited, should I have to trust D as much as I do B?
If I allow it to create a copy (for its own use) of the capability,
then I do.  OTOH, if I create a "pass all or nothing" attribute
for the capability, then the only way that D can use the capability is
by denying it to B.

(see where I'm going, here?)

[quoted text elided]

But, IMO, the ability to do this affects the security that the
capability system provides.  If you can always duplicate a
capability (or portions thereof), then you have to always trust
everyone you give it to.

While I may trust you not to abuse a capability, do I also know that
you can't be tricked (bug) into passing a copy of that capability
on to an adversary?

[quoted text elided]

I can move them in much the same way that I can move all my
"objects" -- kernel(i) and kernel(j) conspire to track any
particular capability (cuz it is implemented *in* the kernel)

It's just not *efficient* to do so (vs. the "capability is a
magic integer" approach of e.g., Amoeba.

[From our discussions, you should already know that I'm wasting
LOTS of resources on mechanisms that I think will enhance the
development/execution environment]

[quoted text elided]

Ah, no.  This is at a different level.  I haven't even begun to sort
out how to make capabilities "persistent" (i.e., so they could be
"stored" in the RDMS).  Instead, they are transient objects that
only exist while the owner/holder is alive.

E.g., a file handle is not, in itself, persistent -- even if
the file that it *references* is (or *isn't*!)

*Accessing* the RDBMS can be controlled by a capability, though.

[quoted text elided]

Exactly.  Though mine don't have a textual way of being expressed
(as an ACL would).

[quoted text elided]

Yes, in a sense.

Each capability is actually an (object,authorizations) tuple (I
*really* need a better word than "authorizations" :< ).  Think of
a file handle.  It allows you (the "Holder") to  access some
particular file (the name of which is not deducible from the
handle itself!) in some particular way.  If thread A creates a
handle and gives it to thread B, B must *implicitly* know what
"authorizations" are embedded in that handle (i.e. if the
file was opened for read-only access, thread B can't decide to
*write* to it!).

In my case, "somehow" (easy once you think about it), a task is
given a reference (handle) to an object (on which it would like to
operate *or* on which it is being *asked* to operate!) along with
the authorizations as to *how* it can operate on that object.

In a crude sense, it is being given a reference to an "object"
and a list of the verbs/methods that it can invoke for that
object (it's actually much finer-grained than this but this
should give you an idea).

For a traditional "file", those authorizations might include read,
write, seek, etc.  For a motor, they might include CW, CCW, etc.

[quoted text elided]

Yes.  Though connect() implies that you have previously been given the
authorization to connect to that object!  :>  I.e., if you should have
no need to talk to the file system, then you can't even *connect*
to that service!

[quoted text elided]

Being careful about terms, here...

In my case, I can create more *threads* within the current *task*
(task == container of resources -- threads being execution resources).
Any thread can use any of the resources (e.g., capabilities) that are
present in that *task*.

Tasks can also create additional tasks (the UNIX "process" model).
In *those* cases, the spawned task does not automatically inherit the
resources of the "parent" (whereas a thread does!).  Instead, the
resources that it should have are explicitly passed to it.

So, any threads created by threads *in* a particular task have the same
resources that all threads in that task have.  If you want to restrict
(or change in any way) the resources available to an "execution unit",
you have to create a different task.

[quoted text elided]

Yes.  Though the process server is actually the kernel running thereon.

[quoted text elided]

The fact that I *did* launch the process is preconditioned on *having*
the capability to do so!

[quoted text elided]

That's just a bootstrap problem.  I.e., *something* starts "init".
Once the first "process" (task) is running, it creates the remaining
tasks in the system.

As each task is an "object", there is an implicit capability associated
with it!  Using this "handle", that first process can push an
executable image into the task, give it capabilities for other
resources, etc.

And, each of these initial tasks (processes) can create their own
resources with associated capabilities that can be handed out to
still other tasks, etc.

I.e., "init" builds the initial set of servers as part of the
application itself.  Any service that wants tight rein over its clients
can hand out capabilities that it refuses to let be propagated.
That doesn't prevent the recipients of those capabilities from ACTING
as AGENTS for the service!  So, *they* can create their own
INDEPENDENT capabilities (i.e., not subsets of the original capabilities
granted to themselves).

So, "you" might not be able to talk to a root DNS server. But,
you can talk to *me* (using an API of my own creation) and *I*
can elect to contact the root DNS servers if I think that is

(inefficient because I am now in the middle.  But, now allows me
to actively control access to that other resource, cache results,

Morning tea...

Re: Managing "capabilities" for security
[Apologies for following up on my own post...]

On 11/4/2013 8:23 AM, Don Y wrote:

[quoted text elided]

To put this in more concrete terms:

I can create an object called an "email_address".

Unlike the intuitive way you would think of an email address (i.e., a
set of characters with an '@' in the middle), the object manager for
email addresses (email_address_handler) can make these *opaque* to
applications.  I.e., an application can *send* something to an
email address but can't tell where it is going!  (i.e., you can't
harvest the email addresses in my address book and pass them on
to your friend The Nigerian Prince!).

So, I can give you a capability that allows you to send messages to
a particular recipient:
    email_address_t recipient;
without knowing who that is.

I might opt to give you the authority to do this exactly once
(if, for example, the email_address_handler allowed me to create a
number_of_uses argument for a particular capability instance).

    "Hey, you're just supposed to be notifying these people of
    my upcoming party.  Why do you need to be able to send more than
    *one* email to each recipient?"

If I prohibit you from passing this along, then I am, in effect,
forcing *you* to do the work (send out the invitations).

If I allow you to pass it along BUT NOT DUPLICATE IT, then you
can delegate a third party to perform this activity on your
behalf -- but you also forfeit the ability to use that resource
thereafter.  I.e., you can ask your friend The Nigerian Prince
to send out the invitations for you (of course, he won't be able
to "see" the actual email addresses any more than *you* would).
He might opt to send them all some sort of junk mail.  But,
he'll only be able to send one message to each recipient
(because that was the constraint placed on the capability
when it was originally granted to *you*!)

The question then becomes:  do you create some set of "operations"
that can be applied to all capabilities; or, allow each capability to
have a specific handler (so *it* operates on the capabilities
while the capabilities operate on the objects they reference)?

[another indirection]

Re: Managing "capabilities" for security
Hi Don,

[quoted text elided]

In all the capability based systems I am aware of, the capability
"ticket" has to be presented for *every* operation involving a
protected resource: not just when "opening" the resource [whatever
that happens to mean].

"No tickey, no laundry."  Other than being told explicitly, the ticket
holder finds out his capability has been revoked when some operation
involving the protected resource fails.

[quoted text elided]

That's perfectly reasonable.

Amoeba chose to place permissions directly into the user-space
"ticket" because its set of permissions largely was predefined [there
were some user definable bits available but most were reserved.]

When the scope of "permissions" is more or less arbitrary, you really
do need some kind of server implementation [minimally] maintaining a
key-value store DB.  But you still can make use of cryptographic
signing to make tickets that identify the authorized user.
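
An Amoeba-flavored ticket with a server-computed check field might be sketched as below. The mix() function is a toy stand-in and is NOT cryptographically secure; a real implementation would use an HMAC with a properly managed key:

```c
#include <stdint.h>

typedef struct {
    uint32_t object;
    uint32_t rights;
    uint64_t check;   /* computed by the server from a secret it keeps */
} ticket_t;

/* Toy FNV-style mixer -- illustrative only, NOT a real MAC. */
static uint64_t mix(uint32_t object, uint32_t rights, uint64_t secret)
{
    uint64_t h = secret ^ 0xcbf29ce484222325ULL;
    h = (h ^ object) * 0x100000001b3ULL;
    h = (h ^ rights) * 0x100000001b3ULL;
    return h;
}

ticket_t ticket_issue(uint32_t object, uint32_t rights, uint64_t secret)
{
    ticket_t t = { object, rights, mix(object, rights, secret) };
    return t;
}

/* The server re-derives the check field on presentation; a client that
   edits its own rights bits can no longer produce a matching check. */
int ticket_valid(ticket_t t, uint64_t secret)
{
    return t.check == mix(t.object, t.rights, secret);
}
```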

[quoted text elided]

Yes and no. Both kernel and user space capabilities existed in Amoeba.

Amoeba took the position that each service was responsible for
administering its own capabilities.  That included kernel services
such as starting new processes, creating new ports, mapping process
address space, etc.

In Amoeba every service - filesystem, network, etc. - either was a
resource owner itself or was a managing agent having delegated access
granting authority.  

E.g., the filesystem service didn't "own" the files it managed (human
users did), but it owned the means to access the files.  So the
filesystem was an agent with delegated authority to grant access to
files based on owner/creator supplied rules [which in the case of
Amoeba was simply Unix group membership].

[quoted text elided]

Recall that Amoeba was built around a pretty straightforward delegate
and trust chain model needed for the distributed filesystem.

Much more complex scenarios involving agented agents, subcontractors,
etc. and arbitrary degree trust chains technically were possible, but
the administration of them was left as an exercise.

[quoted text elided]

Yes.  However revoking a master capability must also revoke any other
capabilities derived from it [even if located on another host].  If
you (the user) suddenly decide to make a file read-only, any existing
ticket granting write permission for that file, anywhere in the
system, has to be revoked.  

Of course, that could be done lazily when the ticket eventually is
presented for use ... however, if you (the user) again make the file
writable, is it still the same file?  Should the old tickets be
honored if presented or must a new ticket be obtained?

These are things you eventually will have to think about for a
distributed capability system.

[quoted text elided]

Yes.  But note that Amoeba also permitted creating new distinct
capabilities - having the same or reduced permissions - that were
separately revocable.  That was part of the agent support.

I.e. the decision of how to extend the trust chain was left to the
agent.

[quoted text elided]

Not really.  A debugger isn't necessarily bound by the same rules as
is a normal application.

Turning from debuggers to a more generic discussion of "pipes thru
filters" applications, then the scenario is only a problem if you
permit anonymous "bearer" tickets.  

Consider that a ticket may incorporate the identity of the authorized
process (see below), and that the system can positively identify the
process presenting a ticket for service [at least within the system].
Under these conditions, a ticket might be "stolen", but it can't be
used unless the thief also can successfully impersonate the authorized
process.

You can uniquely identify programs by cryptographically hashing the
executable, particular instances of running programs by host and
process ids, and also user/group ids, etc.  These can be combined to
create tickets that identify both the service granting access and the
exact client (or clients) authorized by the ticket.
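
Sketching that combination (again with a toy mix function, not a real MAC): the client's established identity is baked into the check field, so a stolen "name" ticket fails unless the presenter's identity matches:

```c
#include <stdint.h>

typedef struct {
    uint32_t object;
    uint32_t rights;
    uint64_t client;   /* identity the ticket is bound to, e.g. a hash
                          of executable + host/process ids             */
    uint64_t check;
} nticket_t;

/* Toy mixer, illustrative only -- a real system would use an HMAC. */
static uint64_t nmix(uint32_t object, uint32_t rights, uint64_t client,
                     uint64_t secret)
{
    uint64_t h = secret ^ 0xcbf29ce484222325ULL;
    h = (h ^ object) * 0x100000001b3ULL;
    h = (h ^ rights) * 0x100000001b3ULL;
    h = (h ^ client) * 0x100000001b3ULL;
    return h;
}

nticket_t nticket_issue(uint32_t object, uint32_t rights, uint64_t client,
                        uint64_t secret)
{
    nticket_t t = { object, rights, client,
                    nmix(object, rights, client, secret) };
    return t;
}

/* 'presenter' is the identity the system itself established for the
   caller; it must match the identity baked into the ticket. */
int nticket_valid(nticket_t t, uint64_t presenter, uint64_t secret)
{
    return t.client == presenter &&
           t.check == nmix(t.object, t.rights, t.client, secret);
}
```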

Actual tickets (and their associated permissions) can be stored
securely as in your model; they don't need to be user-space entities.
But for a multihost system the user space "handle" has to encode host
as well as ticket selector.

[quoted text elided]

I think I've shown that you don't.

You can have both "bearer" tickets (useable by anyone) and "name"
tickets (limited to particular users) together in the same system. The
only limitation is to what extent you can reliably determine the
identity of users - maybe only imperfectly in an open system, maybe
perfectly in a closed one.

[quoted text elided]

It, at least, implies that you have a capability to use the relevant
communication service ... the endpoint service is free to reject your
connection attempt [always, but particularly if your identity can be
established and it knows a priori to whom it should respond - you can
look at it from either direction (or both)].

Bootstrap problem.  There is a basic set of capabilities that must be
given to every process, a somewhat larger set of capabilities that
must be given to most processes, and an even larger set that must be
given to network aware processes.

Quoted text here. Click to load it

You can "spawn" tasks, "fork" threads, "launch" processes, "poke"
servers, etc. ad nauseam.  The actual entities and terms involved
don't matter so much as the programming model.


Re: Managing "capabilities" for security
Hi George,

[snips throughout for the sole purpose of trimming message length]

Quoted text here. Click to load it

Yes.  In my case, the "capability" (I call them Handles -- for reasons
that should become apparent) also indicates the object in question.
So, the "authorizations" come along with the "reference".

Quoted text here. Click to load it

Exactly.  Though the holder can defer learning this (indefinitely),
sooner or later he *will* learn.  Presumably, you will code to
account for return(NO_PRIVILEGE) so why not just let that *existing*
coding handle the revocation case?  If you need to know sooner, I can
just as easily send you an asynchronous notification (signal) after
the fact as I could send you an asynchronous notification *before*!

(Yeah, it's nice to know the power is GOING to fail... but, you have to
be able to deal with it HAVING FAILED, regardless!)

(It just seems like giving advanced warning means MORE coding.   And,
a false sense of security:  "Hey!  You didn't TELL me that you were
going to do that!"  "Um, yes I did. Perhaps the message just hasn't
been delivered, yet...")

Quoted text here. Click to load it

Amoeba's "ticket" is far more efficient than my approach.  It can be
copied, moved, etc. "for the cost of a long long" (IIRC).  In my
case, a trap to the kernel is required for each operation on a
"Handle" -- because it's a kernel structure that is being manipulated
(or referenced).

I can still give user-land services the final say in what a Handle
*means* (along with the "authorities" that it conveys to its bearer).
But, you have to go *through* the kernel to get back to userland.

A subtle difference:  if "task" (again, forgetting lexicon differences)
A decides to manipulate object H backed by service B, in Amoeba's case,
B does all the work for each attempt A makes.  EVEN IF THE ATTEMPT IS
DISALLOWED by H's authorizations.  B's resources are consumed even
though A has no authority to use B's object (H)!

If A is an Adversary, then B is brought to its knees by A's hostile
actions.  There is nothing B can do to prevent A from continuously
trying to use object H!  And it's all done on B's dime!

In my case, if A tries to use one of B's resources (H), it first must
truly *be* one of B's resources (not just a long long that A *claims*
is managed by B).  If not, the kernel disallows the transaction.

If H truly *is* backed ("handled") by B, then the kernel allows the
transaction -- calling on B to enforce any finer grained authorities
(besides "access").  I.e., B knows which authorities are available
*in* H and can verify that the action requested is one of those allowed.

Finally, if A persists in being a pain in the ass (Adversarial DoS
behavior), B can tell the kernel to revoke his capabilities.  And,
thereafter, A can't even *talk* to B!  Any attempts happen on
*A's* dime!
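The difference can be seen in a toy model (Python; Kernel, Handler, and the return codes are invented names) where the kernel screens Handles before the Handler ever runs:

```python
# Toy model of the kernel-mediated Handle scheme described above: the
# kernel first checks that the Handle really is backed by the claimed
# service; only then does the service apply its finer-grained
# authorizations -- so rejected attempts never cost B anything.

class Handler:
    def __init__(self):
        self.auths = {}                 # handle_id -> set of allowed ops
        self.work_done = 0
    def invoke(self, handle_id, op):
        self.work_done += 1             # B's dime: spent only if kernel let it through
        return op in self.auths.get(handle_id, set())

class Kernel:
    def __init__(self):
        self.table = {}                 # handle_id -> backing Handler
    def grant(self, handle_id, handler, ops):
        self.table[handle_id] = handler
        handler.auths[handle_id] = set(ops)
    def revoke(self, handle_id):
        self.table.pop(handle_id, None) # thereafter A can't even *talk* to B
    def call(self, handle_id, claimed_handler, op):
        backing = self.table.get(handle_id)
        if backing is not claimed_handler:
            return "EPERM"              # disallowed on A's dime; B never runs
        return backing.invoke(handle_id, op)
```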

Quoted text here. Click to load it

Exactly.  Every entity for me is an Object.  Every Object has a
Handler.  Every reference to an Object includes a set of
"authorizations" that apply to *that* reference and are granted
to the "Holder" of that "Handle".

Quoted text here. Click to load it

In my case, each file (currently being referenced) is done so by the
use of a Handle.  There can be multiple Handles to the same "physical"
file.  These can be Held by multiple tasks -- or the same task!
Operations performed on that file are done through a specific Handle
and must meet the authorities associated with that Handle (i.e., you
might hold write access to a particular file but if the Handle that
you use to access it doesn't include that authorization, then your
write attempt will be disallowed).

The File Handler (there may be different ones for different types of
files) is responsible for "backing" (handling) the File Objects.
When you want to read a file's (referenced by a particular Handle)
contents, the File Handler for that file provides the data to you
(possibly by accessing different services associated with the various
media supported in your system).

So, .../timeofday could actually be a "file" that gets handled by a
service that returns the current time-of-day (i.e., it isn't a file
in the sense of other "storage" files).  Having write access to
that Handle would effectively allow you to set the time-of-day!

Furthermore, attempting to set the time to "HH34kdiss" can throw a
"write error" (for obvious reasons).

(File systems are bad examples because they are so commonly used to
implement namespaces and not just "files")

Quoted text here. Click to load it

This means "something" must track history/relationships.  It also says
nothing about *when* the revocation takes place (effectively) and when
notification of that event occurs.

I.e., in Amoeba's case, the kernel never knows who is holding which
(copies!) of a particular ticket (derived from some other ticket, etc.).
So, there is no way for it to know whom to notify AT THE TIME OF
REVOCATION.  Instead, it has to rely on the Holder(s) noticing that
fact when they *eventually* try to use their capabilities.

And, you are never sure when every ticket has been "discovered" to be
voided -- a task can have a copy of a ticket (you can hold multiple
copies of any ticket!) that he just hasn't got around to trying!

Sort of like finding a bunch of keys in a desk drawer and not discarding
them because you're not quite sure you *want* to discard them (maybe
they still FIT something!)

Quoted text here. Click to load it

Exactly.  You need to force "issuers" to go back to the well to create
"new" tickets.  And, this process must implicitly randomize or serialize
the identifiers embedded in the tickets to prevent reuse.  If you
only allow *downgrading* a capability, then any lingering tickets
are safe from being reused as "full fledged" tickets once they have been
downgraded/revoked *if* new ones always have new ID's!
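A sketch of that downgrade-only discipline (illustrative Python; the id scheme is just a serial counter standing in for "randomize or serialize"):

```python
# Downgrade-only derivation with fresh ids: issuers must "go back to
# the well", a derived ticket gets a brand-new id, and it can only
# carry a subset of its parent's authorizations -- so lingering old
# tickets can never be reused as full-fledged ones.
import itertools

_next_id = itertools.count(1)   # serialized ids; never reused
_live = {}                      # id -> frozenset of authorizations

def mint(perms):
    tid = next(_next_id)
    _live[tid] = frozenset(perms)
    return tid

def derive(parent_id, wanted):
    if parent_id not in _live:
        raise PermissionError("parent revoked")
    # downgrading only: can never exceed the parent's authorizations
    return mint(frozenset(wanted) & _live[parent_id])

def revoke(tid):
    _live.pop(tid, None)

def allows(tid, op):
    return op in _live.get(tid, frozenset())
```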

Quoted text here. Click to load it

In my case, kernels are the only things that *hold* capabilities.
So, all kernels can be notified that a particular capability has been
revoked and they all *are* revoked.  Just like if your kernel
chooses to delete a file descriptor (remembering that it is now
a zombie), any future references by you (the task) to that fd can
throw an error (assuming you ignore the signal sent to notify you
that it has been destroyed).

Quoted text here. Click to load it

Yes.  My "factory" publishes Handles for key services that tasks may
want to avail themselves of.  These are accessed by a single "Service
Locator" Handle that is given to each task (task == process == resource
container) as the task is created.  [Conceivably, the Handle for this
service given to Task A can differ from Task B if the authorizations
between A and B are to be different!].

Tasks locate the services that they want using this Service Locator.
It provides a generic Handle that allows the service in question to be
contacted (i.e., this is all part of the bootstrap of the initial
access to a service).

The task can then contact the Handler behind that Handle -- i.e., the
service in question -- and make whatever requests it is authorized
to make (based on its Handle).

More importantly, the creating task can do all of this for the "child",
cramming the appropriate Handles for the Objects (incl. Services) that
the child will need into *its* SERVICE LOCATOR handle to effectively
sandbox the child.  I.e., these are the resources you can use and
operate on -- nothing more!
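Roughly (a Python sketch; Locator and spawn are invented names for the Service Locator idea above):

```python
# Sandboxing a child task by constructing its Service Locator
# explicitly, rather than letting it inherit the parent's environment.

class Locator:
    """The only way a task discovers services: a per-task name->Handle map."""
    def __init__(self, entries):
        self._entries = dict(entries)
    def lookup(self, name):
        if name not in self._entries:
            # for this task, the service simply doesn't exist
            raise LookupError(f"{name!r} does not exist for this task")
        return self._entries[name]

def spawn(parent_locator, allowed_names):
    """Create a child locator exposing only the chosen subset of services."""
    return Locator({n: parent_locator.lookup(n) for n in allowed_names})
```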

Quoted text here. Click to load it

You can't "steal" Handles in my system because they are in the kernel.
If you can trick the Holder to GIVE it to you, then it's yours (just like
if you trick me into giving you the keys to my house).

The current Holder of a Handle is implicitly known to the kernel
(it's in A's resource container so A Holds it!)

If I were to tag Handles with "rightful owners", then proxies would
be more apparent.  But, how do you validate a proxy's request for a
Handle on behalf of another?  ("Please give me Bob's door keys...")

Quoted text here. Click to load it

That information need only be made known to the (local) kernel.
Any attempt to use the resource referenced by the Handle goes
through the kernel, so *it* is the only agency that needs this
information.

It also means it is easier for a service (handler) to move, "physically"
as only the kernels holding references to the objects that a service
backs need be notified.  And, the *tasks* holding them can remain
ignorant of a service's physical location!

So, I can bring a spare processor on-line to handle times of heavy load
and I don't have to run around telling all existing clients that the
service has been migrated to that new processor.  Similarly, if the
load decreases, I can migrate that service back to a less heavily
loaded processor and power down the surplus one.

Quoted text here. Click to load it

Again, the "name" is always implicit in my case.  Just like *your*
stdin is not *my* stdin.  If you want to be able to have a proxy
*use* "your" stdin (presumably on your behalf), *I* require that the
proxy *hold* that Handle.  *You* had to give it to him.  But, I
don't keep track of where it came from (what happens if he wants
someone else to act as a proxy for him?  ad infinitum?)

Quoted text here. Click to load it

In my case, if you haven't got a Handle for a service, you can't use
it.  Having a Handle means you can *connect* to it -- long enough
for *it* to decide if what you are asking of it is consistent with
your "authorizations".

Quoted text here. Click to load it

See above.

Yes.  I was trying to draw attention to the fact that people often
think of "processes" in a legacy UNIX context:  one thread in one
resource container, and new processes inherit their parent's
environment.

In my case, only threads share resources implicitly.  Tasks need to
have their environments (resource sets) explicitly created.  You don't
just "inherit" whatever your creator happened to have.


Re: Managing "capabilities" for security
Hi Don,

Quoted text here. Click to load it

In the original version yes ... later they went to a 256-bit ticket to
include more end-point information and better crypto-signing.

Quoted text here. Click to load it

In your case, kernel resources are consumed.  6 of one ...

And unless you can prevent A from even connecting to B there will be
"wasted" effort on B's part anyway.

I may be misunderstanding, but ISTM that you're trying to pack too
much into the meaning of capabilities [or possibly too much stock into
prior authorization].

Regardless of how capabilities are implemented (user vs kernel), every
system I have read about would divide the credentials and
authorizations involved in this problem among multiple capabilities:

  - X(H) is a legal operation on H
  - B administers H
  - A can perform X(H)
  - A can connect to B
  - B can perform X(H) as a proxy
  - B can perform X(H) as proxy for A


It seems as if you want to go straight to the final one - but the
question is: how do you get there?

Who grants to A that final capability that implies all the others?  To
get that capability presumes that A can talk to B (or some other
granting authority) in the first place ... which you seem to want to
prevent.

Obviously, B can tell the kernel that B administers H ... but how does
the kernel know what A wants with B?  How can A try to access H
directly?  "URN: A doesn't know about B."  Ok, but then can B act as a
proxy for anyone, or just for "authorized" users?  Who decides A is
authorized for H?  B?  How does B (or anyone else) know A wants access
to H if A can't even ask?

Amoeba and others solve the problem by letting B administrate.  A
connects to B, asks for access to H. A can present a ticket for H if
it has one, or B can issue a ticket to A if A is allowed but doesn't
have one.
[Amoeba servers have a public access API to which anyone can connect to
ask for a ticket granting specific access to a managed object.  After
first getting the ticket, they can connect again to actually perform the
allowed operations.  Getting access is then a 2-step process.]
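That 2-step flow might look like this (Python sketch; the class and method names are made up for illustration, not Amoeba's actual interface):

```python
# Two-step access: a public endpoint anyone may call to request a
# ticket, then a second endpoint where the ticket authorizes the
# actual operations on the managed object.

class AmoebaStyleServer:
    def __init__(self, acl):
        self.acl = acl          # user -> set of ops allowed on the object
        self.issued = set()     # tickets this server has handed out
    def request_ticket(self, user, ops):
        """Step 1: public API -- grants only what the ACL permits."""
        granted = set(ops) & self.acl.get(user, set())
        ticket = (user, frozenset(granted))
        self.issued.add(ticket)
        return ticket
    def perform(self, ticket, op):
        """Step 2: the ticket, not the user, authorizes the operation."""
        return ticket in self.issued and op in ticket[1]
```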

None of this requires free-roaming user-space capabilities ... it all
can be done with handles referencing secure capabilities kept by the
kernel or another credential server (Kerberos model).

Quoted text here. Click to load it

How does the kernel know H belongs to B?  
How does A know to ask for H in the first place?

Quoted text here. Click to load it

What "transaction"?  The set of possible objects and the actions that
might need to be performed on them both are unbounded.  

A generic "do-it" kernel API that can evaluate every possible action
on any object is a major bottleneck and a PITA to work with.  Even if
the high level programmer has a sweet wrapper API, the low level
programmer has to deal with absolutely anything that can be pushed
through the nonspecific interface.

For decades, Unix has been moving toward more verbose APIs and away
from trying to cram everything into ioctl().  [How many options do
sockets have now?  And how many different parameter blocks?]  

Linux, OTOH, went back-ass-wards with its new driver model in which
every operation is performed by reading/writing some special file.

Quoted text here. Click to load it

A common directory service is fine, but I'm not particularly a fan of
uniform "file" interfaces.  I rather like the idea of being able to
ask an object (or its managing proxy) what functions are available.

Unfortunately, doing this generically is a PITA (so no one does it).
If you are familiar with COM or Corba, it amounts to the server
returning an IDL specification, and the program [somehow] being able
to interpret/use the IDL spec to make specific requests.

Quoted text here. Click to load it

Yes.  However it is necessary.  If you no longer trust Q, then, by
transitivity, you no longer trust anyone Q may have delegated to.

Quoted text here. Click to load it

Yes.  But as you said to someone else, every program must deal with
the possibility of permission being denied.  Under those
circumstances, notification can be deferred until attempted use.

System-wide synchronous revocation is impractical, but revocation can
be done asynchronously if master capabilities are versioned and
derived capabilities indicate which version of the master was in force
when they were issued.

It suffices for the owner/manager to be able to say "all capabilities
for H  [or better, X(H)] issued prior to CurVer(H) are no good".

It also can be done with time stamping, but that presupposes a system
wide synchronized notion of time.  In practice, versioning is simpler.
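Versioned revocation is only a few lines (Python sketch; names invented):

```python
# Each derived capability records the master's version at issue time;
# bumping the version invalidates everything issued before it, with no
# synchronous system-wide broadcast required.

class Master:
    def __init__(self):
        self.version = 1
    def issue(self, perms):
        return {"perms": set(perms), "issued_at": self.version}
    def revoke_all_outstanding(self):
        # "all capabilities issued prior to CurVer(H) are no good"
        self.version += 1
    def valid(self, cap):
        return cap["issued_at"] >= self.version
```

Checking validity is a lazy comparison at time of use, which is exactly why no synchronized clock is needed.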

Quoted text here. Click to load it

So?  In your system, host kernels exchange capabilities and proxy for
one another.  How are you going to notify a host that's powered down?

Quoted text here. Click to load it

The analogy is semi-flawed:  capabilities shouldn't be thought of as
student key cards that open some subset of the doors on campus.

Properly a capability opens only one lock [i.e. addresses one object].
A rejected capability is known to be useless, so there's no point to
keeping it.

The "one lock" principle is applicable to replicated services: every
instance of a particular service should answer to the same set of
capabilities.

Obviously a capability system *can* provide key card functionality,
but you need to look at the situation in the opposite way: i.e. the
student's key card doesn't open a group of locks, but rather a group
of locks share capability to admit the card.

Semantics ... but important semantics.
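In code, the inversion is just a question of which side holds the set (Python sketch; names invented):

```python
# The key card doesn't "open a group of locks"; each lock holds the
# set of cards it has the capability to admit.

class Lock:
    def __init__(self):
        self.admitted = set()      # capabilities this lock will honor
    def admit(self, card_id):
        self.admitted.add(card_id)
    def opens_for(self, card_id):
        return card_id in self.admitted
```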

Quoted text here. Click to load it

But hosts may be offline: powered down or network partitioned. How
long do you keep the "expiration" of a capability?  That just clutters
up your store.

At some point, you have to accept that a remote host may try to use a
capability the resource's host no longer honors.

Quoted text here. Click to load it

But who decides what permissions A and B have wrt the service?

Quoted text here. Click to load it

That's a nice feature.  Amoeba didn't have this, but other capability
systems did.

Quoted text here. Click to load it

Again, this is a scenario of replicated service: local proxies should
be considered an instance of the remote service.  The user's
capability to access the service lets it access the proxy.  The proxy
itself should have a separate capability to access the remote service
so that the chain of trust remains valid.

Quoted text here. Click to load it


Re: Managing "capabilities" for security
Hi George,

[eliding a lot for fear of hitting upper message length limit]

On 11/7/2013 2:27 PM, George Neuner wrote:
Quoted text here. Click to load it

OK.  But that just changes the size of the copy.  It still allows you
to create as many copies as you want -- without anyone knowing about
them.  And, makes "a certain bit pattern" effectively the same as
another copy of that capability!

Quoted text here. Click to load it

Yes.  No free lunch.  *Big* limitation but, I'm hoping, one with
worthwhile tradeoffs!

Quoted text here. Click to load it

A user (task) somehow gets a set of "authorizations" to a particular
object (an object may actually be a service, another task/thread, etc.).
This could come from a "parent" task handing the authorizations and
object reference -- together called a Handle, in my lexicon -- to
the task.  Or, from the task requesting that (object,authorization)
from some chain of "directory" services -- ultimately terminating
at a service that is responsible (and capable!) of satisfying this
request.

The user then wants to invoke a method supported by that object.
The Handle (which indicates the object and the authorizations thereof
FOR THIS INSTANCE OF THE HANDLE) is presented to the kernel in an
IPC/RPC request (wrapper for the method to be invoked).

If the user doesn't have the *right* to connect to the "service" that
implements that object, then the RPC fails before it gets started.
I.e., a task can't talk to anything that it doesn't have the *right*
to talk to (this is a more fundamental "permission" than the
"authorizations" implemented in the capability/Handle).

I.e., I can disconnect your Handle from the service that backs it
and you're just a spoiled brat crying in a sandbox.  Nothing you
can do about it -- even if you *had* the authorizations to do
grand and wonderful things!  I've just "unplugged" the cable
tying you to that service.

Once the kernel has decided that you *can* "talk" to that service
(the one that backs the object in question), the IPC/RPC proceeds
(marshall arguments, push the message across the comm media, await
reply, etc.).

On the receiving end, the service sees your request come in.  Knows
the object to which it applies (because of which "wire" it came in on),
identifies the action you want to perform (because of the IPC/RPC
payload) and *decides* if you have been allowed to do that!

It does so by noting what permissions it has *recorded* for your
Handle when it *gave* you that Handle (or, when someone else gave it
to you on its behalf).  If the recorded permissions/authorizations
allow the action that you have requested to proceed, then the service
implements those actions and completes the IPC/RPC accordingly
(possibly returning ERROR if some OTHER, non-permission-related aspect
of the action fails).

As the Handler makes the *final* determination as to whether or not
it wants to *do* whatever you've asked it to do to the referenced
object, it is free to define any number of such actions -- and any
number of arbitrary constraints on them!

E.g., it may let *you* write numbers into a file but someone else
can only write *letters* -- to that same file!  (I have no idea
why this would be important  :> )  So, unlike Amoeba and other
ticket-based systems, the number of "authorizations" isn't defined
by a bitfield *in* the "ticket/key".  Rather, it's whatever the
Handler considers to be important.

"I'll let you send a message to this email_address_t -- but, it has
to be a short one."

"I'll let you send a message to this email_address_t -- but it can't
have any attachments!"

"I'll let you send a message to this email_address_t -- but it can't
contain any profanity"
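One way to model authorizations-as-predicates rather than bitfields (Python sketch; the policies mirror the examples above, all names invented):

```python
# The Handler records, per Handle, an arbitrary predicate that judges
# each request -- not a fixed bit pattern baked into a ticket.

handle_policy = {}      # handle_id -> predicate(payload) -> bool

def grant(handle_id, predicate):
    handle_policy[handle_id] = predicate

def request(handle_id, payload):
    pred = handle_policy.get(handle_id)
    return bool(pred and pred(payload))

# "you may write numbers"; "he may only write letters"
grant("H_num", str.isdigit)
grant("H_alpha", str.isalpha)
# "you may send a message -- but it has to be a short one"
grant("H_mail", lambda msg: len(msg) <= 10)
```

Adding a new kind of authorization tomorrow is just a new predicate; nothing in the kernel changes.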


Much of the implementation is Mach-inspired.  Think of Handles as
port+authorizations.  Handles that don't implicitly have *send*
rights to the receiving port (which is held by the "Handler")
can't reference it (remembering that send rights can be revoked.
I.e., the holding task can be "disconnected" if the Handler decides
he is being abusive, etc.)

Quoted text here. Click to load it

I.e., there is an IDL for X(H)

Quoted text here. Click to load it

... and task B holds the receive rights for the port that references H
(so, any references to H USING THAT HANDLE will end up in B's lap)

Quoted text here. Click to load it

... because "someone" told B to allow those permissions for requests
coming in on the port assigned (given) to A by which it can access
object H

Quoted text here. Click to load it

... because A (still) holds a send right to the port for which B is
the receiver

Quoted text here. Click to load it

... because it is B's job to implement X on H (or, to know how to
get *other* agents to perform portions of that operation)
A doesn't know *how* to "read a file", "turn on a motor", etc.
I.e., the methods associated with H

Quoted text here. Click to load it

As above.

In the Beginning, ...  :>

Quoted text here. Click to load it

Kernel doesn't *care* what A's intentions are!  Doesn't *want* to care!
It wants *H* to determine what can be done -- on H!  Expects "someone"
(task) to implement those actions -- call him B, Q or Elephant.

All kernel does is let these two parties talk to each other.  And,
prevent others from talking that don't have the "right" (deliberate
choice of word) to talk to each other.  The Handler for an object
ultimately implements the permission(s) and actions ("Sorry, I
don't want to do that for you and you can't make me!")

Quoted text here. Click to load it

A has no knowledge of who is "backing" H.  A starts with a *name* for an
object (assuming it isn't trying to *create* a yet-to-be-named object).
It consults a namespace (another Object that has been created for it
and, to which, it has been given access "authorizations" -- of some
degree) that has been created for its use.  Only things that are
referenced in that namespace "exist", as far as A is concerned!

Think of it as chroot($HOME) -- /etc/passwd doesn't exist in that
context unless *you* happen to have coincidentally created your own
"object" and named it such.

The namespace, like any other object, is "backed" (handled) by some
active entity.  When you use the Handle that you have been pre-endowed
with (by init?) to access (and operate on!) that namespace, you can
ask the namespace to resolve a name... however "names" are defined
in your namespace (e.g., names might be simple integers, or 8000
character strings, or binary numbers, or...).  You obviously must have
some prior agreement about how names are defined -- and possibly
*used* -- in that namespace.

That convention may be different for some other namespace -- even if
that other namespace is handled by the same active entity!  All that
matters is the agreed upon syntax of the API -- as evidenced in the
IDL for that "method" -- and the conventions you agree to (when your
code was written).

When you "lookup" a name, the namespace service (for that namespace,
yada yada yada) gives you a Handle to the *object* that is paired
with the name you provided.  Or, "ERROR_NOT_FOUND", etc.

Again, by convention, you know the type of the object that you have
just been granted a "reference" to.  So, you know what methods
you can *potentially* ask to be performed by that "object" on
your behalf.
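A minimal sketch of such a namespace object (Python; the method names and error types are illustrative):

```python
# A per-task namespace binding names to Handles: resolve() returns the
# Handle paired with a name, and create() refuses to bind a "new name"
# over an "old name" -- part of the convention the Holder adheres to.

class Namespace:
    def __init__(self):
        self._bindings = {}
    def create(self, name, handle):
        if name in self._bindings:
            raise FileExistsError("won't bind a new name over an old one")
        self._bindings[name] = handle
    def resolve(self, name):
        try:
            return self._bindings[name]
        except KeyError:
            raise LookupError("ERROR_NOT_FOUND") from None
```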

The Handler that backs that object (referenced in your Handle),
holds the receive right (Mach-speak) for that "port".  (You now
hold a *send* right to it).  When that Handle is used in an IPC/RPC,
the identifier of the particular IPC/RPC "method" of interest,
along with any arguments involved, will be delivered to the
Handler holding that receive right FOR THAT PORT (meaning the
*object* associated with that port/Handle).

If, for example, "H" is the file system, then you might be
asking B to "create a new file" in that filesystem.  Where in
the *real* filesystem it actually resides may be hidden from you.
All you care about is that you will subsequently be able to access it
using the name "foo" -- that you provided (presumably avoiding
any conflict with other names IN YOUR NAMESPACE) -- because the
Handler for your namespace won't let you create a "new name"
that conflicts with an "old name" (part of the convention
that you adhere to when you interact with a Namespace object!).

Presumably, you will put something in this file. Or, perhaps
not.  Maybe your role was just to create it, prevent its
deletion and place it into a *new* namespace that you will
pass onto one of your "offspring" -- so *it* can fill it
with content!

Quoted text here. Click to load it

Who decides that UID "don" can access ~don but not ~george?

Quoted text here. Click to load it

Same thing, here.

Quoted text here. Click to load it

Same sort of approach.  But, the kernel has no explicit knowledge of
what that "specific access" entails.  It just routes messages between
endpoints after ensuring that you have the "right" to use a
particular endpoint.

Quoted text here. Click to load it

User-space capabilities allow the kernel to get out of the loop.
But, it means that the kernel can't *do* anything to control the
proliferation of copies, etc.

Quoted text here. Click to load it

It doesn't.  It just pushes a message down that "pipe" and... Gee, look,
B is suddenly READY to execute, again!  How'd that happen?  :>

Quoted text here. Click to load it

Convention.  How do you know to ask for ~/.profile when a user logs in?
Why not /foo/biguns?

Quoted text here. Click to load it

Yes.  Kernel cares not about *what* A is asking B to do on H.
Does your UNIX box care if you push "ABCD" down a particular
named pipe to some random process on the other end?  All it does
is make the mechanism available to you as an AUTHORIZED USER
of that mechanism.  The fact that ABCD causes the receiving process
to erase every odd byte on /dev/rdsk is no concern of the kernel!

Quoted text here. Click to load it

Handlers and Holders conspire as to what actions they want/need to
support.  If you want to be able to erase every odd byte on the raw
disk device, then *someone* has to write the code to do that!
If you want to ensure this action isn't casually initiated, then
someone has to enforce some "rules" as to who can use it -- and
even *how*/when (e.g., you might have authorization to do this,
but the Handler only lets it happen on Fridays at midnight).
Let the Handler and Holder decide what makes sense to them!

I wanted to keep the kernel out of the "policy" issues and just
let it provide/enforce "mechanism".

Unfortunately, it makes the kernel a bottleneck as all IPC/RPC
has to be authenticated there.  But, it gives me a stranglehold
on "who can do what".  It also gives Handlers the ability to
decide what constitutes abuse of privilege -- *its* privilege!
And, provides far more refined ideas of what those privileges
actually *are*.

E.g., the email example (that I seem to have become obsessed with).
I can have "something" put textual representations of email
addresses in the RDBMS.  Something (else?) can pull them out,
wrap them in a "method" and hand them to "consumers".  Those
consumers can invoke the method (".sendmail") on the object
(address) and never anything more.  If I later want to ensure they
can't continue to use that object (email address), I can revoke
their "authorization" to use that method on that instance of that
object.  (Or, I can "unwire" the Handle completely -- so, any
future operation throws an error)

Quoted text here. Click to load it

My approach is more like pushing untyped data through a function
interface and knowing that the thing on the other end will make
sense of it.  The IDL lets "humans" agree on just what any particular
set of data on a particular interface are LIKELY to mean!

Quoted text here. Click to load it

This is the Inferno way, as well.  In some respects, it's nice.  But,
it's also tedious.

Quoted text here. Click to load it

I don't have a filesystem.  I have *namespaces*.  *Multiple*
namespaces.  Filesystems traditionally bound names (and containers)
to "magnetic domains on a medium".  Then, to "drivers" for particular
media/devices.
In my case, a namespace binds a name to a Handler.  What that Handler
does and how it does it can have absolutely nothing in common with
any other Handler in the system.

The *namespace* "object" has operations that can be performed on
it (methods defined in the IDL that can be applied to any Handle
that references that particular *flavor* of namespace).  E.g.,
resolve(), create(), delete(), etc.  But, it has no sense of
reading/writing *to* the Handles that it manages.

Quoted text here. Click to load it

I don't implement a full-fledged factory.  Rather, I assume you know
everything there is to know about the objects with which you are
interacting.  That you and their Handlers have conspired beforehand
to some set of agreed upon methods (abilities?  trying to avoid
using the word "capabilities").

So, when you decide to revoke the "move motor left at high speed"
authorization from a Handle that previously *had* that authorization,
*you* and the Handler know what this means.  The kernel doesn't care!
If, tomorrow, you decided to implement a "reduce motor operating
current until full stall" authorization, so be it.  Kernel never
changes.  None of the other "tasks" change.  Just users of that IDL
(and, specifically, this new method added to it)

Quoted text here. Click to load it

I'm trying to find a middle ground.

I don't want a Holder to have to "poll" to see if an authorization is
still valid (or, that even the *object* to which that authorization
applied still exists!).

Nor do I want to prenotify before revoking authorizations (or
deleting objects or unwiring connections or...).

I figured the best compromise (noun:  a situation where EVERYONE
gets screwed) is to allow asynchronous revocation but provide
a notification ex post facto.  I.e., if they haven't *yet*
tried to exercise the authorization, they get notified.  If they
are in the process of using it, they may or may not succeed
(depends on how the race is won).  And, if they don't *care*,
they can ignore the notification and wait until they try to
use the authorization, later!

<shrug>  It *seems* like the most bang for the least buck.
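That compromise, sketched (Python; names invented):

```python
# Asynchronous revocation with post-hoc notification: revocation takes
# effect immediately, a notification follows, and a holder who ignores
# it simply discovers NO_PRIVILEGE on next use -- via the error path
# it already had to implement anyway.

class RevocableHandle:
    def __init__(self, notify):
        self.valid = True
        self.notify = notify        # async signal to the holder
    def revoke(self):
        self.valid = False          # takes effect first ...
        self.notify("revoked")      # ... notification follows, after the fact
    def use(self):
        return "OK" if self.valid else "NO_PRIVILEGE"
```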

Quoted text here. Click to load it

No need for versioning.  Handles are unique -- not "reused" (until all
references to it are known to be gone).  As they can't be duplicated
(without the kernel's involvement), it knows when it is safe to reuse a
stale Handle.  (A task can *try* to hold onto it, but the kernel that
serves that task *knows* it doesn't exist anymore.  "File descriptor
27 is no longer attached to a file -- regardless of what you may
think!")

Quoted text here. Click to load it

The tasks running on that host (whose Handles are held *in* that host!)
are dead.  They can't access anything even if they wanted to!

The handles in *other* hosts that reference objects *backed* by tasks
in that host are told that the other end has come unplugged.  So,
all of *those* Handles cease to exist (and they are notified).

If tasks on the down host referenced objects on these "up" hosts, the
Handlers for each of those objects are told that the connection is
broken and they need no longer expect requests on those Handles.

The problem is more one of *recovery* after the fact.  How do you
rebuild these connections?  I currently have no notion of
persistence in the system.  Once it goes down, it reboots from
scratch -- anything in progress is lost (unless the agents doing
the work deliberately elected to create persistent objects from
which they could resume operations)

Quoted text here. Click to load it

Yes.  "Set of keys" implies "set of locks".  If keys can be freely
copied, there is no way to know where every copy resides.  No way
to *notify* the holder that a particular key no longer works:
"The lock has been changed"

Quoted text here. Click to load it

Assumes you have *tried* the Handle and discovered it to be useless.
Or, been notified (see above) that it has been revoked (rendered
useless).

My point was that a set of 64 (or 256) bit values in memory tells
you nothing about whether you should keep them -- or not.  You'd have
to go around "trying your keys" to see which ones are worth keeping.

Much like finding a set of keys in a desk drawer:  you try them on
every lock you can think of.  The ones that work, you set aside.
The ones that don't, you decide if they are worth discarding
(Hmmm... are there any locks I have forgotten to test??)

OTOH, if you don't want to test them (now), the only "safe bet" is
to hold onto them -- just in case!

Quoted text here. Click to load it

The kernel doesn't care about this.  It's up to the Handler for the
objects in question to make his implementation choice.

E.g., two Handles (in the same or different tasks) can map onto the
same object.

A Handle can map onto multiple objects -- if a proxy handling the
Handle acts on your behalf ("The phone only rings in one location.
If you want to be able to call two people, you need two phone
numbers and the ability to dial both/either).

Two file descriptors in different (or same) process can reference
the same file.  If you want to reference *two* files, you need to
have a proxy that knows how to interpret your request(s) for each
file (said proxy having two file descriptors).

Or, do it yourself as two fd's.

Quoted text here. Click to load it

When host comes back up, local Handle doesn't exist.  Memory is empty.
Local kernel has no knowledge of what happened before the lights went
out.

If you are incommunicado for "too long" (whatever that means), others
come to the conclusion that you are powered off.  Anything "wired"
into you is invalidated.  Come back on-line and *claim* you've been
running all this time regardless of how it looks?  "Gee, that's too
bad.  We thought you had moved out and sold all your stuff..."

Quoted text here. Click to load it

How do you decide that task A should be able to turn the motor on
but not task B?  You MAKE THAT DECISION and then you put it in the
code.  Unless the code gets rewritten (or has a bug), B simply never
thinks about talking to the motor.

Quoted text here. Click to load it

I think it is important for things like init -- to be able to go away
(free up its resources AND ITS UTMOST PRIVILEGE LEVELS!)

Quoted text here. Click to load it

Exactly.  A on host 1 doesn't talk to the Handle for B on host 2.
A, instead, talks to a proxy on host 1.  The kernels have conspired
to wire this proxy to another proxy (actually, a part of the remote
kernel) on host 2 that, in turn connects to B.

So, when host 2 dies, the proxy on host 1 sees that (because the kernel
on 1 loses contact with kernel 2 -- anything that is "wired" to that
remote kernel is now notified of the failure).  That in turn is
propagated up to A, et al.

Never instantaneous.  But, anything "in the works" when the host goes
down fails to see a completion code so knows it has been unceremoniously
aborted "in progress".

(see why I think async notifications ex post facto are the only
realistic solutions?)

Now, to see if news server bellyaches about length of this post...


Re: Managing "capabilities" for security
Hi Don,

Quoted text here. Click to load it

Don't know what the limit is, but I've seen messages several thousand
lines long in various groups.  If everyone edits judiciously, ISTM
that it would be hard to get there in any reasonable discussion.  The
ridiculously long messages I have seen often were the result of
repeated top-postings or "me too"ing with no attempts made at editing.

Quoted text here. Click to load it

Yes.  However, the enlarged capability was an improvement over the
original because it carried information on client(s) authorized to use
the capability.

Quoted text here. Click to load it

Server has an addressable port per managed object?  Seems like

Quoted text here. Click to load it

Yes.  As I noted previously, when the set of "authorizations" is
arbitrary, the role of the ticket has to be demoted from
self-contained capability to some kind of capability selector.  But it
doesn't require kernel involvement - it could be done all in
user-space.

Quoted text here. Click to load it


Yes.  However, capabilities can be managed in user-space by the
services themselves - which IMO actually makes more sense if the set
of authorizations they control are wildly different.  All that is
necessary at the kernel level is to validate "port send" permission.

But in any case, we're back to how it is granted  8-)

Quoted text here. Click to load it

Previously you had said that your kernel was able to prevent clients
from making connections on the basis of complex permissions like the
right to "erase every odd byte" of the object.  That's why I asked how
the kernel knows what the client wants.

Now you are saying that the kernel only checks the client's "port
send" authority and leaves more complex decisions to the server.

Which is it?

Quoted text here. Click to load it

See, again here you seem to be saying again that the kernel can make
decisions based on fairly intimate knowledge of the client's
intentions.

Quoted text here. Click to load it

So implementing the IDL gives you capability?  Or just potential? I.e.
you can assume that an imposter task has implemented the IDL for the
service/object it wants to hijack.

Quoted text here. Click to load it

But if you are using "handles" ("indexes", "selectors", whatever) to
represent arbitrary collections of authorities, you're going to run
out of them pretty quickly unless the handle objects are fairly large.

I.e. 4 billion [32-bit handles] seems like a really large number until
you actually start parceling it out: e.g., if "objX,read" is distinct
from "objX,read,append" is distinct from "objY,read,append", etc.  

That's part of the reason Amoeba used wide tickets. [1st version used
80-bits without the crypto-signing field, 128 bits in all.  2nd
version capabilities were 256 bits].


Re: Managing "capabilities" for security
Hi George,

On 11/11/2013 10:20 PM, George Neuner wrote:
Quoted text here. Click to load it

I think NNTP servers are free to impose their own limits.
I've previously bumped up against it and found it annoying
to have to edit my own reply before being allowed to send it...

Quoted text here. Click to load it

So, how are surrogates handled?  E.g., send capability to X
and X wants to delegate some or all of it to Y.  I.e., it can
create a new capability from a subset of its own (which Y can
then do for Z, etc.) but how do you track down all derived
capabilities (or, just not recycle "identifiers" so any stale
copies eventually find their IDs invalid WHEN PRESENTED FOR SERVICE)

Quoted text here. Click to load it

Yes.  But think about how many managed objects you are likely to
have.  E.g., only *open* files need to have handles...

Quoted text here. Click to load it

Yup.  In my case, the Handler provides "policy"... and can decide
whether those authorizations make sense for this instance of
this object.

Kernel provides communications and "Handle-related" (i.e., think
of the Handles as objects in their own right -- not just
REPRESENTATIVES of other objects) operations.

Quoted text here. Click to load it

Correct.  And ensure the "messages" (IDL) destined for each "object"
(Handle/port) get routed to the right Handler for that object.

Quoted text here. Click to load it

Initially, everything is hierarchical.  So, whatever *I* create,
*I* can give to others (e.g., my offspring -- directly or
indirectly).  But, they are free to create *their* own objects
and act as Handlers for them -- and, give them to other actors
that they are made aware of (e.g., via a directory service,
their explicit namespaces, etc.)

Quoted text here. Click to load it

Nothing talks to anything without the kernel's involvement.  If you
don't hold a send right for a port, then you can't *send* to it!
So, you can repeatedly trap to the kernel -- but never get past
that point.  (send rights are not forgeable).

Whatever *backs* the object (the "Handler" behind the "Handle")
decides how to interpret each communication (e.g., IDL).

But, the Kernel acts as Handler for certain object types as
well.

E.g., a Task is a container for resources and Threads.  You
may want to operate *on* a task (e.g., change its priority,
scheduling algorithm, stack allocation, kill it, etc.).  So,
each Task has a (at least one) Handle.  When someone wants
to SUSPEND a task, it takes a Handle for that Task and
passes it to the task_suspend() IDL.  If the caller has
permission to *talk* to that object (i.e., the Task),
the kernel routes the IPC/RPC message to the Handler for
that object -- namely, the kernel itself!  If the permissions
recorded *by* the Handler for that instance of that Handle
include the ability to SUSPEND the task, then the Handler
(i.e., the kernel) suspends the task and returns SUCCESS
to the caller.
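A toy sketch of that two-level check (all names invented; the real
path is an IPC/RPC trap routed by the kernel, not a function call):

```python
# Illustrative only: send_rights, authorizations and the method names
# are made up to mirror the two-level check described above.

send_rights = {("taskA", "h_taskB")}        # who may talk to which Handle
authorizations = {"h_taskB": {"SUSPEND"}}   # per-Handle rights, kept by Handler

def invoke(caller, handle, method):
    # Level 1: the kernel checks only the communication right.
    if (caller, handle) not in send_rights:
        return "NO_SEND_RIGHT"              # message never reaches the Handler
    # Level 2: the Handler (here, the kernel acting as Task handler)
    # checks the authorization recorded for this instance of the Handle.
    if method not in authorizations.get(handle, set()):
        return "NOT_AUTHORIZED"
    return "SUCCESS"

assert invoke("taskA", "h_taskB", "SUSPEND") == "SUCCESS"
assert invoke("taskA", "h_taskB", "KILL") == "NOT_AUTHORIZED"
assert invoke("rogue", "h_taskB", "SUSPEND") == "NO_SEND_RIGHT"
```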

If an object is a page of memory and there is some (wacky)
operation supported on memory pages that allows "every odd byte"
to be erased, then anyone holding a Handle to a memory page
object for which that "authorization" has been granted can
invoke the "erase odd bytes" method on that object.

(The kernel has some involvement with memory though not exclusive)

Quoted text here. Click to load it

Kernel acts as initial gatekeeper.  Implements communication and
transport *mechanism* along with the "port capabilities" -- send
and receive (plus others not discussed here).

To each actor, you're always talking to the kernel -- your Handle
resides *in* the kernel, the communication that it represents
is implemented *by* the kernel, notifications, operations on
those Handles, etc.

Actor has no knowledge of who is backing the object (Handling the
Handle).  To it, everything LOOKS like a kernel interface.

This is different from Amoeba where actors are conscious of the
fact that they are actually talking to other actors.

In Amoeba, you could pass a capability from Task A to Task B
using USMail (or whatever).  The kernel didn't need to be involved!
Or, if it was, it could just provide a *pipe* -- no real
checking going on, there.

Since my Handles are implemented *in* the kernel, the kernel
has to be involved in every communication.  But, this is what
I want -- I don't want Task A to be able to *bother* Task B
unless it has previously been authorized to do so!

And, if Task A turns out to be hostile or goes rogue, then Task
B can revoke Task A's ability to "send" to it and effectively
isolate it.

If Task B only notices this annoying behavior on a couple of
Handles that it provides to Task A, it can disconnect those
Handles (ports) without affecting other Handles that Task A
may currently hold (that are backed by Task B).

I.e., I can implement fine-grained damage control instead of
taking a meat cleaver to Task A.

Quoted text here. Click to load it

IDL is just a collection of bytes that tell the recipient of the
message (the envelope that contains the bytes) what they mean.
You can "say" whatever you want -- no need to go through the
stubs generated by the IDL.

I.e., you can fabricate a message that says "write to file"
and push it to a Handle (port).  If it happens to agree with the
correct form for a "write to file" message *and* the Handle
happens to be backed by a "file Handler" *and* that instance
of that Handle allows write authorization, then you will cause
the file to be written!  The IDL stubs are just convenience to
save you this trouble.

OTOH, if you send that message to a Handle that represents a
*motor*, it won't make sense.  *Or*, can mean something entirely
different for a motor (perhaps it means APPLY BRAKE).  If
you don't have APPLY BRAKE authorization for the motor that
is backed by that Handle, then the IPC/RPC will fail.

If you *do* have authorization to APPLY BRAKE, then the brake will
be applied -- even though you *thought* you were fabricating a
message to cause a "file write" operation!

In reality, this is minimized because you tend not to create your
own messages.  And, message ID's are disjoint.  You would have
to really work hard to create a message that works on an object of
type X while thinking you were dealing with an object of type Y!

[Sorry, sloppy explanation but I think you can imagine what the
machinery looks like.  Bottom line is the content of the message
can't change the things you are "authorized" to do nor the things
on which you are authorized to act!]

Quoted text here. Click to load it

Again, think about the sort of applications and the things that
are big enough AND IN PLAY to require a Handle.

E.g., the email_addr_t example I've enjoyed playing with...
you only need to represent an email_addr_t as a "live object"
(i.e., a Handle backed by a Handler) when it actually *is*
"live".  You can have tens of thousands of email addresses
in your address book (RDBMS) but only those that have been
instantiated for live references/operations need Handles!

Quoted text here. Click to load it

But Amoeba also allowed persistence for capabilities.  So, you
*could* store a capability in the RDBMS alongside each of those
thousands of email addresses!  Or, one for every file on the
disk (bullet server).

But, you don't have thousands of file descriptors (Handles!) in
your code!  You don't fopen(2C) every file in the file system
when your program starts -- "just in case".  Instead, you create
fd's as you happen to need them and the kernel (in most OS's)
keeps track of what *actual* file each pertains to.  When you
close a file, the descriptor ceases to exist (in all practical
terms) and the resources (kernel memory) that were associated
with it can be reused for some *other* file reference.
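The fd analogy, sketched (the lowest-free-slot policy is just the
familiar descriptor convention, assumed here for illustration):

```python
# Sketch of per-task "small integer" Handles, fd-style.  The integer
# is meaningful only inside this one table (i.e., this task's context).
class HandleTable:
    def __init__(self):
        self.slots = {}        # small int -> backing object, per task

    def open(self, obj):
        n = 0
        while n in self.slots: # lowest free small integer, like fds
            n += 1
        self.slots[n] = obj
        return n

    def close(self, n):
        self.slots.pop(n, None)  # slot now free for some *other* object

table = HandleTable()
a = table.open("fileA")
table.close(a)
b = table.open("fileB")
assert a == b                  # same small integer...
assert table.slots[b] == "fileB"  # ...entirely different object behind it
```

Copy the integer to another task and it names nothing -- the binding
lives in the table, not in the number.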

Make sense?  It's not a "lean" way of doing things but I think
it's the only way I can get all the isolation I want between
(possibly hostile and/or rogue) actors.

Gotta go finish building a machine to deliver tomorrow.  Still
have a few apps to install and snapshots to take before I will
feel "confident" letting others screw with it!  :>


Re: Managing "capabilities" for security
Hi Don,

Quoted text here. Click to load it

You don't track them down, you just invalidate them at the source.

Some refresher background:

In Amoeba, tickets are public objects, but capabilities are server
held *private* objects.  Tickets are cryptographically signed to
prevent forging.  The signing function is kernel based (for
uniformity).  There is a public server API (not discussed here) to ask
for tickets you don't have.

For brevity here I am focusing only how tickets are created, validated
and revoked.  Tickets may carry additional information beyond object
access rights which I will not discuss here. [but see further below]


- "capability" is a tuple of { capID, rights, check#, ... } which is
associated by a server/service with a managed object.

- "ticket" is a tuple of { svrID, capID, rights, signature, ... }. The
svrID identifies the server/service. The capID references a particular
capability offered by the service.

- "rights" are N-bit wide fields. The meanings of the bits are defined
by the issuing server.  

The actual sizes of these data are version dependent on the capability
subsystem.  Both capabilities and tickets adopted new functionality
over time.

It's important to understand that Amoeba capabilities are, in fact,
"versioned", though versioning is neither sequential nor readily
apparent.

When an object manager [server] creates a new capability, it generates
two large random numbers to be used as the capability ID and as a
"check" number associated with it.  The ID will be made public, the
check number is private, kept secret by the manager.

The rights specified in the manager's capability tuple reflect the
full set of privileges *this* capability can offer - which is not
necessarily the complete set of privileges offered by the object.

The capability ID, rights, and check number all are passed into the
signing function to generate a signature.  An "owner" ticket then is
constructed from the ID, the rights, and the signature (the check
number remains private to the manager).  

A "non-owner" ticket having reduced privileges is constructed by first
determining a value for the ticket's rights field.  "Owner" and
"non-owner" tickets are distinguished by whether the rights field in
the ticket _exactly_ matches the rights field in the manager's

The reduced rights value then is "combined" with the capability check
number to create a derived check number. [Amoeba XOR'd them but any
deterministic method will work]  A derived signature is generated (as
above) using the ID, reduced rights and the derived check number, and
the new "non-owner" ticket is created from the ID, the reduced rights
and the derived signature.

[Signatures (and rights for issued non-owner tickets) can be stored to
optimize server side ticket validation, but all the signatures could
be recomputed if necessary using data from capabilities and tickets.]

To validate a ticket, the object manager finds the specified
capability using the ID field of the ticket.  If the ticket's rights
exactly match those of the capability (i.e. an "owner" ticket), the
manager uses the check number to compute the expected signature and
compares the value to the signature field of the ticket.

If the ticket's rights don't exactly match the capability (i.e. a
"non-owner" ticket), as above a derived check number and derived
signature are computed, and the ticket is checked against the derived
signature.

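A sketch of the create/derive/validate cycle (SHA-256 standing in for
Amoeba's signing function; field widths and encodings are simplified):

```python
import hashlib, secrets

# SHA-256 stands in for Amoeba's crypto signing function.
def sign(cap_id, rights, check):
    return hashlib.sha256(f"{cap_id}:{rights}:{check}".encode()).hexdigest()

# Manager creates a capability: public ID, *secret* check number.
cap_id = secrets.randbits(64)
check  = secrets.randbits(64)
FULL   = 0b1111                      # rights bits; meanings server-defined

owner_ticket = (cap_id, FULL, sign(cap_id, FULL, check))

# Non-owner ticket: reduced rights combined (XOR) with the check number.
reduced = 0b0001                     # read-only, say
nonowner_ticket = (cap_id, reduced, sign(cap_id, reduced, reduced ^ check))

def validate(ticket):
    tid, rights, sig = ticket
    if tid != cap_id:
        return False                 # no such capability (e.g., versioned away)
    if rights == FULL:               # "owner": rights match exactly
        return sig == sign(tid, rights, check)
    return sig == sign(tid, rights, rights ^ check)   # derived path

assert validate(owner_ticket)
assert validate(nonowner_ticket)
assert not validate((cap_id, 0b0011, nonowner_ticket[2]))  # inflated rights fail
```

Changing `check` (versioning) or deleting the capability record makes
every outstanding ticket fail validation at once -- no tracking needed.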

At this point, it should be clear that every issued ticket is tied to
a specific "version" of a capability by the capability's secret check
number.  If the capability is versioned - i.e. the check number
modified- or if the capability record is deleted, then every ticket
issued referencing that (no longer existing) capability is immediately
rendered invalid.

Of course, there is the possibility that the same pairing of ID and
check# for an existing or past (deleted) capability could recur for an
unrelated object.  Amoeba used per-server [not global] capabilities
and *large* randomly generated ID and check values to minimize the
chances of that occurring.


So how to handle surrogates?

The meanings of bits in the rights field of the ticket are completely
defined by the issuing server: the value may be an enum or a bitmap,
there may be subfields ... whatever the implementer chooses.

One bit can be defined as meaning "this is a surrogate ticket".  A
surrogate ticket holder would be permitted to ask the server to create
a new reduced capability for the managed object.  

The new capability maximally would allow only those privileges that
were granted to the surrogate, allowing the surrogate independently to
delegate by issuing "non-owner" tickets based on its own capability.

The surrogate capability might also permit the surrogate to name group
peers, fail-over alternates, etc. by transferring its "owner" ticket
if other factors allow this (see following).

Because capabilities are kept private by the issuing server,
surrogate capabilities can be linked to the owner's capability,
allowing the owner to void delegate and/or surrogate tickets by
versioning/deleting the appropriate capability.  The surrogate, of
course, can void delegate tickets by versioning/deleting its own
capability.

Further, having tickets encode who is authorized to use them permits
more restrictions, e.g., preventing delegates from enabling peers by
copying the ticket.
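Sketching the surrogate idea (the SURROGATE bit and sequential IDs are
illustrative simplifications -- Amoeba used large random IDs and check
numbers; the parent link here realizes the capability linking just
described):

```python
SURROGATE = 0b1000                   # one rights bit reserved by the server

capabilities = {}   # cap_id -> (rights, parent); private to the server

def new_capability(rights, parent=None):
    cap_id = len(capabilities) + 1   # stand-in for a large random ID
    capabilities[cap_id] = (rights, parent)
    return cap_id

def delegate(holder_cap, wanted):
    rights, _ = capabilities[holder_cap]
    if not rights & SURROGATE:
        return None                          # not permitted to mint capabilities
    granted = wanted & rights & ~SURROGATE   # at most what the surrogate holds
    return new_capability(granted, parent=holder_cap)

def revoke_tree(cap_id):
    # Versioning/deleting a capability voids everything derived from it.
    dead = {cap_id}
    for cid, (_, parent) in list(capabilities.items()):
        if parent in dead:
            dead.add(cid)
    for cid in dead:
        capabilities.pop(cid, None)

owner = new_capability(0b0111 | SURROGATE)
child = delegate(owner, 0b0011)
assert capabilities[child][0] == 0b0011   # reduced, SURROGATE bit stripped
revoke_tree(owner)
assert child not in capabilities          # owner revocation voids the chain
```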

All versions of Amoeba's tickets specified a server (or service) ID -
the field wasn't sign protected because the ID might be a task
instance, but it allowed servers to immediately reject tickets they
couldn't possibly have issued.

Later versions of the capability system widened tickets to include an
authorized user/group ID field protected by a 2nd crypt signature.
[And also enlarged the rights field.]

Quoted text here. Click to load it

Understood. IDL based RPC mechanism.

Quoted text here. Click to load it

Well, servers managing the objects anyway.

Quoted text here. Click to load it

Does the kernel recognize DOS attacks on itself?

Quoted text here. Click to load it

Amoeba v2 effectively could do the same.

Quoted text here. Click to load it

Anyone could persist a *ticket* - but the referenced capability might
no longer exist when the ticket is presented for use:  e.g., following
a restart or after version management performed by the capability
manager.

Quoted text here. Click to load it

Ever see an image based OS?  No files (or, at least, none the user can
perceive): just a virtual space containing "program" functions and
"document" data structures with a directory for finding things.  All
"programs" and "documents" available at all times.

Like working in a Lisp or Smalltalk system but extended to encompass
all activity.

Current NV memory based systems, e.g., for tablets, appear to work
similarly, but they still are perceptually "file" oriented.

Quoted text here. Click to load it


Re: Managing "capabilities" for security
Hi George,

On 11/14/2013 2:55 PM, George Neuner wrote:
Quoted text here. Click to load it

(by source, we agree to mean the server "handling" the object)

Quoted text here. Click to load it

Yes.  Tickets can be freely copied and passed around.  Nothing
*prevents* that.  The capabilities (object, authorizations)
behind them are "protected".

~dgy/.profile can be *known* to many, yet inaccessible to damn near
all of them!

The signing function could similarly be implemented within the
"service" for a particular ticket -- or, in addition to.  I.e.,
anything that needs to know that "secret" can perform that duty.

By contrast, "Handles" (ports) in my scheme are just "small integers"
in much the same way that file descriptors are "small integers".
And, while nothing prevents you from *copying* a particular "small
integer", the integer itself is neither the ticket nor the capability.

Rather, *like* a file descriptor, it acts as a "name" for a particular
Handle IN A PARTICULAR CONTEXT!  (that of the task holding that handle!)

E.g., "23" interpreted as a file descriptor in task (process) A can
refer to a particular pty.  Passing "23" to some other task breaks the
association with that particular pty.  "23" is *just* "23" -- nothing
more.

OTOH, 0xDEADBEEF010204302893740 passed from Amoeba task A to Amoeba
task B carries "rights" with it.  Encoded within the cryptographic
signature.

Quoted text here. Click to load it

In my case, the capability is embedded in the Handle and implemented by
the Handler.  The Handler could conceivably *change* how it interprets
a set of "capabilities" (terms are getting WAY overloaded, here!) on
the fly.  Doing so without the actor's awareness could be challenging :>

Quoted text here. Click to load it

There is no concept of a "ticket" in my scheme.  A "Handle" only exists
in a specific context.  Remove it from that context and it loses all
meaning -- it's just a bunch of bits.

Quoted text here. Click to load it

What I've called "authorizations".  Except there is no visible
"bit field" in my implementation.  Each Handler decides how it wants
to implement a set of "authorizations".

E.g., a file server could have two threads (groups of threads) that
are responsible for read or write access to a file.  Files opened
for read access are serviced (Handled) by thread R while those that
are opened for write access are handled by thread W.  Read requests
are never *seen* by thread W and vice versa!  (because the endpoint of
the eventual read/write RPC differs -- wired differently when the
open() is granted!)
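Sketched with queues standing in for the per-thread endpoints (purely
illustrative -- the point is that the "authorization" is baked into
the wiring done at open() time):

```python
import queue

# Hypothetical routing: a request lands on whichever queue the Handle
# was wired to at open() time, so thread W never even *sees* a read.
read_q, write_q = queue.Queue(), queue.Queue()

def open_file(mode):
    # Granting the open() wires the Handle to one endpoint or the other.
    return read_q if mode == "r" else write_q

h = open_file("r")
h.put(("read", "somefile", 0, 512))   # read request via the read Handle
assert write_q.empty()                # write thread never saw the request
assert read_q.get()[0] == "read"
```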

(I also have "communication rights" beneath the "rights" that are
associated with the object being managed)

Quoted text here. Click to load it

This can just be seen as extending the namespace of the capability
in a manner that makes for easier management.  I.e., so a Handler
(server) can opt to ignore "older" rights (because it can't vacuum
memory to find and remove all instances of a particular "ticket")

Quoted text here. Click to load it

Correct.  As I don't have to give you read *and* write access to
a file.  Or, could opt to only grant APPEND access.  Or, any
other operator that I choose to implement (remove_duplicate_lines(),
compress_in_place(), etc.)

Quoted text here. Click to load it

But, there is no direct *tie* to the original ticket from which
this one (subset) was created!  (or the one before *that*; or the one
before *that*; etc.)

So, when a ticket is presented, you can't look at the ticket and
decide that the ticket from which it was created was revoked and,
therefore, so should this one!

(hence versioning.  But, now the handler/server needs to keep track
of which version is current for each outstanding ticket!)

Quoted text here. Click to load it

Yes.  So, the Handler/server needs to effectively treat the version
as the ID of the actor to which a ticket is granted IF IT WANTS TO

Quoted text here. Click to load it

Of course.

Of course.  This is my "do not duplicate" attribute.

Quoted text here. Click to load it

But that means the handler/server has to do all this work!  "Remembering".

Quoted text here. Click to load it

In my case, the copying has to be done *in* the kernel.  All
that's exposed is the "small integer" so an actor can't do squat with it.

Quoted text here. Click to load it

In my case, the Handler/server never is *presented* the communication
in the first place -- unless the path has been previously created by  
possession of the Handle.

Just like you can't write to a file without a file descriptor
having been created.  The file server never sees your actions;
they are blocked *in* the kernel.

Quoted text here. Click to load it

Yes.  And the IDL is just a convenience service provided to the
developer.  Sort of like the difference between using the native X API
and one of the widget sets.  (the latter just encapsulates the former)

Quoted text here. Click to load it

Yes.  In my approach, you are always "talking" to the kernel
as *it* is responsible for validating (the communication
portion) and implementing the actual RPC/IPC/kernel trap (*all* look
the same to the actor)

Quoted text here. Click to load it

It currently doesn't.  Nor do I see a need to do so in the future.

Any attacks on services (Handlers) *through* the kernel (as the
communication medium) just come out of the attacker's resource
share.  I.e., *your* timeslice is being consumed while the kernel
is trying to determine if you are entitled to this action.  If
that's how you want to spend your time... <shrug>  You could
just as ridiculously spend it spinning in a tight while(1) {}!

A "direct" attack (i.e., asking the kernel to perform an action
that is known to be *backed* by the kernel itself) has the same
net result.  It's your dime, if you think this a wise way to spend
it, then so be it!

Of course, the *system* ends up losing performance because it's
supporting a task that is "doing nothing productive".  But,
how does an independent agency make that distinction?

Do I kill a task because it has tried to do something it isn't
entitled to do?  What if that ability has been revoked?  Do I
penalize the task for this?  What if the only realistic recovery
mechanism for an "unavailable (at this time)" resource is to
"try, try again"?  When do I decide the actor is attacking vs. normal
operation?

Kernel tries REALLY HARD not to implement policy.  Let the services
and handlers make that definition AS BEFITTING THE APPLICATION (or
portion thereof)

Quoted text here. Click to load it

Yes.  In my case, I don't support persistence of "Handles".
I.e., they can't be created -- nor recreated -- from a store.
Instead, everything (with the exception of bootstrap) is
built dynamically and persists until explicitly killed/revoked
*or* the system shuts down.

Quoted text here. Click to load it

But the actors don't hold "Handles" to all of those objects!
E.g., I can have millions of files in the file store -- yet only need
*dozens* of Handles to interact with those dozen objects that are
"live" at the present time.

The Handler's role is to create "live" objects, "however".  If that
means mapping some blocks on a disk to a particular Handle, so be it.
If it means wrapping one of thousands of email addresses *in* an
email_addr_t, likewise.

The "problem" with my approach is that all of these things -- for
the complete set of tasks executing on a host -- are contained in the
kernel.  Amoeba (et al.) allows the references to be moved *out*
of the kernel into task-space (whether that's user-land or not).

One of the Mach problems, IMnsHO, was their desire/goal of trying to
reimplement UN*X.  So, any "impedance mismatches" between their
model and the one used by the UN*X implementors was a performance
or conceptualization "hit" (hence my deliberate choice of "impedance
mismatch").  None of these things were "deal breakers" but they
conspired to make it a bad fit, overall.

I'm looking at the mechanisms in a different light.  To address a
different class of problems FROM THE START instead of trying to
back-fill to an existing implementation.

E.g., the Standard Library wasn't reentrant.  Users had to take pains
to preserve "static" members (thereby exposing bits that should have
remained *hidden* within the library!).  Or, functions had to be
redefined to export these entities.

In my case, I can implement the libraries as a *service* that you
"connect to" ("load library").  That service can take it upon itself
to instantiate thread-specific copies of all these statics.  Without
exposing any of this to the application.

Of course, UNIX could do likewise!  But, now the library had to be
aware of the details of the process/thread model *in* UNIX.  In
my case, I just create a tuple binding the "connection" (handle)
to its specific "statics" WITHIN the "library SERVER"!
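A sketch of that tuple binding (strtok-style state chosen as the
classic non-reentrant offender; all names invented):

```python
# Sketch: a "library server" keeps a statics record per connection
# Handle, so strtok-style state never leaks into (or between) clients.
class LibraryServer:
    def __init__(self):
        self.statics = {}            # connection handle -> its private state
        self.next_handle = 0

    def connect(self):               # i.e., "load library"
        h = self.next_handle
        self.next_handle += 1
        self.statics[h] = {"strtok_rest": None}
        return h

    def strtok(self, h, s, sep):
        st = self.statics[h]         # per-connection, not a global static
        if s is not None:
            st["strtok_rest"] = s
        if not st["strtok_rest"]:
            return None
        tok, _, rest = st["strtok_rest"].partition(sep)
        st["strtok_rest"] = rest
        return tok

lib = LibraryServer()
a, b = lib.connect(), lib.connect()
assert lib.strtok(a, "x,y", ",") == "x"
assert lib.strtok(b, "p,q", ",") == "p"   # b's state unaffected by a's
assert lib.strtok(a, None, ",") == "y"    # a resumes where *it* left off
```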

Quoted text here. Click to load it

Re: Managing "capabilities" for security
An afterthought...

On 11/14/2013 3:57 PM, Don Y wrote:

Quoted text here. Click to load it

The flip side of this is also important:
  -- how do you know when all outstanding "capabilities"
  (object, permission tuples) FOR A PARTICULAR "object" are "gone"?
I.e. how does a Handler know that no one is interested in the "live"
object any longer (if "nothing" is tracking "outstanding"
Handles/tickets/capabilities?)  How do you know when to free the
resources set aside to implement/manage that object?

Keep "delegated" capabilities in mind, as well.

Ignoring these "niggly issues" is how smaller, simpler, faster kernels
get their "performance edge". :(

Re: Managing "capabilities" for security

Quoted text here. Click to load it

It doesn't matter - if the object is destroyed, all of the
capabilities associated with it are destroyed.

Capabilities only control access to an object, not its existence. They
can be created and destroyed independently of the object.

Quoted text here. Click to load it

When the owner destroys it.

The handler should not care whether or not someone will try to access
a nonexistent object in the future.

Quoted text here. Click to load it

Any signatory to a joint account can close it.  If an object can have
multiple owners, then any of them should be able to destroy the object
and its associated capabilities (which should be automatic).

What's important is to be able to distinguish owners from their agents
if/when necessary.

Too often I think you perceive complexity where none really exists.


Re: Managing "capabilities" for security
Hi George,

On 11/17/2013 5:22 AM, George Neuner wrote:
Agreed. But, you missed the point of my question.

Capabilities (Handles) are (object&, permission) tuples. Each capability
references an object.

What happens when every reference to an object disappears? How does the
Handler (server) BACKING the object know that all references to it
(along with their particular permissions) have "disappeared"? How does
it know that the resources that it has set aside to manage it can now
be released for other uses?

But the object hasn't been destroyed; just the last outstanding
REFERENCE to this LIVE INSTANCE of it!

Five actors hold Handles to a particular "file". Each has some set of
permissions enabled by *their* particular Handle (capability). The
Handler backing that file (i.e. File server) has set aside some
resources to implement that live instance of that object.

E.g., read and/or write buffers and/or a mmap()-ed view of the actual
file-on-disk.  Synchronization primitives within the server to ensure
actions by actors are serialized in some predictable manner.

How does the file server know when the last reference to this object
disappears?  I.e., the last actor holding a capability has terminated
or died (unceremoniously).  No one knows how many outstanding
capabilities may still exist for that (Amoeba) object so the file
handler never knows when it can forget about the object.

[Rather than digress into a discussion about the bullet server and its
oddities, replace "file" with "motor", above.  How does the Motor Server
know when it can afford to power down the translator/motor driver for
a particular motor because no one has any OUTSTANDING live references to
it?  The motor still exists.  A "non-live" reference to it still exists
from which *live* references could be created in the future (e.g.,
from the soap server).  It hasn't been "deleted" by its "owner".  Just
everyone has currently lost interest in it -- for the time being!]

The email Handler instantiates a particular email_addr_t.  It looks up
the "human" representation of an email address in the RDBMS (i.e.,
that's a privilege that only *it* has).  It copies this into memory
somewhere inside itself and returns a Handle to some actor that allows
that actor to do certain things to/with that email_addr_t.

Of course, what really happens when the actor wants to do something with
that email_addr_t is the email Handler is called upon to perform that
particular action under the authority granted by that particular
Handle (capability).

Actor gives some subset of his permissions to another actor.  One or
both of them invoke actions (methods) on the email_addr_t through their
respective Handles.  Eventually, both "lose interest" in that particular
email_addr_t ("close()" file).  When the last such "open Handle" into
a particular email_addr_t instance is released (closed), the Email
Handler can free its resources set aside for that email_addr_t.

To "close" an email_addr_t in my scheme, all you do is forfeit your
Handle.  Because the Handle is implemented in the kernel, it knows
every such reference to the object.  It can notify the Handler when/if
an actor holding a Handle *dies* -- without previously having explicitly
told the server that it was no longer interested in the object backed
by that particular Handle!
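
A toy sketch (Python; Kernel/EmailHandler and their methods are
illustrative, not my actual implementation) of that death-notification
scheme: the kernel records every Handle it grants, so an actor dying
unceremoniously still gets its Handles released at the backing Handler:

```python
class Kernel:
    """Tracks which actor holds which Handle, per backing Handler."""

    def __init__(self):
        self._held = {}  # actor -> [(handler, handle), ...]

    def grant(self, actor, handler, handle):
        self._held.setdefault(actor, []).append((handler, handle))

    def actor_died(self, actor):
        # The actor never said "close" -- the kernel does it for him.
        for handler, handle in self._held.pop(actor, []):
            handler.release(handle)


class EmailHandler:
    """Backs email_addr_t instances; drops state when Handles die."""

    def __init__(self):
        self.live = set()

    def issue(self, kernel, actor):
        handle = object()       # unforgeable token
        self.live.add(handle)
        kernel.grant(actor, self, handle)
        return handle

    def release(self, handle):
        self.live.discard(handle)
```

Because every Handle is kernel-mediated, N (the outstanding-reference
count) is always knowable -- the very thing Amoeba's tickets give up.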

With Amoeba's tickets, *anyone* could hold a valid ticket for an object.
AND BE DOING SO LEGITIMATELY!  How does the server backing a particular
object referenced by those N copies of that ticket know that it is NOW
safe to free the resources set aside to implement that object?  It has
no way of knowing what N is at any instant.  Or, when it goes to 0!

It has to rely on an actor explicitly saying, "close" the object
represented by this ticket -- and all other future references to
that object that may come along.  If the actor responsible for doing
this dies, there's no one to clean up the zombie objects!  You'd
have to implement a keep-alive policy so the server could automatically
"shut down" objects that haven't been referenced, recently.

And, actors would have to deliberately "tickle" every object for which
they hold tickets just to be sure those objects didn't get closed
due to inactivity!
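
The keep-alive policy might look like this toy sketch (Python, names
invented): the server reclaims any object whose ticket hasn't been
presented within the last N GC cycles:

```python
class TicketServer:
    """Server with no reference counts: survival requires 'tickling'."""

    def __init__(self, max_idle_cycles=3):
        self.max_idle = max_idle_cycles
        self._last_touch = {}   # object_id -> GC cycle when last presented
        self._cycle = 0

    def create(self, object_id):
        self._last_touch[object_id] = self._cycle

    def touch(self, object_id):
        """Presenting a ticket counts as interest in the object."""
        if object_id not in self._last_touch:
            raise KeyError("object was garbage-collected")
        self._last_touch[object_id] = self._cycle

    def gc(self):
        """One GC pass: reclaim objects idle for more than max_idle cycles."""
        self._cycle += 1
        for oid, last in list(self._last_touch.items()):
            if self._cycle - last > self.max_idle:
                del self._last_touch[oid]
```

Note the failure mode: a client that is merely *blocked* for too many
cycles loses its object through no fault of its own.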

[IIRC, this was how the bullet server dealt with the possibility.
And, that was a situation where it would be relatively *easy* for
a client (ticket holder) to be reasonably expected to tickle the
object regularly -- as the object (file) had to be created locally
before being sent to the bullet server for "commitment" to media.
The same sort of GC was required of all "live" objects -- under
policies defined by their services.  E.g., each time the GC was
invoked, any "untouched" objects (objects for which tickets had
not been presented in the previous N GC cycles) were deinstantiated.
If an actor happened to be too sluggish to use a ticket (or, was
perhaps BLOCKED from doing so), then the object could go away!  If
the ticket's capabilities didn't allow him to recreate the object...]

Again, you're missing the point.  How does the bank know when both
account holders have DIED??  (bank is a bad example because there
are undoubtedly laws governing this)  I.e., they can't just *spend*
the monies in that account -- cuz either account holder may show up
to claim them!  Do they put the monies in a box for all eternity?
JUST IN CASE someone shows up with a valid credential 50 years hence?

Instead, they garbage collect.  Accounts that haven't been referenced
in N years are automatically closed (and monies go ???).  Or, mailed
statements that are returned by USPS as "undeliverable" trigger similar
closures.
I.e., you have to have some periodic activity that FORCES TICKET HOLDERS
to show that their tickets (capabilities) are still "of interest".

<grin>  In this case, you appear to have overlooked some complexity!  :>

IIRC, the Hurd people went through a similar "challenge" when they
looked at moving to L4.  It, being one of those "smaller, simpler,
faster" kernels, didn't provide the same sorts of mechanisms that Mach
afforded so, trying to *emulate* those behaviors *on* L4 ended up
making the L4 implementation as sluggish as the Mach approach!

You always trade away something as you move down in complexity.  I can
do a context switch in near zero time -- *if* I don't have to preserve
any process state!!  :>  (while this *sounds* ridiculous, you can
actually be very effective in creating applications with this model!
But, you have to be very disciplined, as well -- cuz *it* doesn't
do much FOR you!)

Tea time...  Then, The Pork Dish!  (yummmm!)

Object life history (with a side of Capabilities)
Hi George,

Brief (ha!) description of how objects come into being and
how actors get access to them.  And how their interactions are
controlled and revoked.

[mixture of Mach-speak and each of the other lexicons we've
loosely employed]

Sketching as you read may help!

When a task is created (by <someone>/<something>), in addition
to the typical resources packed in that container (memory,
executable, etc) there is a *Handle* to a Namespace_t.  The Handle
is effectively a send right to a particular port_t.  That Handle
is *handled* by the Namespace Server (Handler).  I.e., the Namespace
Handler holds the receive right for that port_t.

(remember the task is an object and *also* is represented by a Handle!)

Any communications directed to that port_t (Handle) end up *at*
the Namespace Handler -- arriving on that specific port ("assigned"
to that particular task).  I.e., the Handle represents the task's

All Namespace operations (methods) expect a Handle BACKED BY SOMETHING
THAT UNDERSTANDS NAMESPACE IDL as an argument.  Other arguments vary
with the operation (method) being invoked.  For example, lookup(name),
create(name), destroy(name), etc.
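
A minimal sketch (Python; the names are mine, not Mach's IDL) of that
per-Handle gating: each Handle carries the set of methods enabled when
the binding was created, and anything else returns FAIL:

```python
class NamespaceHandler:
    """Dispatches Namespace methods, gated by per-Handle authorizations."""

    def __init__(self):
        self._perms = {}   # handle -> set of permitted method names
        self._names = {}   # handle -> {name: object} (that task's view)

    def new_handle(self, permitted):
        handle = object()              # unforgeable token (a "port")
        self._perms[handle] = set(permitted)
        self._names[handle] = {}
        return handle

    def invoke(self, handle, method, *args):
        # First question: is this *method* allowed on this *port*?
        if method not in self._perms[handle]:
            return "FAIL"
        return getattr(self, "_" + method)(handle, *args)

    def _create(self, handle, name, obj):
        self._names[handle][name] = obj
        return obj

    def _lookup(self, handle, name):
        return self._names[handle].get(name, "FAIL")
```

A task given only {"lookup"} can resolve names but never alter its
namespace -- the "locked" case described above.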

The <someone>/<something> that caused this Namespace object to be
created (evident below), may have prepopulated the namespace with
certain names that the task will recognize, by convention (can
vary with task; e.g., may also include a name like "[parent]").

The <someone> may elect to allow the task to modify this namespace
(add, delete, rename, etc).  Or, it may *lock* the namespace from
the task's perspective.

Once the task is executing, it can lookup() particular names in its
namespace that have been previously agreed upon, by convention.
[note that the conventions need only be consistent between a task and
the creator of its namespace!  one task can know an object by the
name "trunk release" and another could know an object by the name
"boot opener".  Names are just convenient mnemonics.]

Invoking the lookup() stub with the args required causes an IPC/RPC
message to be created, args marshalled, etc. and then delivered to the
port *in* the Namespace Handler that is associated with that task's
Handle (Namespace).

*Because* the task holds a valid send right for the port, the kernel
allows the message to be delivered.

The Namespace Handler receives the message, examines the method being
sought and decides if messages of this type (method) are permitted
ON THIS PORT!  I.e., if the Handle has the appropriate "authorizations"
defined when <someone> created this binding.  If not, it returns FAIL.

Assuming the Handle for the task's namespace ALLOWS the lookup() method
to be invoked by that task (WTF?), the args are unmarshalled and the
operation (method) attempted.  A result (if required) is then propagated
back to the calling task (using a *reply* port that the task has
included in the message -- i.e., you can support async methods if you
plan on watching that reply port *later* for the reply to this
invocation).

[the IDL allows various types of "methods" to be designed -- including
"routines" that return no value.]

In the event of a lookup(), the task is often trying to establish
initial contact with some other Handler -- to create an object of the
type Handled by the Handler.  Pretend we're interested in Files
(bad choice but easy to spell!  :> )

[Of course, objects of that type may already have been created by
<someone> and placed into task's Namespace before the task began
executing!  The Handles for any of these will already have been
bound to appropriate names in the task's Namespace.  E.g., "log".]

Now it gets clever...

For initial contact with a named Handler, you need a "public" access
method that is enabled "for all".  The Handles to this perhaps just
implement create() -- for the type of object managed by that
Handler (server).  So, a task can contact the File Server to create
a new file!

For untrusted tasks, when <someone> builds the initial Namespace for
the task, he doesn't bind the "trusted" public Handle into the
server.  But, instead, has the Handler create *another* such Handle.
Same permissions (typically).  Then, <someone> binds the name for
this Handler into the Namespace that it has been building for the
task.  So, "__File_Server__" is bound to a particular port *in*
the File Server.

And, this port may be different for each such task!

All such "public" Handles into the File Server are serviced by one or
more threads.  I can assign a thread per port (Handle), a thread
for the entire service INTERFACE (not counting threads that actually
BACK specific Files), or lump groups of ports into port sets so
a single thread can handle requests coming in on ANY of these ports.

The task creates a File, for example.  This causes the File Server to
create a new port for that File and pass a send right (Handle) to
that "object" back to the task in the reply to the create() method.

Note that this object now exists yet has no *name*.  No problem!
The task knows it by its handle.  It can read and write it and
even destroy it -- without ever giving it a formal *name*!
Furthermore, it can pass the Handle on to someone else and *they*
can read and write it as well (assuming the task doesn't tell the
File Server to reduce the authorizations on a *new* Handle passed
instead!).  As long as there are Handles referencing the File,
it exists.

If it wants to *name* the File, it can (assuming <someone> gave him
that authorization for HIS Namespace) bind a Name to the object.
Some other task holding a copy of that Handle can create a different
name for this same object (File) in *its* namespace!  Or, remove the
name, etc.

[Namespaces are just convenience objects.  BUT, have the effect of
limiting what you can talk to!!  If you can't reference it, you can't
talk to it!]

Now, assume someone holding a Handle to this File starts misbehaving.
Keeps trying to write to the file.  Doing this repeatedly when you
have already been told (error return) that you don't have permission
is abusive -- wastes the File Server's resources!

But, because that offender holds a valid send right to this object, the
kernel will keep passing along those RPC's!

The File Server can take matters into its own hands and destroy the
port on which the RPCs are arriving.  The kernel will then know not
to pass the RPC along -- the task has effectively lost the right
to send to that port!

Ah, but a determined adversary just reconnects to the File Server via
the public port and starts the process all over again!

*BUT*, the only way the adversary can talk to the File Server is via
the public port that was created JUST FOR HIM!  I.e., __File_Server__
maps to a specific port that, effectively, identifies *him* (his
"public" connection).

When the File Server gave permission to access that particular File
that was later abused, it knew which public port was associated with it!
So, the File Server can delete *that* port thereby severing the
task's only means of talking to the File Server publicly.
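
A toy model (Python; FileServer and its methods are hypothetical) of
that two-level revocation: the server remembers which public port each
object Handle was issued through, so revoking an abuser kills both the
object port *and* his only way back in:

```python
class FileServer:
    """Issues per-client public ports; object Handles remember theirs."""

    def __init__(self):
        self._public = set()    # live "public" connection ports
        self._objects = {}      # object handle -> public port it came from

    def connect(self):
        port = object()         # a public port created JUST for this client
        self._public.add(port)
        return port

    def create(self, public_port):
        if public_port not in self._public:
            raise PermissionError("public port revoked")
        handle = object()
        self._objects[handle] = public_port
        return handle

    def revoke_abuser(self, handle):
        """Destroy the object port AND the public port behind it."""
        public_port = self._objects.pop(handle)
        self._public.discard(public_port)
```

Other clients, holding *different* public ports, are unaffected -- the
isolation is selective.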

[You can also create initial "credentialed" connections whereby each
actor identifies itself with a unique token -- a *port* that it creates
exclusively for that particular connection creation operation.  As
ports are unforgeable, only a task holding this port can act *as* this
identity.]

He can still exercise any Handles that he has outstanding for other
File objects -- until he gets abusive with those, etc.

So, a server can isolate itself from its clients, selectively.
(remember, everything is essentially a server -- for *something*!)

 From the other side, when everything drops (destroys, discards)
every Handle to an object, that object can no longer be accessed!
So, it can be deinstantiated and its resources freed.  (this of course
is done by the Handler for that type of object)

E.g., when the last Handle into a particular Namespace is dropped,
the Namespace itself can disappear (which, in turn, causes handles
to all the objects referenced in the namespace to go away... etc.)

Similarly, when the last Handle into the Namespace Handler (!!)
goes away, the Namespace Handler itself can go away!!

[an unreferenceable object is just a waste of resources!]
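
The cascade might be sketched like so (Python; RefCounted/Namespace are
illustrative stand-ins): releasing the last Handle to a Namespace
releases the namespace's *own* references, which can in turn free the
objects it named:

```python
class RefCounted:
    """An object freed (via callback) when its last Handle is released."""

    def __init__(self, name, on_free):
        self.name, self.refs, self._on_free = name, 0, on_free

    def acquire(self):
        self.refs += 1
        return self

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self._on_free(self)


class Namespace(RefCounted):
    """Holds its own references to everything bound within it."""

    def __init__(self, on_free):
        super().__init__("namespace", on_free)
        self.bindings = {}

    def bind(self, name, obj):
        self.bindings[name] = obj.acquire()

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            # Last Handle gone: drop our references too (the cascade).
            for obj in self.bindings.values():
                obj.release()
            self._on_free(self)
```

Automatic self-GC, no sweeper: each Handler only ever watches its *own*
reference counts.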

This is where the notification of Handler death comes into play.
Automatic self-GC without a sweeper!

So, "init()" can do its job and then release the Handles that it held
to all these "top level servers" *knowing* that the last task to
reference any of them will effectively determine their fate -- without
init() acting like a zombie clinging to them.

<frown>  As I said, a picture makes things more intuitive.
