David Wagner wrote:
>Constantine Plotnikov writes:
>>"... capabilities are useless for giving meaningful guarantees
>>about authority graph in the system in presence of covert channels".
>>Note that I do not believe that capabilities are useless for this task,
>>they give some guarantees, but these guarantees just are not absolute.
>
>I don't follow your reasoning. I don't think this is accurate.
>One general methodology is to (i) assume that all secrets are actually
>known to the adversary, and (ii) trace all possible configurations of
>the pointer graph.
>
>One consequence of (i) is that all crypto is insecure. To give another
>example, in the following code:
> Object m(int x) {
> if (x == secret)
> return new SuperPowerfulCapability();
> return null;
> }
>one assumes that m() might return SuperPowerfulCapability to its caller.
>
>Note that in a type-safe programming language, even knowing all the secrets
>in the universe doesn't help you to forge references. Consequently, one can
>gain an upper bound on the set of references any piece of code might obtain,
>and this gives an upper bound on the propagation of capabilities. Obviously
>the upper bound might be too loose in some cases, but I think it can still
>be useful in many cases.
>
>I'm not sure what you mean by "useless" or "not useless but guarantees just
>are not absolute". Guarantees are never absolute: they are always predicated
>on assumptions (e.g., that the underlying hardware works as expected). What
>pitfall in particular were you worried about with the above methodology?
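To make assumption (i) concrete, here is a hypothetical runnable completion of the quoted m() example (the class names are invented for this sketch): because the analysis must assume the adversary knows `secret`, the true branch is treated as reachable, so the capability is included in the upper bound of references the caller may obtain.

```java
// Sketch only: under assumption (i) ("all secrets are known to the
// adversary"), a sound analysis must treat the true branch as reachable,
// so SuperPowerfulCapability is in the caller's reference upper bound.
class SuperPowerfulCapability {}

class Guard {
    private final int secret;

    Guard(int secret) { this.secret = secret; }

    Object m(int x) {
        if (x == secret)
            return new SuperPowerfulCapability();
        return null;
    }
}

public class KnownSecretDemo {
    public static void main(String[] args) {
        Guard g = new Guard(42);
        // An adversary who knows the secret obtains the capability...
        System.out.println(g.m(42) != null);  // prints "true"
        // ...and one who does not, does not. A sound static analysis
        // cannot rely on the secret staying unknown, so it reports both.
        System.out.println(g.m(7) == null);   // prints "true"
    }
}
```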

Let's make a contrived example:
1. There is a component A that listens to network commands.
2. There is a component B that can launch ballistic missiles.
3. Component B is not reachable from component A via pointers.
By analysis of the capability graph we might conclude that component A
cannot invoke any function of component B through capability invocations.
However, if the authors of components A and B are in conspiracy, it
might still be possible to give a command over the network to launch a
ballistic missile using covert channels. Analysis of the capability
graph by itself cannot give guarantees about the de-facto authority
graph. However, de-jure authority graphs (capabilities) can be used to
limit covert channels in particular situations if we bother to do it.
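The de-jure claim in this example is plain graph reachability. A toy sketch (component and edge names are invented for illustration):

```java
import java.util.*;

// Toy capability-graph reachability check for the contrived example
// above. Nodes are components; edges are held references (capabilities).
public class CapGraph {
    static boolean reachable(Map<String, List<String>> edges,
                             String from, String to) {
        Deque<String> work = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        work.push(from);
        while (!work.isEmpty()) {
            String n = work.pop();
            if (!seen.add(n)) continue;       // already visited
            if (n.equals(to)) return true;
            for (String m : edges.getOrDefault(n, List.of()))
                work.push(m);
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> edges = new HashMap<>();
        edges.put("A", List.of("net", "log"));  // A listens to the network
        edges.put("B", List.of("launcher"));    // B can launch missiles
        // No path A -> B: the de-jure analysis concludes A cannot invoke B.
        System.out.println(reachable(edges, "A", "B")); // prints "false"
        // Note this says nothing about covert channels (timing, CPU load,
        // shared clocks) that colluding authors of A and B might exploit.
    }
}
```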
Let's now return to the original text about exceptions:
>I agree that there is value in looking for programming language mechanisms
>that reduce the likelihood of unintentional leakage of secrets. But what
>I'm arguing is that there's no point trying to forbid malicious code from
>deliberately leaking secrets. Given the existence of covert channels, you
>probably can't prevent it anyway. "Don't forbid what you can't prevent."
and
> I'd prefer a system where programmers can reason about their code and we
> can make some meaningful guarantees. If we can't make any guarantees,
> or if we're just going to kludge up something that will only slow an
> attacker down for a few minutes, we can do that without inventing a new
> programming language.
I interpret the text as the following:
/We should not bother plugging the exception data-leak hole because
plugging it does not give meaningful guarantees about data leaks; we can
leak data anyway using covert channels./
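For readers who have not followed the earlier thread, the "exception data leak hole" refers to data escaping confined code through a thrown exception. A minimal sketch (all names invented; this is an illustration of the channel, not anyone's actual system):

```java
// Sketch of the exception data-leak channel: the confined code holds no
// capability for output, yet the secret escapes upward inside the
// exception it throws.
public class ExceptionLeak {
    static void confined(String secret) {
        // No references to the outside world are held here, but...
        throw new RuntimeException(secret); // ...the exception carries data out.
    }

    public static void main(String[] args) {
        String leaked = null;
        try {
            confined("launch-code");
        } catch (RuntimeException e) {
            leaked = e.getMessage(); // the catcher recovers the secret
        }
        System.out.println(leaked); // prints "launch-code"
    }
}
```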
If we apply the same logic to capabilities, we should not bother with
them either, because using them alone we cannot make guarantees about
the de-facto authority graph. So my opinion is that the maxim "Don't
forbid what you can't prevent." does not hold water if we take it too
literally. It should be replaced with something weaker, like "a security
policy should give only those promises that can be proved".
In the case of ballistic missiles, the policy established by the
capability graph only guarantees that a ballistic missile cannot be
launched from the network-aware component by a chain of capability
invocations. To make a stronger guarantee like "there is no way to
launch a missile by network command", we need additional activities
like code audit, resource partitioning, etc.
Constantine