On Sat, Mar 29, 2014 at 6:06 AM, Moxie Marlinspike
<moxie at thoughtcrime.org> wrote:
> On 03/27/2014 02:49 AM, Ben Laurie wrote:
>> On 27 March 2014 05:57, Joseph Bonneau <jbonneau at gmail.com> wrote:
>>> If the community doesn't think building such an auditing system buys much
>>> perhaps it's a waste of engineering effort better spent elsewhere, because
>>> the cost of running this would not be zero.
>> I think that raising the visibility of a targeted attack by the
>> otherwise-trusted authority is vital to any kind of discovery system
>> for identifier-to-key mappings. How else do you avoid the CA scenario
>> all over again?
> I just don't see how this raises the visibility. 3rd parties can't
> audit the log to determine a MITM attack is happening, so people can
> only audit the log for their own communication. Those are the same
> people who would opt into in-band verification.
I think we should look at the user effort to participate in this more closely:
Imagine Alice authenticating Bob's public key:
- Bob has to register with a trusted 3rd-party monitor to receive
notifications (over email/text) when the public-key(s) for his account
change. Once he does this, he doesn't have to do extra work for every
"Alice".
- Alice has to register with a trusted 3rd-party monitor to receive
the latest "root hashes". Once she does this, she doesn't have to do
extra work for every "Bob".
These registrations could be assisted by the service's default app.
Provided it's a local app and not a web app, it could be audited, so
it would be risky for the service to backdoor it.
So the usability cost here would largely be on registration with
trusted 3rd-party monitors, and receiving key-change notices. Seems
like that would be lower than having each pair of users perform a
fingerprint validation.
Of course, the infrastructure costs and system complexity are much higher.
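To make the division of labor concrete, here's a rough sketch of the
two checks above, assuming (my assumption, not anything specified in
this thread) the service publishes its directory as a Merkle tree and
monitors hand out the signed root hashes. Bob's monitor just watches
his entry for changes; Alice verifies an inclusion proof for Bob's key
against the root hash her monitor gave her. All names and structures
here are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(user: str, key: bytes) -> bytes:
    # Domain-separated hash over the (user, key) binding.
    return h(b"leaf:" + user.encode() + b":" + key)

def build_tree(leaves):
    """Return list of levels: level[0] = leaf hashes, last = [root]."""
    level = leaves[:]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # pad odd levels by duplicating
        level = [h(b"node:" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    # Collect the sibling hash at each level, noting which side it's on.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    # Alice's check: recompute the path from Bob's leaf to the root
    # hash she got from her monitor.
    node = leaf
    for sibling, sibling_is_left in proof:
        node = (h(b"node:" + sibling + node) if sibling_is_left
                else h(b"node:" + node + sibling))
    return node == root

def monitor_check(last_seen_key, directory, user):
    # Bob's monitor: flag if the published key for `user` changed
    # since the last epoch (the trigger for an email/text notice).
    return directory[user] != last_seen_key
```

The point of the sketch is just that both registrations are one-time:
after them, Alice's per-contact work is a proof verification and Bob's
is nothing at all unless his monitor fires.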
> There is maybe some sense that the log provides "proof" that the people
> verifying their own communication can use to publish their findings of a
> MITM, but since the log itself is controllable by those parties (they
> are capable of changing their own keys to whatever they would like in
> the log), everyone still just has to take their word for it.
For this reason, while I agree there's some "deterrence value" in
threatening to expose the service if it launches a MITM, I think this
value is limited (the service has a good chance of trying out a MITM
and getting away with it, while the user either ignores the notice or
freaks out to a world that doesn't believe them).
I also think services would be reluctant to advertise this as a
feature ("If you ever get a key-change notification you didn't expect,
just freak out and tell everyone we're discredited!"), and might shy
away from adopting it because of that reputation risk.
So how to explain and market this to services, as well as their users,
seems like an open question.
Trevor