This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below. I'd be happy if anyone wants to offer corrections.

Background

The two most common characteristics people want from a secure instant messaging program are:

Authentication

Each participant should be able to know specifically who the other parties are on the chat.

Confidentiality

The content of the messages should only be intelligible to the parties involved with the chat; it should appear opaque or encrypted to anyone else listening in. Note that confidentiality effectively depends on authentication -- if you don't know who you're talking to, you can't make sensible assertions about confidentiality.

As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality).

But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".

Forward secrecy

Assuming the parties communicating are operating in good faith, forward secrecy offers protection against a special kind of adversary: one who logs the encrypted chat, and subsequently steals either party's long-term secret key. Without forward secrecy, such an adversary would be able to discover the content of the messages, violating the confidentiality characteristic. With forward secrecy, this adversary is stymied and the messages remain confidential.
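The usual mechanism behind forward secrecy is an ephemeral key exchange: each session derives its key from throwaway secrets that are erased once the key is established. Here is a toy Diffie-Hellman sketch in Python (the prime and generator are deliberately undersized and illustrative only; real protocols use standardized groups and also authenticate the exchange):

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters -- illustrative only, NOT secure choices.
p = 2**127 - 1   # a small Mersenne prime, far too small for real use
g = 3

# Each side picks a fresh random exponent for this session only:
a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 2   # Bob's ephemeral secret

A = pow(g, a, p)                   # Alice sends A to Bob
B = pow(g, b, p)                   # Bob replies with B (one round trip)

# Both sides compute the same shared value and hash it into a session key:
k_alice = hashlib.sha256(pow(B, a, p).to_bytes(16, "big")).digest()
k_bob   = hashlib.sha256(pow(A, b, p).to_bytes(16, "big")).digest()
assert k_alice == k_bob

del a, b  # once the ephemeral exponents are erased, even a stolen
          # long-term secret key cannot recover this session's key
          # from a logged transcript
```

The long-term keys never enter the session-key derivation; they are only used to authenticate the exchange, which is why stealing them later reveals nothing about logged ciphertext.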

Deniability

Deniability only comes into play when one of the parties is no longer operating in good faith (e.g. their computer is compromised, or they are collaborating with an adversary). In this context, if Alice is chatting with Bob, she does not want Bob to be able to cryptographically prove to anyone else that she made any of the specific statements in the conversation. This is the focus of Monday's discussion.

To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."
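OTR gets its deniability by authenticating messages with a MAC under a key both parties share, rather than with a digital signature that only the sender could produce. A minimal Python sketch (toy key and message, not the real OTR wire format) shows why a shared-key MAC convinces the other party in real time but proves nothing to a third party:

```python
import hashlib
import hmac

# In OTR, each message carries a MAC computed under a key that
# *both* chat parties hold. Toy values for illustration:
shared_mac_key = b"session-mac-key-known-to-alice-AND-bob"
message = b"Alice: meet me on Thursday"

# Alice attaches this tag so Bob can check the message is authentic:
alice_tag = hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# But Bob holds the same key, so he can produce an identical tag for
# any message he likes. The tag authenticates the message to Bob
# during the chat, yet proves nothing about authorship to anyone else.
bob_forged_tag = hmac.new(shared_mac_key, message, hashlib.sha256).digest()
assert alice_tag == bob_forged_tag
```

A digital signature, by contrast, is created with a key only Alice holds, which is exactly the "proof of origin" described above: anyone who can verify the signature knows only Alice's secret key could have produced it.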

If you're not doing anything wrong...

The discussion was well-anchored by a comment from another participant who cheekily asked "If you're not doing anything wrong, why do you need to hide your chat at all, let alone be able to deny it?"

The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed:

laws themselves are not always just (e.g. consider chat communications between an interracial couple in the USA before 1967, if instant messaging had existed at the time), and

law enforcement (or a legal adversary in civil litigation) may have a different understanding or interpretation of the law than you do (e.g. consider chat communications between a corporate or government whistleblower and a journalist).

In these situations, people confront real risk from the law. If we care about these people, we need to figure out if we can build systems to help them reduce that legal risk (of course we also need to fix broken laws, and the legal environment in general, but those approaches were out of scope for this discussion).

The Legal Utility of Deniability

Monday's meeting was called specifically because it wasn't clear how much real-world usefulness there is in the "deniability" characteristic, and whether this feature is worth the development effort and implementation tradeoffs required. In particular, the group was interested in deniability's utility in legal contexts; many (most?) people in the room were lawyers, and it's also not clear that deniability has much utility outside of a formal legal setting. If your adversary isn't constrained by some rule of law, they probably won't care at all whether or not there is cryptographic proof that you wrote a particular message (in retrospect, one possible exception is exposure in the media, but we did not discuss that scenario).

Places of possible usefulness

So where might deniability come in handy during civil litigation or a criminal trial? Presumably the circumstance is that a piece of a chat log is offered as incriminating evidence, and the defendant is trying to deny something that they appear to have said in the log.

This denial could take place in two rather different contexts: during arguments over the admissibility of evidence, or (once admitted) in front of a jury.

In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.

In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data. There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).

The complexity of the tech systems involved in a data-heavy prosecution or civil litigation is itself an opportunity for lawyers to argue (and experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.

These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me.

What about the arguments against its utility?

Limitations

The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, I find this overall situation troubling, but it seems to be where we are.

Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.

Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice).

Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a pause in the chat of 15 seconds when a new member joins) or extra computational power (meaning that they might not work well on slower/older devices) or an order of magnitude more bandwidth (meaning that chat might not work at all on a weak connection). There could also simply be complexity that makes it harder to correctly implement a protocol with deniability than an alternate protocol without deniability. Incorrectly-implemented software can put its users at risk.

I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?

Other kinds of deniability

Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability in a legal context over a chat transcript.

There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else.

If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.

Conclusion

My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it.

Several folks pointed out that most communications-security tools are too complicated or inconvenient to use for normal people. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability.

Secure chat systems that take too long to make, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator or potentially by anyone else); with encryption that is reversible by the chatroom operator or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; or so on.

As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients, even though they know it's not a safe communications channel, because their clients' adversaries might have access to the Skype messaging system itself.

My conclusion from the meeting is that there are a few particular situations where deniability could be useful legally, but that overall, it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic, but for now, we really need to work on usability, popularization, and wide deployment.

Thanks

Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and legal experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.

Comments on this Entry

As a client developer, I think OTR should strive to have as few downsides as possible. I want to get to the point where many people use it opportunistically, without even knowing what it is or what the specific security characteristics are.

In that sense, deniability is important. It saves a step of asking the user "Do you want to turn on OTR? That will sign your messages. Cryptographic signing means ...", which will probably be long and boring and scare many users off. But with OTR, encrypted conversations are just as deniable as unencrypted chats so in that regard nothing changes for the user.

Yes, in practice deniably encrypted messages are as strong as unencrypted, unsigned messages. But I think that's fine. The problem isn't that deniability is useless, the problem is that messages or emails without any cryptographic proof are considered hard enough evidence in court today.

I completely agree with you that OTR should strive to have as few downsides as possible. If we can provide deniability at no extra operational, theoretical, or implementation cost, we should do it. But I also think it's important that multiparty OTR should actually exist, and be usable and interoperable; otherwise, people will just use Google chat or unencrypted IRC or Jabber or Skype, which have even worse security properties (especially when you consider the server operator as a potential adversary, which is unavoidable these days).

At the moment, I think the only attempt at an active implementation of mpOTR is CryptoCat, which itself has abandoned deniability in group chat. While I'd love to argue with Nadim for having sacrificed that feature (who doesn't love arguing with Nadim, really?), I think he made the right choice here in shipping a functional implementation that still tries to provide authenticity, confidentiality, and forward secrecy to the users. We need more implementations and wider adoption of these other three features sooner rather than later. If someone can propose deniability for mpOTR that doesn't impose implementation or operational costs, that's great; we should patch up the deployed base to include it. In the meantime we need to actually build a deployed base to get people off of cleartext group chat.

To clarify further: I'm not arguing that we should drop deniability from traditional (2-party) OTR, where it has been working for years. I'm very happy that it's there and it works, even if it seems unlikely to have much legal impact. But my point here (and Monday's discussion) was about group chat (multi-party OTR, or mpOTR), not 2-party chat.

If getting deniability done right is contributing to a delay in mpOTR implementation and deployment, I think we should be willing to sacrifice that characteristic for the sake of actually shipping code that provides end-to-end authenticity and confidentiality and forward secrecy for group chat.

Enrico Zini points out that a cryptographic proof of origin for any particular chat statement might also be deniable based on the context-sensitivity of any particular chat message or conversation. This is another "outside the protocol" form of deniability.

If the cryptographic proof of origin that results from a non-deniable secure chat mechanism only covers individual messages (and does not produce a cryptographically-verifiable transcript of the entire conversation), then the nature of most chats makes it seem likely that there could also be missing messages, or that the contextual meaning of each message could be quite different.

For example, if the prosecution attempts to frame Alice as a spy with a transcript from a chat with non-deniable per-message proofs of origin like this:

Carol: Can you get me the secret files?
Alice[verifiably-signed]: Sure, I'll give them to you when I see you on Thursday.

Then Alice can argue that this transcript is the result of a selective modification (evidence tampering) of this actual discussion:

Carol: Hey, do you still have my spare house keys?
Alice[verifiably-signed]: Sure, I'll give them to you when I see you on Thursday.

Or it could even have been extracted from this discussion:

Carol: Can you get me the secret files?
Alice[verifiably-signed]: No, even if I had access, I would never violate my NDA
Carol: Oh well.
Carol: Hey, do you still have my spare house keys?
Alice[verifiably-signed]: Sure, I'll give them to you when I see you on Thursday.
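The excerpting problem can be sketched in a few lines: when each message is proven individually, a verifier has no way to tell that surrounding messages were dropped. This sketch uses HMAC as a stand-in for a real per-message signature scheme, with hypothetical keys and messages:

```python
import hashlib
import hmac

# Stand-in for Alice's signing key (hypothetical; a real scheme would
# use a public-key signature, but the excerpting issue is the same).
alice_key = b"alice-signing-key"

def sign(msg: bytes) -> bytes:
    return hmac.new(alice_key, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(msg), tag)

# Alice's side of the full conversation, each message proven separately:
full_chat = [
    b"Alice: No, even if I had access, I would never violate my NDA",
    b"Alice: Sure, I'll give them to you when I see you on Thursday.",
]
signed = [(m, sign(m)) for m in full_chat]

# An adversary keeps only the second message. Its per-message proof
# still verifies perfectly, stripped of the context that changes its
# meaning entirely.
excerpt = signed[1:]
assert all(verify(m, t) for m, t in excerpt)
```

Nothing in the per-message proofs binds a message to its neighbors, which is why a transcript-level proof (covering ordering and completeness) would be needed to rule out this kind of selective editing.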

Even if the entire chat (both directions, timestamped, with no elisions or insertions possible) is covered by a non-deniable cryptographic signature, it could have simply been preceded by a phone call or an e-mail about how delicious Alice's father's vegan lasagna is, and how silly he is for trying to hide the recipe from his kids and their friends who want to make it themselves.

The point here is that depending on the structure of the cryptographic proof of origin (what data specifically is signed, what other contextual framing is necessary to understand the signed data), there may still be other ways for Alice to deny the imputed meaning, even if the protocol lacks the "deniability" property that OTR provides (and mpOTR aims for).