Large-scale storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the number of copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work, originating from Convergent Encryption [28] and evolving into Message-Locked Encryption [12], strives to solve this problem. The latest work, DupLESS [11], proposes a server-aided architecture that provides the strongest privacy. The DupLESS architecture relies on a key server to help clients generate encryption keys that result in convergent ciphertexts. In this paper, we first provide a rigorous proof of security, in the random oracle model, for the DupLESS architecture, which was lacking in the original paper. Our proof shows that using an additional secret, other than the data itself, to generate encryption keys achieves the best possible security under the current deduplication paradigm. We then introduce a distributed protocol that eliminates the need for a key server and allows less managed systems, such as P2P systems, to enjoy the same high security level. Implementation and evaluation show that the scheme is both robust and practical.
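The core idea of convergent encryption, which the line of work above builds on, can be sketched in a few lines: the key is derived deterministically from the message itself, so identical plaintexts produce identical ciphertexts and deduplicate. The sketch below is illustrative only (helper names are ours, and XOR with a hash-derived keystream stands in for a real deterministic cipher); DupLESS additionally blinds the key derivation through a key server, which is not modelled here.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key via SHA-256 in counter mode (toy)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(message).digest()  # K = H(M): key depends only on M
    ct = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    return key, ct

def convergent_decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, len(ct))))

# Two users encrypting the same file obtain the same ciphertext,
# so the storage server can deduplicate without seeing the plaintext.
k1, c1 = convergent_encrypt(b"shared document")
k2, c2 = convergent_encrypt(b"shared document")
assert k1 == k2 and c1 == c2
assert convergent_decrypt(k1, c1) == b"shared document"
```

The determinism that enables deduplication is also the scheme's weakness: anyone can confirm a guessed plaintext, which is precisely the leakage the server-aided and distributed protocols discussed above aim to limit.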

Indistinguishability-based definitions of cryptographic primitives such as encryption, commitments, and zero-knowledge proofs have been proven impossible to realize in scenarios where parties only have access to non-extractable sources of randomness (Dodis et al., FOCS 2004). In this work we demonstrate that it is nevertheless possible to quantify the secrecy loss for non-extractable sources such as the (well-studied) Santha-Vazirani (SV) sources. In particular, to establish meaningful security guarantees in scenarios where such imperfect randomness sources are used, we define and study differential indistinguishability, a generalization of indistinguishability inspired by the notion of differential privacy.

We analyze the strengths and weaknesses of differential indistinguishability, both individually and under composition, and we interpret the resulting differential security guarantees for encryption, commitments, and zero-knowledge proofs.

Surprisingly, indistinguishability with uniform randomness carries over to differential indistinguishability with SV randomness: we show that all primitives that are secure under a traditional indistinguishability-based definition are differentially secure when they use (a bounded amount of) SV randomness instead of uniform randomness.
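To make the analogy with differential privacy concrete, a plausible formalization of the two notions involved (our notation; the paper's exact definitions may differ in details) is:

```latex
% An SV source with bias $\gamma \in [0, \tfrac{1}{2})$ emits bits
% $b_1, b_2, \ldots$ such that, for every prefix,
\[
  \tfrac{1}{2} - \gamma \;\le\; \Pr[b_i = 1 \mid b_1, \ldots, b_{i-1}]
  \;\le\; \tfrac{1}{2} + \gamma .
\]
% By analogy with $(\varepsilon,\delta)$-differential privacy, two ensembles
% $X = \{X_n\}$ and $Y = \{Y_n\}$ are $(\varepsilon,\delta)$-differentially
% indistinguishable if for every efficient distinguisher $D$ and all
% sufficiently large $n$,
\[
  \Pr[D(X_n) = 1] \;\le\; e^{\varepsilon} \cdot \Pr[D(Y_n) = 1] + \delta ,
\]
% and symmetrically with $X_n$ and $Y_n$ exchanged. Setting
% $\varepsilon = 0$ recovers the standard indistinguishability notion.
```

The multiplicative slack $e^{\varepsilon}$ is what allows a bounded amount of SV bias to be absorbed into the security guarantee rather than breaking it outright.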

We study the asymptotic-capacity-achieving score function that was recently proposed by Oosterwijk et al. for bias-based traitor tracing codes. For the bias function we choose the Dirichlet distribution with a cutoff. Using Bernstein's inequality and Bennett's inequality, we upper bound the false positive and false negative error probabilities. From these bounds we derive sufficient conditions for the scheme parameters. We solve these conditions in the limit of large coalition size $c_0$ and obtain asymptotic solutions for the cutoff, the sufficient code length and the corresponding accusation threshold.

The code length converges to its asymptote approximately as $c_0^{-1/2}$, which is faster than the $c_0^{-1/3}$ of Tardos' score function.

In our paper at PROOFS 2013, we formally studied a few known countermeasures that protect CRT-RSA against the BellCoRe fault-injection attack. However, we left Vigilant's countermeasure and its alleged repaired version by Coron et al. as future work, because the arithmetical framework of our tool was not sufficiently powerful. In this paper we bridge this gap and then use the same methodology to formally study both versions of the countermeasure. We obtain surprising results, which we believe demonstrate the importance of formal analysis in the field of implementation security. Indeed, the original version of Vigilant's countermeasure is actually broken, but not as badly as Coron et al. thought it was. As a consequence, the repaired version they proposed can be simplified. It can actually be simplified even further, as two of the nine modular verifications happen to be unnecessary. Fortunately, we could formally prove the simplified repaired version to be resistant to the BellCoRe attack, which was considered a "challenging issue" by the authors of the countermeasure themselves.
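For readers unfamiliar with the attack being countered: the classic BellCoRe observation is that a single fault in one half of an unprotected CRT-RSA signature lets the attacker factor the modulus via a gcd. The toy sketch below (textbook parameters and helper names of our choosing, not the countermeasures studied in the paper) demonstrates that mechanism.

```python
import math

# Toy CRT-RSA parameters (real moduli are 2048+ bits).
p, q = 1000003, 1000033
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # Python 3.8+ modular inverse
m = 123456789

def crt_sign(msg: int, fault: bool = False) -> int:
    """Sign with the CRT speed-up; optionally inject a fault in the q-branch."""
    sp = pow(msg, d % (p - 1), p)
    sq = pow(msg, d % (q - 1), q)
    if fault:
        sq ^= 1  # single-bit fault, as in a BellCoRe-style attack
    # Garner recombination: S ≡ sp (mod p), S ≡ sq (mod q)
    h = (pow(q, -1, p) * (sp - sq)) % p
    return (sq + q * h) % N

s = crt_sign(m)
s_faulty = crt_sign(m, fault=True)
# S and S_faulty agree mod p but differ mod q, so the gcd exposes p.
recovered = math.gcd(s - s_faulty, N)
assert recovered == p
```

Because the correct and faulty signatures still agree modulo the unfaulted prime, their difference is a multiple of it, and `gcd` extracts the factor; this is why countermeasures such as Vigilant's insert modular verifications before releasing any signature.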

We present the first general MPC protocol that satisfies the following: (1) the construction is black-box, (2) the protocol is universally composable in the plain model, and (3) the number of rounds is constant. The security of our protocol is proven in the angel-based UC framework, assuming the existence of one-way functions that are secure against sub-exponential-time adversaries and constant-round semi-honest oblivious-transfer protocols that are secure against quasi-polynomial-time adversaries. We obtain the MPC protocol by constructing a constant-round CCA-secure commitment scheme in a black-box way from one-way functions that are secure against sub-exponential-time adversaries. To justify the use of such a sub-exponential hardness assumption in obtaining our constant-round CCA-secure commitment scheme, we show that, if black-box reductions are used, no constant-round CCA-secure commitment scheme exists under any falsifiable polynomial-time hardness assumption.

We remark that the structure of bilinear groups of large composite order (at least 1024 bits) can make group operations inefficient and forfeit the advantages of elliptic curve cryptography, which stem mainly from its smaller parameter sizes. As of 2013, the largest parameter recommended by NIST for elliptic curves is 571 bits.

From a practical point of view, such an algebraic structure is unlikely to be applicable to cryptographic schemes.

Real-world cryptographic protocols, such as the Transport Layer Security (TLS) and Secure Shell (SSH) protocols, support the negotiation of different combinations of cryptographic algorithms, often known as ciphersuites. An individual ciphersuite can be modelled as an authenticated and confidential channel establishment (ACCE) protocol, and recently all widely deployed TLS ciphersuites have been individually proven ACCE-secure. In practice, users often re-use long-term keys across ciphersuites, for example using the same digital signature key in two different signed Diffie--Hellman (DH) ciphersuites. Recently, a cross-ciphersuite attack on TLS was discovered in which a signed elliptic curve DH structure can be interpreted as a signed finite-field DH structure, breaking authentication. Thus, ACCE security of individual ciphersuites does not generically imply collective security when long-term keys are re-used across ciphersuites.

We investigate the security of multi-ciphersuite protocols with re-used long-term keys. We show how to "open" the ACCE definition slightly so that, after each ciphersuite has been proven secure individually, the ciphersuites can then be used together in a secure multi-ciphersuite protocol, even when long-term keys are re-used across ciphersuites, provided the ciphersuites' messages satisfy an independence property. We apply our definitions and composition theorem to the SSH protocol, showing that signed Diffie--Hellman SSH ciphersuites are individually ACCE-secure; they also satisfy the preconditions of our composition theorem, and thus SSH is multi-ciphersuite-secure even with re-use of long-term keys.

We show that if NC^1 \neq L, then for every element g of the alternating group A_t, circuits of depth O(log t) cannot distinguish between a uniform vector over (A_t)^t with product = g and one with product = identity. Combined with a recent construction by the author and Viola in the setting of leakage-resilient cryptography [STOC '13], this gives a compiler that produces circuits withstanding leakage from NC^1 (assuming NC^1 \neq L). For context, leakage from NC^1 breaks nearly all previous constructions, and security against leakage from P is impossible.

We build on work by Cook and McKenzie [J. Algorithms '87] establishing the relationship between L = logarithmic space and the symmetric group S_t. Our techniques include a novel algorithmic use of commutators to manipulate the cycle structure of permutations in A_t.
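The distributions compared above are easy to sample: a uniform vector over (A_t)^t with a prescribed product g can be obtained by drawing the first t-1 coordinates uniformly from A_t and solving for the last one. The sketch below (our own helper names; a standard sampling trick, not the paper's circuit-level argument) verifies that this sampler works; the paper's claim concerns the hardness of distinguishing the two cases with depth-O(log t) circuits, which the code does not attempt to model.

```python
import random

def compose(a, b):
    """Composition of permutations as tuples: (a ∘ b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def parity(a):
    """0 for an even permutation, 1 for an odd one (via cycle lengths)."""
    seen, par = [False] * len(a), 0
    for i in range(len(a)):
        j = i
        clen = 0
        while not seen[j]:
            seen[j] = True
            j = a[j]
            clen += 1
        par ^= (clen - 1) & 1 if clen else 0
    return par

def random_even_perm(t):
    p = list(range(t))
    random.shuffle(p)
    if parity(tuple(p)) == 1:
        p[0], p[1] = p[1], p[0]  # one extra swap fixes the parity
    return tuple(p)

def sample_with_product(t, g):
    """Uniform (x_1, ..., x_t) in (A_t)^t conditioned on x_1∘...∘x_t = g."""
    xs = [random_even_perm(t) for _ in range(t - 1)]
    prod = tuple(range(t))
    for x in xs:
        prod = compose(prod, x)
    xs.append(compose(inverse(prod), g))  # last coordinate forces product g
    return xs

t = 5
g = (1, 2, 0, 3, 4)  # a 3-cycle, hence an element of A_5
vec = sample_with_product(t, g)
check = tuple(range(t))
for x in vec:
    check = compose(check, x)
assert check == g and all(parity(x) == 0 for x in vec)
```

Since the last coordinate is a product of even permutations with g, it lies in A_t as well, so the sampler stays inside (A_t)^t for either choice of g.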

This paper studies software optimization of Elliptic Curve Cryptography with 256-bit prime fields. We propose a constant-time implementation of the NIST and SECG standardized curve P-256 that can be seamlessly integrated into OpenSSL. This accelerates Perfect Forward Secrecy TLS handshakes that use ECDSA and/or ECDHE, and can help improve the efficiency of TLS servers. We report significant performance improvements for ECDSA and ECDH on several architectures. For example, on the latest Intel Haswell microarchitecture, our ECDSA signing is 2.33x faster than OpenSSL's implementation.

Public-Key Encryption (PKE) and Public-Key Message Authentication (PKMA, also known as digital signatures) are fundamental cryptographic primitives. Traditionally, both notions are defined as non-interactive (i.e., single-message). In this work, we initiate a rigorous study of (possibly) {\em interactive} PKE and PKMA schemes. We obtain the following results, demonstrating the power of interaction to resolve questions that are either open or impossible in the non-interactive setting.

Efficiency/Assumptions.

One of the most well-known open questions in the area of PKE is to build, in a ``black-box way'', so-called chosen-ciphertext-attack (CCA-) secure PKE from chosen-plaintext-attack (CPA-) secure PKE. In contrast, we show a simple $2$-round CCA-secure PKE from any (non-interactive) CPA-secure PKE (in fact, these primitives turn out to be equivalent). Similarly, although non-interactive PKMA schemes can be inefficiently built from any one-way function, no efficient signature schemes are known from many popular number-theoretic assumptions, such as factoring, CDH, or DDH. In contrast, we show an efficient $2$-round PKMA scheme from most popular assumptions, including factoring, CDH, and DDH.

Advanced Properties.

It is well known that no non-interactive signature (resp. encryption) scheme can be {\em deniable} (resp. {\em forward-secure}), since the signature (resp. ciphertext) can later ``serve as evidence of the sender's consent'' (resp. ``be decrypted if the receiver's key is compromised''). We also formalize a related notion of {\em replay-secure} (necessarily) interactive PKMA (resp. PKE) schemes, where the verifier (resp. encryptor) is assured that the ``current'' message can only be authenticated (resp. decrypted) by the secret-key owner {\em now}, as opposed to some time in the past (resp. future). We observe that our 2-round PKMA scheme is both replay-secure and (passively) deniable, and our 2-round PKE scheme is both replay- and forward-secure. We also define and construct stronger forms of necessarily interactive PKE/PKMA schemes, called {\em confirmed encryption} and {\em confidential authentication}.

Anonymous Key Exchange.

We extend our definitional framework for interactive PKE and PKMA schemes to give definitions and constructions of (necessarily interactive) {\em anonymous key exchange} (1-KE), where an anonymous (unkeyed) party establishes a key with an authenticated (keyed) party. Unlike prior work, which defines 1-KE by ``downgrading'' the hairy and complex definition of {\em mutually authenticated} key exchange (2-KE), our definition is very ``short'' and easy to understand. We also show simple and general connections between anonymous KE and (interactive) confirmed PKE/confidential PKMA schemes. As a result, we obtain old and new schemes for anonymous KE in a clean and modular manner. For example, we obtain the first $2$-round anonymous KE that is both (passively) deniable and forward-secure.