Monday, December 7, 2015

In Crypto 2015, Elias, Lauter, Ozman and Stange [ELOS15] proposed a successful attack on the decision version of the non-dual Ring-LWE problem over number fields $K$ for two specific families of defining polynomials. Both families have the common feature that the coefficients of the polynomials are dependent on the modulus $q$.

We extended this attack to the search version of Ring-LWE by exploiting the fact that the error distributions in such number fields become very skewed when moving away from the Minkowski space. In particular, the largest errors wrap around modulo $q$ (and thus make it impossible to decrypt the secret), while the smallest ones are so negligible that we obtain exact linear equations that reveal the secret. Our attack applies to every modulus up to $q$, and requires fewer samples.

Let $M$ be the $n \times n$ matrix of the canonical embedding $K \to \mathbb{C}^n$ with respect to the polynomial basis of $K$, and let $N$ be its inverse. The "skewness" of the errors can be described nicely by the singular value decomposition (SVD) of the matrix $N$ and its condition number $k(N) = s_1(N)/s_n(N)$, where $s_1$ and $s_n$ are the largest and the smallest singular values respectively.

Recall that the SVD is given by $N = U \cdot \Sigma \cdot \overline{V}^t$, where $U$, $V$ are unitary matrices and $\Sigma$ is a diagonal matrix containing the singular values. The image of the unit sphere under $N$ is an ellipsoid whose axes are defined by the columns of $U$, with lengths equal to the corresponding singular values. Hence, the condition number captures how heavily this ellipsoid is deformed.

All vulnerable instances in [ELOS15] are defined by polynomials $f_{n, a, b} = x^n +ax + b$, where $a$ and $b$ are such that $f_{n,a,b}(1) \equiv 0 \bmod q$, which is the property that they exploit, but that our attack does not need. If we consider their first family of samples, where $a = 0$ and $b = q - 1$, the SVD of $N$ is of the form $I_{n \times n} \cdot \Sigma \cdot \overline{V}^t$. Thus a spherical error distribution on the Minkowski space with parameter $\sigma$ induces an elliptical distribution on the number field, where the $i$th coordinate is distributed as a Gaussian with $\sigma_i = s_i(N) \cdot \sigma$. Moreover, the condition number is close to the modulus $q$, that is, the elliptical distribution is stretched along axes whose lengths range roughly from $\sigma_1$ to $q \cdot \sigma_1$. For the second family we did not find such an explicit form of the SVD. Nevertheless, the unitary matrix $U$ remains close to being diagonal (see the heat map below) and the singular values form a near-geometric series, again leading to a condition number close to $q$.
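
For the first family this skewness is easy to reproduce numerically. The following sketch uses hypothetical toy parameters $n = 8$, $q = 257$ (the vulnerable instances are much larger): the roots of $f = x^n + (q-1)$ are $r\zeta$ with $r = (q-1)^{1/n}$ and $\zeta^n = -1$, so the singular values of $N$ form a geometric series and the condition number is $(q-1)^{(n-1)/n}$, close to $q$:

```python
import numpy as np

n, q = 8, 257                         # hypothetical toy parameters
r = (q - 1) ** (1.0 / n)

# roots of f = x^n + (q-1): r * zeta, with zeta ranging over the n-th
# roots of -1
zetas = r * np.exp(1j * np.pi * (2 * np.arange(n) + 1) / n)

# canonical embedding matrix w.r.t. the polynomial basis 1, x, ..., x^(n-1)
M = np.vander(zetas, n, increasing=True)
N = np.linalg.inv(M)

s = np.linalg.svd(N, compute_uv=False)   # singular values, in descending order
cond = s[0] / s[-1]

# the singular values decay geometrically with ratio (q-1)^(1/n), so the
# condition number is (q-1)^((n-1)/n) -- here 256^(7/8) = 128, close to q
print(cond)
```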

It can easily be seen that for both families and for the choices of $\sigma$ made in [ELOS15], the Ring-LWE distribution generates negligible errors in the higher terms of the polynomial basis. More precisely, each Ring-LWE sample $(a, b = a \cdot s + e)$ can be written in terms of coefficient vectors as

$$\vec{b} = M_{a} \cdot \vec{s} + \vec{e} \bmod q,$$

where $M_{a}$ corresponds to multiplication by $a$. By rounding, the higher error terms are removed and the last equations become exact equations in the coefficients of the secret. If the highest $\left\lceil n/k \right\rceil$ error terms round to zero then we need only $k$ Ring-LWE samples to reveal the secret (for these concrete examples 10 samples amply suffice).
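
The rounding step can be illustrated as follows (a toy sketch with hypothetical parameters, not the attack code from the paper). For the first family $x^n \equiv 1 \bmod q$, so $M_a$ is circulant, and the error coefficient on $x^i$ has standard deviation roughly $\sigma \cdot r^{-i}$ with $r = (q-1)^{1/n}$:

```python
import numpy as np

n, q, sigma = 8, 257, 1.0      # hypothetical toy parameters
r = (q - 1) ** (1.0 / n)       # the singular values of N decay like r**-i
rng = np.random.default_rng(1)

def mul_matrix(a):
    # for this family x^n = -(q-1) = 1 mod q, so multiplication by a
    # in Z_q[x]/(x^n + q - 1) is cyclic convolution: M_a is circulant
    return np.stack([np.roll(a, i) for i in range(n)], axis=1) % q

s = rng.integers(0, q, n)                           # the secret
a = rng.integers(0, q, n)                           # public ring element
e = rng.normal(0.0, sigma * r ** (-np.arange(n)))   # skewed error
b = (mul_matrix(a) @ s + np.rint(e).astype(int)) % q

# the high-degree error coefficients round to zero with overwhelming
# probability, so those rows of b = M_a s + e are exact equations in s
exact = b == (mul_matrix(a) @ s) % q
print(exact[n // 2:])
```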

Friday, December 4, 2015

It is the day after AsiaCrypt 2015 and there are two workshops being
held in Auckland. The one which is most relevant for my research is that
on Lattice Based Cryptography, which consists of four talks: one by
Jung Hee Cheon on "Multilinear Maps and Their Cryptanalysis", one by
Amit Sahai on "Obfuscation", one by Fre Vercauteren on "Weak Instances
of RLWE" and one by Martin Albrecht on "Small Secret LWE".

Cheon first described a very naive version of multi-linear maps and then
went on to show how this can be attacked by creating non-trivial
encodings of zero, and then taking greatest common divisors. Then he
went on to generalise this naive scheme to the CLT scheme (which is a
bit like the DGHV FHE scheme). The naive attack does not apply to CLT as
the dimension is increased, meaning taking naive greatest common divisors
would not work. Cheon then showed how to extend the naive attack to the
CLT case by turning the gcd extraction into an eigenvalue extraction
problem. This is done by building quadratic forms which represent encodings
of zero. The result is that for the CLT scheme one can break the
equivalent of the DLP problem.
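
The gcd step of the naive attack can be sketched in a few lines (toy parameters, purely illustrative; real encodings also carry a message modulus and far larger noise):

```python
from math import gcd
from random import randrange, seed

seed(42)
p = 1000003                          # the secret modulus (a toy prime)

def encode_zero():
    # a non-trivial encoding of zero: zero message, random noise r
    return randrange(1, 2**20) * p

# every encoding of zero is a multiple of p, so taking gcds of a few of
# them reveals p (the random cofactors are coprime with high probability)
g = 0
for _ in range(8):
    g = gcd(g, encode_zero())
print(g)   # a multiple of p, and typically exactly p
```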

Cheon then went on to present the GGH scheme, which is a bit like the
NTRU FHE scheme; except that instead of encrypting via c=[(m+r*p)/z] for
an integer p, one encodes via c=[(m+r*g)/z] for a polynomial g which
generates the ideal lattice <g>. Modifying the prior attack in
this situation allows us to recover a basis of this ideal. But finding a
short vector in this lattice can be hard. However, by utilizing
encodings of zero one can actually solve the equivalent of the CDH
problem.

Both attacks rely heavily on the presence of encodings of zero. So the
attacks do not apply to situations in which one does not publish such
encodings, i.e. applications such as indistinguishability obfuscation
(iO).

Amit Sahai then gave an introduction to iO; he motivated it via an
analogy of an attacker who captures your brain and is able to read and
tamper with every neuron, yet we still do not want the attacker to know
what we are thinking about. This is the problem which obfuscation tries
to solve in the computing realm. Martin pointed out that this would be a
great way to produce malware!

Amit then put Multi-Party Computation within this analogy. He suggested
we can think of MPC as protecting our brain against the tampering
adversary, by dividing the brain up into portions. As long as one
portion is kept out of the adversary's control we can use MPC to protect
our thoughts. Obfuscation tries to do the same, without there needing
to be an honest part of the brain.

Any program which is suitable for obfuscation must be unlearnable from
query access to the program, since otherwise the adversary could learn
the program from its input/output behaviour. However, black-box
obfuscation has been shown to be impossible; essentially because there
are contrived programs which are unlearnable but for which one cannot
produce an obfuscation, since any obfuscated version of such a program
admits an explicit attack against it.

This is why iO as a concept was presented; since it at least seems
possible to achieve. The idea is that if you have two equivalent
programs and we obfuscate one of them, then the adversary cannot tell
which one we obfuscated. One way of thinking of this is as a
pseudo-canonicalizer. The question is what useful things one could do if
one could create an obfuscator satisfying the iO definition. Amit gave the
application of building demo versions of software, without needing to
re-engineer the software.

Fre Vercauteren then discussed a more in depth analysis of a paper from
CRYPTO this year on Weak Instances of Ring-LWE. The CRYPTO paper gave
instances where decision Ring-LWE was easy, but search appeared to be
hard. However, Fre's talk showed that the search problem was in fact
easy from the start, and thus the CRYPTO paper was less surprising than
it at first seemed to be. As with all things on Ring-LWE the question
arises as to how to choose the error distributions.

Fre spent the first part of his talk discussing the geometry of number
fields, and in particular the Minkowski embedding. The Ring-LWE problem
generates errors according to a discrete Gaussian distribution in the
Minkowski embedding, whereas Poly-LWE generates the errors according to a
discrete Gaussian in the polynomial embedding.

Eisentrager et al discussed cases for which Poly-LWE is easy; these
were then extended by Elias et al to special cases of decision Ring-LWE.
They did this by mapping the special Ring-LWE instance to a special
Poly-LWE instance. This is done by pulling back the problem from Ring-LWE
to Poly-LWE via the matrix which defines the Minkowski embedding. The
Poly-LWE attack requires that q is larger than f(1), and hence q will
"kind of show up" in the coefficients of the defining polynomial f. So
the fields being attacked are very special indeed.

Tuesday, December 1, 2015

As a new PhD student working on the HEAT project at UPMC, I have spent the last two months reading about Fully Homomorphic Encryption. Among my various readings, I noticed an article presenting a promising new open-source C++ library: NFLlib. It has been developed for both the HEAT project and CRYPTOCOMP and is dedicated to ideal lattice cryptography.

This library deals with arithmetic in the widespread cyclotomic rings $\mathbb{Z}_q[X]/(X^n+1)$ with $n$ a power of two. Unlike the Microsoft library SEAL, which implements the YASHE encryption scheme, NFLlib provides the elementary functions to build any cryptosystem over those cyclotomic rings. By using the Chinese Remainder Theorem, an optimized Number Theoretic Transform and various implementation optimization techniques exploiting SIMD instructions, it achieves better performance than the classical libraries NTL and FLINT.
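
The core trick is the negacyclic NTT: multiplication in $\mathbb{Z}_q[X]/(X^n+1)$ becomes pointwise multiplication after evaluating at the odd powers of a primitive $2n$-th root of unity modulo $q$. A sketch with toy parameters (the idea only, not NFLlib's API):

```python
n, q = 8, 257     # toy parameters with q = 1 (mod 2n)
psi = 249         # a primitive 16th root of unity mod 257 (psi**8 % q == 256)

def ntt_points(a):
    # evaluate a at psi^1, psi^3, ..., psi^(2n-1); written as O(n^2) sums
    # for clarity, whereas a real NTT does this in O(n log n)
    return [sum(c * pow(psi, (2 * i + 1) * j, q) for j, c in enumerate(a)) % q
            for i in range(n)]

def negacyclic_mul(a, b):
    # schoolbook product followed by reduction modulo x^n + 1
    full = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            full[i + j] += ai * bj
    return [(full[i] - full[i + n]) % q for i in range(n)]

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
lhs = ntt_points(negacyclic_mul(a, b))
rhs = [(x * y) % q for x, y in zip(ntt_points(a), ntt_points(b))]
assert lhs == rhs   # the product's evaluations match the pointwise products
```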

Monday, November 16, 2015

It is not just the HEAT project which is working on trying to find applications of Somewhat Homomorphic Encryption in practice; others around the world are also doing this. In particular Microsoft Research Labs have just announced a project called SEAL (Simple Encrypted Arithmetic Library), which is detailed in the following paper: http://research.microsoft.com/pubs/258435/ManualHE.pdf, with code available here: http://sealcrypto.codeplex.com/

The Microsoft work is aimed at medical information, in particular bioinformatics algorithms related to genomic data. The SEAL project is built around the YASHE SHE scheme based on the cyclotomic ring defined by $X^n+1$, for $n$ a power of two. The associated paper explains how large integers and floating point operations can be encoded within the YASHE scheme.
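
The integer encoding is easy to sketch: write the integer in binary and use the bits as polynomial coefficients, so that decoding amounts to evaluating the polynomial at $x = 2$. The snippet below illustrates the idea only, not SEAL's actual API:

```python
def encode(m, width=8):
    # binary expansion of m as polynomial coefficients (constant term first)
    return [(m >> i) & 1 for i in range(width)]

def decode(poly):
    # evaluate the polynomial at x = 2
    return sum(c * 2**i for i, c in enumerate(poly))

a, b = encode(13), encode(25)
# coefficient-wise addition of encodings corresponds to integer addition
# (in the real scheme the coefficients must not wrap around modulo t)
s = [x + y for x, y in zip(a, b)]
assert decode(s) == 13 + 25
```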

Wednesday, November 4, 2015

To analyze the efficiency of the homomorphic encryption schemes at the core of the HEAT project, we studied how the Rényi divergence can be used instead of the statistical distance. The Rényi divergence is a measure of closeness between two probability distributions. In the HEAT project, we showed that the Rényi divergence can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography, and therefore in homomorphic encryption schemes.
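
For concreteness, the Rényi divergence of order $a > 1$ between discrete distributions $P$ and $Q$, in the multiplicative form commonly used in the lattice-based literature, is $R_a(P\|Q) = \big(\sum_x P(x)^a / Q(x)^{a-1}\big)^{1/(a-1)}$; identical distributions have divergence exactly 1. A small sketch:

```python
def renyi(P, Q, a=2.0):
    # multiplicative Renyi divergence of order a > 1
    assert a > 1.0
    return sum(p**a / q**(a - 1.0) for p, q in zip(P, Q)) ** (1.0 / (a - 1.0))

P = [0.25, 0.25, 0.25, 0.25]
Q = [0.26, 0.24, 0.25, 0.25]      # a slight perturbation of P
print(renyi(P, P))                # identical distributions give exactly 1
print(renyi(P, Q))                # close distributions give a value near 1
```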

Tuesday, October 20, 2015

Last week, the HEAT summer school on Mathematical and Practical aspects of Fully Homomorphic Encryption and Multilinear Maps took place in (almost) sunny Paris. The talks all covered a range of subjects, with around three talks per topic, which allowed for a good in-depth understanding of what was being presented.

The school started off with defining what Fully Homomorphic Encryption is, and why we would want to use it. The main application is, of course, outsourcing computation. I have a bunch of data I want to compute on, but do not want (or cannot) compute it all by myself, so I would like to outsource it to an external server. But I may not trust the server, and do not want it to learn anything about my data. So I would like to have an encryption scheme where I encrypt my data, then send it off to someone to perform computations on it, and at the end I recover the result of the computation. This is exactly what FHE gives us.

Most FHE schemes are based on the Learning with Errors problem, introduced in [R05], which was presented in detail on the first day. This was followed by a discussion of the [GSW13] cryptosystem, based on the approximate eigenvector method.
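
As a reminder of the object underlying all these schemes, LWE sample generation can be sketched in a few lines (hypothetical toy parameters; real ones are far larger):

```python
import numpy as np

n, m, q, sigma = 16, 32, 3329, 2.0   # hypothetical toy parameters
rng = np.random.default_rng(0)

s = rng.integers(0, q, n)                            # secret vector
A = rng.integers(0, q, (m, n))                       # uniform public matrix
e = np.rint(rng.normal(0.0, sigma, m)).astype(int)   # small Gaussian error
b = (A @ s + e) % q                                  # the LWE samples (A, b)

# knowing s, the small noise can be stripped off; without s, (A, b) is
# conjecturally indistinguishable from uniform
noise = (b - A @ s) % q
```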

This was followed by a discussion of the Ring Learning with Errors problems, and a brief presentation of the various reductions, as presented in [LPR10]. A presentation of the BGV cryptosystem was given, together with an introduction about the use of Galois theory in RLWE schemes, in order to use the “packed ciphertexts” method.

The third day of the summer school was the day for Multilinear Maps. The CLT and GGH schemes were discussed, and also a brief discussion on attacks. The summary is that although there are zeroizing attacks for all the constructions at the moment, these do not seem to affect the obfuscation applications. All the attacks are constructed from low-level encodings of zero, which is why the attacks do not apply to obfuscation applications.

The morning of the fourth day was very mathematical, and consisted of two talks: one on the algebraic geometry of numbers and the LLL and BKZ algorithms, and one on recovering short generators of principal ideals. This was followed by a presentation of zeroizing attacks on multilinear maps. The final day of the school was devoted to obfuscation. This started with the surprising fact that if $P=NP$ then any program can be obfuscated. Thus, unlike any other cryptography, iO cannot imply hardness; but it can be used to leverage hardness. The idea of using straddling sets from [BGKPS] to construct obfuscation from branching programs was also discussed.

Tuesday, September 22, 2015

There are a number of different Ring-LWE based homomorphic encryption schemes, the two most famous being the BGV scheme (due to Brakerski, Gentry and Vaikuntanathan) and a modification of the famous NTRU encryption scheme. Each of these schemes comes in two flavours: in the first (traditional) flavour the messages are held in the lower-order bits, whilst in the so-called scale-invariant flavours the messages are held in the upper-order bits. The advantage of the scale-invariant systems is that they avoid the need for strong noise control procedures such as modulus switching.

A lot of work has been conducted into optimizing these schemes, but many papers only consider a certain set of parameters (for example, some consider characteristic two plaintext spaces only), or consider only one form of key-switching operation (there are two main variants of key switching in the literature). Generally speaking, the usual consensus is that the scale-invariant version of the NTRU scheme (called YASHE) is to be preferred. This is because YASHE offers the benefits of scale-invariance and, in addition, its ciphertexts consist of only one ring element, as opposed to the two needed for the BGV scheme.

This consensus is however based on limited exploration of the design space. In the HEAT output "Which Ring Based Somewhat Homomorphic Encryption Scheme is Best?" by Bristol researchers Ana Costache and Nigel P. Smart, the authors explore the design space in more detail. They examine all four schemes using a consistent mathematical model of noise growth for the first time; previous work uses a different model for each scheme in each paper. They conclude that whilst YASHE may be preferable for small characteristic plaintext spaces, for large plaintext spaces the BGV scheme (in the non-scale-invariant form) comes out significantly more efficient. This conclusion holds irrespective of the number of levels (i.e. multiplicative depth) which one requires the homomorphic encryption scheme to support.

Such an analysis has important implications for practical applications of homomorphic encryption. Different application domains will place different constraints on the plaintext spaces. For example if one is evaluating binary circuits in an application then a characteristic two plaintext space is to be preferred. However, if one is evaluating a complex arithmetic expression (such as in a statistical analysis) then one may model the plaintext as integers and hence utilize a large characteristic to avoid integer overflow.

Monday, July 13, 2015

This paper proposes an instruction-set hardware architecture for all
building blocks required in polynomial ring based fully homomorphic schemes and
uses the hardware architecture to instantiate the somewhat homomorphic
encryption scheme YASHE on a large Virtex 7 FPGA. The building blocks present
in the architecture are sufficiently generic to allow implementation of other
homomorphic schemes based on RLWE or NTRU.

System Setup

The computations are performed in a modular polynomial ring of the
form R = Z[x]/(f(x)) where f(x) is a monic irreducible polynomial
Φ_d(x) of degree n. To utilize single instruction
multiple data (SIMD) operations, f(x) is chosen to be a cyclotomic
polynomial. The plaintext space is R_2.

The chosen parameters support homomorphic evaluation of SIMON-64/128
(which has a multiplicative depth of 44); in particular d = 65535 (and thus
the degree of f(x) is n = 2^15 = 32768), log_2(q) = 1228, and the discrete
Gaussian distribution has parameter 8. The chosen parameter set provides at
least 128-bit security and supports SIMD in 2048 slots.

High-level optimizations

Two main operations in the YASHE scheme are YASHE.Add for
homomorphic addition and YASHE.Mult for homomorphic multiplication. While
YASHE.Add is relatively simple, YASHE.Mult is very costly
since one first needs to compute c_1 · c_2 over the
integers, then scale the result by t/q and round, before mapping back
into the ring R_q. For polynomial multiplication of such a
large degree, n = 2^15, an NTT is preferred over other polynomial
multiplication algorithms thanks to its almost linear time complexity.
Moreover, to tackle the problem of long-integer arithmetic, the CRT is
used to split the coefficients into 30-bit small coefficients.
Application of the CRT helps to achieve parallelization and to
utilize the small DSP multipliers available in FPGAs.
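
The CRT splitting can be sketched as follows (illustrative pairwise-coprime ~30-bit moduli, not the ones from the design): arithmetic on huge coefficients becomes independent word-sized operations, one per modulus, which is exactly what maps well onto the small DSP multipliers:

```python
from math import prod

# illustrative pairwise-coprime ~30-bit moduli (not the design's actual ones)
moduli = [2**30 - 33, 2**30 - 35, 2**30 - 41]
M = prod(moduli)

def to_residues(x):
    return [x % m for m in moduli]

def from_residues(rs):
    # classic CRT reconstruction from the residues
    x = 0
    for r, m in zip(rs, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse of Mi mod m
    return x % M

a, b = 123456789123456789, 987654321987654321
# one big multiplication becomes three independent word-sized ones
rc = [(x * y) % m for x, y, m in zip(to_residues(a), to_residues(b), moduli)]
assert from_residues(rc) == (a * b) % M
```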

High-level Architecture

The instruction-set architecture shown in the figure has three main
components: a computer, a HE-coprocessor and an external memory. The
HE-coprocessor is implemented on an FPGA and supports NTT of polynomials,
coefficient wise additions, subtractions and multiplications, computation of
the residues using the CRT, computation of the coefficients modulo a large
modulus from the CRT-residues, and scaling of the coefficients. The computer
works in master-mode and instructs the HE-coprocessor to perform the required
operations. Since the FPGA has a limited amount of internal memory in the form
of block RAMs, only a few residue polynomials can be kept during a computation. A
large external memory is used to store the set of cipher-text polynomials.

The HE-coprocessor has a set of parallel processors, where each
parallel processor is composed of several cores. The parallel processors are
independent of each other and support a high degree of parallelism and
scalability. Since the architecture is very large, special care is taken to
reduce the routing complexity. For example, a routing-friendly parallel NTT algorithm
is used in the HE-coprocessor.

Experimental Results

The designed HE-coprocessor with 8 parallel processors, each
processor having 16 parallel cores, 16 small-CRT units and 2 large-CRT units was
compiled for the Xilinx Virtex-7 XC7V1140T-2 FPGA, the largest device of the
Virtex-7 FPGA family. The architecture consumes 23 % of the available
registers, 50 % of LUTs, 53 % of DSP slices, and 38 % of block RAMs. This
implementation evaluates SIMON-64/128 in approximately 157.7s (at 143MHz) and
it processes 2048 cipher-texts at once giving a relative time of only 77ms per
block. This is 26.6 times faster than the leading software implementation on a
4-core Intel Core-i7 processor running at 3.4GHz. The timing values do not
consider the overhead of data transfer between the external memory and the
HE-coprocessor.

Future Work

1. The architecture does not implement the required interface
between the external memory and the FPGA-based HE-coprocessor. The cost of data
transfer is very important considering the vast amount of data exchange during
various operations; and hence the interface should be properly designed. It
should be noted that most of the memory access can be performed in parallel
with the computation using a ping-pong scheme of two sets of block RAMs in the
FPGA. The FPGA and the master-computer operate on these two sets alternatively
between two consecutive instructions: when the FPGA operates on the first set,
the master-computer operates on the second, and vice versa. Since only 38% of
the available block RAMs are utilized in the proposed architecture, such a
ping-pong styled memory access seems practical in order to reduce the cost of
memory access.

2. The computation time can be reduced by performing low-level
optimizations in the architecture. For example, it might be possible to
increase the operating frequency beyond 200 MHz by optimizing the long critical
paths that are present in the architecture. The cycle requirement of the NTT
algorithm can also be reduced. Moreover, since the implemented architecture
consumes only ~50% of the available resources, it might be possible to fit
more parallel processors in the FPGA. This will require area reduction of
each processor and a better placement-routing optimization.

3. The results are obtained from a single FPGA. Since the
HE-coprocessor is composed of independent parallel processors, several FPGAs
can be used to reduce the computation time.

Monday, July 6, 2015

Homomorphic cryptography (at the core of the HEAT project) makes it possible to securely delegate computation over encrypted data, and is a very active research area. At ACM-CCS 2014, a top-tier conference on computer and communications security, a new scheme claimed to be the "most efficient of those that support an additive homomorphic property" was proposed by Cheon, Lee and Seo.

Understanding the security of the homomorphic cryptographic schemes is therefore of utmost importance, especially to select the most efficient and secure systems in HEAT. In this paper that will appear at CRYPTO 2015, a top-tier conference in cryptography, we show that the latter scheme is completely insecure. We present new lattice-based attacks that are effectively devastating for the proposed constructions. More precisely, we show that the parameters proposed by Cheon et al. and originally aiming at 128-bit security can be broken in a matter of seconds. And while it is possible to select parameters outside of the range in which our attacks run in polynomial time, they have to be so large as to render the proposed constructions severely uncompetitive (e.g. our asymptotic estimates indicate that 128 bits of security against our attacks require a modulus of at least 400,000 bits).

Friday, July 3, 2015

This paper reports on a hardware architecture implementing all building blocks used in polynomial ring based fully
homomorphic schemes. As an example, the YASHE encryption
scheme is implemented on top of this architecture and the
SIMON-64/128 block cipher was executed in the encrypted domain.

The building blocks are integrated in an instruction-set
coprocessor, which can be controlled by a computer for
evaluating arbitrary functions (up to the multiplicative
depth 44 and 128-bit security level).

This implementation evaluates SIMON-64/128 in approximately
157.7s (at 143MHz) and it processes 2048 ciphertexts at once
giving a relative time of only 77ms per block. This is 26.6
times faster than the leading software implementation on a
4-core Intel Core-i7 processor running at 3.4GHz.

Wednesday, July 1, 2015

In this blog post we outline the main
operations in the YASHE fully homomorphic encryption (FHE) scheme. For the exact details see the paper introducing YASHE.

López-Alt, Tromer, and
Vaikuntanathan proposed a (multi-key) FHE scheme based on the work by Stehlé and Steinfeld in which a provably secure version of NTRUEncrypt is presented with security based on standard problems in ideal lattices. The
FHE scheme from López-Alt, Tromer, and Vaikuntanathan needs to make an
additional assumption relating to the uniformity of the public key, the
so-called decisional small polynomial ratio (DSPR) assumption, to allow
homomorphic operations and remain semantically secure. Brakerski introduced an
approach to limit the noise growth during homomorphic operations via a
tensoring technique.

In this paper, the Yet Another Somewhat Homomorphic Encryption (YASHE) scheme is introduced, which
incorporates the best of these techniques. YASHE avoids the DSPR assumption by
using the techniques described by Brakerski, and constructs a new fully
homomorphic encryption scheme from the Stehlé and Steinfeld version based on
standard lattice assumptions and a circular security assumption.

Besides this theoretical advantage,
YASHE has other attractive properties:

The ciphertext consists of only a single ring element (as in this paper), as opposed to the two
or more ring elements for schemes based purely on the ring learning with errors (RLWE) assumption.

In the following I will describe
the more practical variant of YASHE (denoted YASHE' in the paper). Note that this practical variant does need the DSPR assumption.

In
order to show how YASHE works we need to fix some parameters. Selecting secure
parameters is a difficult task by itself for which tools have been generated
(see for instance our previous blog post).
For this post I simply assume correct and secure parameters have been chosen
(but see the paper for some example parameters). Given the security parameter
$\lambda$, fix a positive integer $d$ and modulus $q$ that determine $R={\mathbb Z}[X]/(\Phi_d(X))$, and $t$ with $1< t < q$, and distributions $\chi_{\mathrm{key}}$, $\chi_{\mathrm{err}}$ on $R$.

The message space is $R/tR=({\mathbb Z}[X]/(\Phi_d(X)))/(t({\mathbb Z}[X]/(\Phi_d(X))))$. The function $P_{w, q}$ is a generalization of PowersofTwo and $D_{w, q}$ is a generalization of BitDecomp from this paper. Instead of a radix-2 representation these functions use a radix-$w$ system. These functions take a single ring element and output $\ell_{w,q} = \lfloor\log_w(q)\rfloor + 2$ ring elements. The choice of $w$ is important since it allows for a trade-off between efficiency and error growth.

Key generation: KeyGen $(d,q,t,\chi_{\mathrm{key}}, \chi_{\mathrm{err}}, w)$

The key switching algorithm transforms an intermediate encryption into a ciphertext that can be decrypted with $f$ itself. The evaluation key is ${\sf evk} = [P_{w,q}(f) + \vec{e} + h \cdot \vec{s}]_q$, where $\vec{e}, \vec{s} \leftarrow \chi_{\mathrm{err}}^{\ell_{w,q}}$ are vectors of polynomials sampled from the error distribution $\chi_{\mathrm{err}}$. This key is a vector of quasi-encryptions of the secret key $f$ under its corresponding public key. It is required for the homomorphic multiplication operation and is therefore made public. This means we need to make a circular security assumption, namely that the scheme remains secure even when ${\sf evk}$ is publicly known.
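
The radix-$w$ functions $D_{w,q}$ and $P_{w,q}$ can be sketched for scalars (a simplified, non-balanced variant with hypothetical toy parameters; YASHE applies them coefficient-wise to ring elements). The key property is $\langle D_{w,q}(a), P_{w,q}(b)\rangle \equiv a \cdot b \bmod q$, while the individual digits of $D_{w,q}(a)$, and hence the noise growth, stay small:

```python
import math

w, q = 4, 12289          # hypothetical toy parameters

def ell(w, q):
    return math.floor(math.log(q, w)) + 2

def D(a, w, q):
    # radix-w digit decomposition of a mod q into ell(w, q) digits
    a %= q
    digits = []
    for _ in range(ell(w, q)):
        digits.append(a % w)
        a //= w
    return digits

def P(b, w, q):
    # the matching powers b * w^i mod q
    return [(b * w**i) % q for i in range(ell(w, q))]

a, b = 5000, 77
# inner product of the decomposition with the powers recovers a*b mod q
assert sum(d * p for d, p in zip(D(a, w, q), P(b, w, q))) % q == (a * b) % q
```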

YASHE in practice

This practical version of YASHE has been used to ensure the privacy of sensitive medical data. In this work it is shown how to privately conduct predictive analysis tasks on encrypted data using homomorphic encryption. As a proof of concept, a working implementation is given of a prediction service running in the cloud, which takes as input private encrypted health data and returns the probability of suffering cardiovascular disease in encrypted form. Since the cloud service uses homomorphic encryption, it makes this prediction while handling only encrypted data, learning nothing about the submitted confidential medical data.