Abstract

For any pair (X, Z) of correlated random variables we can think of Z as a randomized function of X. If the domain of Z is small, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as simulating auxiliary inputs. This idea of simulating auxiliary information turns out to be a very useful tool, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results:

(a)

We present a novel boosting algorithm for constructing the simulator. This boosting proof is of independent interest, as it shows how to handle “negative mass” issues when constructing probability measures by shifting distinguishers in descent algorithms. Our technique essentially fixes the flaw in the TCC’14 paper “How to Fake Auxiliary Inputs”.

(b)

The complexity of our simulator is better than in previous works, including results derived from the uniform min-max theorem due to Vadhan and Zheng. To achieve \((s,\epsilon )\)-indistinguishability we need the complexity \(O\left( s\cdot 2^{5\ell }\epsilon ^{-2}\right) \) in time/circuit size, which improves previous bounds by a factor of \(\epsilon ^{-2}\). In particular, we get meaningful provable security for the EUROCRYPT’09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like \(\mathsf {AES256}\).

Our boosting technique utilizes a two-step approach. In the first step we shift the current result (as in gradient or sub-gradient descent algorithms), and in the second step we fix the biggest violation of the non-negativity constraints (if any).
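The two-step update can be illustrated with a minimal numerical sketch. This is an assumption-laden toy, not the paper's construction: the function name `boosting_step`, the array representation of the signed measure \(h(x,z)\), and the choice to redistribute the repaired mass uniformly over the remaining points are all hypothetical simplifications.

```python
import numpy as np

def boosting_step(h, D, gamma):
    """One iteration of the two-step update (illustrative sketch only).

    h:     (n_x, n_z) array, the current signed measure h(x, z)
    D:     (n_x, n_z) array, values of the current distinguisher
    gamma: step size
    """
    n_x, n_z = h.shape
    # Step 1: shift the current result against the distinguisher,
    # as in a (sub)gradient descent update.
    h_tilde = h + gamma * D
    # Step 2: for each x, fix the biggest violation of the
    # non-negativity constraint (if any), redistributing the mass
    # so that the total mass conditioned on x is preserved.
    h_new = h_tilde.copy()
    for x in range(n_x):
        z0 = np.argmin(h_tilde[x])          # z minimizing h_tilde(x, .)
        if h_tilde[x, z0] < 0:
            deficit = -h_tilde[x, z0]
            other = np.arange(n_z) != z0
            h_new[x, other] -= deficit / (n_z - 1)
            h_new[x, z0] = 0.0
    return h_new
```

Note that only the largest violation is repaired per iteration; other entries may remain (or become) slightly negative, which is exactly the residual "negative mass" tracked by the recursion in the analysis below.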

In the original setting we have \(\mathcal {Z} = \{0,1\}^{\lambda }\). In the proof of the claimed better bound \(O\left( s\cdot 2^{3\lambda }\epsilon ^{-2}\right) \) there is a mistake on page 18 (eprint version), when the authors enforce a signed measure to be a probability measure by a mass shifting argument. The number M defined there is in fact a function of x and is hard to compute, whereas the original proof assumes that this is a constant independent of x. During iterations of the boosting loop, this number is used to modify the distinguisher class step by step, which drastically blows up the complexity (exponentially in the number of steps, which is already polynomial in \(\epsilon \)). In the min-max based proof giving the bound \(O\left( s\cdot 2^{3\lambda }\epsilon ^{-4}\right) \) a fixable flaw is a missing factor of \(2^{\lambda }\) in the complexity (page 16 in the eprint version), which arises because what is constructed in the proof is only a probability mass function, not yet a sampler [Pie15].

where the inequality line follows from \(\tilde{h}^{t+1}(x,z_0) < 0\) and Eq. (16). But by the definition of \(z_0= z_{\text {min}}^{t}(x)\) we have \(\tilde{h}^{t+1}(x,z_0) = \min _{z} \tilde{h}^{t+1}(x,z)\). Since this value is negative, we get

which is in addition trivially true if \(\tilde{h}^{t+1}(x,z) \geqslant 0\) for all z. Since we have \(\textsf {NegativeMass}\left( {h}^{0}(x,\cdot ) \right) = 0\), expanding this recursion till \(t=0\) gives an upper bound \(|\mathcal {Z}|\gamma \cdot \sum _{j\leqslant t+1} \left( 1-|\mathcal {Z}|^{-2}\right) ^{j}\), which is smaller than \(|\mathcal {Z}|^{3}\gamma \) by the convergence of the geometric series. This finishes the proof of the first part.
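For concreteness, the geometric-series step can be spelled out using the standard closed form for a geometric sum with ratio \(1-|\mathcal {Z}|^{-2}\):

\[
|\mathcal {Z}|\gamma \cdot \sum _{j\leqslant t+1} \left( 1-|\mathcal {Z}|^{-2}\right) ^{j}
\;<\;
|\mathcal {Z}|\gamma \cdot \sum _{j=0}^{\infty } \left( 1-|\mathcal {Z}|^{-2}\right) ^{j}
\;=\;
|\mathcal {Z}|\gamma \cdot \frac{1}{|\mathcal {Z}|^{-2}}
\;=\;
|\mathcal {Z}|^{3}\gamma .
\]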

To prove the second part, recall that by the definition of \(z_0\) we have \(\tilde{h}^{t+1}(x,z_0) = \min _{z} \tilde{h}^{t+1}(x,z)\). Suppose that \(\tilde{h}^{t+1}(x,z_0) < 0\) (that is, there is a negative mass in \(\widetilde{h}^{t+1}(x,\cdot )\)). Now, by the definition of \(h^{t+1}\), we get

Note that this inequality is true even if \(\widetilde{h}^{t+1}(x,z_0) \geqslant 0\), that is \(\widetilde{h}^{t+1}(x,z) \geqslant 0\) for all z, as then \({h}^{t+1}(x,z)\geqslant 0\) for all z. By expanding this recursion, and noticing that \(\min (h^{0}(x,z),0) = 0\) for all x, z by definition, we get