Abstract

We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC ’17), which presented PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. Those PoWs, however, were presented as a ‘proof of concept’ of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs cannot be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and we further show that the PoWs’ structure can be exploited in ways previous heuristic PoWs could not.

This yields full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS ’16)) while still retaining significant structure that enables extra properties. Namely, we show that the PoWs of Ball et al. (STOC ’17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.

Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.

Acknowledgements

We are grateful to Oded Goldreich and Guy Rothblum for clarifying definitions of direct sum theorems, and for the suggestion of using interaction to increase the gap between solution and verification in our PoWs. We would also like to thank Tal Moran and Vinod Vaikuntanathan for several useful discussions. We also thank the anonymous reviewers for comments and references.

The bulk of this work was performed while the authors were at IDC Herzliya’s FACT center and supported by NSF-BSF Cyber Security and Privacy grant #2014/632, ISF grant #1255/12, and by the ERC under the EU’s Seventh Framework Programme (FP/2007-2013) ERC Grant Agreement #07952. Marshall Ball is supported in part by the Defense Advanced Research Projects Agency (DARPA) and Army Research Office (ARO) under Contract #W911NF-15-C-0236, NSF grants #CNS-1445424 and #CCF-1423306, the Leona M. & Harry B. Helmsley Charitable Trust, ISF grant no. 1790/13, and the Check Point Institute for Information Security. Alon Rosen is also supported by ISF grant no. 1399/17. Manuel Sabin is also supported by the National Science Foundation Graduate Research Fellowship under Grant #DGE-1106400. Prashant Nalini Vasudevan is also supported by the IBM Thomas J. Watson Research Center (Agreement #4915012803), by NSF Grants CNS-1350619 and CNS-1414119, and by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office under contracts W911NF-15-C-0226 and W911NF-15-C-0236.

In this section, we prove a stronger direct sum theorem (and, thus, non-batchable evaluation) for \(\mathsf {FOV}^k\). That is, we prove Theorem 2.13.

In particular, it is sufficient to define a notion of batchability for parametrized families of functions with a monotonicity constraint. In our case, monotonicity will essentially say that “adding more vectors of the same dimension and field size does not make the problem easier.” This is a natural property of most algorithms. Namely, it holds if, for any fixed \(d\) and \(p\), \(\mathsf {FOV}^k\) is \((n,t,\delta )\)-batchable.

Definition A.1

A parametrized class, \(\mathcal {F}_\rho \), is not \((\ell ,t,\delta )\)-batchable on average over \(\mathcal {D}_\rho \), a parametrized family of distributions, if for any fixed parameter \(\rho \) and any algorithm \(\mathsf {Batch}_\rho \) that runs in time \(\ell (\rho )t(\rho )\) when given as input \(\ell (\rho )\) independent samples from \(\mathcal {D}_\rho \), the following holds for all large enough \(n\):

\[ \Pr _{x_1,\ldots ,x_{\ell (\rho )} \leftarrow \mathcal {D}_\rho }\left[ \mathsf {Batch}_\rho \left( x_1,\ldots ,x_{\ell (\rho )}\right) = \left( f_\rho (x_1),\ldots ,f_\rho (x_{\ell (\rho )})\right) \right] < \delta , \]

where \(f_\rho \) denotes the function in \(\mathcal {F}_\rho \) indexed by \(\rho \).
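The experiment underlying this definition is simple to state operationally. Below is a minimal Python sketch of estimating the batch success probability; all names (batch_success_probability, the toy f and batch) are hypothetical illustrations, not objects from the paper.

```python
import random

# A minimal sketch of the experiment behind Definition A.1: a batch
# evaluator "wins" only if it returns the correct value on *every* one
# of the ell independent samples drawn from the distribution.

def batch_success_probability(f, sample, batch, ell, trials=1000):
    """Empirically estimate Pr[Batch(x_1..x_ell) = (f(x_1), ..., f(x_ell))]."""
    successes = 0
    for _ in range(trials):
        xs = [sample() for _ in range(ell)]
        if batch(xs) == [f(x) for x in xs]:
            successes += 1
    return successes / trials

# Toy instantiation: f is squaring mod 17, and the "batch" algorithm just
# evaluates pointwise, so it succeeds with probability 1.
f = lambda x: x * x % 17
sample = lambda: random.randrange(17)
batch = lambda xs: [f(x) for x in xs]
print(batch_success_probability(f, sample, batch, ell=5))  # ~1.0
```

Non-batchability asserts that no such \(\mathsf {Batch}_\rho \) within the stated time budget attains success probability \(\delta \).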

Remark A.2

We use a more generic parameterization of \(\mathcal {F}_\rho \) by \(\rho \), rather than just by \(n\), because we use the downward self-reducibility of \(\mathsf {FOV}^k\): the batch-evaluation procedure must still run quickly as \(n\) shrinks, even while \(p\) and \(d\) remain the same.

We now show how a generalization of the list-decoding reduction of [BRSV17a] yields strong batch-evaluation lower bounds. Before we begin, we present a few lemmas from the literature to make certain bounds explicit.

First, we present an inclusion-exclusion bound from [CPS99] on the number of polynomials consistent with a fraction of \(m\) input-output pairs, \((x_1,y_1),\ldots ,(x_m,y_m)\). We restate it here in our notation for convenience.

Lemma A.3

([CPS99]). For a polynomial \(q\) over \(\mathbb {F}_p\), define \({{\mathrm{Graph}}}(q):=\{(i,q(i))\ |\ i\in [p]\}\). Let \(c>2\), \(\delta /2\in (0,1)\), and \(m\le p\) be such that \(m>\frac{c^2(d-1)}{\delta ^2(c-2)}\) for some \(d\). Finally, let \(I\subseteq [p]\) be such that \(|I|=m\). Then, for any set \(S=\{(i,y_i)\ |\ i\in I\}\), there are fewer than \(\lceil c/\delta \rceil \) polynomials \(q\) of degree at most \(d\) that satisfy \(|{{\mathrm{Graph}}}(q)\cap S|\ge m\delta /2\).
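For intuition, the bound can be checked by brute force over a tiny field. The following sketch uses toy parameters (chosen so that Corollary A.4’s hypothesis \(m>9d/\delta ^2\) holds); the point set and all names are illustrative.

```python
from itertools import product

# Brute-force sanity check (toy parameters, not the [CPS99] proof):
# over F_11, count degree-<=1 polynomials that agree with a set of
# point-value pairs on at least m*delta/2 points.

p, d = 11, 1                              # field size and degree bound
delta = 1.0                               # agreement parameter
I = list(range(p))                        # evaluation points; m = 11 > 9*d/delta^2
m = len(I)
S = {(i, (3 * i + 5) % p) for i in I}     # pairs lying on the line 3x + 5

count = 0
for coeffs in product(range(p), repeat=d + 1):   # all degree-<=d polynomials
    q = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    if sum((i, q(i)) in S for i in I) >= m * delta / 2:
        count += 1

print(count)  # here only the line itself qualifies; at most 3/delta in general
```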

Corollary A.4

Let S be as in Lemma A.3 with \(I=\{m+1,\ldots ,p\}\), for any \(m<p\). Then for \(m>9d/\delta ^2\), there are at most \(3/\delta \) polynomials, q, of degree at most d such that \(|{{\mathrm{Graph}}}(q)\cap S|\ge m\delta /2\).

where \({{\mathrm{Arith}}}(n)\) is a time bound on arithmetic operations over prime fields of size \(O(n)\).

Theorem A.7

For some \(k \ge 2\), suppose \(k\)-\(\mathsf {OV}\) takes \(n^{k-o(1)}\) time to decide for all but finitely many input lengths for any \(d = \omega (\log {n})\). Then, for any positive constants \(c,\varepsilon >0\) and \(0<\delta <\varepsilon /2\), \(\mathsf {FOV}^k\) is not \((n^c, n^{k-\varepsilon }, \delta )\)-batchable on average.

Proof

Let \(m=n^{k/(k+c)}\), as before. By Proposition 4.5, \(\mathsf {FOV}^k\) with vectors of dimension \(d=\left( \frac{k}{k+c}\right) ^2\log ^2 n\) is \((m,m^c)\)-downward reducible to \(\mathsf {FOV}^k\) with vectors of dimension \(\log ^2(m)\), in time \(\tilde{O}(m^{c+1})\). Note that \(\log ^2(m)=\left( \frac{k}{k+c}\right) ^2\log ^2 n=d\), so the sub-instances have vectors of the same dimension.
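As a quick numeric check of these parameter choices (illustrative values of \(n\), \(k\), and \(c\); the base of the logarithm is immaterial as long as it is used consistently):

```python
import math

# With m = n^{k/(k+c)}, the dimension d = (k/(k+c))^2 * log^2(n) equals
# log^2(m): the sub-instances on m vectors again have dimension log^2 of
# their own size, which is what the recursion relies on.

n, k, c = 2 ** 20, 2, 1
m = n ** (k / (k + c))
d = (k / (k + c)) ** 2 * math.log(n) ** 2

print(m)                                  # n^{2/3}
print(d, math.log(m) ** 2)                # equal up to floating-point error
```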

For each \(j\in [m^c]\), \(X_j=(U^{j1},\ldots ,U^{jk}) \in \{0,1\}^{kmd}\) is an instance of Boolean-valued orthogonal vectors produced by the above reduction. Now, consider splitting each of these lists in half, \(U^{ji}=(U^{ji}_0,U^{ji}_1)\) for \(i\in [k]\), so that \((U^{j1}_{a_1},\ldots ,U^{jk}_{a_k})\in \{0,1\}^{kmd/2}\) for each \(\varvec{a}\in \{0,1\}^k\). Interpret \(\varvec{a}\) as a binary number in \(\{0,\ldots ,2^k-1\}\). Then, define the following \(2^k\) sub-problems:

\[ A^{\varvec{a}} := \mathsf {FOV}^k\left( U^{j1}_{a_1},\ldots ,U^{jk}_{a_k}\right) \quad \text {for } \varvec{a}\in \{0,1\}^k, \qquad D_j(x) := \mathsf {FOV}^k\left( \textstyle \sum _{i\in [2^k]}\delta _i(x)\,U^{j1}_{a_1(i-1)},\ \ldots ,\ \sum _{i\in [2^k]}\delta _i(x)\,U^{jk}_{a_k(i-1)}\right) , \]

where \(a_l(i-1)\) denotes the \(l\)-th bit of the binary representation of \(i-1\), and \(\delta _i\) is the unique degree-\((2^k-1)\) polynomial over \(\mathbb {F}_p\) that takes value 1 at \(i\in [2^k]\) and 0 on all other values in \([2^k]\). Notice that \(D_j(i)=A^{i-1}\) for \(i\in [2^k]\), and that \(\mathsf {FOV}^k(X_j)=\sum _{\varvec{a}\in \{0,1\}^k} A^{\varvec{a}}\), since splitting the lists in half partitions the \(k\)-tuples of vectors being summed over.
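The \(\delta _i\) here are just the Lagrange basis polynomials on the points \([2^k]\). A small sketch over a toy field (placeholder values stand in for the \(A^{i-1}\); an actual \(D_j\) is evaluated via \(\mathsf {FOV}^k\) on interpolated lists as above):

```python
# Sketch of the interpolation step: delta_i is the Lagrange basis
# polynomial on the points {1, ..., 2^k} over F_p, and the curve through
# the sub-problem values A^0, ..., A^{2^k - 1} agrees with D_j on [2^k].

def lagrange_basis(points, i, x, p):
    """Evaluate the polynomial that is 1 at points[i] and 0 at the rest (mod p)."""
    num, den = 1, 1
    for j, pt in enumerate(points):
        if j != i:
            num = num * (x - pt) % p
            den = den * (points[i] - pt) % p
    return num * pow(den, p - 2, p) % p   # division via Fermat inverse

p, k = 101, 2
points = list(range(1, 2 ** k + 1))       # the set [2^k] = {1, ..., 4}
A = [7, 13, 42, 99]                        # placeholder sub-problem values A^0..A^3

def curve(x):
    return sum(A[i] * lagrange_basis(points, i, x, p) for i in range(len(points))) % p

print([curve(i) for i in points])          # [7, 13, 42, 99]: matches D_j on [2^k]
print(curve(10))                           # an evaluation point outside [2^k]
```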

Let \(r>2^{k+1}d\log (m)/\delta ^2\). To recover each \(D_j\), we evaluate it at the points \(2^k+1,2^k+2,\ldots ,2^k+r\): for each such point \(i\), we run \(\mathsf {Batch}\) on the \(m^c\) instances corresponding to \(D_1(i),\ldots ,D_{m^c}(i)\). By the properties of \(\mathsf {Batch}\), and because the \(D_j(\cdot )\)’s are independent, \(D_1(i),\ldots ,D_{m^c}(i)\) are independent for any fixed \(i\). Thus, each call to \(\mathsf {Batch}\) returns all \(m^c\) values correctly with probability at least \(\delta \), and so \(\mathsf {Batch}\) is correct on at least \(\delta r/2\) of the \(i\)’s with probability at least \(1-\frac{4}{\delta r}=1-1/\text {polylog}(m)\), by Chebyshev’s inequality.
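A quick empirical check of this Chebyshev step (toy values of \(\delta \) and \(r\); the simulation treats each of the \(r\) points as an independent success with probability \(\delta \)):

```python
import random

# With r independent points, each correct with probability delta, the event
# "fewer than delta*r/2 points are correct" has probability at most 4/(delta*r).

delta, r, trials = 0.25, 400, 5000
bad = sum(
    sum(random.random() < delta for _ in range(r)) < delta * r / 2
    for _ in range(trials)
)
print(bad / trials, "<=", 4 / (delta * r))   # empirical frequency vs. bound
```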

Now, because \(\delta r/2 > \sqrt{16dr}\), we can run the list-decoding algorithm of Roth and Ruckenstein [RR00] to obtain a list of all polynomials of degree at most \(2^{k+1}d\) that agree with at least \(\delta r/2\) of the values. By Corollary A.4, there are at most \(L=3/\delta \) such polynomials.
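For intuition only, here is a brute-force stand-in for the list decoder over a tiny field; this is emphatically not the [RR00] algorithm, which produces the same list in polynomial time.

```python
from itertools import product

# Naive list decoding: enumerate all degree-<=d polynomials over F_p and
# keep those agreeing with the received word on at least `threshold` points.

def list_decode_naive(values, d, threshold, p):
    """Return coefficient tuples (c_0, ..., c_d) of all sufficiently close polys."""
    out = []
    for coeffs in product(range(p), repeat=d + 1):
        agree = sum(
            sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p == y
            for x, y in values
        )
        if agree >= threshold:
            out.append(coeffs)
    return out

p = 13
received = [(x, (2 * x + 3) % p) for x in range(8)]   # points on the line 2x + 3
received[0] = (0, 5)                                  # corrupt one position
print(list_decode_naive(received, d=1, threshold=4, p=p))  # recovers (3, 2), i.e. 3 + 2x
```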

By a counting argument, there can be at most \(2^{k}d\binom{L}{2}=O(dL^2)\) points in \(\mathbb {F}_p\) on which any two of the \(L\) polynomials agree. Because \(p>n^k>2^kd\binom{L}{2}\), a point \(\ell \) on which no two of the \(L\) polynomials agree must exist, and we can find one by brute force in \(O(L\cdot dL^2\log ^3(dL^2)\log p)\) time via batch univariate evaluation [Fid72]. Now, to identify the correct polynomial \(D_j\) within the list, one only needs to determine the value \(D_j(\ell )\). To do so, we recursively apply the above reduction to all the \(D_j(\ell )\)’s until the number of vectors, \(m\), is constant and \(\mathsf {FOV}^k\) can be evaluated directly in time \(O(d\log p)\).
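The disambiguation step can be sketched as follows (a brute-force scan rather than the batch univariate evaluation of [Fid72]; the example polynomials are arbitrary):

```python
# Find a point ell where all L listed polynomials take pairwise distinct
# values; the correct D_j is then pinned down by the single value D_j(ell).

def find_separating_point(polys, p):
    """polys: callables F_p -> F_p. Return an x where all evaluations differ."""
    for x in range(p):
        vals = [q(x) for q in polys]
        if len(set(vals)) == len(vals):   # pairwise distinct
            return x
    return None                            # cannot happen once p >> d * L^2

p = 101
polys = [lambda x, a=a: (a * x * x + x + a) % p for a in (1, 2, 5)]
ell = find_separating_point(polys, p)
print(ell, [q(ell) for q in polys])
```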

Because each recursive iteration cuts \(m\) in half, the depth of the recursion is \(\log (m)\). Additionally, because each iteration has error probability \(<4/(\delta r)\), a union bound over the \(\log (m)\) recursive steps yields a total error probability of at most \(4\log (m)/(\delta r)\).
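Plugging in the choice \(r=8d\log (m)/\delta ^2\) made below, this total error probability works out to exactly \(\delta /(2d)\) (illustrative values; any consistent logarithm base works):

```python
import math

# Per-round error 4/(delta*r) with r = 8*d*log(m)/delta**2, then a union
# bound over the log(m) recursive rounds: the total simplifies to delta/(2*d).

delta, m = 0.1, 2 ** 30
d = math.log(m) ** 2                      # dimension ~ log^2(m)
r = 8 * d * math.log(m) / delta ** 2
per_round = 4 / (delta * r)
total = math.log(m) * per_round
print(per_round, total, delta / (2 * d))  # the last two agree
```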

We can find the prime \(p\) via \(O(\log m)\) random guesses in \(\{m^k+1,\ldots ,2m^k\}\) with overwhelming probability. By Corollary A.6, taking \(r=8d\log (m)/\delta ^2\), Roth and Ruckenstein’s algorithm takes time \(O(\frac{d^2}{\delta ^5}\log ^{5/2}(m)\,{{\mathrm{Arith}}}(m^k))\) in each recursive call. The brute-force procedure takes time \(O(\frac{d}{\delta ^3}\log ^3(d/\delta ^2)\log m)\), which is dominated by the list-decoding time. Reconstruction takes time \(O(\log m)\) in each round and is also dominated. Thus, the total run time is at most

\[ O\!\left( r\log (m)\cdot m^c\cdot m^{k-\varepsilon } \;+\; m^c\cdot \frac{d^2}{\delta ^5}\log ^{7/2}(m)\,{{\mathrm{Arith}}}(m^k)\right) = n^{k-\Omega (1)}, \]

which contradicts the assumed \(n^{k-o(1)}\) hardness of \(k\)-\(\mathsf {OV}\).
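The prime-finding step admits a standard sketch (Miller–Rabin testing of random candidates; parameters illustrative):

```python
import random

# Sample random candidates from {m^k + 1, ..., 2*m^k} and test each with
# Miller-Rabin; by the prime number theorem roughly a 1/ln(m^k) fraction
# of candidates are prime, so O(log m) trials succeed w.h.p.

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                  # a witnesses that n is composite
    return True

def find_prime(lo, hi, attempts):
    for _ in range(attempts):
        cand = random.randrange(lo, hi)
        if is_probable_prime(cand):
            return cand
    return None

m, k = 2 ** 10, 2
print(find_prime(m ** k + 1, 2 * m ** k, attempts=200))
```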
