Abstract

We introduce the novel notion of a Proof of Human-work (PoH) and present the first distributed consensus protocol based on hard Artificial Intelligence problems. As the name suggests, a PoH is a proof that a human invested a moderate amount of effort to solve some challenge. A PoH puzzle should be moderately hard for a human to solve. However, a PoH puzzle must be hard for a computer to solve, including the computer that generated the puzzle, without sufficient assistance from a human. By contrast, CAPTCHAs are only difficult for other computers to solve, not for the computer that generated the puzzle. We also require that a PoH be publicly verifiable by a computer without any human assistance and without ever interacting with the agent who generated the proof of human-work. We show how to construct PoH puzzles from indistinguishability obfuscation and from CAPTCHAs. We motivate our ideas with two applications: HumanCoin and passwords. First, we use PoH puzzles to construct HumanCoin, the first cryptocurrency system with human miners. Second, we use PoH puzzles to develop a password authentication scheme which provably protects users against offline attacks.

Acknowledgments

The authors thank paper shepherd Peter Gaži for his very constructive feedback, which helped us improve the quality of the paper. In particular, we are thankful for his suggestions about formalizing security statements involving hard AI problems.

The authors also thank Andrew Miller and the PCs of ITCS 2016 and TCC 2016B for their helpful comments.

Remark. Intuitively, \((\mathrm {sample}, \beta )\) messages correspond to an honest party seeking a sample generated by the fixed program d on input \(\beta \). Recall that \(\mathcal {A} \) is meant to internalize the behavior of honest parties.

The experiment \({\mathbf {Real}}(1^\lambda )\) is as follows:

Throughout this experiment, a random oracle \(\mathtt {RO}\) is implemented by assigning random outputs to each unique query made to \(\mathtt {RO}\).
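As an illustration (not part of the formal model), this standard "lazy sampling" implementation of a random oracle can be sketched as follows; the class name and the 32-byte output length are our own illustrative choices:

```python
import os

class LazyRandomOracle:
    """A random oracle RO implemented by lazy sampling: each unique
    query is assigned a fresh uniformly random output on first use,
    and repeated queries return the same cached value."""

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table = {}  # maps queries to their random outputs

    def query(self, x):
        if x not in self.table:
            # First time x is queried: assign a uniformly random output.
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]
```

Lazy sampling avoids materializing the (exponentially large) truth table of the oracle: only the queries actually made during the experiment are ever assigned outputs.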

The experiment \({\mathbf {Ideal}}(1^\lambda )\) is as follows:

1. Whenever \(\mathcal {A} \) sends a message of the form \((\mathrm {RO}, x)\), it is forwarded to \(\mathtt {SimRO} \), which produces a response to be sent back to \(\mathcal {A} \).

2. \(\mathtt {SimRO} \) can make any number of queries to the Samples Oracle \(\mathcal O \).

3. In addition, after \(\mathcal {A} \) sends messages of the form \((\mathrm {sample}, \beta )\), the auxiliary tape of \(\mathcal {A} \) is examined until \(\mathcal {A} \) adds entries of the form \((\beta , p_{\beta } )\) to it. At this point, if \(p_{\beta } \ne d(F(\beta ))\), the experiment aborts and we say that an "Honest Sample Violation" has occurred. Note that this is the only way that the experiment \(\mathbf {Ideal} \) can abort; if the adversary itself "aborts", we treat this as the adversary outputting zero, not as an abort of the experiment.

The output of the experiment is the final output of the execution of \(\mathcal {A} \) (which is a bit \(b\in \{0,1\}\)).
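To make the message flow concrete, here is a minimal sketch of the experiment loop described above. All interfaces are illustrative assumptions, not the paper's formal syntax: the adversary is modeled as a scripted list of messages, `write_tape(beta)` stands in for \(\mathcal {A} \) writing an entry \((\beta , p_{\beta })\) to its auxiliary tape, and `sim_ro` answers forwarded RO queries:

```python
def run_ideal(messages, write_tape, sim_ro, d, F):
    """Sketch of the Ideal experiment. `messages` is a list of
    ("RO", x), ("sample", beta), or ("output", b) tuples produced
    by the adversary A."""
    for msg in messages:
        if msg[0] == "RO":
            # RO queries are forwarded to SimRO, which produces
            # the response sent back to A.
            sim_ro(msg[1])
        elif msg[0] == "sample":
            beta = msg[1]
            # A writes an entry (beta, p_beta) to its auxiliary tape;
            # the experiment checks it against d(F(beta)).
            p_beta = write_tape(beta)
            if p_beta != d(F(beta)):
                # The only way the Ideal experiment can abort.
                return "abort: Honest Sample Violation"
        elif msg[0] == "output":
            return msg[1]  # the final bit b in {0, 1}
    # An adversary that "aborts" is treated as outputting zero.
    return 0
```

With toy choices of `d` and `F`, an adversary whose tape entries honestly equal \(d(F(\beta ))\) runs to completion and the experiment outputs its final bit, while a single inconsistent tape entry triggers the Honest Sample Violation abort.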