Here \(\varepsilon, \delta\) are small positive numbers. It’s easy to see that, under reasonable assumptions, this agent one-boxes on Agent Simulates Predictor. But it can’t use the full strength of \(\mathbb{P}_n\) in its counterfactual reasoning, and this is a problem.

Differential privacy

To illustrate the problem, add a term to the utility function that sometimes rewards two-boxing:

So if \(\neg X_{n-1}\), two-boxing is the more attractive option, which is a contradiction. (I’m rounding \(\varepsilon\) to zero for simplicity.)

The problem is that the counterfactual has to rely on \(\mathbb{P}_k\)’s imperfect knowledge of \(X_{n-1}\). We want to combine \(\mathbb{P}_k\)’s ignorance of \(\operatorname{explore}_0\) with \(\mathbb{P}_n\)’s knowledge of \(X_{n-1}\).

If \(X\) is independent of \(A\) conditioned on \(\operatorname{explore}_0\) with respect to \(\mathbb{P}_k\), then we can do this:

This is more accurate than \(\mathbb{E}_n[U | A = a \wedge \operatorname{explore}_0]\), and unbiased.

If \(X\) is not independent of \(A\) conditional on \(\operatorname{explore}_0\), we can introduce an auxiliary variable and construct a version of \(X\) that is independent. This construction is a solution to the following differential privacy problem: make a random variable \(Y\) that is a function of \(X\) and independent randomness, maximizing the conditional mutual information \(H(X;Y | A)\), subject to the constraint that \(A\) is independent of \(Y\). Using the identity

\[ H(X|A) = H(X;Y|A) + H(X|AY) \]

we see that the maximum is attained when \(H(X|AY) = 0\), which means that \(X\) is a function of \(A\) and \(Y\).
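(To spell out the step: \(H(X|A)\) depends only on the joint distribution of \(X\) and \(A\), so it is fixed no matter which \(Y\) we build, and the identity rearranges to

\[ H(X;Y|A) = H(X|A) - H(X|AY) \leq H(X|A), \]

with the upper bound attained exactly when the second term vanishes.)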

Now here’s the construction of \(Y\):

Let \(\mathcal{X}\) be the finite set of possible values of \(X\), and let \(\mathcal{A}\) be the finite set of possible values of \(A\). We’ll iteratively construct a set \(\mathcal{Y}\) and define a random variable \(Y\) taking values in \(\mathcal{Y}\). To start with, let \(\mathcal{Y} = \emptyset\).

and for each \(a' \in \mathcal{A} \backslash \{a\}\), choose some \(f(a') \in \mathcal{X}\) such that \(\mathbb{P}(X = f(a'), Y \notin \mathcal{Y} | A = a') > 0\). Then make a random binary variable \(T_{a'}\) such that

and add \(y\) to \(\mathcal{Y}\). After repeating this process \(|\mathcal{X}||\mathcal{A}|\) times, we are done.
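For concreteness, here is a minimal sketch of one way to realize such a \(Y\) numerically, via a common-refinement coupling of the conditional distributions \(\mathbb{P}(X = x | A = a)\); the interface and names are illustrative assumptions, not part of the construction above.

```python
import numpy as np

def construct_Y(cond):
    """Couple the conditionals P(X | A = a) through one auxiliary variable Y.

    cond[a, x] = P(X = x | A = a).  Each row's CDF partitions [0, 1); Y is
    the cell of the common refinement of all these partitions containing a
    uniform random point.  Cell lengths don't depend on A, so Y is
    independent of A, and X is recovered as a function of (A, Y): namely,
    the segment of row a's partition that cell Y sits inside.
    """
    cdfs = np.cumsum(cond, axis=1).round(12)  # one partition of [0,1) per a
    cuts = np.unique(np.concatenate([[0.0], cdfs.ravel()]))
    lo, hi = cuts[:-1], cuts[1:]
    p_y = hi - lo                             # P(Y = y), the same for every a
    # f[a, y] = the x whose segment of partition a contains cell y
    f = np.array([[np.searchsorted(cdfs[a], l, side="right") for l in lo]
                  for a in range(cond.shape[0])])
    return p_y, f

# Sanity check: summing P(Y = y) over {y : f[a, y] = x} recovers P(X = x | A = a).
cond = np.array([[0.5, 0.3, 0.2],
                 [0.1, 0.6, 0.3]])
p_y, f = construct_Y(cond)
for a in range(2):
    assert np.allclose([p_y[f[a] == x].sum() for x in range(3)], cond[a])
```

The common refinement has at most \(|\mathcal{X}||\mathcal{A}|\) cells, matching the count above, and since \(X\) is a deterministic function of \((A, Y)\), the maximum \(H(X|AY) = 0\) is attained.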

We can do this with a logical inductor as well. In general, to get a sentence \(T\) such that \(\mathbb{P}_k(T \wedge B | C) \approx p\), take \(T := (\mathbb{P}_k(T \wedge B | C) < p) \wedge B \wedge C\).
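Why this pins the price: if the inductor priced \(T\) above \(p\), the sentence would come out false, and below \(p\), true, so a calibrated price has to hover near \(p\). Here is a toy of that dynamic with the \(B\) and \(C\) conjuncts dropped; the running-average update is a stand-in for a logical inductor, not the real thing.

```python
# Toy of the self-referential pricing trick: the sentence T is true exactly
# when its current price is below p, so a calibrated price gets pinned at p.
p = 0.37
price = 0.5
for t in range(1, 100_001):
    truth = 1.0 if price < p else 0.0  # T holds iff it was priced below p
    price += (truth - price) / t       # running average of realized outcomes
print(round(price, 3))                 # ~ 0.37
```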

Now given random variables \(U\) and \(A\), and some informative sentences \(\phi_1, \dots, \phi_\ell\), let \(X \in \{T, F\}^\ell\) be the random variable encoding the values of \(\phi_1, \dots, \phi_\ell\). Carried out approximately, and conditional on \(\operatorname{explore}_0\), the above construction gives us a random variable \(Y\) that is approximately independent of \(A\) conditional on \(\operatorname{explore}_0\) with respect to \(\mathbb{P}_k\). Now we define

which does not lead to contradiction. In fact, there are agents like this that do at least as well as any constant agent:

Theorem

Let \(U_n(\mathbb{P}, A)\) be a utility function defined with metasyntactic variables \(n\), \(\mathbb{P}\), and \(A\). It must be computable in polynomial time as a function of \(A\), of probabilities \(\mathbb{P}_{f_i(n)}(A = a)\), and of sentences of the form \(X := (\mathbb{P}_{f_i(n)}(X) < p)\), where the \(f_i\) can be any polytime functions that don’t grow too slowly and that satisfy \(f_i(n) < n\). Then there exists a logical inductor \(\mathbb{P}\) such that for every \(n\), there exist \(k < n\), \(\varepsilon, \delta > 0\), and a pseudorandom variable \(Y\) such that the agent \(A\) defined below performs at least as well on \(U_n\) as any constant agent, up to a margin of error that approaches \(0\) as \(n \to \infty\):

Proof sketch

Choose \(k\) smaller than the strength parameter of the weakest predictor in \(U_n\). If \(a_n\) is the best constant policy for \(U_n\), assume that \(\mathbb{P}\) believes \(A_n = a_n\). Since \(\mathbb{P}_n\) can compute \(U_n\), our agent’s factual estimate \(\mathbb{E}_n[U_n | A_n = a_n]\) is accurate, and the counterfactual estimate \(\mathbb{E}_n[U_n | A_n = a']\) for \(a' \neq a_n\) is an accurate estimate of the utility assigned to the constant policy \(a'\), as long as we make \(Y\) rich enough. So the agent will choose \(a_n\). Thus we have an implication of the form “if \(\mathbb{P}\) believes \(A_n = a_n\), then \(A_n = a_n\) is true”, and so we can create a logical inductor \(\mathbb{P}\) that always believes \(A_n = a_n\) for every \(n\) by adding a trader with a large budget that bids up the price of \(A_n = a_n\).

Isn’t this just UDTv2?

This is much less general than UDTv2. If you like, you can think of this as an agent that at time \(k\) chooses a program to run, and then runs that program at time \(n\), except the program always happens to be “argmax over this kind of counterfactual”.
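As a rough illustration of that program shape, here is a sketch in which every name is hypothetical: `P_k` stands in for \(\mathbb{P}_k(Y = y | \operatorname{explore}_0)\), `E_n` for \(\mathbb{E}_n[U | A = a \wedge Y = y \wedge \operatorname{explore}_0]\), and the counterfactual is the \(Y\)-weighted combination suggested above, not a quotation of the definition.

```python
import random

def program(actions, Y_values, P_k, E_n, epsilon=0.01):
    """Epsilon-explore, else argmax the Y-based counterfactuals.

    P_k(y) and E_n(a, y) are assumed stand-ins for the logical inductor's
    conditional probabilities and expectations (see the lead-in above).
    """
    if random.random() < epsilon:  # the explore_0 clause
        return random.choice(actions)

    def counterfactual(a):
        # Weight P_n-informed utility estimates by P_k's view of Y, which is
        # (approximately) independent of the action A.
        return sum(P_k(y) * E_n(a, y) for y in Y_values)

    return max(actions, key=counterfactual)

# Toy usage with made-up numbers:
values = {"one-box": 1.0, "two-box": 0.1}
print(program(["one-box", "two-box"], [0, 1],
              P_k=lambda y: 0.5, E_n=lambda a, y: values[a]))
```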

Also, it doesn’t do policy selection.

Next steps

Instead of handing the agent a pseudorandom variable \(Y\) that captures everything important, I’d like to have traders inside a logical inductor figure out what \(Y\) should be on their own.

Also, I’d rather not have to hand the agent an optimal value of \(k\).

Also, I hope that these counterfactuals can be used to do policy selection and win at counterfactual mugging.

This doesn’t quite work. The theorem and examples only work if you maximize the unconditional mutual information, \(H(X;Y)\), not \(H(X;Y|A)\). And the choice of \(X\) is doing a lot of work: it’s not enough to make it “sufficiently rich”.