The idea that a free agent "could have done otherwise" is a key element in the libertarian argument. One way to see that an agent could have done otherwise is to study the choices made in more or less identical conditions.

Choosing between real alternative possibilities is seen as a condition for moral responsibility, although freedom in this sense is prior to moral issues. In recent philosophical jargon it is known as the "principle of alternative possibilities" (PAP):

A person is morally responsible for performing a given act only if he could have acted otherwise.

PAP is under attack by many compatibilists and determinists. It has been twisted by compatibilists into sophisticated logical arguments and "thought experiments" that purport to prove that the ability to do otherwise under identical conditions is impossible.

If such a capability did exist, they argue, it could only be arbitrary (indeed), capricious (perhaps), and irrational, a worry that to this day leads even some libertarian thinkers (e.g., Robert Kane) to doubt that an "intelligible" account can be given of free choice.

Keith Lehrer thought he could prove that someone who showed he could do something (by doing it) could equally have refrained, and thereby establish that he could always do otherwise. Lehrer thought this argument strong enough to constitute "An Empirical Disproof of Determinism," published in Freedom and Determinism, the 1966 collection of essays he edited. He said there (p. 177):

"I now wish to argue that we can know empirically that a person could have done otherwise. A person could have done otherwise if he could have done what he did not do. Moreover, if it is true at the present time that a person can now do what he is not now doing, then, later, it will be true that he could have done something at this time which he did not do. This, of course, follows from the fact that "could" is sometimes merely the past indicative of "can." What I now want to argue is that we do sometimes know empirically that a person can do at a certain time what he is not then doing, and, consequently, that he could have done at that time what he did not then do. Moreover, we can obtain empirical evidence in such a way that our methods will satisfy the most rigorous standards of scientific procedure."

Now let us suppose that God a thousand times caused the universe to revert to exactly the state it was in at t1 (and let us suppose that we are somehow suitably placed, metaphysically speaking, to observe the whole sequence of "replays"). What would have happened? What should we expect to observe? Well, again, we can't say what would have happened, but we can say what would probably have happened: sometimes Alice would have lied and sometimes she would have told the truth. As the number of "replays" increases, we observers shall — almost certainly — observe the ratio of the outcome "truth" to the outcome "lie" settling down to, converging on, some value. We may, for example, observe that, after a fairly large number of replays, Alice lies in thirty percent of the replays and tells the truth in seventy percent of them—and that the figures 'thirty percent' and 'seventy percent' become more and more accurate as the number of replays increases. But let us imagine the simplest case: we observe that Alice tells the truth in about half the replays and lies in about half the replays. If, after one hundred replays, Alice has told the truth fifty-three times and has lied forty-eight times, we'd begin strongly to suspect that the figures after a thousand replays would look something like this: Alice has told the truth four hundred and ninety-three times and has lied five hundred and eight times. Let us suppose that these are indeed the figures after a thousand [1001] replays. Is it not true that as we watch the number of replays increase we shall become convinced that what will happen in the next replay is a matter of chance. (Philosophical Perspectives, vol. 14, 2000, p.14)

If God caused Marie's decision to be replayed a very large number of times, sometimes (in thirty percent of the replays, let us say) Marie would have agent-caused the crucial brain event and sometimes (in seventy percent of the replays, let us say) she would not have... I conclude that even if an episode of agent causation is among the causal antecedents of every voluntary human action, these episodes do nothing to undermine the prima facie impossibility of an undetermined free act.
("Van Inwagen on Free Will," in Freedom and Determinism, 2004, ed. Joseph Keim Campbell, et al., p.227)

Robert Kane has argued that randomness need not be present in every decision, just enough to allow us to say we are not completely determined. But the objection remains: even if just a small percentage of decisions are random, we could not be responsible for those random decisions.

We can make a quantitative comparison of the outcome of 1000 thought experiments (or "instant replays" by God as van Inwagen imagines) that shows how the indeterminism in the Cogito Model is limited to generating alternative possibilities for action.

Van Inwagen's results after 1000 experiments are approximately 500 replays in which Alice lies and 500 in which she tells the truth.
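Van Inwagen's replay thought experiment can be sketched as a simple simulation. This is only an illustrative model, not van Inwagen's own formulation: the function name and the assumption that each replay is an independent 50/50 event are ours, following his "simplest case."

```python
import random

def replay_van_inwagen(n_replays=1000, p_truth=0.5, seed=42):
    """Simulate God's 'replays' of Alice's decision: each replay is
    treated as an independent undetermined event with probability
    p_truth of truth-telling, so the ratio converges on p_truth."""
    rng = random.Random(seed)
    truths = sum(1 for _ in range(n_replays) if rng.random() < p_truth)
    return truths, n_replays - truths

truths, lies = replay_van_inwagen()
print(truths, lies)  # roughly 500 and 500
```

As the number of replays grows, the observed ratio settles toward 50/50, which is just the convergence van Inwagen describes.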

Robert Kane is well aware of the problem that chance reduces moral responsibility, especially in his sense of Ultimate Responsibility (UR).

In order to keep some randomness but add rationality, Kane says perhaps only some small percentage of decisions will be random, thus breaking the deterministic causal chain, but keeping most decisions predictable. Laura Ekstrom and others follow Kane with some indeterminism in the decision.

Let’s say randomness enters Kane’s decisions only ten percent of the time. The other ninety percent of the time determinism is at work, and in those cases, presumably, Alice tells the truth. Of the 100 random decisions in 1000 replays, about half would be lies, so Alice’s 500 random lies in van Inwagen’s first example would become only 50.
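This reading of Kane's limited indeterminism can also be sketched as a simulation. Again the function name and the exact probabilities (ten percent random, 50/50 within the random cases) are illustrative assumptions taken from the example above, not from Kane's own texts.

```python
import random

def replay_kane(n_replays=1000, p_random=0.10, seed=7):
    """Limited indeterminism (as read here): 90% of decisions are
    determined (Alice tells the truth); in the random 10%, lying
    and truth-telling are equally likely."""
    rng = random.Random(seed)
    lies = 0
    for _ in range(n_replays):
        if rng.random() < p_random and rng.random() < 0.5:
            lies += 1
    return n_replays - lies, lies

truths, lies = replay_kane()
print(truths, lies)  # roughly 950 and 50
```

The expected count is about 50 lies per 1000 replays, matching the figure in the text.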

In the two-stage model, we have first “free” – random possibilities, then “will” – adequately determined evaluation of options and selection of the "best" option.

Alice’s random generation of alternative possibilities will include 50 percent of options that are truth-telling, and 50 percent lies.

Alice’s adequately determined will evaluates these possibilities based on her character, values, and current desires.

In the two-stage model, she will almost certainly tell the truth. So it predicts almost the same outcome as a compatibilist/determinist model.

The two-stage model is not identical, however, since it can generate new alternatives.

It is possible that among the genuinely new alternative possibilities generated, there will be some that determinism could not have produced.

It may be that Alice will find one of these options consistent with her character, values, desires, and her current situation. One such option might be a pragmatic (little white) lie, to stay with van Inwagen’s example.

In a more positive example, it may include a creative new idea that information-preserving determinism could not produce.

Alice’s thinking might bring new information into the universe. And she can legitimately accept praise (or blame) for that new action or thought that originates with her.
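The two-stage model itself can be sketched as a simulation: a "free" stage that randomly generates options, followed by a "will" stage that deterministically evaluates them. The function name, the number of generated options, and the scoring scheme are all illustrative assumptions; in this minimal sketch Alice's character always scores truthful options highest.

```python
import random

def two_stage_decision(rng, n_options=8):
    """Two-stage sketch: the 'free' stage randomly generates options
    (each equally likely to be truth-telling or a lie); the 'will'
    stage is adequately determined, scoring options against Alice's
    character and selecting the best. Since truthful options always
    score higher here, Alice lies only in the rare case that every
    generated option is a lie."""
    options = [rng.choice(["truth", "lie"]) for _ in range(n_options)]
    score = {"truth": 1.0, "lie": 0.0}  # character values truth-telling
    return max(options, key=lambda o: score[o])

rng = random.Random(0)
results = [two_stage_decision(rng) for _ in range(1000)]
print(results.count("truth"))  # almost always 1000, or very close to it
```

The randomness is confined to the generation of alternatives, so the selected action is almost always the one a compatibilist/determinist model would predict, which is the point of the comparison below.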

To summarize the results:

                      Van Inwagen   Kane   Two-stage Model   Compatibilism
  Alice tells truth       500        950        1000*             1000
  Alice lies              500         50           0*                0

* (Alice tells the truth unless a good reason emerges from her free deliberations in the Cogito Model, in which case, to stay with van Inwagen's example, she might tell a pragmatic lie.)

We should also note the Moral Luck criticism of actions that have a random component in their source.