We begin by defining Newcomb's problem. Enter Newcomb's Demon, a rather close
relative of Maxwell's Demon (the similarity goes well beyond a shared affinity
for black boxes). Newcomb's Demon - call him ND for short - is a paragon of
intelligence, a genius's genius whose acuity transcends the temporal boundaries
of merely ephemeral human beings. Like many geniuses, he displays a certain
playfulness: his "hobby" is testing his own infallibility in the prediction
of human behaviour by making a generous offer to random human playthings. This
offer is couched as a "choice"; though ND's human subjects are in
fact forbidden to predicate their choices on external events, they are tacitly
allowed to use deterministic or nondeterministic internal strategies computed
by corresponding classes of neural events (you may already see the
superficiality of the distinction between inward and outward events; I include
it only as a customary ingredient of the formulation).

Suppose that ND has chosen you as an experimental subject, and that you know
that he has never failed to achieve the result he predicted for any subject
throughout the extensive history of his experimental run. Nothing appears
rigged; in particular, the data are plausibly distributed with respect to the
critical behavior of experimental subjects (if Nozick's data are valid, at a
ratio of approximately 2.5/1). The experimental set-up is minimal. You are led
to a table on which are two boxes. One is transparent, and in it you can
clearly see $1000. The other is dead black and totally opaque. ND tells you
that he has placed $1,000,000 in the black box if and only if he has predicted
that you will take only it and not the other (transparent) box. Otherwise - if
he has predicted that you will try to "outsmart" him and take both
boxes - the black box has been left empty. ND earnestly assures you that this
is no joke; he does not intend to fail. Just in case there are any doubts about
this, he offers to take you through a few trial runs using monopoly money and a
couple of spare boxes (naturally, you will have to leave the room between these
"rehearsals"). You take him up on it. He wins no matter what you
think or do. Finally, he tells you that play time is over; next time, the deal
is for real. He adds that, lest you suppose he has an infinite amount of time
to waste on you, he is giving you a reasonable but finite amount of time to
make your move.

Having been around the block, you have already perceived that this is all a
kind of game in which ND is the "house", and you his opponent. Of
course, since ND is playing for the pure satisfaction of being right while you
are playing for big money, a minor conceptual adjustment is in order. But, all
things considered, you reason that you should nonetheless apply the standard
theory of games in order to maximize your gain. Your reasoning then becomes a
little more involved.

You know of certain principles of rational decision that seem to apply to this
game. One is known as "the maximization of subjective expected utility", and
states that you should try to maximize the sum of the products of the utility
(monetary value) of each distinct possible outcome by the probability of that
outcome, given your "move". Since ND's past performance and serious intent seem
to bring the probability that he has correctly predicted your choice close to
unity, this principle is telling you in no uncertain terms to take only the
black box.
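The arithmetic the principle prescribes can be sketched in a few lines of Python; the probability p = 0.99 below is a hypothetical stand-in for "close to unity", and the dollar figures come from the formulation itself.

```python
# Expected-utility comparison for Newcomb's problem.
# p is the assumed probability that ND's prediction is correct (hypothetical).
p = 0.99

# One box: $1,000,000 if ND predicted correctly, $0 otherwise.
eu_one_box = p * 1_000_000 + (1 - p) * 0

# Two boxes: $1,000 if ND predicted correctly (black box left empty),
# $1,001,000 if ND mispredicted.
eu_two_boxes = p * 1_000 + (1 - p) * 1_001_000

print(eu_one_box, eu_two_boxes)
```

At any p much above one-half the one-box expectation dominates, which is why the principle speaks so unequivocally here.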

But there is a similar rule, called the "dominance principle", which also seems
to apply to your predicament. It states that if, for every possible move (or
prediction) of your opponent, there is a single countermove as advantageous to
you as any other, and more so in at least one case, then you should make that
move. You know that ND has already made his move: there is either $1,000,000 in
the black box, or there is nothing. In either case, your eyes tell you
unequivocally that there is $1000 in the clear box. You reason that you have
nothing to lose, and precisely $1000 to gain, by taking both boxes. The
dominance principle
merely drapes a veil of reason over what your eyes and instincts have already
told you: ($1,001 million or $0.001 million) beats ($1 million or $0).
Conclusion: take both boxes.
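The dominance check is mechanical enough to run by machine; the sketch below encodes the four outcomes of the formulation and tests weak dominance. The state and move names are illustrative labels, not part of the original problem statement.

```python
# Payoff matrix for Newcomb's problem: keys are (ND's prediction, your move),
# values are your winnings in dollars.
payoffs = {
    ("predicted_one_box", "take_one"): 1_000_000,
    ("predicted_one_box", "take_both"): 1_001_000,
    ("predicted_two_boxes", "take_one"): 0,
    ("predicted_two_boxes", "take_both"): 1_000,
}

STATES = ["predicted_one_box", "predicted_two_boxes"]

def dominates(move_a, move_b):
    """move_a weakly dominates move_b: at least as good in every state of
    the opponent, and strictly better in at least one."""
    at_least = all(payoffs[(s, move_a)] >= payoffs[(s, move_b)] for s in STATES)
    strictly = any(payoffs[(s, move_a)] > payoffs[(s, move_b)] for s in STATES)
    return at_least and strictly

print(dominates("take_both", "take_one"))  # True
```

Taking both boxes dominates in each row separately; the whole paradox lies in whether the rows may legitimately be treated as independent of your move.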

But this seems to constitute a dilemma. These two principles of rationality, which
are supposed to be mutually consistent, are in this case telling you to do
opposite things. Feverishly - there is a lot of money at stake for you - you
rack your memory for other rules which might help you reconcile or at least
decide between them. What about the gambler's fallacy? No matter how many times
ND has won in the past, isn't his string of victories really just an extremely
unlikely random sequence? Time only runs in one direction, so how can he
possibly claim to be more than a lucky (and very rich) guesser? If this is the
case, then you certainly don't want to embrace the fallacy of assuming that the
outcome of your imminent trial is in any way dependent on those preceding it.
If nothing else comes to mind, and quick, you're going to have to go the
dominance route and take both boxes.

As if on cue, your neurons begin again to flicker. It occurs to you that you
could make no decisions whatever, and would know nothing at all, without
relying on various scientific axioms. But axioms are mutually independent, or
virtually random with respect to each other. None of them can be deduced from
other axioms. Doesn't accepting these axioms - including the dominance principle or whatever more
primitive concepts underlie it - thus amount to something very much like the
gambler's fallacy? That is, isn't one accepting as a premise that certain
future events will be like similar events from the past, but without being able
to prove it? You are now in the Minotaur's recreatorium without a thread.

You are also roughly as well-informed on what you should do as any
professional scientist or logician who has ever published an analysis of
Newcomb's paradox. Fortunately, I am no professional, and need grind the
favorite axe of no publisher, professor, or employer. The trail that unwinds
below is freshly blazed; though a bit steep in spots, it can be negotiated
purely on the strength of an open and reasonably sharp mind.

The first step towards resolving the paradox is to provide a logical
scaffolding from which to construct the mathematical model necessary for sound
inference. Past arguments involving the problem have used either the standard
decision-theoretic model of people playing a game, or the linear "arrow of
time" fundamental to classical physics. These two models have been taken
to imply opposite solutions, and this suggests that they be somehow unified in
an extended "meta-model" which adequately relates the concepts to
each other. It has been less obvious just what this higher model should be.

The solution of problems, and the resolution of paradoxes, are inherently
computational activities. What, then, could be a better setting for this
resolution than a computative one? And what could possibly be a more fitting
preface than a brief introduction to the abstract theory of computation?

Consider an acceptor F = (Q, S, d, q0, A). Q is a finite nonempty set of
internal states, S is an alphabet whose symbols are concatenated as strings,
q0 ∈ Q is the initial state, and A ⊆ Q is the set of accepting states triggered
only by input recognizable to F. The transition mapping d, which governs the
way in which F changes states, is deterministic if d: Q × S → Q, but
nondeterministic if d: Q × S → 2^Q (where 2^Q is the set of all subsets of Q).
In the nondeterministic case, d will be written dn, for clarity. In terms of
human psychology, we might regard the 5-tuple of F as its "categorical
imperative", or accepting syntax, and say that F projects this syntax onto its
universe. Nothing in the universe of F is recognizable to it but the particular
input strings (sense data, facts) which cause it to pass through some q ∈ A;
they are its phenomenal "reality", a subset of the noumenal metareality of the
wider universe in which strings are representationally generated and entered by
programmers. The restriction to finite Q is pragmatic and amenable to
conditional relaxation.

If F is deterministic, it accepts (recognizes) a string w ∈ S* if and only if
d'(q0, w) ∈ A. Since we have defined d only for the individual symbols s ∈ S,
we must define an extended transition function d': where λ is the null string,
d'(q, λ) = q; and for all q ∈ Q, s ∈ S, and w ∈ S* (where S* is the set of all
finite strings formed, with repetition allowed, from the symbols of S),
d'(q, ws) = d(d'(q, w), s). Thus, the accepting behavior of F is defined
inductively for s-quantized string extensions; the way in which we recognize
and assimilate new bits of information within our reality is specified in d'.
Were we to widen the discussion to imagination, conceptualization,
theorization, or other intrinsic computations, we would need to consider
"ideas"; we would have to generalize from recognition to abstraction by means
of a nonlocal or self-iterating, input-free extension of d. If the reference to
"strings" seems to imply a dimensional limitation on input, this too can be
generalized.
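For the concretely minded, the acceptor F and its extended transition function d' can be sketched in a few lines; the particular language accepted below (binary strings with an even number of 1s) is an arbitrary illustrative choice, not anything fixed by the discussion.

```python
# Deterministic acceptor F = (Q, S, d, q0, A), with the extended transition
# function d' defined by induction on strings: d'(q, null) = q, and
# d'(q, ws) = d(d'(q, w), s).
class Acceptor:
    def __init__(self, Q, S, d, q0, A):
        self.Q, self.S, self.d, self.q0, self.A = Q, S, d, q0, A

    def extended(self, q, w):
        # Fold d over the string one symbol at a time.
        for s in w:
            q = self.d[(q, s)]
        return q

    def accepts(self, w):
        # F accepts w iff d'(q0, w) lands in an accepting state.
        return self.extended(self.q0, w) in self.A

# Illustrative example: strings over {0, 1} with an even number of 1s.
F = Acceptor(
    Q={"even", "odd"},
    S={"0", "1"},
    d={("even", "0"): "even", ("even", "1"): "odd",
       ("odd", "0"): "odd", ("odd", "1"): "even"},
    q0="even",
    A={"even"},
)
print(F.accepts("1100"))  # True
```

Everything F "knows" of its universe is exhausted by which strings drive it into A, which is the point of calling A its phenomenal reality.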

Where F is nondeterministic, it accepts a string w ∈ S* if and only if
dn'(q0, w) ∩ A ≠ ∅. The nondeterministic extension dn' of dn is defined by
induction: dn'(q, λ) = {q}, and dn'(q, ws) = ∪{dn(q', s) : q' ∈ dn'(q, w)}.
I.e., q' is one of the possible successors of q under dn' given w; the
unextended mapping dn on single symbols of S then determines the image under
dn' of w plus an adjoint symbol s, given q'. This is a classical recursive
definition. It describes a stepwise probabilistic ramification of computative
potential whose complexity depends on dn.
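The nondeterministic acceptance condition can be sketched the same way; the current "state" is now a set of states, and acceptance means that set meets A. The example language below (strings over {a, b} ending in "ab") is an arbitrary illustrative choice.

```python
# Nondeterministic acceptance: dn maps (state, symbol) to a SET of states.
# dn'(q, null) = {q}; dn'(q, ws) = union of dn(q', s) over q' in dn'(q, w).
# F accepts w iff dn'(q0, w) intersects the accepting set A.
def nfa_accepts(dn, q0, A, w):
    current = {q0}
    for s in w:
        # Ramify: every possible successor of every currently possible state.
        current = set().union(*(dn.get((q, s), set()) for q in current))
    return bool(current & A)

# Illustrative example: strings over {a, b} ending in "ab".
dn = {
    ("start", "a"): {"start", "saw_a"},
    ("start", "b"): {"start"},
    ("saw_a", "b"): {"accept"},
}
print(nfa_accepts(dn, "start", {"accept"}, "bbab"))  # True
```

The growing set `current` is exactly the "stepwise ramification of computative potential" described above.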

Nondeterminism is not always restricted to Q × S; under certain conditions, either Q or S can be extended, or A shifted within Q. This, of course,
entails a modification of F, unless F is defined to allow for parametric
extension and adjustment. To this effect,
let F' be such an open extension. To be meaningful in mechanistic contexts such
as those in which acceptors are usually considered, F' must exist within an
appropriate mechanistic extension of the computative environment of F.
Organisms, being mechanical in the deterministic sense, need not be
distinguished in this extension. Nondeterminism can be used to subtly
manipulate recognition, thus cryptically modifying an acceptor's reality.
Nondeterministic recognition can help to explain the ability of an acceptor to
rapidly seize certain kinds of higher-order phenomena, or even interact with
higher-order agencies ordinarily insensible to it.

Having thus formalized the logical abstraction of recognition - i.e., the
passive phase of organo-mechanical cognition - we now proceed to the output
behavior of computative automata, or to the active phase of cognition. Consider
a transducer M = (S, Q, T, d, m), where S is a finite nonempty input alphabet,
Q a finite nonempty state set, T a finite nonempty output alphabet,
d: Q × S → Q the state-transition function, and m: Q × S → T the output
function. A computation of M has internal and external phases; through m, the
output that M delivers back to its outward universe depends on strings of
d-iterated transitional internal states. Thus, m is a functional of the
function d of input. Together, m and d totally determine the behavior of M.
They can be extended from S and T to S* and T* as for the acceptor F:
d'(q, λ) = q, m'(q, λ) = λ; and d'(q, ws) = d(d'(q, w), s),
m'(q, ws) = m'(q, w)m(d'(q, w), s). Where appropriate, we can add to M an
initial state ("reset control") q0: M = (S, Q, T, d', m', q0), to be regarded
as a separator of locally independent computations, and put it at the disposal
of a function r ∈ M which determines computational relevancy.
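A transducer in this sense is what automata theory calls a Mealy machine; the sketch below implements the extended output m' by accumulating one output symbol per input symbol. The particular machine shown (a one-bit delay line, emitting the previous symbol with 0 at the start) is an arbitrary illustrative choice.

```python
# Transducer M = (S, Q, T, d, m): a Mealy machine. Extended output:
# m'(q, null) = null; m'(q, ws) = m'(q, w) followed by m(d'(q, w), s).
class Transducer:
    def __init__(self, d, m, q0):
        self.d, self.m, self.q0 = d, m, q0

    def run(self, w):
        q, out = self.q0, []
        for s in w:
            out.append(self.m[(q, s)])  # output depends on state and input...
            q = self.d[(q, s)]          # ...then the internal state advances
        return "".join(out)

# Illustrative example: a one-bit delay line over {0, 1}.
M = Transducer(
    d={("q0", "0"): "q0", ("q0", "1"): "q1",
       ("q1", "0"): "q0", ("q1", "1"): "q1"},
    m={("q0", "0"): "0", ("q0", "1"): "0",
       ("q1", "0"): "1", ("q1", "1"): "1"},
    q0="q0",
)
print(M.run("1101"))  # "0110"
```

Here m and d together "totally determine" behavior in exactly the sense of the text: fix the input string and the output string is fixed.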

Considered as a robotic brain, M T-behaves according to m, but Q-reasons towards its decisions along
paths generated by d. Where d is deterministic, m may be related to it as a "timing function"
according to which any computation can be arrested (input aborted, regression
terminated) and converted to output on passing certain tests. Where the
duration of the computation is determined with d,
input becomes the only variable. Where input as well is fixed in content and
scheduling, the entire system is tightly determined. As Laplace might have
observed, predicting the behavior of deter­minate mechanisms requires only
data, the means to acquire it, and a valid scientific methodology to organize
and interpret it. While the situation is actually more complex, the fact
remains that were one to play a deterministic game with a deterministic
transducer like M, one would need only a detailed knowledge of its input and
programming to predict the outcome, given analytic tools adequate for that
purpose and consistent with one's own constraints (e.g., the amount of time
available for analysis and strategy). If one's object in the game were merely
the validation of one's prediction, so much the easier to win.

Suppose instead that M has nondeterministic output mapping mn, where
state-transition may or may not be deterministic. Then the prediction of output
entails control of mn by the predictor. To win a game of prediction, one must
now control mn as well as dn to the extent that it is output-critical; one must
take over where the probabilistic mn leaves off. Since whichever
control the transducer has over itself resides in mn and dn,
one must in effect
deprive it of self-control. The relevance to "free will" is obvious.

Computation is purposive. The purpose of an acceptor is pure recognition; no
action is explicitly predicated on its internal transitions. The purpose of a
transducer is conversion of input to output; yet, such a conversion is aimless
unless algorithmic. Like yin and yang, acceptors and transducers are
complementary; only together can they begin to resemble functional systems of
organic complexity. In order to model organic systems, transducers must be
endowed with goals and algorithms comparable to the ends and means of living
beings. Algorithms are themselves purposive procedures which model both
acceptance and transduction. The problems which comprise their input are
scanned by preliminary steps for certain kinds of information, which must in
turn be accepted as parameters by subsequent steps, and so on to the output
stage (at which point the algorithm delivers its answer). The mechanistic
representation of an algorithm must allow for the innate structure of a device,
considered apart from the algorithm itself; this structure may have variant and
invariant aspects. The algorithm simply conforms variables to purpose given the
invariants. As the definitions of F and M might lead one to expect, this
generally involves importing to M the set A ⊆ Q defined for F.

Human beings, it is said, are self-programming. Their thought is
polyalgorithmic; useful algorithms are either meta-algorithmically constructed,
or selected from a learned store, to deal with input. If learning,
construction, and selection are deterministic, then they characterize a
deterministic meta-algorithm not fundamentally different from any other
deterministic algorithm we might study. If they are nondeterministic, then they
are characteristic of a nondeterministic meta-algorithm, and likewise. It
follows that the formal transductive model of human nature withstands any
objection from the relative complexity of human mentality or behavior.

Newcomb's object-transducer MN naturally includes an acceptor:
MN = (S, Q, q0, d, A, m, T).
Recognition is phasic; a string must often be "pre-accepted" for M to
tell whether to accept or reject it. Ordinarily, this tentative phase of
recognition is easily computed by the physical entities whose behavior is
predicted by ND. To be "real", an input-quantum s must simply possess a certain first-order
predicate, "reality", which - this being a self-validating tautology
- induces a type-theoretic predicate stratification like that involving the old
Cretan, Epimenides. To this sine qua non of recognition corresponds a primary
element q1 of A; no input-quantum failing the q1-test is
reified, whereas all those passing are relayed to Q - q1.
Higher-order recognition of "passed" quanta then proceeds at a rate
determined by the respective computational demands of the
stratified-algorithmic phases of dM.
Corresponding to the structure of A are various ordered states analogous to q1
within their respective levels of acceptance.

Let us narrow the definition of MN in a way consistent with Newcomb's problem.
Suppose that associated with mM is a threshold value b > 0 below which output
is nil, but above which a decision will be finalized and implemented. With each
qi ∈ Q we associate a pair of strength coefficients ai1 and ai2, to be
incremented and decremented according to a strategic dM appropriate to the
Newcomb decision-theoretic context; these represent the current tendency, given
the present amount of input, for MN to output either possible behavior (taking
one or both boxes, respectively). The ai divide Q into three classes XQ, YQ,
and ZQ, with membership conditions ai1 > ai2, ai1 < ai2, and ai1 = ai2
respectively. To each qi is attached a total weight ai = |ai1 - ai2|. As soon
as the output condition [ai ≥ b] is met, a decision and behavior result which
correspond to the Q-class of the current state (note that dM, being strategic,
precludes q ∈ ZQ, or "indecision", at the output deadline). Thus, the states of
Q in MN are preferential and impetal, the graded
fore-images of the outputs they favor. This is just a convenient way to view
the internal configurations to which they correspond, and does not violate the
general definition of transducers. Nor, for that matter, does it violate the
way human beings perceive their own decision-making processes.
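Though nothing in the formulation fixes the dynamics of the strength coefficients, a toy simulation conveys the intended picture: deliberation nudges ai1 and ai2 until the total weight |ai1 - ai2| reaches the threshold b, at which point the behavior of the favored class fires. The update rule and every numeric value below are hypothetical illustrations, not part of the model proper.

```python
import random

# Toy model of MN's decision dynamics: each unit of deliberation increments
# the strength coefficients a1 (take one box) and a2 (take both boxes); a
# decision fires once the total weight |a1 - a2| reaches the threshold b.
# All numbers and the update rule are hypothetical.
def decide(b=5.0, seed=0):
    rng = random.Random(seed)
    a1 = a2 = 0.0
    while abs(a1 - a2) < b:
        a1 += rng.uniform(0.0, 1.0)  # strategic increments: dM here favors
        a2 += rng.uniform(0.0, 0.6)  # the expected-utility argument on average
    return "take_one_box" if a1 > a2 else "take_both_boxes"

print(decide())
```

Because the strategic bias keeps the two coefficients drifting apart on average, the loop cannot stall in the "indecision" class ZQ forever, mirroring the requirement that dM preclude indecision at the output deadline.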

The question posed by Newcomb's problem involves the computative analysis,
by a predictive agency with computative characteristics, of the computative
analysis undertaken by a transducer on a given input. That input is the problem
itself, presented in the manner prescribed by the formulation. This situation,
which defines a computative regression, is recursive and inductively extensible.
The regression in turn defines the only soluble context for the higher-level
"paradox" generated by the problem. This context translates as
mechanism. The mechanism is a stratified automaton G containing both the predictor and its object-transducer as
sub-automata. Whether "free will" is defined deterministically as
mere outside non-interference in m and d, or nondeterministically as the ability of
MN to override any exogenous restriction of mn or dn,
its mechanism is contained in that of G.

Logical diagonalization of the formal computational language generated by
the accepting syntax of MN directly implies that certain structural
aspects of G may be unrecognizable to MN.
In particular, those aspects involving MN-relativized
nondeterminacy, as well as those involving certain higher-order predicates of
the nondistributive, nonlocal organizations involving mM and dM,
are formally undecidable to it and need not be recognized directly by it with
any degree of specificity. To understand why, consider the extent to which a
common computer "recognizes" the extended system including its cpu,
its other components, its programmers, and the environment it inhabits. In
fact, it can recognize nothing that does not conform to its input-to-output
transformational grammar. Even if it were self-analytic, such analysis could be
limited to a schematic electronic syntax which overlooks the material elements
of which it is constructed. In any case, it can make sense of nothing but
strings of input translated and rearranged according to the internal
stratification of its hard and soft programming.

You, your purposes, and your dependencies are undecidable to it, and so are
the mechanisms by which you can predict and control its behavior. It matters
not who formulates this undecidability; if the machine's internal logic is
inadequate to do so, yours surely is not (currently, most mechanical acceptors
are nongeneralistic, treating complementation as negation and negation as
rejection; this bars the tools of diagonalization from their computations).
Should it ignore your higher prerogatives, you could "diagonalize" it
- if nothing extrinsic to the machine were to stop you - with a sledgehammer
whose effects on it do
not depend on its acceptance. By analogy, Newcomb's object-transducer MN
cannot preclude G on grounds of
"insensibility". Nor, for that matter, can we.

There are many self-styled experts on undecidability who have expressed the
opinion that all attempts to reify Gödel's theorem along paranormal lines
reflect a misunderstanding of its "real nature". Such experts are
quite correct in that a misunderstanding exists, but the misunderstanding is
all theirs. What the theorem forces by juxtaposing truth and derivability (or
consistency and completeness) is a hierarchical stratification of classes of
truth functions and the inferential syntaxes which parametrize them. This
stratification follows that of G,
fractionating computative reality along with the "truth" to which it
corresponds.

The stratification of G induces
stratum-relativizations of computative time and space. Thus, the timetype in
which MN computes recognition and output is a mere subtype of that
in which it is programmed. Dynamical "arrows of determinacy" which
are inviolable to MN, being programmed into its accepting syntax,
have no force whatsoever to the programmatic agencies themselves. This applies
just as well to "metrical" restrictions embodied in the MN-syntax;
these may allow MN to recognize nothing but an artificial submetric
of the metric in which these agencies define their own existence. MN
and its reality might consist of quanta with higher-dimensional interpretations
as the termini of channels for the transmission of information between strata.
Metatemporal predicates may exist with respect to which those of MN are
definite only in a mutual sense; predicates which MN accepts as
"before" and "after" could be the programmatic projections
of "in front of" and "in back of", or any other G-consistent higher-prepositional
relationships.

There can thus exist a mechanism x ∈ G through which a predictor like ND could
measure and/or control the mappings d, m ∈ MN in ways directly insensible to
MN. Where in G relative to MN would such a
predictor have to be located? Precisely where access is available.
Simplistically, we might characterize the predictor-M relationship as one of
proper inclusion, where it is understood that prediction is direct rather than
second-hand, and programmatic in the passive and active senses. That is, a
programmer mentally internalizes the structure of that which he programs, and
this internalization amounts to computative inclusion. The fine structure of G, while to a degree analytic, is a matter of
some complexity. For now, it will suffice to have demonstrated the possibility
of x and its utility to well-situated G-subautomata.
Because G is structured to allow for
relativized determinacy and nondeterminacy, the solution is invariant with
respect to argumentation involving mind-brain dichotomy. That is, such
dichotomies reduce to distinctions of determinacy and nondeterminacy, and may
be treated in kind.

Restricted dominance, which relies on probabilistic independence derived
from the lower-order, localistic dynamical timetype of MN's
artificially restricted "reality", is revealed under G-extension to be itself dominated by
utility. That is, the subjective utility of MN forces the
assimilation by dM of this
entire demonstration, which disables restricted dominance and thus frees the
strategic component to recognize higher patterns among observed data. The principle of restricted dominance,
though valid as long as the reality of MN remains unbreached, loses
all force in the presence of exodynamic influence.

Let's sum it up. You can be modeled as a deterministic or nondeterministic
transducer with an accepting syntax that can be diagonalized, or complemented
by logical self-negation. ND can be modeled as a metalogical, metamechanistic
programmatic agency, some insensible part of whom surrounds you in a
computational space Including physical reality, but not limited thereto. This
space is the mechanistic equivalent of the computative regression around which
Newcomb's problem is essentially formulated. The existence of this space cannot
be precluded by you on the grounds that you cannot directly observe it, nor can
it be said by you to deny ND a mechanism of control and prediction of your
thought and behavior. Additionally, you have an open-ended run of data which
lowers to 1/8 the probability that ND is "just lucky". This implies
that mechanism does indeed exist, and warrants the adjunction to the axioms of
physics an independent, empirical physical axiom affirming that mechanism. This
then implies that ND can predict or control human thought and behavior (a
somewhat weaker implication, you will notice, than "omniscience"). ND
possesses means, motive, opportunity...and you. You are "possessed"
by Newcomb's Demon, and whatever self-interest remains to you will make you
take the black box only. (Q.E.D.)

Do not let incredulity cheat you of understanding. It may be hard to accept
the idea of a mathematical proof of the possibility of demons; if so, you may
take solace from the fact that this does not of itself imply their actual
existence. Remember that Newcomb's problem includes a hatful of hypothetical
empirical data that may have no correspondents in your sector of reality. The
proof just given merely prevents you from generally precluding the paranormal
experiences reported by others on logical grounds. That is, it restrains you
from any blanket distribution of too narrow a brand of "rationalism"
over reality. Because I have long been exploring the ramifications of the logic
this proof employs, I already know how easy it would be to deal with whomever
might dispute it on "scientific" or other grounds, regardless of
their prestige or reputed intelligence. There is certainly a strong
correlation between rationalistic dogma and the persecution of its foes.
Fortunately, it is not too late for "rationalism" to catch up with
the advances of twentieth-century logic.

Regarding free will, consider that demons are generally reputed to offer a
material reward for the surrender of one's soul. Where we define
"soul" as the autoprogrammatic G-extension
M' of one's characteristic transductive representation M, this means only that
one is rewarded for voluntarily moving aside and letting the demon take over as
captain of one's fate. He does this by adjoining to his subject any functional
extensions he requires; e.g., d(dn),
or m(mn). Or, if restricted
determinism prevails but he is not in an especially predictive mood, he might
create a parametric extension FND' (analogous to F') enabling a
hyperdeterministic override m' of
one's deterministic output function. Actually, volition may be irrelevant; one
might be shoved aside rather than nudged. Where freedom is defined negatively
on restricted determinacy, it is duly stratified.

It has by now occurred to most readers that "demon" is a darkly
connoted term that could just as well be replaced with "guardian
angel" or something equally comforting. After all, ND does not ask his
subjects to commit evil, but only to let him do them a rather large favor with
no apparent strings attached. Insisting on "free will" here is
reminiscent of the child who stares defiantly at its mother while touching the
hot stove she has just finished telling it to avoid. If the child's act has any
meaning at all, it is just that poorly timed independence can lead to more painful
dependency down the line (surprisingly, Isaac Asimov advocates just this kind
of defiance in the Newcomb context, and on what he seems to feel are ethical
grounds; but we will deal with ethics later). One cannot be willingly
"enslaved" by ND before being enslaved by the money he offers. Who
among us always behaves in willful defiance of economic reality? My heart goes
out to him, for he is either "enslaved" by an institution, or he is
at the absolute mercy of the spontaneous, unsolicited charity of those whose
own demons generally predispose them to pass him by.

Having quenched the fire, we will now "mop up". Professor Nozick
himself has remarked on the possible nonrecursivity of scientific laws involved
in the determination of behavior. This, he surmises, prevents the inference of
predictability from determinism. But this conclusion entails the confinement of
these predicates within a restricted syntax, whereas the extensionality of G renders such a constraint powerless. In
other words, restricted recursivity is beside the point for agencies defined to
possess sufficient access to the channels through which such laws project as
restricted reality, and particularly to their source. All ND has to do is run a
G-presimulation of interstratum
transmission. Note that the computative requirements for this, which
correspond to assumptions on the nature of G,
are satisfied given ND's efficacy.

Nozick also remarks on the regression of determinism to self-validating
premises. Self-validating languages comprise their own inferential schemata.
Validation, being inferential to the extent of quantification, follows the G-stratification of inference. So
self-validation is also stratified, and such languages correspond to
appropriate levels of computative reality. For sufficiently expressive
languages, self-validation becomes diagonalistic self-vitiation. Determinism is
thus stratified, and so is its counterpart, freedom. The seemingly infinite
regression can be brought to closure at an ultimate form of undecidability,
which in turn can be locally resolved with a relativized degree of
confirmation.

Regarding various remarks that have been submitted to Nozick by members of
the physics community, I should explain that the above resolution has a
metaphysical generalization of which certain implications may broadly be termed
physical. This generalization, as developed by the author, is called the CTMU,
or Computation-Theoretic Model of the Universe (the acronym has a mnemonic
pronunciation: 'cat-mew'). We will later examine chosen aspects of this
model, which transcends characterization as a "theory" by virtue of
its categoricality with respect to computative entities such as human beings.
That is, the CTMU is as close to "absolute truth" as we will ever be
privileged to get along computative or intellectual pathways. If I might be
indulged a bit of testimony: beyond the CTMU ultimate syntax, the lattice which
gives shape to the multilayered veil of maya, there is but the light
that shines forever.

Christopher Michael Chappelle-Langan

(copyright 1990 by C. M. Langan. No portion of this article is to be
reproduced in any form without the author's written consent.)

Bibliography:

Robert Nozick, "Newcomb's Problem and Two Principles of Choice", in Essays in
Honor of Carl G. Hempel, ed. Nicholas Rescher. Humanities Press, 1970.

Maya Bar-Hillel and Avishai Margalit, "Newcomb's Paradox Revisited", The
British Journal for the Philosophy of Science, vol. 23, no. 4, pp. 295-304;
November 1972.

Martin Gardner and Robert Nozick, "Reflections on Newcomb's problem: a
prediction and free will dilemma", Mathematical Games department, Scientific
American, pp. 102-106; March 1974.