Monday, May 31, 2010

The ontological argument for the existence of God (God is defined as having all possible qualities, or as being perfect; and one of these qualities, or one aspect of his perfection, must be existence) has an engaging boldness. There's something about its breathtaking cheek that reminds me of the plea offered by the man convicted of murdering his parents ... who throws himself upon the mercy of the court because he's an orphan. And it does catch something of the self-creatingness and self-containment of religious belief, its ability to suspend itself in space by its own bootstraps. It might indeed be usefully adapted to demonstrate the existence even of more modest characters in fiction. I have only to describe the characters in my next novel as having all possible qualities, or as being perfect - and lo! there they are in the real world. Do we even need such elevated characteristics? Nowhere, as far as I can recall, does Jane Austen make the same claim of Mr Darcy, for example. But she does describe him as having a fine, tall person, handsome features, noble mien, and reportedly ten thousand a year, which must surely imply his existence just as clearly, because it's difficult to see how anyone can have a fine, tall person and all the rest of it unless he exists. If God is summoned into being by words then so is Mr Darcy. And indeed they both are so summoned. This is the truth - the truth of fiction.

Thursday, May 27, 2010

Sean writes terrific pieces for the Cosmic Variance blog, along with several other scientists. For some reason, Blogger won't let me link to the blog in my blogroll, so be sure to bookmark it yourself.

Tuesday, May 25, 2010

Now suppose instead of a rover, we are talking about an animal - say, a bear. The bear has to decide whether to go left or right around the rock. It has various goals in mind: eat, find a mate, find a safe place to sleep. And it has memories of places it has found food or mates before, and some sort of mental map that relates those places to the choice to go left or right. The bear, we could say, is "running a survival program" that makes decisions on the basis of its goals and memories. (Again, I am not suggesting that the bear's brain works like a computer. I am using the computer program as an analogy to gain understanding of the levels of description.) Does the flow of electrons and neurotransmitters determine the bear's behavior, or does the bear's "survival program" determine the flow of electrons and neurotransmitters? What does your intuition say?

In the case of the rover, the program and the computer architecture were clearly designed to produce the particular decision-making behavior that we see. But in the bear's case, the hardware and software were designed, too: by the processes of evolution. Moreover, they were specifically designed to allow the bear to realize its long-term goals of survival and reproduction.

A human has these abilities to an even greater extent: a human can imagine different possible futures and make a choice between them. This, it seems to me, is the essence of free will. And it doesn't really matter whether the underlying micro-physics that runs the decision-making "program" is deterministic or indeterministic. Either way, it is me - my goals, desires, memories, plans - that (probabilistically or not) enact the choice.

Monday, May 24, 2010

Let's return to the idea of levels of description. An action can be described on many different levels: the electron/molecular level, the neural level, and the mental level of thoughts and desires. Very possibly there are intermediate levels of description in terms of various brain subsystems, but I don't think those levels are currently well enough understood to be very useful.

At the level of electrons, atoms, and molecules, the description ought to be quantum mechanical, and thus indeterministic.

At the level of individual neurons, the description might be effectively deterministic, or it might not. It all depends on whether the neurons act as quantum amplifiers (like the Geiger-counter-bomb) or quantum dampers (like a computer). It seems conceivable to me (having very little knowledge of neuroscience) that at times, a single ion tunneling across the cell wall could be the difference between a neuron firing or not. This would make the neuron a quantum amplifier: the randomness of the tunneling event could get amplified and end up determining whether or not you perform some action. But maybe not - maybe the neuron is more like a computer transistor, which is designed so that a few electrons more or less don't make a difference to the outcome.
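To make the amplifier/damper distinction concrete, here's a toy sketch in Python. It is emphatically not a neuroscience model - the numbers and the "single ion" contribution are invented for illustration - but it shows how the same one-unit perturbation can flip the outcome when the input sits at the knife's edge, and do nothing when there's a robust margin:

```python
def neuron_fires(summed_input, threshold, ion_noise):
    # Toy model: the "neuron" fires when its summed input, plus the
    # contribution of a single tunneling ion, crosses the threshold.
    return summed_input + ion_noise >= threshold

# Amplifier case: the input sits exactly at the knife's edge, so one
# ion's worth of charge decides whether the neuron fires.
assert neuron_fires(summed_input=99, threshold=100, ion_noise=1)
assert not neuron_fires(summed_input=99, threshold=100, ion_noise=0)

# Damper case: the input is far from threshold, so the same single
# ion makes no difference - like a transistor with a healthy margin.
assert not neuron_fires(summed_input=50, threshold=100, ion_noise=1)
```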

What about the level of mental events? Here the description is so loose, I would find it difficult to call it deterministic. Suppose I am considering which university to attend. I have reasons for and against both of my top choices: one is closer, the other gave me more aid, one is small and intimate, the other is large and has lots of opportunities.... But someone else with the same list of reasons could end up with the opposite decision, without being irrational about it. I suppose we could put weights on all the reasons and devise a formula that would determine the result - but how to decide on the weights and the formula? And would it give the same answer if I had to make the decision again? So perhaps at this level Ekstrom's "caused but not determined" makes sense.
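The weights-and-formula point can be made explicit with a few lines of Python. The reasons and weights below are hypothetical; the point is just that two people with the identical list of reasons, but different (and equally defensible) weightings, can land on opposite decisions:

```python
def decide(reasons, weights):
    # Weighted sum over the reasons: a positive score means choose
    # university A, a negative score means choose university B.
    score = sum(weights[r] * v for r, v in reasons.items())
    return "A" if score > 0 else "B"

# +1 means the reason favors A, -1 means it favors B (made-up reasons).
reasons = {"closer": +1, "more_aid": -1, "intimate": +1, "opportunities": -1}

# Same list of reasons, two different weightings, opposite outcomes:
assert decide(reasons, {"closer": 3, "more_aid": 1,
                        "intimate": 2, "opportunities": 1}) == "A"
assert decide(reasons, {"closer": 1, "more_aid": 3,
                        "intimate": 1, "opportunities": 2}) == "B"
```

Nothing in the reasons themselves fixes the weights - which is exactly why the mental-level description doesn't look deterministic.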

In my opinion, many of the difficulties involved with free will are the result of confusing different levels of description. Here is Ekstrom, for example (Free Will, pp. 195-196):

The idea that we can direct our behavior by our thoughts ... is welcome, but it is only superficially comforting. It comforts until we think about the possibility that even our thoughts are driven to be what they are by previous neurophysiological events (between which there are deterministic causal links), a chain going backward through events in our childhood brains and to events prior to our birth.

Notice how she slides from the mental level, to the neural level, to the micro-physical level of "events prior to our birth" without batting an eye. (Clearly there cannot be a neural level before there are neurons, so she must be thinking here of the electronic/atomic level.)

Sometimes the argument is phrased in terms of ultimate responsibility: You cannot have ultimate responsibility for something that you do not have control over. You do not have control over the events of the distant past that are the causes of your behavior today (if determinism is true). Therefore, you do not have ultimate responsibility for your actions.

But recall the Mars rover that had to turn left or right when confronted with a large rock. It would be very strange to say the computer program that made the rover turn left doesn't have responsibility for the decision to turn left because, at the electronic level, everything is determined by the laws of physics. That seems to get the causality exactly backwards. We would rather say that the computer program, together with the computer hardware, caused the electrons to flow in such a way as to make the rover turn left. So here is one case where the higher level is the cause of what's happening at the lower level, rather than the other way around. Or perhaps this is using "cause" in a different sense - another thing I find missing in the philosophers I've read is a careful analysis of causation.

Sunday, May 23, 2010

Early scientists like Isaac Newton believed they were discovering the principles by which God governed his creation. Such "laws of nature" were absolute; they admitted no exceptions and could not be broken. A similar vein of thought can be found in Einstein: "I want to know God's thoughts; the rest are details," and in modern theoretical physics, where some speak of an ultimate theory, a Theory of Everything (TOE).

According to another vein of thought, the laws of nature are descriptions, not commands. They are a way of organizing a large body of observations according to the regularities found among them. They are mathematical models that capture some aspect of reality. This view acknowledges that such laws are always provisional and approximate. There will always be some realms - some scales of size or energy - in which the known laws have not been tested, and in which they may well fail to be exactly true.

Many philosophers of free will seem to adhere to the seventeenth-century view of the laws of nature. This is especially evident in their discussions of determinism. There are, it is assumed, some ultimate Laws that, God-like, determine everything that will ever happen.

An exception is the philosopher Norman Swartz, who argues that if we take seriously the view of laws as descriptions, not commands, then the problem of free will does not even arise. I am not completely persuaded by his argument: read it yourself and see what you think. But I agree that thinking of laws as descriptions is an important step in the right direction.

These issues leap to the foreground in discussions of quantum mechanics. If we take the mathematical formalism of quantum mechanics to be an absolute command that admits no exceptions, then we are driven to a metaphysically absurd interpretation (the Many-Worlds Interpretation). But if we admit that quantum mechanics is merely the most accurate description we can give of certain systems, then such absurdities aren't necessary.

I see the significance of this view for free will in the possibility that there is more than one description of a certain event that is (approximately) a true description. The existence of a (valid) description at the level of electrons does not rule out the existence of a (valid) description at the level of mental events.

Saturday, May 22, 2010

Luke at Common Sense Atheism has several video lectures on the fine tuning problem: the claim that the physical parameters of the universe (as reflected in constants of nature that appear in our fundamental theories of physics) must lie in a very narrow range, or life would never have formed. What conclusion one draws from the argument depends on the speaker: for some, it is an argument for the existence of some sort of a god, for others, it is just an intriguing scientific puzzle.

But maybe it's not really a problem at all. The difficulty with the argument is that we have only one universe to refer to. To get any idea about how likely or unlikely the observed values are, we would need a whole raft of universes, each with (possibly) different values of the parameters, so that we could get an idea of what ranges they can lie in and how they are distributed inside those ranges. That is, we need some idea of the probability space that we are working with.
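Here's a little Python sketch of the point. The "life-permitting window" and the two candidate priors below are pure inventions - that's exactly the problem - but they show how the same window can look wildly improbable or quite ordinary depending on what probability space you assume:

```python
import random

def prob_in_window(sample, window, n=100_000, seed=0):
    # Monte Carlo estimate of P(value lands in window) under an
    # assumed prior; 'sample' draws one value from that prior.
    rng = random.Random(seed)
    lo, hi = window
    hits = sum(lo <= sample(rng) <= hi for _ in range(n))
    return hits / n

window = (0.9, 1.1)  # a hypothetical narrow "life-permitting" range

# Assumed prior 1: uniform on [0, 100] -> the window looks like a miracle.
p_wide = prob_in_window(lambda rng: rng.uniform(0, 100), window)
# Assumed prior 2: uniform on [0, 2] -> the same window is a 10% shot.
p_narrow = prob_in_window(lambda rng: rng.uniform(0, 2), window)

assert p_wide < 0.01 < p_narrow
```

With one universe and no idea of the prior, neither number means anything.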

Here's part of a comment I wrote on Luke's site:

Neither here, nor in the Nunley video, nor in the extensive discussion that followed that video do I see anyone address the fundamental issue of fine tuning: namely, that in order to talk about the probability of anything you have to have some idea of the probability space. All the card and firing squad analogies are wildly misleading, because in those cases we KNOW what the probability space is, roughly at least.

It's not like being dealt a royal flush. It's like being dealt a hand in which you don't know how many suits there are, you don't know how many card values there are, and you don't even know what game you're playing. (I could add "you don't even know how many cards you are holding," seeing as we don't know which of the physical constants are independent.)

If you prefer firing squads, it's like you don't know how many marksmen there are, you don't know how far they are standing from you, you don't know how well they are trained, and you don't know how many of them are shooting blanks.

[The comment in question seems to have disappeared from the site, presumably a victim of Luke's recent virus troubles.]

Friday, May 21, 2010

This post is part of my already-much-longer-than-I-expected-and-still-growing series on free will.

What I would like to see from these free will philosophers is some sort of model that shows how will works and why or in what sense it can be considered free. (I was hoping for this sort of model from Freedom Evolves, but was disappointed.) What follows is an off-the-top-of-my-head attempt that's just to illustrate the sort of model I have in mind.

No one has ever announced that because determinism is true thermostats do not control temperature.

-- Robert Nozick, quoted in Dennett, Elbow Room, p. 51

Let's start with a thermostat. As Nozick says, we have no problem with the idea that the thermostat controls temperature. But there is also the purely physical level of description, in which the parts of the thermostat are obeying physical laws, without any concern for what they are controlling or if they are controlling it. So already with this simple system, we can talk about it on two levels, though, obviously, there is no question of any sort of free will involved.
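The control-level description can be written down in a few lines (the step sizes and setpoint here are arbitrary, of course). At this level, the thermostat "decides" to switch the heater; at the physical level, a bimetallic strip is just obeying mechanics:

```python
def thermostat_step(temp, setpoint, heater_on):
    # Control-level description: switch the heater on when the room
    # is below the setpoint, which pushes the temperature back up.
    heater_on = temp < setpoint
    temp += 0.5 if heater_on else -0.5  # toy heating/cooling rates
    return temp, heater_on

temp, heater = 15.0, False
for _ in range(20):
    temp, heater = thermostat_step(temp, 20.0, heater)

# The temperature is held near the setpoint - "control" at the
# higher level, plain physics at the lower one.
assert 19.0 <= temp <= 21.0
```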

Bumping it up a notch, let's consider the Mars rover that I discussed earlier. Let's focus on a single line of code that determines whether, when confronted with a large rock in its path, the rover will turn left or right. That line might look something like this (vastly oversimplified, of course):

(A) If X, turn left, else, turn right.

Here X is a variable that can only take on the values 0 or 1. If X is 1, the machine turns left, otherwise it turns right. X depends on some list of inputs: what the rock looks like in the video input, what the angle of tilt of the ground is on either side, whether there seems to be a clearer path on one side, etc. Some considerations might favor left and some might favor right, but all of them must be boiled down and weighted so that a clear, and deterministic, decision is made.
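A minimal sketch of that boiling-down, in Python. The input names and weights are hypothetical; the point is that a deterministic weighted vote reduces many considerations to the single bit X that statement (A) consumes:

```python
def compute_X(inputs, weights):
    # Boil the local sensor readings down to one bit: a positive
    # weighted score yields X = 1 (turn left), otherwise X = 0.
    score = sum(weights[k] * inputs[k] for k in inputs)
    return 1 if score > 0 else 0

def statement_A(X):
    # (A) If X, turn left, else, turn right.
    return "left" if X == 1 else "right"

# Hypothetical readings: +1 means the reading favors left, -1 right.
inputs = {"rock_shape": +1, "tilt": -1, "clear_path": +1}
weights = {"rock_shape": 1.0, "tilt": 2.0, "clear_path": 2.0}

assert statement_A(compute_X(inputs, weights)) == "left"
```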

We can, of course, talk about all this on the level of electrons, voltages, and circuits. The electrons are just following the laws of physics, without any sort of decision in mind. Yet, the physical system has been carefully set up (by the computer engineers and the programmer) so that the laws of physics result in a decision about something important to the rover's goals. Actually, it is the goals of the mission scientists in this case - but the rover may be thought of as analogous to a simple organism, "designed" and "programmed" by evolution to achieve its goals of survival and reproduction.

Now bump it up another notch. Suppose there is some sort of monitoring segment of the rover's program. Let's call it the monitor for short. It keeps track of X, as well as many other important variables in the program. After the variable X is evaluated, but before statement (A) executes, the monitor looks at the result and has the opportunity to override it. Why might it want to do this? Well, the monitor has access to a wider range of information than the inputs to X. It might be monitoring the level of charge in the batteries, the distance left to go to the next goal, and so forth: the long-term goals, as opposed to the local situation that is dealt with by the inputs to X. For instance, if the battery is low and there is more sun on one side of the rock than the other, then the need to maintain power might outweigh the local considerations of topography that X takes into account.
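A sketch of the monitor's override, again with entirely made-up logic and thresholds, just to show the shape of the idea - the local decision X passes through unless a long-term consideration trumps it:

```python
def monitor_override(X, battery, sun_side):
    # The monitor sees more than X's local inputs. If power is low
    # and one side of the rock is sunnier, the long-term goal of
    # keeping the batteries charged overrides the local topography.
    if battery < 0.2 and sun_side is not None:
        return 1 if sun_side == "left" else 0
    return X  # otherwise, let the local decision stand

# Local inputs said turn right (X = 0), but the battery is nearly
# flat and the sun is on the left: the monitor overrides.
assert monitor_override(X=0, battery=0.1, sun_side="left") == 1

# With a healthy battery, the local decision goes through unchanged.
assert monitor_override(X=0, battery=0.9, sun_side="left") == 0
```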

The monitor, I propose, plays a similar role to Ekstrom's evaluative faculty. It is responsible for taking into account the overall state of the organism, as well as both short-term and long-term goals. And - I suggest - it gives the organism something like free will: the ability to pause and consider alternative possibilities and their consequences before proceeding.

Why would you want such a faculty? Why not, for instance, just shovel those additional considerations into the determination of the variable X? From a programming point of view, there may be other reasons to want such a monitoring program, and it might just make more sense to include this override capability in the monitor rather than in X. Flipping the switch on our intuition - thinking about an evolutionary sequence instead of a human design - it might be that the evaluative faculty represents an evolutionary step that overlays the earlier, more mechanical, system. "If it ain't broke, don't fix it" probably goes for evolution, too: if some system is working well as it is, then, rather than tinkering with that system, it might be preferable to add a new system on top of the old one. (Of course, there is nothing more or less "preferable" to Evolution herself - she merely allows what works to succeed and what fails to fail. What I mean is that a mutation that changes the old system might be disastrous, while a mutation that adds a little bit of monitoring might enable enhanced survival without messing up the old system.)

But why not have the monitor do all the work? Why not monitor all the variables of the program, and all the conditions of the environment, and consider all the various permutations of options and outcomes? As Dennett points out in Freedom Evolves, there simply isn't enough time to consider all things. If you tried to consider all your options - cut your fingernails, cut your hair, walk the dog, eat breakfast, jump out the window, nail your hand to the table, eat the curtains, eat the dog, walk the cockroach, ... - you would never get out of bed in the morning. We are overwhelmed at every instant by input - sights, sounds, smells - and one of the most important tasks is to ignore stuff: to filter out the unimportant, in order to focus on the important. The other important task is to filter the possible outputs - choose, from the infinite range of possible actions, the one to do next. A monitor that tried to take everything into account, to consider every possible course of action, would be useless. It would never get anything done, and the organism would die.

I don't know if I've managed to capture some hint of the nature of free will in this very simple model, but just for kicks let's see what it implies about Ekstrom's theory of free will. Suppose I put such an evaluative faculty, such a monitor, into my Mars rover. Would I want to make it deterministic or probabilistic? From the point of view of control, determinism is the clear winner. As the programmer or human monitor of the rover, I want to know what it's going to do in a certain situation and why it's doing it - what portion of the program sent it down that path. From the point of view of an organism - I'm not sure. It might be helpful in certain situations to have a random component to one's actions - in flight from a predator, for instance. But in considering the issue of free will, well, a random component might make my actions less predictable, but would it make them more free? Wouldn't I rather make the optimal decision based on the (necessarily limited) inputs I have available, instead of flipping a coin to determine my actions? Personally, I think I would prefer that my actions be determined by my deliberation process, not just probabilistically caused by it.

When Ekstrom faces up to the question of what good an indeterministic component of the evaluative faculty actually does, all she can come up with is, "Well, we need it to avoid determinism, because determinism is unthinkable." But if all it does is to lose us some amount of control over our actions, maybe we don't want it after all. Maybe it's time for another look at determinism.

Wednesday, May 19, 2010

One of the great things about Ekstrom's Free Will is the way she anticipates objections. There were many times as I was reading the book that I started to think, "But what about...?" and almost immediately found her posing the same question and answering it. At the end of Chapter 4, she gives a whole list of potential objections to her approach. She doesn't pitch herself softballs, either - she does a great job of putting the opposing case as strongly as possible. For the most part, I think she succeeds in answering these objections.

The most difficult issue she faces is the problem of control. The indeterminism in a libertarian account has to occur somewhere - but where should it go? If there is indeterminism in the process of deliberation, so that it is a matter of chance whether some consideration occurs to the deliberator, then the outcome of the deliberation seems to be a matter of pure luck, rather than something that is under the deliberator's control (p. 121). Likewise, if there is randomness after the deliberation is complete, then we have the same problem. So the only logical place for indeterminacy to occur is during the deliberation process itself. This all seems correct to me.

But then, given the same mental state before the deliberation begins and the same external inputs, the deliberator could come to a different conclusion. This, of course, is what the libertarian wants: the possibility of different outcomes. But if the outcome is not determined by the state of the deliberator or the external inputs, then it seems to be still out of the deliberator's control. Ekstrom sees this, and effectively punts: you have to have indeterminism somewhere, she says, or revert back to determinism - which she has already rejected.

Ekstrom makes no attempt to explain why there should be indeterminism at this point, apart from a brief comment that it might arise via amplification of quantum randomness (p. 124, citing Robert Kane). She just insists that we must have indeterminism to have free will, and it must occur in this way if it is to make any sense.

Kane's approach has the opposite problem. He locates the indeterminism in quantum events occurring in the brain, so we know why (in the reductionist sense) the indeterminism is there. But this puts it too early in the causal chain - random quantum events in the brain are outside the control of the agent, and so their outcomes are purely a matter of luck. (Dennett argues this point against Kane in Freedom Evolves.)

So it seems that indeterministic accounts of free will have a serious problem: We can get indeterminism from quantum physics, but we can't get it where we need it for free will.

Tuesday, May 18, 2010

Last time, we saw how Laura Ekstrom gives a libertarian account of free will. There are a lot of things I like about Ekstrom's account. As an indeterminist myself from the point of view of physics, I am partial to an indeterministic account of free will. I also like Ekstrom's clear definition of a "self" - something lacking in other accounts I've read. And I like how Ekstrom doesn't shy away from the consequences of her approach. If her account conflicts with how we view moral responsibility, well, then maybe our view of moral responsibility is wrong.

But I have some problems with her approach. I'll start out with a couple of general complaints. In contrast to Daniel Dennett, she seems unaware of current research in psychology and neuroscience. I think her account of the decision-making process, for example, could have benefited from a more science-based approach. And I wish she had made some attempt to deal with the level problem: how a description at the level of reasons and preferences interacts with a description at the level of neurons and electrons. Her account takes only the higher-level processes into consideration. But determinism, if there is such a thing, would occur at some lower level.

Why did the free agent decide in that way? Because of reasons x, y, z, and so on. Why did those reasons lead him to decide as he did? The determinist would answer: Because of a deterministic causal law linking such reasons to such a decision. But the proposed account answers: Because the agent exercised his evaluative faculty in a particular way. Why? For reasons that inclined but did not necessitate a particular outcome to his deliberation process.

But a deterministic account need not link reasons to particular decisions. A deterministic account could operate entirely at the level of (say) individual neurons: If neuron A fires when neuron B is in state X, then neuron B will fire.... This account doesn't deal with the level of reasons and decisions at all.

It seems it might be possible for indeterminism to reign at the level of reasons and decisions, even if determinism reigns at some lower level of description. This is what a compatibilist would argue, and Ekstrom doesn't seem to recognize even the possibility of such an account.

(Contrast Robert Kane's libertarian account of free will: he traces the indeterminacy down to the level of individual quantum events occurring in the brain.)

Another problem arises from the idea that an act is only free to the extent that it is undetermined by the reasons that occur in the deliberation process. (Free Will, p.125):

We sometimes speak of a range of freedom of action, some acts being fully free and others less so. The probabilistic model gives one way of making sense of degrees of freedom. Perhaps the most free acts derive from preferences whose probability of occurring was raised by the occurrence of certain previous considerations to values within a range of, say, 0.2-0.8, whereas the act would be less free when resulting from a preference at either end of the spectrum, that is, in cases where the considerations made the probability of the preference's occurrence near 0.9 or 0.1.

So, it seems that if I really, really, really, really wanted to kill someone, then my action wasn't a free act - and I shouldn't be held morally responsible for it. This seems odd. It also brings to mind Dennett's comment on Martin Luther: "Whatever Luther was doing, he was not trying to duck responsibility." (Elbow Room, p.133)
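One way to see the oddity is to write Ekstrom's proposal down as a function (the band boundaries are her illustrative numbers, not a claim about how she'd formalize it):

```python
def ekstrom_degree_of_freedom(p):
    # On Ekstrom's proposal, an act is most free when the prior
    # considerations left the preference's probability in a middle
    # band (say 0.2-0.8), and less free near the extremes.
    return "fully free" if 0.2 <= p <= 0.8 else "less free"

# A decision I was nearly indifferent about counts as fully free...
assert ekstrom_degree_of_freedom(0.5) == "fully free"
# ...while an act I really, really wanted to perform does not.
assert ekstrom_degree_of_freedom(0.95) == "less free"
```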

When we have very strong feelings about something, we identify with the feeling - it seems to be a part of us, to express something about our innermost self. Ekstrom's account has it the other way around - our innermost self is expressed only in those decisions where we don't feel strongly either way, where our gut says, "Meh...."

Monday, May 17, 2010

Ekstrom's next step is to tackle where in the decision-making process the indeterminacy should arise. She describes the decision-making process as a period of critical evaluation of various factors leading up to a preference - a settled decision about what to desire - or an intention - a settled decision about what to do.

If there were indeterminacy between the intention to act and the act itself, it would be a very odd thing indeed. For then we would sometimes find ourselves having decided to do something but then failing to do it. It seems to be the case that every time I intend to reach for my pen I succeed in doing so. And if I did fail sometimes, then it seems my failure wasn't a free action on my part, because I had intended to succeed. So indeterminacy between intention and action doesn't seem to provide free will of the sort we want.

Likewise, if the indeterminacy arises between my forming a judgment about what to do and the actual intention to perform the act, then I don't seem to be in control of the outcome.

So the indeterminacy must be pushed back even further, into the deliberation process itself.

But what is the self that is doing the deliberating? According to Ekstrom, the self (or agent) is an evaluating and choosing faculty by which the agent creates preferences and acceptances, together with the preferences and acceptances themselves.

Ekstrom's solution, then, is to locate the indeterminacy in the deliberative process by which the self - the evaluative faculty - causes the formation of an intention to act. In her words (Free Will, p.115), she endorses:

Type 3d Theory - An action is free only if it results, by a normal causal process, from a pertinent intention ... that is caused by the agent, where this latter term ... is reducible to event-causal terms.

I take "event-causal" to refer to normal neuro-physiological processes. Even though it is the agent that is doing the causing, this is not an "agent-causal account", because Ekstrom insists that this causation is normal physical causation, not some mysterious power that pertains to intelligent agents alone.

But it is crucial for Ekstrom that the type of causation that the agent/self engages in is indeterministic, as she has already rejected any sort of deterministic account.

To Ekstrom, here's how things look. I don't want my actions to be uncaused, because I want to be in control of them: I want them to be caused by me. But I don't want them to be deterministically caused, for the reasons outlined earlier. So Ekstrom endorses what she calls a Type 3 theory: free actions are indeterministically caused, in some manner involving the self.

Ekstrom needs to explain two things: what is indeterministic causation, and what is a self. She starts with indeterministic causation.

Not all causes (according to Ekstrom) are deterministic causes. She gives three examples of indeterministic causes.

Contact with an infected person may be the cause of my getting a disease. This is so even though such contact does not guarantee that I get the disease - it only increases the probability of my getting it.

A child's falling and scraping his knee may cause him to cry - even though he might not always cry from a fall.

A Geiger counter exposed to a radioactive source and connected to a bomb may cause the bomb to explode. But if the counter is set up so that it only activates the bomb if a certain number of clicks occur in a given time interval, then it is undetermined whether the bomb will be set off.
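The Geiger-counter example can actually be simulated. Here's a sketch in Python that treats the clicks as a Poisson process (the rate, interval, and threshold are arbitrary choices for illustration); running many trials of the identical setup, the bomb goes off in some and not in others:

```python
import random

def bomb_triggered(rate, interval, threshold, rng):
    # Simulate radioactive clicks as a Poisson process: draw
    # exponential waiting times, count clicks in the interval, and
    # trigger the bomb only if the count reaches the threshold.
    t, clicks = 0.0, 0
    while True:
        t += rng.expovariate(rate)  # waiting time to the next decay
        if t > interval:
            break
        clicks += 1
    return clicks >= threshold

rng = random.Random(1)
trials = [bomb_triggered(rate=5.0, interval=1.0, threshold=5, rng=rng)
          for _ in range(10_000)]
frac = sum(trials) / len(trials)

# Identical setups, different outcomes: the explosion happens in only
# a fraction of the trials (roughly half, for these parameters).
assert 0.4 < frac < 0.7
```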

I have some difficulty with these examples. In the first two cases, it may be that the outcome (getting the disease or crying) is a perfectly deterministic result of several causes. Being exposed to the disease is only one part of the overall cause; the state of my own health might provide the remaining necessary and sufficient conditions for my getting the disease. Likewise, the child's state of mind when he falls might determine whether or not he cries. Only in the third case is there true indeterminacy: the Geiger counter acts as an amplifier of quantum indeterminacy.

In fact, the only true indeterminacy that I know of is quantum indeterminacy. All other types of indeterminacy come from lack of information about some part of the system. (In quantum systems, even with the maximum possible information about the state of the system there is still indeterminacy.) It seems to me that if Ekstrom wants to succeed in this approach, she will have to track the indeterminacy down to some underlying quantum event (as Robert Kane does), or else show that there is some alternative source of true indeterminacy. At this point, however, she simply notes that others have given accounts of probabilistic causation and moves on.

Saturday, May 15, 2010

Ekstrom's Chapter 4 is titled "Varieties of Libertarianism." This is not, of course, libertarianism in the political sense, but in the philosophical sense. Ekstrom identifies three types of libertarian free will. She dismisses the first two, so I will not say much about them.

Type 1 theories - Free actions are uncaused.

The main problem with this approach is that, if my action is not caused by anything, then it is not caused by me, in particular. That seems to mean that it is out of my control.

Type 2 (agent-causal) theories - Free actions are not caused by any sequence of physical events, but by the agent who is acting. Here an agent is taken to be something that is not reducible to a physical description (in terms of systems of neurons, etc.), but is a substance or entity in its own right. Chisholm says an agent is a "prime mover unmoved."

All this (it seems to me) has an uncomfortably religious ring to it. The agent is a sort of immaterial spirit that, in some unspecified way, is able to act on the material body to bring about actions. I gather from Ekstrom that these theories have, in fact, come in for some "heckling" from other philosophers. She points out three difficulties for agent-causal approaches:

How can there be two different types of causation - agent causation and ordinary physical causation - interacting in a single human being?

If an agent has no physical structure, then how can we understand the changes in the agent that bring about a free choice at a particular time?

What evidence is there of a separate, non-physical, sort of causation?

On a proper libertarian account, in Ekstrom's view, free actions should be caused, and specifically caused by an agent: I want to be in control of my actions. But these actions should not be deterministically caused, for then there is no room for freedom of will. Her challenge then is to explain what an agent (or self) is, and to do so in such a way that the agent is capable of causing actions in a non-deterministic way. Thus she introduces her preferred type of libertarianism:

Ekstrom is an incompatibilist. That is, she believes that free will is incompatible with a deterministic universe. After some preliminary remarks in Chapter 1, she sets out several arguments to this effect in Chapter 2. She starts with the Consequence Argument, which I have discussed before. She then presents several more rigorous versions of this argument.

For example, there is the question of whether someone "could have done otherwise." (This argument is due to van Inwagen.) Suppose that at time t I did not raise my hand. If the universe is deterministic, then the state of the universe at some remote past time, together with the laws of nature, entails that I did not raise my hand at time t. Yet I claim that I could have raised my hand at time t had I chosen to do so. But I can't claim that I could have done anything about the state of the universe at some time before I was born. Nor can I claim to be able to change the laws of nature. (No magic allowed!) Either I am wrong in thinking I could have done otherwise, or the universe is not deterministic.

Now, I have already explained that (according to quantum mechanics) the universe is not deterministic. So these arguments don't bother me all that much. Yet I have difficulty seeing how an indeterministic universe helps with free will. Well, this is just what Ekstrom is going to show - so she says. We'll have to wait and see.

There are several ways of responding to the "could not have done otherwise" argument. (One of the great strengths of Ekstrom's book is the way she consistently presents responses and counter-arguments to her points - very impressive in a book only 236 pages long!) One way out is to assert that I could have done something else if something had been different. If what had been different? Well, there are several routes one can take.

One route is to say I could have done something different if my mental state had been different. I could have raised my hand at time t if I had felt like doing so, for instance.

Another route is to say I could have done otherwise if the whole past of the universe had been different - so that I ended up in the state (at time t) of desiring to raise my hand.

Another route is to say (bizarrely, to my mind) that I could have done otherwise if the laws of nature had been different.

Or, one could argue (as Dennett does) that the ability to do otherwise is highly overrated. Dennett cites Martin Luther, who famously said, "Here I stand, I can do no other." Perhaps Luther was exaggerating, but, "Whatever Luther was doing, he was not trying to duck responsibility." (Elbow Room, p.133)

Ekstrom points out difficulties with each of these possible compatibilist replies. She then turns to consideration of libertarian accounts of free will, which I will discuss next time.

Reading [David] Deutsch [a prominent physicist and defender of the many-worlds interpretation] has encouraged me to adapt his approach to solving another mystery which has vexed theorists for many years - the single sock problem. This is the converse of quantum interference, where a single particle apparently acquires virtual partners. It is a matter of common knowledge that pairs of socks, while passing unobserved through the closed system of a washing machine, are repeatedly reduced to single socks, or at any rate to pairs consisting of one actual sock and one virtual sock undetectable by observation. The most likely explanation for this, it now dawns on me, by analogy with Deutsch, is that the missing socks are abstracted by fairy folk, and taken off to fairyland to be unwoven and reworked into garments for elves and pixies. This does, it's true, involve postulating an extensive fairy economic system about which we know little, but it does solve the problem of the virtual socks - and it does actually explain another mystery, which is where elves and pixies get their clothes from, a question that neither classical nor quantum physics has ever been able to answer satisfactorily.

And at least it requires the existence of only one fairyland, not trillions of fairylands...

Not even flying-saucer theory and the vanishing sock postulate require metaphysical hardware on this stupendous scale [i.e., that of the many-worlds interpretation], even though they are both attempting explanations for phenomena that bulk a lot larger than those puzzling but faint shadows on the laboratory screen. (pp. 440-441)

Thursday, May 6, 2010

If you're not familiar with the density matrix formulation, you might be suspicious about my claim. Does the statistical interpretation really do away with wave function collapse, or did I somehow hide a collapse inside the formalism? But it really does the job. Only one postulate is needed to prove that the final mixed state is the correct description of the beam state: the postulate that expectation values (average values) of measurable quantities are given by the usual quantum mechanical expression.
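To illustrate that one postulate at work, here is a small sketch (my own toy example in Python with NumPy, not from the post): the expectation value <A> = Tr(rho A) is computed identically for a pure superposition and for the mixed state, and an "interference" observable distinguishes the two.

```python
import numpy as np

# Basis states |L> and |R> as column vectors (my own encoding, for illustration)
L = np.array([[1.0], [0.0]])
R = np.array([[0.0], [1.0]])

# Pure superposition (|L> + |R>)/sqrt(2) and its density matrix |psi><psi|
psi = (L + R) / np.sqrt(2)
rho_pure = psi @ psi.T

# Mixed state: an equal classical mixture of |L> and |R>
rho_mixed = 0.5 * (L @ L.T) + 0.5 * (R @ R.T)

# An "interference" observable A = |L><R| + |R><L|
A = L @ R.T + R @ L.T

# The single postulate: <A> = Tr(rho A), the same formula for both states
print(np.trace(rho_pure @ A))   # ~1: the superposition is coherent
print(np.trace(rho_mixed @ A))  # ~0: the mixture shows no coherence
```

The same trace formula handles pure and mixed states uniformly, which is exactly why the density matrix language makes the argument so clean.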

In fact, the statistical interpretation is essentially what Max Born originally proposed when he introduced the probability interpretation of the wave function way back in the early days of quantum mechanics. So why did the collapse idea become so prominent?

For one simple reason: in the statistical interpretation, the wave function is only something that keeps track of what we know about the system. It is not an object/entity that exists physically in space and time.

This is a radical departure from normal physics. Physicists are used to thinking of their abstractions - electric and magnetic fields, for instance - as things that really exist "out there." And it makes sense, intuitively, that if there are rules about how the world behaves, they ought to be rules about the things that exist objectively in the universe. And these rules are expressed mathematically.

But if the wave function doesn't exist "out there", if it is only an expression of what we know, then why should it obey a mathematical equation? Why should our knowledge about the system follow a strict mathematical law?

So, many physicists preferred to think of the wave function as something that exists "out there." But when you do that, you suddenly discover the collapses. Every time you learn something new about the system (perform a measurement), you need to discard your old wave function and replace it with a new one. Hence, all the to-do about collapsing wave functions.

Notice, though, that the collapses happen precisely when we learn something about the system. That is, they happen when our information changes. So, to me, it makes eminent sense to think of the wave function as embodying our knowledge of the system, rather than thinking of it as an independently existing object. It is then perfectly natural that it should change whenever our information about the system changes.

But then we are left with two rather puzzling questions:

(1) Why should our knowledge about a physical system obey a mathematical equation?

(2) What is the physical reality that the wave function describes - and why can't we just write an equation for it?

I don't know the answers. As for (1), I would just comment that it's not so clear why the physical objects themselves should obey mathematical equations, either. But we are so used to the idea that it doesn't really seem strange - until we stop to think about it. And as for (2) - maybe this hints at the true message of quantum mechanics: that we can only have imperfect knowledge of any physical system. The ultimate reality might be inaccessible to us.

Sunday, May 2, 2010

I apologize to any readers who aren't interested in these somewhat technical quantum mechanics discussions, but as Frickle has been asking about my views I thought I would continue on the topic.

In the Copenhagen interpretation of quantum mechanics (QM), if you make a measurement of a superposition state then it "collapses" into one of the constituent states. (I discussed superposition states briefly in my last post but one.) Physicists have spent decades discussing when the collapse occurs and even building models of the collapse process. This talk always brings to mind an image of a small boy with a screwdriver, standing next to a pile of dismantled chair parts, saying "It just collapsed!" In fact, there is no need for a collapse postulate if we properly understand the nature of a quantum state.

Ballentine defines a quantum state as follows:

Any repeatable process that yields well-defined probabilities for all observables may be termed a state preparation procedure. It may be a deliberate laboratory operation, ... or it may be a natural process not involving human intervention. If two or more procedures generate the same set of probabilities, then these procedures are equivalent and are said to prepare the same state. (Quantum Mechanics, p. 33)

More simply, a state is "an ensemble of similarly prepared systems."

Think about a standard beam-splitting experiment. What emerges from the apparatus is a superposition of, say, left and right states: (|L> + |R>)/sqrt(2). Now let's say we measure the particle and find it's in the left beam. In the Copenhagen interpretation one would say the state has collapsed into pure |L>. But remember, a state is an ensemble of similarly prepared systems. One particle does not an ensemble make. To create a true |L> state, we would need to continue to measure the emerging particles and discard the result every time we measure an |R> particle. We can accomplish this easily - say by putting a brick in front of the right-hand beam. Now if someone says to me, "Hey, the wave function collapsed!" I'll respond, "Of course it 'collapsed' - you put a big honkin' brick into the apparatus!"
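The brick-as-filter idea can be mimicked with a toy simulation (a sketch of my own; the sample size and random seed are arbitrary choices, not from the post): measure an ensemble of particles prepared in (|L> + |R>)/sqrt(2), and simply discard every |R> outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each run, a particle in (|L> + |R>)/sqrt(2) is measured:
# outcome L or R, each with probability 1/2 (Born rule).
outcomes = rng.choice(["L", "R"], size=10_000)

# The "brick": throw away every run where the particle went right.
kept = outcomes[outcomes == "L"]

# The surviving sub-ensemble is a pure |L> preparation -- no mysterious
# collapse event, just a filter applied to the ensemble.
print(f"kept {len(kept)} of {len(outcomes)} runs; all L: {np.all(kept == 'L')}")
```

Nothing in the simulation "collapses"; the pure |L> state is simply the name for the post-selected sub-ensemble.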

Now you are probably thinking, "It can't be that simple." But I think it really is that simple. Whenever you encounter wave function collapse in the standard interpretation, if you look at it closely, you will find the brick.

Let's look at a classic quantum experiment: the two-slit experiment. As is well known, if you allow a particle beam to pass through a barrier with two slits, you will see an interference pattern (resulting from the superposition of the two beams). However, if you place a detector near one of the slits so that you can tell which slit the particle passes through, then the interference pattern disappears. The standard explanation is to say that the detector collapses the wave function, so there is no longer a superposition and no longer any interference pattern.
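The difference between "amplitudes add" and "probabilities add" is easy to see numerically. In this sketch (a toy model of my own: unit-amplitude plane-wave factors with an arbitrary phase scale k), the superposition intensity carries a cosine cross term (fringes), while the which-path case is flat.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)  # screen positions (arbitrary units)
k = 10.0                         # arbitrary phase scale

# Toy amplitudes reaching position x from the two slits
psi_L = np.exp(1j * k * x) / np.sqrt(2)
psi_R = np.exp(-1j * k * x) / np.sqrt(2)

# Superposition: add amplitudes first, then square -> cross term survives
I_super = np.abs(psi_L + psi_R) ** 2            # equals 1 + cos(2 k x): fringes

# Which-path case: add probabilities -> no cross term
I_mixed = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2   # equals 1 everywhere: flat

print(I_super.max(), I_super.min())  # fringes swing between ~2 and ~0
print(I_mixed.max(), I_mixed.min())  # flat at ~1
```

The interference pattern lives entirely in that cross term, which is exactly what the reduced density matrix below kills.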

To see what's really going on, let's model the detector as an atom that is in its ground state, |0>. When the particle passes through the slit, the atom is put into its excited state, |1>.

Suppose we place this detector near the left-hand slit. Initially, the beam+atom system is in some state |i>. After the beam passes through the slit we have the (combined) final state
|f> = (|L>|1> + |R>|0>)/sqrt(2).
Now, suppose that the detector atom gets excited. Then we know that the beam particle is in the state |L>. But has the state "collapsed"? Remember that the initial state |i> represents an ensemble of similarly prepared systems. In that ensemble, some runs of the experiment will produce an excited detector atom and some will not. So the apparently innocent words "suppose that the detector atom gets excited" actually represent our ignoring all the parts of the ensemble in which the detector atom doesn't get excited.
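The conditioning step can be written out explicitly. Here is a sketch (Python/NumPy; the 2-level vector encodings are my own) of the joint state |f> = (|L>|1> + |R>|0>)/sqrt(2): projecting onto the atom's excited state |1> and renormalizing leaves the beam factor in pure |L>.

```python
import numpy as np

# Beam basis |L>, |R> and detector-atom basis |0> (ground), |1> (excited)
L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
g0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Joint final state |f> = (|L>|1> + |R>|0>)/sqrt(2) on the 4-dim product space
f = (np.kron(L, e1) + np.kron(R, g0)) / np.sqrt(2)

# "Suppose the detector atom gets excited": project onto |1> on the atom factor
P_excited = np.kron(np.eye(2), np.outer(e1, e1))
conditioned = P_excited @ f

# Renormalize over the surviving sub-ensemble
conditioned = conditioned / np.linalg.norm(conditioned)

# The result is |L>|1>: the beam factor is now pure |L>
print(np.allclose(conditioned, np.kron(L, e1)))  # True
```

The projection-and-renormalize step is the mathematical form of "keeping only the runs where the atom fired" - the brick again, in matrix clothing.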

Think of it this way: we run the experiment, and on the first trial the detector atom doesn't get excited. So we ignore that run. Second run, same result - so we ignore that one, too. Third run, the detector atom gets excited. "OK - that's the situation I was talking about. Now the wave function has collapsed!"

Nope - it only "collapsed" because you put a brick in the beam - by ignoring the results you weren't interested in.

That is to say, if we look only at the runs in which the atom gets excited, we will produce the |L> beam state. (Likewise if we keep only the runs in which the atom isn't excited - then we produce the |R> beam state.) Mathematically, this corresponds to projecting out one component of the wave function. That's the import of the words "suppose that the detector atom gets excited." They mean we have chosen to consider only a part of the total ensemble.

What if we keep both sets of results? There is a well-defined procedure for finding the state in cases like this. We start by forming the density matrix for the combined system:
|f><f| = (|L>|1> + |R>|0>)(<L|<1| + <R|<0|)/2
Then we trace out the atom's state. This amounts to discarding terms that involve |0><1| or |1><0| in the expansion of the density matrix above. We end up with the reduced density matrix that describes the beam state:
(|L><L| + |R><R|)/2
This is what is called a mixed state. It cannot be written in terms of a wave function. It has a natural interpretation in the statistical interpretation: it is an equal mixture of the pure |L> state and the pure |R> state. But it is NOT a superposition state: there is no fixed phase between the two possibilities, so there is no interference.
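The trace-out step can be checked directly (a sketch in Python/NumPy; the index conventions are mine): build |f><f| on the 4-dimensional product space, sum over the atom indices, and the off-diagonal beam terms vanish, leaving the diagonal mixture (|L><L| + |R><R|)/2.

```python
import numpy as np

L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
g0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# |f> = (|L>|1> + |R>|0>)/sqrt(2) and its density matrix |f><f|
f = (np.kron(L, e1) + np.kron(R, g0)) / np.sqrt(2)
rho = np.outer(f, f)

# Trace out the atom: view rho with indices (beam, atom, beam, atom)
# and sum over the two atom indices
rho_beam = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_beam)
# [[0.5 0. ]
#  [0.  0.5]]   i.e. (|L><L| + |R><R|)/2 -- diagonal, so no interference terms
```

The |0><1| and |1><0| terms are exactly the ones the trace discards, and with them go the cross terms that would have produced fringes.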

If you didn't follow all of the above, here's the bottom line: normal QM leads to the conclusion that there won't be any interference when we place a detector near one of the slits - and we can come to this conclusion without any mention of wave function collapse, if we treat the detector in a properly quantum mechanical manner.

OK - this has already become a very long and technical post, so I'll save some final comments for next time.