Do we have free will?

Extract from Pages 177 – 180.

Another natural conclusion of the mechanistic view is that we don’t really have free will. Our instantaneous decisions are made by the neurological and chemical interactions within the physical worlds of our brains and bodies, and these are governed by the deterministic laws of science, together with uncertainty at the quantum level. When ‘we’ make a decision, really, physics is making it for us.

This is indeed my view. I have no problem with it. I am still me and you are still you. Unfortunately, others do have a problem with it, and use it as a reason to reject the scientific explanation and cling to superstition. It seems odd that people are unwilling to accept that you are your will, free or not, unless they can inject something indefinable to define themselves. This is getting biology and philosophy backward. You don’t decide your decisions; your decisions decide you. Why is a spirit more acceptable in terms of selecting your decisions than neuroscience? Why do people think that being controlled by the laws of science makes them less of a person than being controlled by an evil superstition? A purely mechanistic view does not entirely undermine personal autonomy, accountability, or responsibility. It does, however, mean that we, humans, can engineer the ‘soul’. We can, theoretically if not currently practically, ‘cure the soul’ of the psychopath, cheer up the depressed, motivate the lazy, or temper the compulsive. And when we get there, this will have tremendous ethical consequences for medicine, justice, retribution, and rehabilitation.

This kind of thinking could help us to discard dogma and creatively cultivate compassion in our society, but the religious must resist. They must cling to their iron-age lies, at a cost to the heart of society. Nearly a hundred years ago, taking this materialist approach in What I Believe, Bertrand Russell opined:

“I merely wish to suggest that we should treat the criminal as we treat a man suffering from the plague. Each is a public danger, each must have his liberty curtailed until he has ceased to be a danger. But the man suffering from plague is an object of sympathy and commiseration, whereas the criminal is an object of execration. This is quite irrational. And it is because of this difference of attitude that our prisons are so much less successful in curing criminal tendencies than our hospitals are in curing disease.”[i]

Let me restate a couple of these points. A purely mechanistic view does indeed negate free will at the most basic scientific level, but doesn’t really affect the perceived free will on a practical level. You are still you. Your decisions, free or not, define you, not vice versa. And to resist this because it limits your freedom of will is an inappropriate response, since accepting a spirit that makes your decisions for you is equally limiting; why would you think a supernatural ‘insertion of will’ is any more ‘free’ than a chemical one?

I think the requirement for supernatural free will is unnecessary and contrary, not only to physics, but to evolution. Biologists can confirm that the simplest organisms on the planet have no free will. Their biology is so defined that they react automatically in response to stimuli, such as food concentrations or light. With more sophisticated creatures, perhaps termites, the responses are more complicated, but still appear automated. We know, unless you reject evolution, that all creatures exist on a continuum of life-forms. A continuum that is now fragmented, but whose gaps were at some point populated. There are no dramatic sea-changes in life; we are all the children of something very similar to ourselves, going all the way back to these earliest microbial automata.

…

What about non-biological computers? We assume that a PC is not self-aware. This seems probable, but we cannot be sure. I cannot prove that my phone isn’t conscious other than by an arbitrary definition of ‘conscious’ in terms of response to certain inputs. A computer of sufficient power and of a particular design will indeed one day be defined as self-aware. To a large degree, I suspect this is simply a matter of the programming. That is, if the program tells itself it is aware, then it is aware. When we reach the point of artificial self-awareness, I contend we will have created conscious life. But I don’t think this will be a special moment, because I don’t think consciousness really exists anyway. It’s an illusion we have faked in ourselves, and will one day fake in computers. If this is sufficiently well developed, then turning off a computer and discarding its memory would be no less of a moral disgrace than murdering a friendless orphan.

This really puts early-stage abortion into perspective. It is like arguing that, because we agree terminating The Terminator would be wrong, smashing my smartphone must therefore be wrong too. The level of awareness and potential for suffering is very different; just as it is between killing a fully formed human and discarding a few rapidly multiplying cells.

We really must discard the concept that our thoughts are anything special. It’s just processing. There is no ghost in the machine. The ghost is merely the program running on the machine. In this context, the phrase “we don’t think; we just think we think” starts to make sense. And I don’t really believe there is anything special about consciousness. It’s an illusion. No, not even that. Because we think we think we are conscious, we seek out the illusion of consciousness. So I would rather say that consciousness is not even an illusion; it’s an illusion of an illusion.