Posted
by
Soulskill
on Monday October 21, 2013 @03:54PM
from the whoa-man-that's-deep dept.

KentuckyFC writes "The problem of free will is one of the great unsolved puzzles in science, not to mention philosophy, theology, jurisprudence and so on. The basic question is whether we are able to make decisions for ourselves or whether the outcomes are predetermined and the notion of choice is merely an illusion. Now a leading theoretical physicist has outlined a 'Turing Test' for free will and says that while simple devices such as thermostats cannot pass, more complex ones like iPhones might. The test is based on an extension of Turing's halting problem in computer science. This states that there is no general way of knowing how an algorithm will finish, other than to run it. This means that when a human has to make a decision, there is no way of knowing in advance how it will end up. In other words, the familiar feeling of not knowing the final decision until it is thought through is a necessary feature of the decision-making process and why we have the impression of free will. This leads to a simple set of questions that forms a kind of Turing test for free will. These show how simple decision-making devices such as thermostats cannot believe they have free will while humans can. A more interesting question relates to decision-makers of intermediate complexity, such as a smartphone. As the author puts it, this 'seems to possess all the criteria required for free will, and behaves as if it has it.'"

"It is important to note that satisfying the criteria for assigning oneself free will does not imply that one possesses consciousness. Having the capacity for self-reference is a far cry from full self-consciousness."

"It is important to note that satisfying the criteria for assigning oneself free will does not imply that one possesses consciousness. Having the capacity for self-reference is a far cry from full self-consciousness."

Except that this is a bald statement, without anything to back it up, and which is very likely false.

There are other statements in the paper that I would consider grievous errors. For example, on p. 13, the author states:

"Installed in the computer or smart phone, the operating system is computationally universal and capable of fully recursive reasoning. (There is a subtlety here in that computational universality requires that you be able to add new memory to the computer or smart phone when it needs more - for the moment let's assume that additional memory is at hand.) Consequently, the operating system can simulate other computers, smart phones, and Turing machines. It certainly possesses the capacity for self reference, as it has to allocate memory space and machine cycles for its own operation as well as for apps and calls."

Which I consider to be patently false. For one thing, he is crossing a rather serious boundary between computation and reasoning. He apparently considers them equal, which is a false premise to start with, and which makes shaky ground indeed on which to build the rest of his comment.

As Douglas Hofstadter demonstrates with thorough precision in his book "Gödel, Escher, Bach", it takes a minimum amount of complexity (far beyond anything we have built) in order to show any meaningful degree of self-reference. Your typical Turing machine has not, in fact, shown itself capable.

A Turing machine is a "complete" computational machine in that any calculation that can be done on one can be done on another. But nobody has ever discovered how to make them do the kind of things the author asserts.

The whole thing, to me, looks like yet another physicist / mathematician attempting to make the giant leap from physics to metaphysics, and falling face down in the gap between. Given the statements I have read in this paper, I simply cannot take it seriously.

All devices need to be aware of themselves: know exactly where their memory bytes are, how to use their processor, and how to output to the screen. Devices could not function without a highly detailed and absolute awareness of self.

The test consists of answering questions about yourself and your thought processes, including things you "believe" and predicting future behavior. It's hard to come up with a definition of self awareness that's much better than being capable of answering those types of questions. In other words, his test assumes a device with enough self awareness to complete the test, which is where an iPhone (and every other device) fails.

My thermostat believes it's Napoleon, and whenever I wander by it on the way to the restroom at night, it always bugs me about how we should be invading Russia and to please make sure I never ship him off to Elba or some such nonsense.

The fact that a smartphone (or, I assume by extension, any personal computer) can qualify should be an indicator that the test itself is flawed. Just like how many early definitions of Life applied to Fire (breathes, eats, grows, responds to outside stimuli, etc.) even though it is just a chemical reaction.

Depends on who you ask. Some people would not necessarily believe that they are 'just a chemical reaction'. As unhip as it is, I really don't think I'm 'just a chemical reaction'. I have will. I don't know about the rest of the world, but I know I have a will. Now when you come back and start flaming me for believing what is known as a properly basic belief (that I am real), just keep in mind that you're not real and therefore your arguments to the contrary matter about as much as cleverbot's.

I think the big problem with believing that people are real is that it feels supernatural. And since arm chair scientists are allergic to the idea that there exists a nature outside of our nature (that is a super nature or supernatural - not to be confused with magic), they will go through gyrations to deny such an obvious truth as in that 'I am real'.

Now cleverbots, bring on the pitchforks. Be sure to downvote this to (Score:-1 Probably a Christian) if you have mod points.

Note the "just" qualifier in "it is just a chemical reaction". A fire is a chemical reaction and nothing more. We are chemical reactions and something more, and it's that "something more" that makes us alive. Defining what that "something more" consists of is an ongoing problem for everyone from physicists to biologists to philosophers to clergy.

He was merely pointing out that for some attempts at defining what "something more" is, fire would also errantly qualify as life.

Early definitions of fire may have been flawed, but our current definitions for life are pretty arbitrary. The definition of life is engineered to include the things we want to include among the living.

It is pretty easy to come up with a definition of free will that specifically includes people and excludes computers (e.g. smartphones), but what purpose would that serve?

We have a definition of life that puts bacteria and humans as equals. I'm sure some people would feel like we are more alive than bacteria.

At what arbitrary point does a chemical reaction jump from being 'just' a chemical reaction to being a chemical reaction that qualifies as 'life'?

Note that this is a fundamentally human-centric question. Life is a word that we made up; there is no intrinsic property of life. If I take a handful of carbon, water, and trace elements, then use a magic machine to put them together in a new shape that farts and asks for tea, I've not imbued the items with some material substance that was not there before to make i

At what arbitrary point does a chemical reaction jump from being 'just' a chemical reaction to being a chemical reaction that qualifies as 'life'?

Note that this is a fundamentally human-centric question. Life is a word that we made up; there is no intrinsic property of life.

True that we made up the word life, but untrue that it has no property. It's also worth pointing out that we are the only species that can communicate complex concepts, that we know of, so it's not a relevant point to make. For all we know whales could have defined "life" long before us and we don't understand what they are saying, or species that went extinct millions of years ago could have been first to discuss "life".

While I agree that we can't pinpoint a precise definition of "thing" that makes some

The funny thing about definitions is we often don't understand what they are describing until later on, and when we finally do we may wind up with a definition which lies in contrast to our initial intuition. A good example would be "temperature." You may start out only with an idea that some things feel warm or cold. Then you discover that you can use a thermometer to be quantitative about it, so now temperature is defined by the expansion of a particular liquid at normal pressure. But that doesn't make sen

My smartphone definitely has free will. I cannot predict when it will reboot on its own, when it will freeze on a screen, or when it will lie to me about notifications. I think it not only has free will, but is also a sociopath!

Don't forget the random auto-"corrections" that it makes to what you type. Sometimes I think my phone is trying to get me killed...

[Text to Wife] Honey I'll be picking up some (chicken) chicks to eat tonight. See you at (home) hate you (gorgeous) gordo lady! P.S. (Veronica) Erotica at work was crazy today, tell you all about it later.

An abbot in "A Canticle for Leibowitz" had a balky piece of high technology in his office and shouted something to the effect of "It has a soul, I tell you! It knows the difference between good and evil and it has chosen evil!".

Nope. The author actually cites Dennett's book and the argument made is completely different. Instead of reading third-hand reporting written by a journalist, try the original paper at http://arxiv.org/pdf/1310.3225v1.pdf [arxiv.org]

The proof is an extension of Turing’s halting problem in computer science. This states that there is no general way of knowing how an algorithm will finish, other than to run it. What’s more, any attempt to determine the decider’s decision independently must take longer than the decider itself.

Since when does a simulation need to take longer than reality? The author assumes that a human mind is the most efficient vehicle to arrive at that human's decisions. This is not necessarily the case. I can run a simulation of an old computer on a much faster new computer to figure out what the old computer will do before it does it.

It doesn't. WOPR taught us that. It ran through thousands of nuclear engagement simulations and scenarios in just a few minutes, and any real engagement would last for at least an hour.

I'm not sure whether my thermostat has free will or not. I have been asking it repeatedly "are you a decider?", and I can't decide if it lacks enough decision making ability to answer or knows I'm testing it and is refusing to answer on the grounds it may incriminate itself.

No, he doesn't assume that. This is what you get for reading a slashdot summary rather than the original paper, which contains sentences such as

The indeterminate nature of a decision to the decider persists even if a neuroscientist monitoring her neural signals accurately predicts that decision before the decider herself knows what it will be.
Source: http://arxiv.org/pdf/1310.3225v1.pdf [arxiv.org]

"This states that there is no general way of knowing how an algorithm will finish, other than to run it. This means that when a human has to make a decision, there is no way of knowing in advance how it will end up. In other words, the familiar feeling of not knowing the final decision until it is thought through is a necessary feature of the decision-making process and why we have the impression of free will."

The conclusion from the halting problem to human decision making doesn't hold. Even if we allow th

The only thing sloppy here is the slashdot summary and poor journalistic reporting. This is not what the original paper reasons at all. As usual, go to the primary reference: http://arxiv.org/pdf/1310.3225v1.pdf [arxiv.org]

PS the author explicitly states that someone may simulate the decider's decision process faster than the decider and predict the outcome of the decision before the decider makes it. What's discussed is the indeterminacy of the decision to the decider himself.

Just because an entity's actions or decisions may be predictable does not mean that they have any less free will; it only means that habits or patterns have been identified which can reasonably be shown to influence the outcome.

If a small child puts their hand on a hot stove for the first time and they get burned, the fact that they aren't liable to do that again is fairly easy to predict, but isn't remotely an indication that some of their free will has been taken from them. If anything, the fact that they are not consciously making the specific choice to avoid their own discomfort in the future only affirms their free will, even though this is an expected and predictable response.

Have you ever asked: what is the best place to draw the boundary of this system (or rather the boundary of each nested semi-autonomous subsystem), especially in cases where it isn't crystal clear, like an ant colony, a virus+modified-host lifesystem, or a port city?

The best boundary definition is probably informational (process-description-oriented) rather than physical-snapshot based. Question: Which subset of stuff around here acting together has the most to do with (the most influence over) its own evolutio

Saying an iPhone is conscious (an important component of free will) just because it tries to run your life is silly pseudoscience meant for news articles and not real thought. An iPhone runs your life because Apple programmed it that way.

1. It’s your birthday. Someone gives you a calfskin wallet. How do you react?
2. You've got a little boy. He shows you his butterfly collection plus the killing jar. What do you do?
3. You’re watching television. Suddenly you realize there’s a wasp crawling on your arm.
4. You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, Tony, it’s crawling toward you. You reach down, you flip the tortoise over on its back, Tony. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?
5. Describe in single words, only the good things that come into your mind about your mother.

If they pass that portion of the test, engage them in some more dialog - more rhetorical in nature than direct questions...

6. In a magazine you come across a full-page photo of a nude girl.
7. You show the picture to your husband. He likes it and hangs it on the wall. The girl is lying on a bearskin rug.
8. You become pregnant by a man who runs off with your best friend, and you decide to get an abortion.
9. Last question. You're watching an old movie. It shows a banquet in progress, the guests are enjoying ra

I'm sure that given a strict interpretation of the set of criteria listed, not many folks would likely have free will. Questions 1-3 sort of indicate the ability to make a decision, but the last question, "can you predict your decision in advance?", is likely to be answered yes for many decisions that people might make.

For example, a movie comes out (say Gravity or Elysium). Certainly, you are a decider (you can choose to go or not go to the movie, or, say, bicycle or go to a party), and you can make y

Again, someone ran into the halting problem and thought they could say something profound about it. Worse, they got tangled up with "free will", which is theology, not physics or computer science.

A deterministic machine with finite memory must either repeat a state or halt. The halting problem applies only to infinite-memory machines. A halting problem for a finite program can be made very hard, even arbitrarily hard, but not infinitely hard.
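The finite-memory point is easy to make concrete: for a deterministic machine with finitely many states, halting is decidable by simply running it and checking for a repeated state. A minimal sketch, where the `step`/`is_halted` encoding and the toy counter machine are my own illustrative inventions, not anything from the paper:

```python
# Sketch: halting is decidable for a deterministic machine with
# finitely many states, because it must either halt or revisit a state.

def halts(step, state, is_halted):
    """Decide halting for a deterministic finite-state machine.

    step: function mapping a state to the next state
    is_halted: predicate marking halting states
    """
    seen = set()
    while not is_halted(state):
        if state in seen:
            return False  # a state repeated: the machine cycles forever
        seen.add(state)
        state = step(state)
    return True

# Example: a 3-bit counter that halts when it reaches 7.
print(halts(lambda s: (s + 1) % 8, 0, lambda s: s == 7))  # True

# The same counter with an unreachable halt condition cycles
# through 0..7 forever, which the checker detects.
print(halts(lambda s: (s + 1) % 8, 0, lambda s: s == 9))  # False
```

The undecidability of the general halting problem only bites once the state space is unbounded (infinite tape/memory), exactly as the parent says.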

As a practical matter, there's a widely used program that tries to solve the halting problem by formal means - the Microsoft Static Driver Verifier. [microsoft.com] Every signed driver for Windows 7 and later has been through that verifier, which attempts to formally prove that the driver will not infinitely loop, break the system memory model with a bad pointer, or incorrectly call a driver-level API. In other words, it is trying to prove that the driver won't screw up the rest of the OS kernel. This is a real proof of correctness system in widespread use.

The verifier reports Pass, Fail, or Inconclusive. Inconclusive is reported if the verifier runs out of time or memory space. That's usually an indication that the driver's logic is a mess. If you're getting close to undecidability in a device driver, it's not a good thing.

"someone" in this case refers to a respected physicist and quantum information theorist who at least deserves that you bother to criticize what's in the actual paper rather than a lousy slashdot summary, instead of putting up a strawman completely unrelated to anything the author actually claims. http://arxiv.org/abs/1310.3225 [arxiv.org]

[A smartphone] 'seems to possess all the criteria required for free will, and behaves as if it has it.'

It doesn't have "free" will. It's actually the will of the application developers imposed upon your device. But you let them when you installed their app, so it's ok.

I'm not sure physicists are the best people to decide what has free will or not (or even exhibits the behavior of having free will). Free will involves not just having choices, but making the choice based on a difference in the weighing of various factors. Choosing at random is not free will, though choosing to choose at random is. Assigning a

Your musings are explicitly addressed in the paper. As usual, the journalist reporting on the research, and worse, the slashdot summary, manage to completely misrepresent and sensationalize. Go to primary sources: http://arxiv.org/abs/1310.3225 [arxiv.org]

This states that there is no general way of knowing how an algorithm will finish, other than to run it.

The above statement is simply false.
What Wikipedia has is closer to what I remember from college:

Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever.

The halting problem has nothing to do with knowing how a specific computer program will work, or knowing what the program will return.
There are plenty of examples of programs we can safely know will stop. I can look at a specific program and deduce its output. With many programs I can do this faster than running the program. But the general idea is kind of sound. Some calculations take time or additional data. B
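The point about deducing a specific program's output without running it can be illustrated with a toy example (the function names here are made up for illustration): the loop below obviously halts, since it iterates a fixed finite range, and its result can be predicted in constant time from a closed form.

```python
# A program whose halting and output are both easy to decide by
# inspection: the loop runs exactly n times, so it must terminate,
# and Gauss's formula predicts its result without executing it.

def slow_sum(n):
    total = 0
    for i in range(1, n + 1):  # finitely many iterations: must halt
        total += i
    return total

def predicted_sum(n):
    # Closed form for 1 + 2 + ... + n, computed in O(1).
    return n * (n + 1) // 2

# The prediction matches the actual run.
print(slow_sum(1000) == predicted_sum(1000))  # True
```

None of this contradicts undecidability: the halting problem only says no single algorithm decides halting for all programs, not that particular programs resist analysis.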

I have never understood the assumption that free will means choices cannot be known ahead of time. To me, it seems that the presence of free will can potentially mean that outcomes are *more* constrained than in a strictly physical system, i.e., inspection of the quantum mechanical wave function may not lead to a solid prediction on whether I will or will not kill someone, but if I have chosen to follow a moral prohibition against murder, then it can be known (at least to myself) that I will not kill them.

Philosophers, theologians, men of great intellect and depth have spent lifetimes failing to completely define what exactly "free will" even is, and these guys think they have a test for whether it's present or not? Oi!

Sort of like the Glasgow Consciousness Scale (GCS) - from what I can understand, your average block of wood rates around a 3-4.

Seriously, the whole "do we have free will" case is a prime example of trying to find an answer without knowing what the question is.

If it's about determinism, then quantum mechanics and chaos theory deliver a double whammy to that: one says that you can't predict the behaviour of many complex systems unless you can measure the parameters to perfect accuracy... the other limits what you can measure to perfect accuracy...

Since we can't know the decision making process for an agent external to ourselves, we devise a Turing test — which says nothing more than if it quacks like a duck, it IS a duck (and never mind that we deceive ourselves with decoys and clever ploys).

However, for our own agency, we can raise the exceptional condition and follow the path through introspection — which is fraught with subjective bias. If we attain some objectivity in our own commiserations — we do find that almost everything we

The very concept of free will is itself a silly one, devised by simple-minded people. And it has absolutely NOTHING to do with science.

First of all there really is no such thing as "free will", REGARDLESS of whether the universe is deterministic or not; the concept is by nature a contradiction. The generally accepted definition of free will is "I am the ultimate cause of my actions". To put it another way, "I am the ultimate originator of my will". If you are the religious type, then when you say "you", you're talking about some abstract notion of a soul, and we can't really delve any further. But this is a scientific paper, so "you" means the collection of thoughts, memories, and wills residing in your skull. So really we're saying "my will determines my will", which of course doesn't make sense! You couldn't have "chosen" your "original" will (which went on to determine your future wills); you weren't born yet! It is a prime example of causa sui.

But moving on to the paper, it's rife with invalid assumptions. For example: "If decisions are freely made, then those decisions can form the basis for condemning people to prison". That assumes that we condemn a person to prison because they made a bad decision and they "deserve it". That's an oversimplification. We condemn people to prison in order to dissuade other people from committing crimes, and to reduce the likelihood of condemned people committing more crimes. Free will and determinism have nothing to do with it.

Also, the paper never really attempts to form a test for free will. The poor summary is more to blame here than the paper itself. The paper forms a test for the PERCEPTION of free will, which the author arbitrarily defines as "being unable to know the result of a decision before actually making that decision" (which implies recursive reasoning, which is the main criterion for the test). So a thermostat does not have free will because an external device could easily predict its output. But a computer has the perception of free will, because as an extension to Turing's halting problem, it is possible to create algorithms where it is impossible to know the output faster than it takes to actually go through the algorithm.

What does this really mean, practically speaking? Absolutely nothing. These are concepts that have been discussed for many years; nothing is being added here. It's disappointing that this kind of thing is able to make it to the Slashdot front page.

Humans don't have free will. There's no reason to believe the answer to question #4 is no. The neurons composing our brain behave deterministically (given a specified set of stimuli, they have a calculable response). With sufficient knowledge of the layout and state of someone's brain, you could calculate what their response to a given stimulus would be.

But the people who programmed her do. She's just (well) designed to *appear* to have it.

But is there really any difference between having free will and appearing to have free will? Or, put another way, is there really any difference between the illusion of free will and free will? Is "free will" even a clearly defined concept? Some philosophers think not.

My suspicion is that the answer to this question, for people, depends entirely on what the actions being judged are. For example: "Your Honor, my client cannot be held responsible for his crimes, because he has no free will." "Why of course, honey, I picked out those flowers especially because I thought you might like them."

To those who say they have no free will: "If you have no free will then you are a machine. Beware, it is easier to justify discarding/destroying/retasking a machine that no longer 'meets the specifications'." As for the question, we don't even have proof that the physicist's definition of free will is correct, much less the OP's. The physicist is assuming free will = not knowing the final decision. But that's ridiculous! He hasn't even explained Consciousness - which is the "knowing" phenomenon of how "we know we

"But is there really any difference between having free will and appearing to have free will? Or, put another way, is there really any difference between the illusion of free will and free will? Is "free will" even a clearly defined concept? Some philosophers think not."

I think I am in the camp of something like "Whether anyone has free will or not for religious reasons, let's assume free will, then does an AI have free will? Yes."

In the many millions of funds I don't have, I believe that all thoughts are m

Whatever the heuristics are between the "beings", the act of decision is the same. And that's why it's not a magical "human right of free will". AI Free Will is a snap. We're just desperately afraid of it. See T2, "If the wrong heuristic gets in there..." - well that's what sociopathic killers are. Humans running a badly flawed HumanOS.

Well, there is one small difference. With an AI, one can always, precisely, deconstruct why and how the system makes the decision that it makes, unless it uses truly random

With a human intelligence (HI), one cannot ever deconstruct why and how the system makes the decision that it makes. It is "random" in at least the sense of being unpredictable at countless levels involving the whole non-Markovian process of evolution from the very first cell through to the present organism making the decision. Worse, even the human itself doesn't know why it makes the decision it makes, not really. Chocolate or Vanilla ice cream today? "Chocolate because I like chocolate more than vanilla" is ultimately semantically null, because one cannot answer why one likes chocolate more than vanilla, and no matter what set of reasons one cooks up for it the ultimate answer is associated with a subjective response that is a sublime blend of (evolutionarily and experientially) preprogrammed stuff, experience, and the "mood of the moment", utterly unpredictable.

Unfortunately, these are [currently] unprovable assertions about a complex process that might turn out to be totally physical and explainable. "Chocolate stimulates the pleasure center of the brain more than vanilla due to a lifetime of changes in palate sensitivity" is a totally possible, non-mysterious answer. One could even imagine an advanced MRI showing the differences in neuron firing. But just because the process is so complex it can't be reverse-engineered, that doesn't mean it's random. Our lack of ability to predict it does not mean it's "unpredictable" in the mathematical sense.

Personally, I believe humans DO have free will - which I understand as the ability to choose an action contrary to the influence of instinct or conditioning. It may be difficult or impossible to know when this choice has been made, and it may be true that it's in fact rarely used, but it is an important philosophical distinction. I don't believe computers, as currently conceived as purely deterministic processors, are capable of free will. Even RNGs don't change that - deterministically following a randomly-presented path is still deterministic. I do believe there is something "special" about humans in this regard - I don't think any animals currently have this ability (who knows about aliens - the universe is large).

As for religious implications, I see no conflict between the ideas that the capacity for free will is acquired by means of millennia of evolution of the brain, culminating in sufficient complexity for self-representation and consideration of alternative futures, granting non-deterministic ability; and "God made us that way." From my point of view, "intelligent design" and "natural evolution" are the same thing.

So you think we have something special that makes us more complex than, say, a dog or a cat? Many animals have been shown to have abstract thought abilities; we just assumed they were less intelligent (in the sense of the ability to reason, not whether they can remember more things or calculate faster), when actually they just did not have the need to evolve a complex spoken language to help them represent those abstract thoughts.

Dogs are able to understand human gestures, and can understand certain words/phrases.

If the question boils down to whether a person is a deterministic system or not, even that is an open question. Perhaps people and animals are too random to be called properly deterministic. Neurons and other cells are highly nonlinear analog systems that may be subject to macroscopic effects based on quantum noise. If that is the case, it may be possible to duplicate a human being in principle -- down to the level of the quantum state of each constituent particle, but still not be possible in principle t

It's been a while since I was in school (for philosophy) or read up on the current discussion, but as far as I know this is still a massive debate, with very little, if any, agreement between philosophers (or psychologists, or neurologists, or cognitive scientists, or programmers, or physicists, or whoever else's field this topic touches).

That said, there is a large debate on whether there is a difference between agency as a thing, and the perception of agency. Go read up on Searle's Chinese Room, and t

In this case I propose it is actually the user of the device that has the free will, because they are initiating the algorithm. The programmer is involved in Siri's free will in the same way that your parents are involved in yours; setting it in motion.

And we do know the expected outcome, if we have enough knowledge. So even then it fails. The observer's lack of knowledge cannot in itself tell you anything about what they are observing. So really that is just conflating the possibility of failure in a syste

No, it's not. The whole question is mis-asked. Raymond Smullyan's piece Is God A Taoist? [mit.edu] has the best explanation I've seen:

Mortal:
What do you mean that I cannot conflict with nature? Suppose I were to become very stubborn, and I determined not to obey the laws of nature. What could stop me? If I became sufficiently stubborn even you could not stop me!

God:
You are absolutely right! I certainly could not stop you. Nothing could stop you. But there is no

The article is about whether a person or device believes it has free will. And belief in free will is based on not being able to predict the outcome of a decision until the decision is complete.

The decision that flips the screen orientation in a smartphone is an example of that. It has to get the current measurements, combined with an accelerometer reading to see if it has stopped moving, and at least a few previous measurements, to determine if it has been moved enough to qualify for flipping.

"experiments demonstrating that motor action potentials appear before a decision to move is made. That is, free will is an illusion."

No. All that proves is that the system is complex, and that some actions are automatic, like blinking. Just because some actions happen involuntarily doesn't mean no action is a product of free will.

There are some very specific tests I'd like to see done. For example: without free will, how do you come to a new idea? How do you work out a math problem you have never done before?

Q1: Am I a decider?
Q2: Do I make my decisions using recursive reasoning (i.e. using a process that can be simulated on a digital computer)?
Q3: Can I model and simulate—at least partially—my own behaviour and that of other deciders?
Q4: Can I predict my own decisions beforehand?

“If you answered Yes to questions 1 to 3, and you answer Yes to question 4, then you are lying. If you answer Yes to questions 1,2,3, and No to question 4, then you are likely to believe that you have free will,” says Lloyd.

Answering those questions myself, I consider myself to be a decider (as in, I make decisions), I can model/simulate my and others' actions (I pride myself on it), and I can predict my own decisions (because I can model/simulate them). So I'm lying. But where is the lie? Am I misinterpreting the term "decider"?

No, you're misinterpreting their usage of "predict". They were careful to use model and simulate in Q3 but predict in Q4. The point is that you cannot "predict" your decision, you can only make your decision. If you could build a computer system capable of predicting your decision before you made it, you would quickly realize you do not have free will, because every decision you make would have been predicted by your computer system.
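The reply above is essentially the diagonalization at the heart of the halting problem: a decider that can consult a predictor of its own choices can always do the opposite of the forecast, so no predictor whose output is available to the decider can be right about it. A toy sketch, with every name invented for illustration:

```python
# Sketch of why a decider cannot consult a correct predictor of
# itself: it can simply defy whatever the predictor says.

def contrarian(predictor, situation):
    """A decider that asks the predictor about itself, then defies it."""
    forecast = predictor(contrarian, situation)
    return "tea" if forecast == "coffee" else "coffee"

def some_predictor(decider, situation):
    # Stand-in for any fixed or computed forecast.
    return "coffee"

choice = contrarian(some_predictor, "breakfast")
print(choice)  # "tea": the opposite of what was predicted
```

Note this only rules out a predictor whose forecast reaches the decider; the paper's neuroscientist, who predicts the decision without telling the decider, is untouched by the argument.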

And then there's the question of having free will. I have the freedom to modify my thinking processes at any time should I not like a decision I have arrived at. Thus I have free will - at least I consider it to be - yet I would answer "yes" to all four questions.

The point is that if you answered yes to the first three questions, then yo