
Friday, April 16, 2010

Owning Our Actions: Natural Autonomy versus Free Will

At the Toward a Science of Consciousness conference earlier this week, I picked up a rather interesting book to read on the flight home: Henrik Walter's "The Neurophilosophy of Free Will" ...

It's an academic philosophy tome -- fairly well-written and clear for such, but still possessing the dry and measured style that comes with that genre.

But the ideas are quite interesting!

Walter addresses the problem of what variant of the intuitive "free will" concept might be compatible with what neuroscience and physics tell us.

He decomposes the intuitive notion of free will into three aspects:

Freedom: being able to do otherwise

Intelligibility: being able to understand the reasons for one's actions

Agency: being the originator of one's actions

He argues, as many others have done, that there is no way to salvage all three of these in their obvious forms in a manner consistent with known physics and neuroscience. I won't repeat those arguments here. [There are much better references, but I summarized some of the literature here, along with some of my earlier ideas on free will (which don't contradict Walter's ideas, but address different aspects).]

Walter then argues for a notion of "natural autonomy," which replaces the first and third of these aspects with weaker things, but has the advantage of being compatible with known science.

First I'll repeat his capsule summary of his view, and then translate it into my own language, which may differ slightly from his intentions.

He argues that "we possess natural autonomy when

under very similar circumstances we could also do other than what we do (because of the chaotic nature of the brain)

this choice is understandable (intelligible -- it is determined by past events, by immediate adaptation processes in the brain, and partially by our linguistically formed environment)

it is authentic (when through reflection loops with emotional adjustments we can identify with that action)"

The way I think about this is that, in natural autonomy as opposed to free will, freedom is replaced with being able to do otherwise in very similar circumstances. If an action we take

depends sensitively on our internals, in the sense that slight variations in the environment or our internals could cause us to do something significantly different

unfolds in a way we can at least roughly model and comprehend rationally, as a dynamical unfolding from precursors and environment into action that is closely coupled with our holistic structure and dynamics, as modeled by our phenomenal self

then there is a sense in which "we own the action." And this sense of "ownership of an action" or "natural autonomy" is compatible with both classical and quantum physics, and with the known facts of neurobiology.

Perhaps "owning an action" can take the place of "willing an action" in the internal folk psychology of people who are not comfortable with the degree to which the classical notion of free will is illusory.

Another twist, which Walter doesn't emphasize, is that even actions we do own often

depend with some statistical predictability upon our internals, in the sense that agents with very similar internals and environments to us, have a distinct but not necessarily overwhelming probabilistic bias to take similar actions to us

This is important for reasoning rationally about our own past and future actions -- it means we can predict ourselves statistically even though we are naturally autonomous agents who own our own actions.
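A toy simulation can make this "statistical self-predictability" concrete. Nothing here is from Walter's book: the agent model, the disposition parameter, and the function names are all invented for illustration. The point is just that near-identical stochastic agents show a clear statistical bias toward the same action without any individual choice being fixed.

```python
import random

def choose_action(disposition, noise=0.2, rng=None):
    """Toy agent: picks 'A' with a probability shaped by its internal
    disposition, plus noise standing in for chaotic brain dynamics."""
    rng = rng or random
    p_a = min(1.0, max(0.0, disposition + rng.uniform(-noise, noise)))
    return 'A' if rng.random() < p_a else 'B'

random.seed(0)
# 1000 agents with very similar internals (disposition near 0.7)
actions = [choose_action(0.7 + random.gauss(0, 0.01)) for _ in range(1000)]
share_a = actions.count('A') / len(actions)
# A distinct statistical bias toward 'A', though no single run is determined.
print(f"fraction choosing 'A': {share_a:.2f}")
```

Any one agent may do otherwise in very similar circumstances, yet the population-level bias is predictable, which is exactly the sense in which we can "predict ourselves statistically."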

Free will is often closely tied with morality, and natural autonomy retains this. People who don't "take responsibility for their actions" in essence aren't accepting a close dynamical coupling between their phenomenal self and their actions. They aren't owning their actions, in the sense of natural autonomy -- they are modeling themselves as NOT being naturally autonomous systems, but rather as systems whose actions are relatively uncoupled with their phenomenal self, and perhaps coupled with other external forces instead.

None of this is terribly shocking or revolutionary-sounding -- but I think it's important nonetheless. What's important is that there are rational, sensible ways of thinking about ourselves and our decisions that don't require the illusion of free will, and also don't necessarily make us feel like meaningless, choiceless deterministic or stochastic automata.

Here is my take on free will, which I posted at http://fora.humanityplus.org/index.php?showtopic=86&pid=325&start=&st=#entry325 in my ill-fated attempt to get threaded discussion going on the h+ site. It is not that dissimilar to what Ben has written above.

It was in response to an article by Anthony Cashmore at http://www.physorg.com/news186830615.html in which he said “free will is an illusion.” The most relevant part of my response is below:

I would argue that by many common notions of “free will” we humans have free will --- even if that free will is just the result of computations governed by physical states and laws --- and even if that behavior might, in some sense, be predetermined.

What does “free will” mean to most people? Commonly, it means things like: one’s mind is capable of making choices; it is capable of weighing and judging between options based on current information, beliefs, and values; we can alter our decisions by how we think about them; we make many of our own decisions; and many of our decisions have not been decided in advance.

I would argue that we already have artificial intelligences whose computational processes have all these characteristics. And we each have had a lifetime of experience indicating to us that our own minds have these characteristics.

It is impossible to model all the complexity of physical reality with a computer simpler than reality itself. Even if one believed in a clock-work universe, in a universe of the complexity of ours, many predetermined outcomes would not be known in that universe until they were actually computed by it. So even in a clock-work universe, "predetermined" would often not mean "pre-computable" --- i.e., knowable in advance of reality’s computation of it.
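The "predetermined but not pre-computable" point is essentially what is elsewhere called computational irreducibility. A minimal sketch (the Rule 30 cellular automaton is my choice of example, not one the comment mentions): the system below is fully deterministic, yet there is no known shortcut for learning the state at step n other than running all n steps.

```python
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton on a ring:
    new cell = left XOR (center OR right). Fully deterministic."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Deterministic, yet to know step 100 we must actually compute 100 steps.
state = [0] * 31
state[15] = 1  # single live cell in the middle
for _ in range(100):
    state = rule30_step(state)
print(sum(state), "cells alive after 100 steps")
```

The update rule fits in one line, but the trajectory is complex enough that the universe containing it must, in effect, run the computation to find out the answer.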

Throw in quantum mechanics --- and the butterfly effect of chaos theory --- and it is arguable that many aspects of reality are not only far from pre-computable, but also possibly far from predetermined.

And there is every reason to believe that the butterfly effect operates in the human brain. I would argue that the processes of the human mind are so complex, that many decisions humans make are extremely far from being predetermined, in the sense of being pre-computable, and arguably might not be even predetermined, at all, if quantum mechanics is taken into account.
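The butterfly-effect claim can be illustrated with the standard logistic map (my choice of toy chaotic system, not something from the comment): two trajectories whose starting points differ by one part in ten billion become completely decorrelated within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """Logistic map at r=4, a standard chaotic system on [0, 1]."""
    return r * x * (1.0 - x)

# Two 'brains' with nearly identical initial internals
x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
print(f"maximum divergence over 60 steps: {max_gap:.3f}")
```

A difference of 1e-10 roughly doubles each step, so after a few dozen iterations the two trajectories bear no useful resemblance: sensitivity of this kind is what makes "pre-computing" a chaotic system's future from approximate measurements hopeless.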

What is clear is that our decisions are controlled by our thoughts, and that many of our thoughts are not currently pre-computable or explicitly represented anywhere more than a fraction of a second before they occur. To me this sounds like a relatively traditional notion of “free will.”

Cashmore’s paper makes much of the various roles of the conscious and subconscious. Clearly much of human thought is controlled by the subconscious. Humans have habits, values, emotions, instincts, urges, and addictions that play a major role in controlling their behavior. But all these things are part of what makes each of us ourselves. When they play a role in our individual decisions, they are part of us, playing a role in our decisions.

So people who say there is no such thing as “free will,” are merely saying that by one narrow meaning of the phrase, there is no such thing. By many common-sense notions, people do have free will. People make many decisions that were not pre-computable before they made them. People’s thoughts and beliefs do affect their decisions, sometimes in very dynamic and hard to predict ways. People are capable of changing their minds.

Even most traditional notions of “free will” assume human decisions come from something quite different from pure chance, such as one’s character, values, and discipline.

So free will, as many people conceive of it, is not just an illusion --- it is a function of the fact that the human brain is a powerful, autonomous, complex, dynamic decision-making entity.

Free will is the feeling you get when you make up reasons for your actions. It does not matter if the action was deterministic or random. This behavior has been shown to exist in monkeys.

The desire for free will increases your evolutionary fitness. If you get hungry, you believe that you can decide to eat or not eat. If you did not seek situations where you could do either, you would starve.

But you don't really decide to eat. If I ran a simulation of your brain on a computer and input the stimulus of hunger, it would predict your behavior. The program does not have free will. It is just simulating the firings of neurons.

This program would also predict that you would claim to have free will.

I agree with Ben Goertzel at this point. My own opinion is: I find all this fear and concern about an evil artificial intelligence destroying humanity very unproductive. That sounds to me like speculation upon speculation. I think it would be better to burn brain cells figuring out, first, how to ACHIEVE AGI. When it acquires knowledge to the point of endangering anything, that will be a good time for debate. Until then, roughly speaking, we will always be able to throw the switch. Anyway, I think the focus should be on reaching AGI. Resources (including mental ones) would be better spent on this.

Ben, thanks for the response to my other text (about Numenta). I put it on my knol (http://goo.gl/YEs5) and soon I'll comment on it.

Matt, would you please expand upon how a program that can predict human behavior, based on simple behavioral inputs, helps invalidate the traditional concept of free will? Don't misunderstand me; I don't believe we have ultimate free will, if for no other reason than we are linked to a chaotic external environment.

I also fail to see how rationalizing a decision to act, after the act is committed, has anything to do with negating the idea of intrinsic mechanisms that allow a degree of autonomy.

If I am interpreting your statements correctly, you believe we are automatons whose behaviors are effected by the system described by the universe (???), which consciously believe in self-determination.

Please help me; I am not a philosopher or mathematician, and thus lack the skills and knowledge (I'd imagine) all of you possess. But on the surface, your statements seem contradictory. Thank you for any explanations afforded.


If there is a program that predicts your behavior, and you programmed a robot to carry out those predictions in real time, then as far as anyone could tell, the robot would be you. Would it have free will?

Can you describe exactly what property of a program gives it free will, as opposed to making deterministic predictions?

Thanks for carrying on the conversation, Matt. (And sorry for the multiple replies)

In response to your comment concerning a program that predicts my behaviors to a tee, which in turn are carried out by a robot:

1) Yes, it would be me in a robotic body, to any onlooker. But the difference would be the robot is "predicting" responses, versus me using critical thinking and rationalizing responses. That is a huge difference, even if no one else could discern it.

2) This is somewhat of an absurd scenario that is akin to trying to crystallize ideals in a material universe. It's somewhat like when others use the phrase, "all things being equal", in situations where there are so many variables that the environment creates inequality 99.99999% of the time.

Finally, in relation to your original post in this thread, we are not programmed from the outset to believe in "free will" in the sense being described here. We are programmed with a will to survive, which has qualities that overlap with the conventional concept of self-determination.

A robot that imitates you would also claim to use critical thinking and rationalizing. Rationalization, which gives us the illusion of free will, is somehow adaptive. Monkeys do it too. http://www.world-science.net/othernews/071106_rationalize.htm

Wrong. You said the robot predicts my behavior, hence the mechanism is inherently different from what we do, no matter what the robot might "say". It is not "rationalizing"; it is predicting based on parameters and probability. Perception is not necessarily reality. The mechanism matters.

The monkey study does not provide proof of what you are insinuating. Please explain how monkeys (or human children) placing greater value on their own decision-outcome scenario negates free will.

Another problem with your argument is the use of the word "predict". In itself, it implies doubt. You should change it to "absolute clone" or some such. Besides, your scenario has no basis in reality at this point in human history.

You do X. A robot models your brain and predicts you will do X, then does X. The robot's behavior is identical to yours. You claim to have free will. So does the robot. Anything you believe, the robot will claim to believe. How are we to know the difference? Do you think your brain does something not computable? Do you think the atoms in your brain don't obey the same laws of physics as other atoms?

You do X, then argue why X was a good idea. If you had not done X, you would have argued that X was a bad idea. It doesn't matter if the choice was arbitrary. When given the choice again, you will most likely make the same choice. It's just human nature. The study showed that monkeys think the same way. What's wrong with that?

The problem is that you are using an ideal hypothetical, combined with monkey studies, to definitively state we don't have free will. As I stated in a previous response to you, it's as silly as believing in absolute ideals manifesting in the real world.

1) Yes, our corpora obey the laws of physics. So what. There is no computer that mimics me down to the subatomic level. (Clearly, I am playing along with your belief that there is no "soul" of any kind, and thus we are what we can measure)

2) Yes, obviously our brains do things that are not computable so as to mimic our behaviors. Even so, as I stated before, we KNOW the mechanism would be different. This point is lost on you. Sensation and experience are two phenomena the robot doesn't have and couldn't utilize as we do. Or shall we add these into your model as well? Along with a subconscious?

You state: "You do X, then argue why X was a good idea. If you had not done X, you would have argued that X was a bad idea."

Not true. I can discern between a good and bad decision, with respect to many variables. I imagine that many can do so. Use another analogy, because that one is flawed.

You state: "It doesn't matter if the choice was arbitrary. When given the choice again, you will most likely make the same choice. It's just human nature.'

Again, you use flawed logic. If the decision was "arbitrary", then NO, the decision may not be the same the next time, by the very definition of the word "arbitrary". This especially holds true if any one or more environmental variables (or even intrinsic variables) differ, because such may move my "whim" towards choice "b", "c" or "d", next time.

You speak as if we can be described in binary, which in itself is erroneous. Moreover, you equate rationalization of a choice (which is predetermined in your opinion) as somehow being proof of negation of free will. I don't see the connection there, and don't comprehend how rationalization of such a choice is equivalent.

Another question for you, Matt: If I choose rationalization "A" the first time I make predetermined choice "1", then will I always recall rationalization "A" under such circumstance, or next time will I add rationalization "B" to it? If so, am I using free will to rationalize my otherwise predetermined choice?

It would help your argument if you describe precisely what you mean by "free will". Does a random number generator have free will because it can freely choose to output a 0 or a 1? If not, then what is it missing?

Also, a robot that behaves just like a person (maybe you don't think such a thing is possible) will claim to have sensation and experience. How do you know it doesn't? Just because it's a robot and you say so?

Matt, would it be possible for you to answer my counters, before moving on to the "next thing"? You seem to be looping back to your original conjecture (about the robot), without addressing my issues with your hypothetical. Thank you.

However, I will answer your question concerning the random number generator, with another: Does sentience have any place in this conversation?

To answer your previous points, I said earlier that free will is the feeling you get from rationalizing the decisions you made. Your decisions don't *seem* deterministic because you can't predict them before you make them. But that doesn't mean they aren't.

Sentience might have something to do with it, if you can give a coherent definition of it. If you are sentient and a robot has behavior indistinguishable from you, then it must be sentient too, right? Or does the algorithm matter, and if so, how?
