Monday, August 05, 2013

Computation, Church-Turing, and all that jazz

by Massimo Pigliucci

I recently examined (and found wanting) the so-called computational theory of mind, albeit in the context of a broader post about the difference between scientific theories and what I think are best referred to as philosophical accounts (such as the above-mentioned computational “theory”). Defenders of strong versions of computationalism (which amounts to pretty much the same thing as strong AI) often invoke the twin concepts of computation itself and of the Church-Turing thesis to imply that computationalism is not only obviously true, but has been shown to be so by way of mathematical proof. The problem is, if one looks a bit closer at what computation means, or examines exactly what Church-Turing says, no such conclusion can be so handily achieved. Let me explain.

Let’s start with the Church-Turing thesis. Jack Copeland has written an exhaustive entry about it for the ever more excellent Stanford Encyclopedia of Philosophy, and you are welcome to check the details and sources thereof. First off, let’s state exactly what the Church thesis and the Turing thesis say (they are two different theses, which were soon shown to be equivalent formulations of the same basic concept).

Turing thesis: Logical computing machines (now known as Turing machines) can do anything that could be described as “rule of thumb” or “purely mechanical.”

Church thesis: A function of positive integers is effectively calculable only if recursive.

Don’t ask me how on earth these two turn out to be equivalent statements; computational logic is not my field. It has something to do with the equivalence between something called “lambda definability” (Church’s approach) and Turing’s adopted definition of computability.
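For the curious, here is a taste of what “lambda definability” amounts to. Church encoded the numbers as pure functions, so that arithmetic becomes nothing but function application. This is my own illustrative sketch (not from the post or the SEP entry), rendered in Python because its lambdas make the idea visible:

```python
# Church numerals: the numeral n is the function that applies f to x
# exactly n times. Arithmetic is then defined without any built-in numbers.
# (Standard textbook encoding; variable names are my own.)

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral back into an ordinary Python int."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)
six = mul(two)(three)
```

The equivalence result says, roughly, that exactly the functions definable by such lambda terms are the ones computable by Turing machines.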

What is important for my purposes here is that — as Copeland clearly explains in detail — there has been much confusion and misunderstanding in philosophy of mind regarding the Church-Turing thesis, a situation that apparently hasn’t spared big names in the field, from Dan Dennett to Patricia Churchland.

To give you a taste of what the problem is, let me quote Copeland extensively:

“A myth seems to have arisen concerning Turing’s paper of 1936, namely that he there gave a treatment of the limits of mechanism and established a fundamental result to the effect that the universal Turing machine can simulate the behaviour of any machine. The myth has passed into the philosophy of mind, generally to pernicious effect.”

Ouch. Here is a (small) sample of things philosophers of mind have said that, according to Copeland, are just plain wrong:

Dan Dennett: “Turing had proven — and this is probably his greatest contribution — that his Universal Turing machine can compute any function that any computer, with any architecture, can compute.” (NOT TRUE)

Paul and Patricia Churchland: [Turing’s] “results entail something remarkable, namely that a standard digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input-output function. That is, it can display any systematic pattern of responses to the environment whatsoever.” (NOT TRUE)

What Turing and Church have proven, more modestly (but still remarkably!), is that Turing’s universal machine can carry out any computation that any particular Turing machine can perform. This simply does not have the sort of implications for what actual (physical) computers can do, or about computability in general, that so many overenthusiastic computationalists insist it does, in part because Turing’s and Church’s notion of a computer was not at all equivalent to that of a finitely realizable physical system; nor was their concept of computation as broad as many philosophers of mind would desire (indeed, it is noteworthy that Turing’s use of the word “computer” referred to human computers, i.e., to individuals who can carry out “mechanical” calculations by rule of thumb — given that, please take some time to contemplate the irony of the meaning of a Turing test...).
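To make concrete what a “particular Turing machine” is, here is a minimal simulator sketch. This is my own illustration (the transition-table format and the toy successor machine are my assumptions, not anything from Turing’s paper); the point is that a machine is just a finite rule table, and the universal machine’s feat is simulating any such table:

```python
# A Turing machine as a transition table:
#   (state, symbol) -> (symbol_to_write, head_move, next_state)
# The tape is unbounded in both directions; "_" is the blank symbol.
# (Illustrative sketch only.)

from collections import defaultdict

def run_tm(table, tape, state="start", halt="halt", max_steps=10_000):
    tape = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape[head]
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A particular Turing machine: unary successor — walk right over a block
# of 1s and append one more 1.
succ_table = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
```

Running `run_tm(succ_table, "111")` turns three 1s into four. The universal machine takes `succ_table` itself as input on its tape; nothing in the theorem says it can simulate an arbitrary physical system.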

In fact, as Copeland points out, we already know of the possibility of machines (sometimes referred to as “hyper-computers”) that generate functions that are not Turing-machine computable. And if that were not enough, Copeland cites a number of theoretical papers about the existence of physical processes whose behavior does not conform to that of Turing machines. So much for grand statements about the “universality” of “computation” (more on this in a sec).

All of this is important for computational theories of mind because they assume that psychological processes akin to those going on in the human brain are Turing-machine computable. But Copeland clearly states that this is, at best, an open question, not at all a settled fact.

Here is another direct quote from the SEP article that ought to give pause to computationalists: “[it] is common [and erroneous] in modern writing on computability and the brain ... to hold that Turing’s results somehow entail that the brain, and indeed any biological or physical system whatever, can be simulated by a Turing machine ... The Church-Turing thesis does not entail that the brain (or the mind, or consciousness) can be modelled by a Turing machine program, not even in conjunction with the belief that the brain (or mind, etc.) is scientifically explicable, or exhibits a systematic pattern of responses to the environment, or is ‘rule-governed.’”

So, please, can we stop invoking Church-Turing as a trump card in favor of computational theories of mind? Thank you.

Now to the broader issue of computation itself, specifically when it comes to computation as a process to be carried out by actual physical systems (after all, philosophy of mind, artificial intelligence research, and neurobiology are concerned with flesh and blood brains, or with possible silicon equivalents — both of which are actual or actualizable physical systems). Here an excellent source summarizing the relevant discussions is the SEP article by Gualtiero Piccinini. It too, as it turns out, deals in part with Church-Turing, as well as with a number of accounts of computation in philosophy (simple mapping, as well as causal, counterfactual, dispositional, semantic, syntactic and mechanistic accounts — go read about them to your heart’s content). But the part I wish to focus on addresses the crucial question of whether every physical system is computational — another oft-used trump card in the hands of computationalist theorists of mind.

The notion that all physical systems carry out computations is appropriately referred to as pancomputationalism, and prima facie I find it just as interesting as its non-physicalist counterpart, panpsychism — i.e., not at all. Still, let’s take a closer look.

As it turns out, there are two varieties of pancomputationalism on the market: strong, or unlimited, pancomputationalism; and weak, or limited, pancomputationalism. The former says that every physical system performs every computation, while the latter says that every physical system performs some computation.

Piccinini also distinguishes different schools of thought about pancomputationalism with respect to its source, i.e. how one answers the question of why one thinks everything computes. One possibility is that computation is a matter of free interpretation, i.e. that whether a given system can be described as computing is a question of how one describes that system; a second alternative is that computation just is the causal structure of a system, so that everything that involves causality ipso facto computes. Thirdly, there is the possibility that everything is computational because everything carries information. And finally, pancomputationalism may stem from the idea that the physical universe itself is a computer, from which it obviously follows that everything within it is computed.

So let’s proceed with our discussion in two parts: one dealing with strong vs weak pancomputationalism, the other addressing the alleged sources of pancomputationalism of whatever sort one accepts.

Somewhat ironically, a major thrust of unlimited (strong) pancomputationalism is a criticism of computational theories of mind. Ian Hinckfuss and William Lycan have articulated this position by arguing that a bucket of water contains a huge number of microscopic processes, which could in theory implement a human program, or any arbitrary computation whatsoever, at least for a time. John Searle (he of the Chinese Room) elaborated on this, suggesting that whether a physical system implements a computation is a matter of how the observer interprets the system; and it was Hilary Putnam, a major originator of the computational theory of mind (again, the ironies never end!), who first formalized the concept of unlimited pancomputationalism.

The problem, as Piccinini points out in his article, is that if unlimited pancomputationalism is true, then the notion of computation becomes trivially true and vacuous, which strongly undermines the computational theory of mind. Indeed, unlimited pancomputationalism even undermines computer science itself, because there is no distinction anymore between a proper computer (even a broadly construed one) and pretty much anything else, rocks included.

Fortunately, there doesn’t seem to be any good reason to believe in unlimited pancomputationalism, since it relies on the above-mentioned simple mapping account, which I understand is by far the weakest account of computation available to date.

Let us therefore turn to limited pancomputationalism, the idea that, while everything computes, not all things can carry out all computations. This view of computation is not as vacuous as the preceding one, but it is still open to the accusation of trivialization, which again directly undermines computational theories of mind (because if everything computes, then minds still aren’t that different in principle from rocks, though rocks may be capable of much more limited “computations”).

Weak pancomputationalists do have a response here, however: the predictive power of their position hinges precisely on the fact that each particular system is limited insofar as the computations that that system can carry out are concerned, so that one can come up with meaningful (computational) theories explaining why brains are brains and rocks are rocks.

I must admit that such a reply leaves me extremely cold. But it is important to understand that — just as in the previous case — whether limited pancomputationalism is a coherent option depends on one’s account of computation. As it turns out, semantic, syntactic and mechanistic accounts are rather restrictive and not easily compatible with any form of pancomputationalism, while the simple mapping and causal accounts are a bit more friendly to it.

Time to turn our attention to the alleged sources of pancomputationalism. The first one is, as we’ve seen, that everything can be considered computational if the observer describes it in a particular way. The obvious (and, I think, fatal) objection here is that then pancomputationalism is a hopelessly subjective notion and we can safely ignore it from both a scientific and a philosophical perspective.

The idea that computation is equivalent to causation is a bit more intriguing, but it opens up its own Pandora’s box. To begin with, ever since Hume the concept of causation itself has been anything but philosophically straightforward, and it is not at all clear what is added by re-baptizing it as computation. Moreover, there are quantum mechanical phenomena that appear not to be causal (or at least not to require the concept of causality to be deployed in order to understand them), which means that pancomputationalism wouldn’t, in fact, be “pan” after all. Lastly, we already have a term for causation, and we already use the word “computation” to indicate a (usefully) restricted class of phenomena, so why mix the two up and increase confusion?

Similar considerations apply to the equation of information with computation. The term information itself can be defined in a variety of manners, and it is far from clear that everything carries information except in a trivially true sense of the term. Seems like pairing up equally vacuous concepts is unlikely to generate much insight into how the world works.

So we finally arrive at the provocative idea that everything is computational because the universe itself is a computer. But what does that mean?

The notion is referred to as ontic pancomputationalism, and it comes into the philosophy of computation straight from fundamental physics. Scientifically speaking, ontic pancomputationalism is a claim based on our empirical understanding of the world, while philosophically speaking it is a (logically independent) metaphysical statement about the nature of the world.

As Piccinini puts it, “The empirical claim is that all fundamental physical magnitudes and their state transitions are such as to be exactly described by an appropriate computational formalism.” There are two possibilities here: cellular automata or quantum computing. Some physicists — most prominently Stephen Wolfram — have proposed that the universe itself is a cellular automaton, that is, that its phenomena unfold algorithmically. The problem with this, and it’s a serious one, is that this theory predicts that all fundamental physical quantities are discrete (as opposed to continuous), and moreover that space and time themselves must be fundamentally discrete. This is because cellular automata are classical (as opposed to quantum) systems. Indeed, Richard Feynman (cited in Piccinini’s article) argued that it is difficult to see how the quantum mechanical features of the universe could be simulated by a cellular automaton.
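For readers who have never seen one, here is what a one-dimensional (“elementary”) cellular automaton looks like in code. This is purely my own illustrative sketch; Rule 110, used as the default here, is the standard example of such a local rule that nonetheless turns out to be Turing-complete, which is part of what makes the “universe as cellular automaton” idea tempting:

```python
# An elementary cellular automaton: a ring of binary cells, each updated
# in lockstep from itself and its two neighbors. The "rule" number's
# binary digits give the next state for each of the 8 possible
# three-cell neighborhoods. (Illustrative sketch only.)

def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def show(cells):
    """Render a row of cells as text, '#' for live and '.' for dead."""
    return "".join("#" if c else "." for c in cells)

row = [0] * 15 + [1] + [0] * 15   # start from a single live cell
for _ in range(5):
    row = step(row)
picture = show(row)
```

Everything here is discrete: discrete cells, discrete states, discrete time steps. That is exactly the feature Wolfram’s proposal forces onto fundamental physics.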

Enter quantum computation, where bits are replaced by “qubits,” units of information that can occupy any superposition of the states 0 and 1 and can exhibit quantum entanglement (does your head spin yet? Good, you’re in ample company). This essentially says that the universe is indeed a computer, but a quantum rather than a classical one. From an empirical perspective, quantum ontic pancomputationalism is simply a redescription in computational language of standard quantum mechanics — importantly, there is no differential empirical content here, which raises the question of why bother with the whole exercise to begin with.
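To give a flavor of what a qubit’s superposition means operationally, here is a bare-bones sketch of my own (no quantum library, just arithmetic): a single qubit is a pair of complex amplitudes for the states 0 and 1, and a gate like the Hadamard transforms those amplitudes, producing an equal superposition from a definite state:

```python
# A single qubit as a 2-component complex state vector: amplitudes for
# |0> and |1>. Measurement probabilities are the squared magnitudes of
# the amplitudes. (Illustrative sketch; names are my own.)

import math

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to (|0> + |1>)/sqrt(2)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Probability of measuring 0 and of measuring 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1 + 0j, 0 + 0j)        # start in the definite state |0>
qubit = hadamard(qubit)         # now an equal superposition of |0> and |1>
p0, p1 = probabilities(qubit)
```

Note the irony relevant to the “redescription” point: this is a classical digital computer placidly simulating the quantum formalism, amplitude by amplitude.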

The metaphysical claim of ontic pancomputationalists is that the universe itself is made of computation (or, equivalently, of information). Here again is Piccinini’s rendition of the idea: “according to the metaphysical claim of ontic pancomputationalism, a physical system is just a system of computational states. Computation is ontologically prior to physical processes, as it were.” I kind of like the idea myself, but you know where that leads, right? Piccinini: “If computations are not configurations of physical entities, the most obvious alternative is that computations are abstract, mathematical entities, like numbers and sets. ... the building element [of the universe] is the elementary ‘yes, no’ quantum phenomenon. It is an abstract entity. It is not localized in space and time ... Under this account of computation, the ontological claim of ontic pancomputationalism is a version of Pythagoreanism. All is computation in the same sense in which more traditional versions of Pythagoreanism maintain that all is number or that all is sets.” Oh no, the specter of mathematical Platonism rises again!

So there you have it, folks. Do not use Church-Turing as the theoretical underpinning for computational theories of mind, because it ain’t no such thing. And be very careful what sort of pancomputationalist you may end up being (if any), and why. The logical consequences may be very unpalatable for your metaphysical worldviews.

229 comments:

While hypercomputers are logically possible machines, there is no known way to build one, at least last time I checked. But let's suppose for a second that it were indeed possible. Maybe we'll find some previously unknown physics that makes hypercomputers buildable. What would this have to say about the computational theory of mind? Not much. After all, if the mind is a hypercomputer, it's still a computer. It's not only there in the name but it's there in the work on the theory of computation. And indeed, if hypercomputation really is possible, then you can bet we'd have a brand new Intel Hyperchip as soon as they could figure out how to make it cheap enough (and the NSA would probably have one regardless of cost long before that). What then for the seemingly "ill-fated" CTM? I'm not sure much really different happens. CTM is adequate, but you just need really good computers to do it.

A better question for me to ask right now might be: if hypercomputation really is possible, is it really all that different from normal computation? Well, let's look at some hypercomputer ideas and see. Lots of them are oracle machines, but what about the less mysterious forms of hypercomputation that are known to exist in logical space? The simplest one is a Turing machine that can complete infinitely many steps in finite time. Well, that won't get us around the objections to the CTM; it's just a boring old syntactic engine, it just goes really really fast! The same kind of issues arise for every other proposal for hypercomputation. Same old ideas, just with a little tweak. Real-number von Neumann architecture? Boring. Real-number neural networks? Boring. Crazy black hole machines? Definitely not boring, but not relevant for CTM unless we have little black hole brains.

So what then for the CTM, if hypercomputation really is possible? Well, a few options. Either we accept that CTM is perfectly plausible, but say that by "computation" we mean any of those funny computation things that people talk about, including hypercomputation, which means that the extension to hypercomputation was ultimately a boring change to the CTM; or we say CTM is implausible because hypercomputation isn't "really" computation, but we replace it with an almost indistinguishable Hypercomputational Theory of Mind. Or we take my preferred route, and say we were (justifiably) too narrow in our general intuitions about what we should be calling "normal computation". Ultimately, though, the choice doesn't matter to the issue. Whatever turns out to be true, if we can build a box that can do the same sorts of things minds normally do, but out of the sorts of materials our laptops and iPhones are made of (or at least their future counterparts), then everyone is going to say we've got mindful computers. Because the issue has always been whether _machines_ can think, which is a deeply interesting question, not whether some particular sort of machine can think, which is "no" for lots of machines that no one wants to think anyway.

What about the deflating point about pancomputationalism that Piccinini makes, namely that "if unlimited pancomputationalism is true, then the notion of computation becomes trivially true and vacuous"? Well, this is certainly true. A very good point indeed, but perhaps irrelevant in the bigger picture. And probably analogous points could be made about hypercomputation as well. If so, then what? I think this would probably reveal a more interesting line of inquiry. Namely, everyone is more or less correct -- even both Searle and Dennett! -- but the real question is what sort of line is actually drawn between mindful and non-mindful Dennett-style computing, or between mindful Searle-style more-than-computing and mindless Searle-style computing. Because really that's the important question, not this seeming terminological argumentation. Does everything "compute"? Do minds do this very particular kind of "computation"? Well, who knows, who cares. The important point is the aspects of physical systems which produce minds, whether you want to call that computation or not, and arguments about the CTM really ought to address the content of the idea, not the fact that it's branded as "computation". If the CTM is mistaken, it's mistaken for far more interesting reasons than quibbles over "computation".

A. The Universe allows hypercomputation.
A.1. The mind is a digital computer.
A.2. The mind is an analog computer (analog system, if you prefer).
A.3. The mind is a hypercomputer.

B. The Universe is fundamentally analog and forbids hypercomputation.
B.1. The mind is a digital computer.
B.2. The mind is an analog computer.

C. The Universe is fundamentally digital (albeit with extremely high precision) and forbids hypercomputation.
C.1. The mind is a digital computer.

In case A.3, mind cannot be simulated with ordinary computation, but it can be simulated hypercomputationally. What kind of substrate is required to do such a simulation is an open question.

In cases A.2 and B.2, the mind can be simulated by an analog system of appropriate structure and complexity. It can be simulated by a hypercomputer in case A.2. Whether it can be simulated in digital computation is an open question.

In cases A.1, B.1, and C.1, CTM holds trivially.

I strongly doubt that any form of hypercomputation is physically possible.

Interesting to hear your thoughts on the matter. I read the SEP article, and I feel that you may be premature in calling out the Church-Turing thesis for being an inadequate underpinning for the CTM.

You are absolutely right on one thing. The Church-Turing thesis does not prove *beyond* a shadow of a doubt that we can simulate a human brain; however, I think it does prove it to *within* a shadow of a doubt.

There is every reason to suppose that any physically realisable machine can be simulated by a computer. It's certainly true that nobody has managed to find a physical system which demonstrably cannot.

In fact, no laws of physics have been discovered which are not simulatable to within an arbitrary precision by a computer. Any physical laws which are not computable are so incredibly subtle that they have not been noticed yet, in an era where we have found the Higgs boson, have traced the history of the universe back to the first few milliseconds and understand the atoms that make us up down to the level of quarks and beyond. It's hard to see where the hypothetical uncomputable laws of physics might lie hidden (there's a reason we had to spend billions to build the LHC to perform experiments at very high energies, after all). It's even harder to see how a messy biological system like a brain could take advantage of them.
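The commenter's claim can be illustrated in miniature: take a continuous law, discretize it, and watch the digital simulation approach the continuous answer as the step size shrinks. A hedged sketch of my own (the example law is mine, not the commenter's):

```python
# Simulating a continuous physical law to arbitrary precision.
# The law here is exponential decay, dx/dt = -x, whose exact solution
# at t = 1 is exp(-1). Euler's method discretizes time into steps;
# refining the step drives the error toward zero. (Illustrative sketch.)

import math

def euler(x0, t, steps):
    """Euler integration of dx/dt = -x from time 0 to time t."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)
errors = [abs(euler(1.0, 1.0, n) - exact) for n in (10, 100, 1000, 10_000)]
# each tenfold refinement brings the simulation closer to the continuous law
```

This is the sense of "simulatable to within an arbitrary precision" at stake: not an exact reproduction, but an error that can be driven below any chosen bound.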

Any theoretical hypercomputers that can compute functions that ordinary computers cannot can only do so by means of "magic". While ordinary computers can be described in terms of a minimal set of simple operations (e.g. increment a number, jump to the instruction X steps ahead), hypercomputers are augmented with fantastic abilities such as "look up the answer to this uncomputable function" or "run for an infinite amount of time on this infinite amount of memory".

In other words, hypercomputers are theoretical models only and are not physically realisable because they take impossible shortcuts. For this reason, it is not plausible that the brain is a hypercomputer.

The fact is that any operation that can be broken down into simple, dumb steps can indeed be performed by a computer. The Churchlands are essentially right when they say "[a computer] can display any systematic pattern of responses to the environment whatsoever" - they're only wrong if you allow "systematic" to include the kind of magical steps that cannot be performed by a physically realisable machine or a human being.

So while you're right that the CT thesis does not actually prove that brains can be simulated, it does show that the contrary position is extremely implausible, to say the least. This is why Dennett, the Churchlands, Searle and pretty much everybody else who writes about the philosophy of mind makes this entirely reasonable simplification.

So, if one wants to talk about the brain as a computer, this notion can (must?) serve as what you mean when you say "computer". Although, since most philosophers of mind are not giving formal (mathematical) arguments, CT is probably brought up just to sound smart or to cloak their arguments.

Also, I would disagree with Copeland in labeling Dennett's statement as wrong. If "computer" in the sentence means something equivalent to a Turing machine, then it is correct. If "computer" means something else to Copeland, what is a "computer"?

I don't know whether Dennett is correct or not. If we assume he is restricting his domain to digital computers, he is certainly correct. If he meant to include analog computers, he may or may not be correct. As the quote speaks to any computer with any architecture, it would seem to be inclusive of analog computers.

But "any architecture" of an analog computer could mean any physical system over which there is sufficient controllability and observability to program arbitrary problems and "read" (by some means) the output.

And that might include physical systems that have chaotic behavior. It's certainly conceivable that you could not reproduce the results of such a system with a digital computer of arbitrary precision. The observed behavior likely would not converge as the precision is improved without bound. Perhaps some particular level of precision does reproduce the results of the analog system, but how would we know if we have selected the right level?

And then again, we don't know if the physical world has infinitesimals or not; if everything (matter, energy, time, and space) is quantized, then true chaos might not exist at such levels (it could still emerge at macroscopic levels). In that case, there would be a point at which sufficient precision is reached to model the world (or any analog computer that the world might contain) accurately. That (it seems to me) would make all physical processes computable in principle. Then Dennett would be right about Turing's thesis applying to all possible architectures, but only in the presence of the additional assumption about existence of a limit of sufficient precision. Such an assumption is pure conjecture, as far as I am aware.

I'm agnostic about the CTM myself. I don't know what to think. But this particular challenge seems misplaced. Massimo does not have to show that there are computations which cannot be performed by a Turing machine, he denies outright that the mind is best characterized as performing "computations" at all.

Jeffrey, since you demand burden of proof, do you not perhaps carry any burden of proof to show that semantics could somehow, just possibly be computed from syntax? Has anybody ever been able to show how this can be done? Where is your burden of proof?

Semantics arise from the correspondence of the syntax to the world being described. Since today's computers can and do interact with the real world the way bodies do, there is no reason - even in principle - to think they cannot have real-world semantics in their reasoning the way people do.

Christian:

Someone (Massimo) who confesses to be ignorant of the theory of computation would do better to learn it, instead of insisting that his pre-scientific view of the mind is the right one. And I would not depend solely on Copeland - a philosopher, not a computer scientist - for an understanding of the way today's computer scientists understand the Church-Turing thesis.

>I would not depend solely on Copeland - a philosopher, not a computer scientist - for an understanding of the way today's computer scientists understand the Church-Turing thesis<

While we seem to be on the same side in general, I don't think Copeland actually got anything wrong in his analysis. I do disagree with the implications he makes, and don't think his article has much impact on the debate, but technically he's correct.

He's right: the CT Thesis does not show that all machines are computable. If the laws of physics are uncomputable (unlikely but not proven false), then it might indeed be possible to build a machine which is not computable.

> What would this have to say about the computational theory of mind? Not much. After all, if the mind is a hypercomputer, it's still a computer. <

Correct, but my point was different: that claiming, as some do, that Church-Turing is a statement about all possible computations is simply incorrect.

> What then for the seemingly "ill-fated" CTM? I'm not sure much really different happens. CTM is adequate, but you just need really good computers to do it. <

Again, this wasn’t my argument. I think CTM is ill fated for other reasons, which I explained in my previous post. With regards to this post, the more pertinent objection would be that there are physical processes that do not fall under the heading of “computation” at all — unless one is a pancomputationalist, that is.

> Because the issue has always been whether _machines_ can think, which is a deeply interesting question, not whether some particular sort of machine can think <

I think that is precisely wrong, though a common misconception. Unless you are a supernaturalist dualist you will agree that human beings are biological “machines,” so the issue isn’t whether machines can think; they obviously do. The issue is what are the physical conditions that allow machines to think, and whether those conditions can be captured entirely in terms of computation or whether they need something else.

> Namely, everyone is more or less correct -- even both Searle and Dennett! <

That is precisely why such a broad definition of computation would be vacuous and useless, definitely nothing to cheer about.

> CTM really ought to address the content of the idea, not the fact that it's branded as "computation". <

No, sorry, you don’t get to play that game. It is called a computational theory of mind precisely because it relies on the idea that minds are precisely analogous to digital computers. If you water that statement down so much that both Searle and Dennett are right, you don’t have a computational theory of mind. Indeed, you don’t have a computational theory of anything.

I was sort of cheating, and using this thread to talk about other CTM issues. ;P

> The issue is what are the physical conditions that allow machines to think, and whether those conditions can be captured entirely in terms of computation or they need something else.

But that's the crux of the issue, isn't it? If we agree that physical systems can have minds, and that these physical systems don't have to be wisps of jelly, but might be more "crunchy" substances that we might build laptops out of in 20 years, then the waffling over "in terms of computation" or not is silly. If my laptop starts complaining, whether it's using hypercomputation or whatever, it's still my darn computer that's complaining! No one really cares whether my computer, in the colloquial sense, /happens/ to be doing super-Turing things. It's still a machine doing things, it's still a computer. All we thus discover is that the stuff we thought was special -- super-Turing computations -- wasn't that special after all. Which of course we sort of knew way back when. It's not as if Turing, when he invented the notion of an oracle machine in the 30s, thought it was miraculously special, it was just something no one knew how to build (and still don't). Same for all the other brands of hypercomputation. So "computation" meant those things we knew we could build, but that's an accident of history and physics, not some deep conceptual issue. Some people take it to be deep (for instance, Scott Aaronson thinks there might be real physical law hinging on hypercomputation not being possible), but we shouldn't take that to indicate anything other than our limited experience during the earlier period of computer science. In 20 years (or 2000 or whatever), when we're building hypercomputer iPhones, CS departments will be teaching classes about hypercomputation, because they will have adjusted the concept of computation. Anything else, any other arguing over is it "really" computation, is the definition too broad, etc. etc. is pointless. Words are cheap, concepts are cheap.

> No, sorry, you don’t get to play that game. It is called a computational theory of mind precisely because it relies on the idea that minds are precisely analogous to digital computers. If you water that statement down so much that both Searle and Dennett are right, you don’t have a computational theory of mind. Indeed, you don’t have a computational theory of anything.

Sure, and in 20 years or whatever, when our digital computers are using the newest and greatest in quantum hypercomputation, what then? Do I still not get to play that game? Or let's suppose that we discover that there is no hypercomputation in the laws of physics, and the best you can get is boring old Turing-complete computation. What then? Whence mentation? Because at that point we're reduced to physical processes, all of which are simulable, at least, so where in the normal, non-hypercomputing laws of physics do we draw the line? And perhaps just as importantly, at what point does the drawing of the line become merely a contest of lexicology? One philosopher's non-computational neural nets are another's paradigmatic computational system, and so on. At some point the arguments over what is and isn't computation become an absurd game of nitpicking. When it comes down to it, there is simply no big cleaving of the world into computing and non-computing things, no matter whose definition you use; there are only ever fuzzy boundaries that no one knows how to deal with until you get well onto one side, and that's enough to suggest that the debate over the word is silly.

>the more pertinent objection would be that there are physical processes that do not fall under the heading of “computation” at all<

I'm not sure this is true, Massimo. Looking at the SEP, the linked papers seem to be speculative. I don't have easy access to academic papers, but if there's a specific paper you're referring to I'll try to get a hold of it.

Firstly, let's not get back to the "simulating X is not X" argument, as that was settled by *me* on the last thread.

(Recap: "Simulating X is not X" does not hold if X is a computation, which is what is at stake. Therefore this argument does nothing either way for the debate around the CTM).

By attacking the relevance of the CT Thesis, you are directly attacking the notion that we can simulate a brain, i.e. that we can process information in the same way that a brain can on a digital computer. So we absolutely do need to discuss the implications of Copeland's arguments for brain simulation.

"Not even close, according to my reading of Copeland."

Your reading of Copeland is correct, in that this is certainly the impression he wanted you to take away from his article. However the article is very misleading. He doesn't spell out what unlikely propositions would have to be true for Thesis M to be false.

In fact, for Copeland's attacks on the relevance of the CT to be apropos, all of the following propositions would have to be true:

1) Hypercomputers are physically realisable
2) The brain is a hypercomputer
3) There are laws of physics, yet undiscovered, which are uncomputable
4) There are brain structures which can perform computations which no human being can perform (even given arbitrary amounts of time and paper).

I've made these points, and you've rejected them all as irrelevant. They're really not. Unless you accept these propositions, then it is actually Copeland's article which is irrelevant to the CTM.

I am afraid to say that I have lost track of why this discussion is so important. Ultimately, the underlying questions cannot be decided unless it can be shown decisively that there is something non-computational about the human mind, but that is an empirical matter, not one of thought experiment or philosophical concept.

One should first provide a clear, agreed-on and testable (!) definition of a property, e.g. "consciousness", "intentionality" or "understanding", and then one should test whether any given thinking machine exhibits any given property.

And that includes us. It may well turn out that for some testable definition of "consciousness" humans don't have that property either.

Throwing around these terms without making them testable or, worse, without even agreeing on what they mean, will never lead anywhere.

Massimo, you wrote " If you water that statement down so much that both Searle and Dennett are right, you don’t have a computational theory of mind. Indeed, you don’t have a computational theory of anything."

As noted before, Searle does not object to connectionist models of the mind. Dennett and Searle can both be right in the sense that a computational model in the form of neural networks produces the mind. Are you saying that this would not be a computational model? That would be an unusually narrow definition of the term "computation". According to your definition, are numerical calculations (for example, solving differential equations) computational? If yes, connectionist models are just numerical calculations and therefore are computational. If no, over half of the programs running on my laptop right now are not computational (for example, the Fourier transform being calculated by the audio processing software). If they are not computational, what are they?
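That line of thought is easy to make concrete: a connectionist unit is nothing but arithmetic. Here is a toy one-neuron sketch (an editorial illustration, not anything from the thread; the weights and inputs are made up):

```python
import math

# A single "connectionist" unit: a weighted sum followed by a logistic
# squashing function -- i.e., plain numerical calculation.
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

out = neuron([0.5, -1.0], [2.0, 0.3], 0.1)
# the unit's output is just a number strictly between 0 and 1
```

If this counts as computation, then by the same argument so do connectionist models built from many such units.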

A related question: are analog computers (i.e. circuits built out of analog devices to solve differential equations or other practical problems) computational, according to your definition? If a computer (such as a controller for an instrument) does convolution with an analog filter rather than using software, do we suddenly not call it a computer anymore? I'm not saying that everything has to be understood as computation, but I am saying that many analog processes have been understood as computation without any problem.

A possible objection is that the computational theory of mind, as you stated it, precisely specifies "digital". I don't see that as a problem at all. Let's say the strictly digital computational theory of mind turns out to be false. Let's say that some analog computation is required to produce the mind. I don't think that will upset anyone. Certainly not Dennett. He'll say "OK, remove the word digital from my books, and let's get this machine running".

>Let's say the strictly digital computational theory of mind turns out to be false. Let's say that some analog computation is required to produce the mind.<

But any analog computation can be carried out by a digital computer to an arbitrary precision.

In fact, potentially more precisely by a digital computer, because it is impossible to give an analogue device a precise input or read its output precisely. Adding 2+2 on an analogue computer will give you something like 4.0000000023309...

The discussion makes me think of the free will debate. We have the same rigid determination to embrace a certain viewpoint, regardless of the difficulties it presents. I suspect that is a symptom of a computed mind which on the face of it seems to be neither mindful nor free. Evidently computed minds really do exist :)

So many errors. You should check your sources and your understanding of the topic.

1. «“lambda definibility” is (Turing’s approach) and Church’s adopted definition of computability» - it's the other way around.
2. Dan Dennett's statement «Universal Turing machine can compute any function that any computer can compute» is in fact true, simply because the intuitive notion of computable functions was shown to coincide with the category of Turing-computable functions.
3. Paul and Patricia Churchland's statement "can compute any rule-governed input-output function" is true by the same argument: any rule-governed input-output function is an algorithmic function and therefore can be presented as a program on a Turing machine (and be emulated on a Universal Turing Machine).
4. Copeland's hypercomputers are physically impossible; at least, if you are a Bayesian, shift your priors toward that conclusion - check out Scott Aaronson on the topic. (Of course "no Bayesian model is 'good enough'", but who cares, while humanity has nothing better yet.)
5. You are trying to confuse your readers - superposition is a result of quantum entanglement.
6. «universe is indeed a computer, but a quantum, not a classical one» - once again, a Quantum Turing Machine can be emulated on a Universal Turing Machine.

The problem with Turing emulation/reduction is one of effectiveness, not of possibility. You should check "extensions to/for the Turing machine": they are all equivalent to the Turing machine and can be emulated on a Universal Turing Machine, including the probabilistic and quantum ones, and non-deterministic ones in general. And again, see Scott Aaronson on the limits of the effectively computable.

Agreed, «the specter of mathematical Platonism» should indeed be killed, but properly so: if you cut it with the wrong tool it will rise again (or was that the hydra's heads?). As for me, the notion that «the universe itself is made of computation» is a simple equivalence, so it doesn't lead to anything except a wider selection of tools for the job, as in Jürgen Schmidhuber's "Computable Universes".

It looks like philosophers should not step into a field already cultivated by scientists. IMO, you should delete or rewrite this article; otherwise it looks a lot like a theologian's plea for a scientific-sounding explanation.

While I agree with the spirit of your post (with the exception of your dismissal of mathematical Platonism), I think Copeland is arguably right to disagree with this statement, but only in the most nitpicky way imaginable.

Consider the function that takes a program as an input and and follows the rule that it returns a 1 if the program halts in finite time and a 0 if it does not halt. As you're probably aware, this is the Halting Problem and has been proven to be uncomputable.
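The impossibility proof itself is short enough to sketch in code. This is a toy editorial rendition of the standard diagonal argument, not something from the thread: given any claimed halting decider, we can construct a program that does the opposite of whatever the decider predicts about it.

```python
# Sketch: why no total halts(prog) decider can exist.
def make_diagonal(halts):
    """Given any claimed halting decider, return a program it misjudges."""
    def diagonal():
        if halts(diagonal):   # decider says "diagonal halts"...
            while True:       # ...so diagonal loops forever
                pass
        # decider says "diagonal loops", so diagonal halts immediately
    return diagonal

# A decider that always answers "loops" is refuted: its diagonal halts.
d = make_diagonal(lambda prog: False)
d()  # returns immediately, contradicting the decider's verdict
```

Whatever `halts` you plug in, its diagonal refutes it, which is why no finite rule book can settle the question once and for all.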

Now, arguably, this function is a "rule-governed input-output function" because its behaviour is perfectly defined by the paragraph above. If you accept this, then the Churchlands are wrong.

However, what they intended (and what a charitable reader would infer) by "rule-based" is a function which can be broken down to trivial steps, and determining whether a program can halt or not is not such a trivial step.

As such, I agree with you that it is a little unfair to call them out on it.

Disagreeable Me,

«this function is a "rule-governed input-output function"» only if one has a halting-problem oracle (can she force the FSM?); otherwise she needs a rule-based solution for that problem. Just as you wrote, «determining whether a program can halt or not is not such a trivial step», and since it «has been proven to be uncomputable» it has no rule-based solution (other than alcohol, of course).

Christian Giliberto,

«Entanglement is a consequence of superposition». Thanks, my stupid error; it should say "cause" instead of "result".

@arumad

I think you have a very specific interpretation of "rule-based solution".

A rule is just a principle that determines behaviour. It doesn't have to be achievable by simple atomic steps.

I could have a rule that says that if the Goldbach conjecture is true, I must hop on one leg for a minute every day. I don't hop on one leg every day, but I don't know if I'm breaking this rule because I don't know if the Goldbach conjecture is true or not.

So, by this broader definition of rule-based, the Churchlands are not technically correct.

Well, I knew this was going to generate a bit of discussion... I apologize upfront, but while I read all comments, I do not have time to engage with everyone, nor to do it in sufficient depth for any given comment. I will instead pick some of the points that I think may be of general interest and/or move the discussion forward and briefly comment on them. (Disagreeable, careful, you are already at 6 posted comments and I haven’t had my coffee yet... ;-)

Darryl,

> If we agree that physical systems can have minds, and that these physical systems don't have to be wisps of jelly, but might be more "crunchy" substances that we might build laptops out of in 20 years, then the waffling over "in terms of computation" or not is silly <

I think you are missing my point: as explained in previous posts, I am inclined toward biological naturalism, which means that I don’t think that thinking is just a matter of computation. It also requires the right stuff. And whether silicon is right remains to be seen, though there are very good reasons to think not. Hypercomputation is a distraction, and I’m sorry I mentioned it.

> Words are cheap, concepts are cheap. <

I never understood this sort of attitude. We think, communicate and learn through words and concepts. Without them we literally wouldn’t be having this (or any other) conversation.

> Because at that point we're reduced to physical processes, all of which are simulable, at least <

As I pointed out before, being simulable has nothing whatsoever to do with this discussion. Once more, to use Alex’s phrasing, you can simulate a hurricane, but you ain’t getting wet.

> there is simply no big cleaving of the world into computing and non-computing things no matter who's definition you use <

If that is the case, please stop calling it a computational theory of mind and implying that it is a big deal. Because you’ll also automatically have a computational theory of rocks, etc.

Stephen,

> It seems not everyone agrees with Copeland's interpretation of the C-T Thesis. <

Indeed, thanks for the links. I don’t know enough about Turing’s writing to adjudicate that dispute. Hodges is incorrect about at least one thing though: Copeland doesn’t say in the SEP that Turing’s thesis applies only to human computers, he says that that is the historical background in which one needs to understand what Turing (and Church) were up to.

Alex,

> Throwing around these terms without making them testable or, worse, without even agreeing on what they mean, will never lead anywhere. <

You have one good point and one not so good, I think. I keep being baffled by the popular contention that consciousness can’t be defined. It can, and it has, by many people. Try this for size: consciousness is the ability to have first person experience; self-consciousness is the ability to reflect on first person experience. So, many animals have the former, human beings and possibly a few other species are likely to have the second. Not that difficult.

The real problem, as you say, lies with the testability issue. How do we know if another animal species (let alone a computer) is self-conscious? I don’t have an answer to that question, but I know the answer isn’t the Turing test. For the same reasons that behaviorism led to a dead end in psychological research: whether or not one can “see” mental states, they exist, and it is downright silly to deny them (as many behaviorists did).

Then the question becomes, how do we test for free will? We are dealing with computational machines, so we can compute the outcome of choices put to the machine. If the actual outcome differs significantly from the computed outcome, we know the computational machine is exercising free will and is therefore self-conscious.

> Are you saying that this would not be a computational model? That would be an unusually narrow definition of the term "computation". <

Yeah, I’m not sure what to think in terms of connectionism / computation. But even granted that connectionist models are computational, Searle’s (and my) objection stands: the formal characteristics of the model aren’t going to get you consciousness, if biological naturalism is correct. As I pointed out several times, by the way, I don’t *know* that that is the case, but as a biologist I have good reasons to suspect it to be the case (Alex gave a number of explicit good reasons for this in the discussion of the other post). Indeed, my point is even more modest: computationalists simply ignore — without evidence — even the possibility that substrate matters, and as you can see, it is hard to make them acknowledge their biologically unfounded leap of logic.

Richard,

> there would be a point at which sufficient precision is reached to model the world (or any analog computer that the world might contain) accurately. <

Again, for me all of this is beside the point. I don’t think the issue lies with digital vs analog computation. Nor do I deny that there are computational *aspects* to human thinking. I am simply arguing that to ignore the issue of biological substrate leads straight into a form of dualism, which I thought we had abandoned after Descartes.

Christian,

> Massimo does not have to show that there are computations which cannot be performed by a Turing machine, he denies outright that the mind is best characterized as performing "computations" at all. <

Not exactly, see above: I deny that computation is all the mind does, or that computation is sufficient to capture and reproduce human consciousness.

I wasn't really contesting your point, but rather I was considering the conditions under which Copeland's point about Dennett might be incorrect. The bottom line is that Dennett was correct insofar as digital computation is concerned. I believe that Turing *may* be extendable to analog computation if certain additional assumptions about the analog world are met (assumptions that state, in effect, that the analog world has finite precision).

Even if Turing doesn't apply to analog systems, though, it does not automatically follow that CTM is wrong. But I am nearly certain that if CTM is wrong, then the wrongness lies in the analog nature of mind. In other words, if CTM fails, it is a substrate issue, so I was provisionally agreeing with you.

Massimo, forgive me for the misunderstanding. I should have said "that the mind is FULLY characterized by computation" rather than "is BEST characterized by computation."

Semantics are important, but would you agree with my understanding of Jeffrey's objection if I phrased it as "Massimo doesn't claim that the mind performs some new sort of computation that can't be performed by a Turing machine, but that the mind is not fully characterized by computation?"

Not according to Copeland, and if you don’t mind, for now I’ll stick to a peer-reviewed paper as opposed to an anonymous post. Same goes for your #2. And #3.

> Copeland's hypercomputers are physically impossible <

And just as irrelevant to my main point. I brought it in only to show that when people make claims that Turing machines can compute everything this is simply false.

> you are trying to confuse your readers -superposition is a result of quantum entanglement. <

That was a direct quote, take it up with Copeland. And I am most certainly *not* trying to confuse my readers, I resent the implication of unethical behavior.

> agree, «the specter of mathematical Platonism» should be killed indeed. but properly so, if you cut it with wrong tool it would rise again <

Actually, disagree, since I’m quite fond of mathematical Platonism.

> IMO, you should delete or rewrite this article <

Let me think about it... No, I don’t think so.

Disagreeable,

> [regarding non-computable physical processes] I'm not sure this is true, Massimo. Looking at the SEP, the linked papers seem to be speculative. <

Doesn’t the answer to that question depend on whether one is a pancomputationalist? If you are, then everything is computable (but the word pretty much loses meaning); if you are not, then you agree that there are physical processes that are not computable, and it becomes an empirical question to figure out which ones.

> "Simulating X is not X" does not hold if X is a computation, which is what is at stake. Therefore this argument does nothing either way for the debate around the CTM <

Absolutely the other way around. The burden is on you as a computationalist to show us that simulating the brain is the same as making a brain. If not, you are simply begging the question in favor of the CTM.

> you are directly attacking the notion that we can simulate a brain <

No, I’m attacking the idea that simulating the brain is the same as making one.

> for Copeland's attacks on the relevance of the CT to be apropos, all of the following propositions would have to be true <

I don’t think so at all. Nowhere did I see Copeland claiming that the brain is a hypercomputer. You are reading too much into it.

I disagree. If computable means "can be simulated by a Turing machine" then the word has a precise definition. What separates a brain from a rock is only that the particular computation it's carrying out is a lot more "interesting", and that it performs a required function purely by virtue of its computation, processing input signals and delivering output signals.

> The burden is on you as a computationalist to show us that simulating the brain is the same as making a brain.<

I agree that I have a burden of proof. However that doesn't mean the anti-CTM crowd doesn't. I'd suggest that the default position should be agnosticism, as opposed to anti-computationalism.

This is quite unlike theism/atheism, because we have an observed phenomenon (mind) and a plausible hypothesis to explain it (computation).

>If not, you are simply begging the question in favor of the CTM.<

If I had claimed to show that simulating the mind is in fact the mind with that short argument, then I would be begging the question indeed. I'm not, though. There are other arguments which support the idea of the mind being a computation.

But when you insist that "simulating X is not X", you are begging the question, because it is not true *IFF* the mind is a computation.

>> you are directly attacking the notion that we can simulate a brain <<
> No, I’m attacking the idea that simulating the brain is the same as making one. <

Those are two different arguments. Don't confuse them. Make no mistake, Copeland is arguing that it may be fundamentally impossible to even simulate a brain. Searle is the one who argues that simulating a brain is not the same as making one. If you want to stay on topic with regard to Copeland's criticism of applications of the CT thesis to philosophy of mind, you really should confine the debate to whether brains are simulable.

>I don’t think so at all. Nowhere did I see Copeland claiming that the brain is a hypercomputer. You are reading too much into it.<

He may not spell it out, but it follows from his arguments.

Copeland's Thesis M:

>Whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable.<

Copeland argues that this thesis may be false. He does so by bringing up the notion of hypercomputers as examples of machines which might calculate Turing-machine-uncomputable functions.

In fact, by definition, any machine which calculates a function which is not Turing-machine-computable is a hypercomputer.

In other words, if a human brain is a machine that can process information in some way that a Turing machine cannot, then it must be a hypercomputer by the very definition of the term.

Could you give a more concrete example of how the Church-Turing Thesis is "used as a trump card in favor of computational theories of mind?" It's not clear to me, for example, that the quotes you gave by Dennett or Churchland are supposed to be part of an argument for the CTM - rather, it looks like they're using the Church-Turing thesis to give a precise explication of what they mean by "computation."

So, putting aside issues of whether philosophers of mind have misinterpreted the CT-Thesis, I'm simply looking for a clearer example of a case where it's used as part of an *argument* for the CTM (as you claim it is).

The testability is of course the main point I tried to make. I can come up with possible definitions for those three terms (least sure about intentionality, admittedly), and I can self-flatteringly conclude that *I* have those properties. But how do I know if a thinking machine or even another human being has them?

The worst one is perhaps "true understanding", which brings us to the Chinese Room again. What criteria do I have to figure out whether a conversation partner understands something, except probing them with questions and seeing if the answers are acceptable? Yes, it is a behaviorist test, but so help me, I cannot imagine any other.

> Could you give a more concrete example of how the Church-Turing Thesis is "used as a trump card in favor of computational theories of mind? <

I find Copeland's examples quite representative, not just those from Dennett and the Churchlands, but several others he quotes verbatim. I don't think these authors are trying to be precise about what they mean by computation. Indeed, I think their approach works in large part because they are not precise. I do wonder how many of them would fall under one type or another of pancomputationalism, not all of which is actually friendly to CTM.

> arumad is correct that "lambda definability" is Church's notion, not Turing's. Copeland says this himself in the article <

I stand corrected, thanks.

Alex,

I definitely share your worry about testability, but again I warn against the behaviorist fallacy of "if we can't test it then it doesn't exist."

> The worst one is perhaps "true understanding", which brings us to the Chinese Room again. What criteria do I have to figure out whether a conversation partner understands something except probing them with questions and seeing if the answers are acceptable? <

Oh, I can clarify what I mean by that: I was treating understanding here as synonymous with self-consciousness, i.e. I am looking for a machine that can convince me that it doesn't just provide the correct output, but is aware of doing so.

> Meta: Is it ok if I post responses to others freely, but try to limit my responses to you to manageable levels? <

Sorry, I didn't explain myself clearly. You are more than welcome to post any time you like, it's just that I don't want you to feel slighted if I address only a fraction of the points you make.

> If computable means "can be simulated by a Turing machine" then the word has a precise definition <

Maybe, but it's still useless if it turns out that everything is computing. Again, no difference between human brains and rocks, so you don't have a theory of brain functionality.

> What separates a brain from a rock is only that the particular computation it's carrying out is a lot more "interesting" <

I addressed this while talking about pancomputationalism. It sounds like you are accepting a particular variety of strong pancomputationalism, which happens to be by far the weakest one according to the readings I've done.

> I'd suggest that the default position should be agnosticism, as opposed to anti-computationalism. <

But I think there are positive reasons to reject computationalism in favor of biological naturalism, hence my lack of agnosticism.

> when you insist that "simulating X is not X", you are begging the question, because it is not true *IFF* the mind is a computation. <

Sorry, but I honestly think you are misunderstanding what begging the question is, in this context.

> If you want to stay on topic with regard to Copeland's criticism of applications of the CT thesis to philosophy of mind, you really should confine the debate to whether brains are simulable. <

I assure you that I am not at all confusing Copeland and Searle. I brought up Copeland for specific reasons, but it should be clear that my overall view is more similar to Searle's.

> Copeland argues that this thesis may be false. He does so by bringing up the notion of hypercomputers as examples of machines which might calculate Turing-machine-uncomputable functions. <

Narrow reading. He also points out that there have been suggestions that not every physical process is computable, which to me is a truism unless one is a pancomputationalist, a position that I find either silly or useless.

(I won't feel slighted, just a bit disappointed because you can't give me the bandwidth I'm looking for. I'm genuinely thankful for the attention you have been able to give my arguments so far, but don't want to annoy you too much!)

If you want to know my position with respect to pancomputationalism, I believe in the Mathematical Universe Hypothesis as proposed by Max Tegmark, which holds that all possible universes exist and that universes are fundamentally mathematical structures. This is similar to ontic pancomputationalism, except I wouldn't emphasise the computational aspect of it so much as the mathematical (there's a subtle difference).

>Again, no difference between human brains and rocks<

Massive difference. Brains implement a very intricate and complex algorithm for processing input signals and producing evolutionarily adaptive output signals. Rocks don't compute anything much in themselves (except, arguably, how to be a rock).

The CTM does not hold that all computations are conscious, only that certain sophisticated computations are. Please indicate to me that you understand this.

>Sorry, but I honestly think you are misunderstanding what begging the question is, in this context.<

Sorry to harp on about this, but this is genuinely bugging me. Let me explain in detail why I think you are begging the question.

Firstly, please understand that I don't pretend to have made much of a positive case for the CTM in these posts. I'm just trying to poke holes in your anti-CTM arguments. So, let's assume for now that I'm arguing for agnosticism rather than the CTM, and forget about my burden of proof.

We need a definition of begging the question. Hopefully the one from Wikipedia will suffice:

"Begging the question (Latin petitio principii, "assuming the initial point") is a type of informal fallacy in which an implicit premise would directly entail the conclusion; in other words, basing a conclusion on an assumption that is as much in need of proof or demonstration as the conclusion itself."

I see your argument as follows. (I'm going to define "physical" phenomena as non-abstract, having physical effects in the real world)

Premises:
1) Simulating a physical phenomenon does not produce that physical phenomenon
2) [implicit premise]: Consciousness is a physical phenomenon
Argument:
3) (From 1 and 2): Simulating consciousness does not produce consciousness
Conclusion:
4) (From 3): There is more to consciousness than computation.

The problem here is that you're concluding (4) ultimately as a result of (2), when the two are pretty much equivalent. You have an implicit premise which directly entails the conclusion. I reject (2), and claim that it needs to be established first.

This is why I think you're begging the question. Please set me straight if you think I'm wrong here, but I really don't think I am.

(... continued)

>I assure you that I am not at all confusing Copeland and Searle. I brought up Copeland for specific reasons, but it should be clear that my overall view is more similar to Searle's.<

Yes, your views are clear. However I remain confused as to why you think Copeland is relevant.

The whole point of Copeland's article is to cast doubt on whether the information processing achieved by the brain can be realised by a Turing machine. The article has nothing to say on whether a faithful simulation would be conscious. Instead, he is precisely targeting the assumption that a brain can be simulated. If you don't get this, then I'm afraid you've misunderstood the point of the article. If you're really not interested in the proposition that brains are simulable, then Copeland is irrelevant and we should go back to discussing Searle.

>He also points out that there have been suggestions that not every physical process is computable, which to me is a truism unless one is a pancomputationalist, a position that I find either silly or useless.<

You appear to have a radically different interpretation of "computable" than I have. Please spell it out for me.

You have defined pancomputationalism as the viewpoint that every physical system performs computations. This is not precisely equivalent to what I mean when I say that every physical system is computable.

When I say every physical system is computable, I am not saying that physical systems are "performing" computations. I am claiming that every physical system is simulable by a Turing machine.

This is neither obviously true, silly, nor useless. It's an empirical question: it's just possible that there are some exotic undiscovered laws of physics which are not simulable (this is what Copeland is proposing in his article). However, this is an unlikely proposition for various reasons, which I would be happy to get into if you wish.

My thoughts on the discussion about whether simulating a mind is the same as having a mind.

If you have an independent reason to believe the computational model of a mind, then I think you can reject the argument that the simulation is different from the mind on the grounds that both are computations carrying out the same process.

But I do think it would be circular to try to use the argument if you did not have other reasons to think the mind was computational.

But if you do NOT think there are other reasons to believe the computational theory, as I understand Massimo does not, then the argument is circular.

Since you two are starting from different positions, I suspect you won't be able to agree on this point.

As much as I see where you are coming from, to replace discussion of an empirically difficult concept (consciousness) with an empirically near impossible and conceptually muddled one (free will) isn't much of an advance...

Christian,

> Massimo doesn't claim that the mind performs some new sort of computation that can't be performed by a Turing machine, but that the mind is not fully characterized by computation <

Copeland's arguments are a bit silly. For instance, he defines an accumulator machine as one that can compute upon the reals (i.e., not just the computable reals) via a real-valued accumulator to show the falsity of Church/Turing "Thesis M". This is just hand-waving; the accumulator primitive is completely undefined. He could have just said: "I define my accumulator machine to be math, therefore it can compute on the reals." The notion of computability via finite realizable primitives is a key part of Church/Turing's work and forms the crux of the issue. Copeland just dodges, and leaves it as "an open question" whether the accumulator can be realized. Unless a gross violation of the uncertainty principle is discovered, I would consider that question "closed, barring extraordinary evidence".
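To make the "computable reals" distinction concrete: in the Church/Turing setting, a real number is computable when a finite program can approximate it to any requested precision. Here is a toy sketch of that idea (my own illustration, not Copeland's formalism): a computable real is represented as a function taking a precision n and returning a rational within 2**-n of the true value. An accumulator holding an arbitrary real has no such finite description, which is the realizability worry.

```python
from fractions import Fraction

def sqrt2(n):
    """Approximate sqrt(2) to within 2**-n by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo   # lo <= sqrt(2) <= hi, so lo is within 2**-n of sqrt(2)

def add(x, y):
    """The sum of two computable reals is computable: ask each input for one
    extra bit of precision so the combined error stays within 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

two_sqrt2 = add(sqrt2, sqrt2)
approx = two_sqrt2(10)   # a rational within 2**-10 of 2*sqrt(2)
```

Arithmetic on such finitely-described reals is computable, but equality between two of them is not even decidable, which is roughly why "just posit a real-valued register" sidesteps rather than answers the question.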

He also seems inordinately focused on Turing's 1936 paper. Turing's eponymous Test certainly shows that he thought of "Thesis M" as true.

Here's some of what Dennett had to say in his recent article, "'A Perfect and Beautiful Machine': What Darwin's Theory of Evolution Reveals About Artificial Intelligence":

"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension. This inverted the deeply plausible assumption that comprehension is in fact the source of all advanced competence. Why, after all, do we insist on sending our children to school, and why do we frown on the old-fashioned methods of rote learning? We expect our children's growing competence to flow from their growing comprehension. The motto of modern education might be: "Comprehend in order to be competent." For us members of H. sapiens, this is almost always the right way to look at, and strive for, competence. I suspect that this much-loved principle of education is one of the primary motivators of skepticism about both evolution and its cousin in Turing's world, artificial intelligence. The very idea that mindless mechanicity can generate human-level -- or divine level! -- competence strikes many as philistine, repugnant, an insult to our minds, and the mind of God."

So Dennett does seem to think the mind can be competent without comprehension. The big question is, what does Massimo see in the mind that "is not fully characterized by computation" yet is nevertheless functionally characterized by Darwin and Dennett's theories of mindless evolution?

Massimo: "The notion that all physical systems carry out computations is appropriately referred to as pancomputationalism, and prima facie I find it just as interesting as its non-physicalist counterpart, panpsychism — i.e., not at all. Still, let’s take a closer look."

Pancomputationalism and panpsychism are not mutually-exclusive perspectives...

David Chalmers of the Australian National University summarised Wheeler's views as follows:

"Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this 'it from bit' doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world."

I predict, a la Dennett, without comprehension, that Massimo is handling the question of where, how, or especially why intelligence plays a role in the operational functions of the universe by substituting some mechanically reactive version of computation for an "explanation": not only of the reactive processes of the universe, but of all biological systems and the similarly independently motivated or self-motivated systems that functionally exist therein.

@Wikipedia: "Digital physics suggests that there exists, at least in principle, a program for a universal computer which computes the evolution of the universe. The computer could be, for example, a huge cellular automaton (Zuse 1967[9]), or a universal Turing machine, as suggested by Schmidhuber (1997), who pointed out that there exists a very short program that can compute all possible computable universes in an asymptotically optimal way."
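The "very short program" Schmidhuber alludes to works by dovetailing: interleaving the execution of an unbounded enumeration of programs so that each one eventually receives arbitrarily many steps, without any non-halting program blocking the rest. A toy sketch of the interleaving idea (the "programs" here are hypothetical stand-ins, not a real universal machine):

```python
def make_program(n):
    """Stand-in for the n-th program: a generator yielding its successive states."""
    def run():
        state = 0
        while True:              # programs may run forever; dovetailing copes with that
            state += n
            yield (n, state)
    return run()

def dovetail(phases):
    """Phase k gives one more step to each of the first k programs, so every
    program eventually receives unboundedly many steps."""
    programs, trace = {}, []
    for k in range(1, phases + 1):
        for i in range(1, k + 1):
            if i not in programs:
                programs[i] = make_program(i)
            trace.append(next(programs[i]))
    return trace

trace = dovetail(4)
# trace begins (1, 1), (1, 2), (2, 2), ...: program 1 joins in phase 1,
# program 2 in phase 2, and so on.
```

The same scheduling trick is how a single program can "compute all computable universes" in the asymptotic sense the quote mentions: no universe-program is ever starved.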

So Alastair, do you see such a producer of this predetermined range of systems presenting an optional example of an indeterministic one? How it would determine a computable indeterminacy is my question. As well as why. Or is information "truly fundamental" for no reason?

For me, the interesting question is why all scientific research seems to be based on a computational approach. Here I am using computational to cover any approach implemented with machines: digital/symbolic and connectionist (possibly on analog computers) seem to be the major ones.

There does not seem to be any scientific research on Chalmers's non-physical but natural mental properties. Nor could I find any reference to Searle's biological naturalism outside of the philosophical literature.

I think one reason is that the computational approaches have provided a very successful paradigm for advancing scientific research.

A second reason would be that the philosophical claims of Chalmers and Searle, for example, are only existence proofs. They claim to have shown that some alternative to a computational model must be true, but cannot provide a constructive, empirically testable example of what that alternative entails.

Of course, it is not the job of philosophers to do science. But scientific research normally starts from some paradigm. So unless and until irresolvable anomalies are found with computational approaches, which would then give some empirical grounding to philosophical ideas, it seems that the computational approach is the best way to do science.

But even some way of knowing that the mind is computational would not be enough. For example, suppose that some alien scientist provided the full, detailed Turing Machine state table of someone's brain (somehow based on neuron states and interconnections, say), or the full connectionist network diagram with proper update algorithms. Would that deepen our understanding of how the brain works? Not by much, I would say. We need some mid-level explanation of how the mind relates to this detail. And that comes back to having a useful paradigm to build and test explanations.
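To make concrete what a "full state table" amounts to: a Turing machine is just a map (state, symbol) -> (write, move, next state). A minimal interpreter (the unary incrementer table is my own toy example, not anything from the thread) shows how such a table fixes behaviour completely while explaining nothing at a higher level:

```python
def run_tm(table, tape, state="start", halt="halt", pos=0, max_steps=1000):
    """Run a Turing machine given as a state table; return the final tape contents."""
    tape = dict(enumerate(tape))                  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(pos, "_")                  # "_" is the blank symbol
        write, move, state = table[(state, sym)]  # look up the transition
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy state table: scan right over the 1s, append a 1 at the first blank, halt.
inc = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
result = run_tm(inc, "111")   # -> "1111"
```

The table is a complete low-level description, yet reading it tells you nothing "mid-level" about what the machine is for, which is the point above about needing explanations between the state table and the mind.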

> If you want to know my position with respect to pancomputationalism, I believe in the Mathematical Universe Hypothesis as proposed by Max Tegmark, which holds that all possible universes exist and that universes are fundamentally mathematical structures <

Well, that's an interesting (highly speculative, to say the least) position. No wonder you support the CTM. It's a small subset of that sort of cosmic view.

> Rocks don't compute anything much in themselves <

I know, but you are still saying that there is something fundamentally similar between being a rock and being a brain. To me that's close to absurd, and at any rate, entirely uninformative.

> The problem here is that you're concluding (4) ultimately as a result of (2), when the two are pretty much equivalent. You have an implicit premise which directly entails the conclusion. <

That is the definition of deductive logic, but it's not a case of begging the question. In order to show that, you have to show that one of my premises is identical with the conclusion, not just that it logically entails it. Otherwise you are throwing out all of deductive logic. In your case you really want to reject premise (2), but you can do so only based on empirical evidence (which, at the moment, I think actually favors (2)), not a priori.

> The whole point of Copeland's article is to cast doubt on whether the information processing achieved by the brain can be realised by a Turing machine <

I think you are misreading Copeland. He simply wants to point out that certain things that many philosophers of mind think are demonstrated or settled by Church-Turing are actually not.

> he is precisely targeting the assumption that a brain can be simulated <

That is part of what it's doing. I'm actually agnostic on that particular point. But remember, that's not much of a concession on my part, since I think there is a fundamental distinction between X and a simulation of X.

> You appear to have a radically different interpretation of "computable" than I have. Please spell it out for me. <

That was why I quoted extensively from both SEP articles. There is no universally acknowledged definition of computation, which is why you can have a range of positions that go all the way to strong pancomputationalism.

> When I say every physical system is computable, I am not saying that physical systems are "performing" computations. I am claiming that every physical system is simulable by a Turing machine. <

Which, as interesting as it is in itself, I find only marginally relevant to understanding consciousness.

>Well, that's an interesting (highly speculative, to say the least) position. No wonder you support the CTM. It's a small subset of that sort of cosmic view.<

Agreed. However, to me the MUH is not highly speculative but evidently true (and I have arguments to back this up). I think it follows quite naturally from the conjunction of mathematical platonism, the CTM and naturalism, all of which I regard as obviously correct. The MUH is such an important and powerful idea, though very little known, that I really want to help get the idea out there. If I'm honest, this is a large part of my motivation for caring so much about the CTM.

>I know, but you are still saying that there is something fundamentally similar between being a rock and being a brain.<

Of course there is. Even you think there is something fundamentally similar, in that they are both physical objects made of atoms. The only difference physically is that one is hugely more complex than the other. This directly parallels the difference I see in them - one is doing an uninteresting, disorganised computation of rockishness, the other is doing a highly complex computation to determine behaviour.

>To me that's close to absurd, and at any rate, entirely uninformative.<

The CTM is uninformative in so far as it doesn't in itself explain precisely why a brain is conscious and a rock isn't. It doesn't aim to. The assumption is that it's something about the organisation and complexity of the algorithm. That by itself doesn't help us understand the brain much better, but it provides the fundamental basis on which understanding can be built, and a denial of more spiritual or mysterious accounts of consciousness.

>That is the definition of deductive logic, but it's not a case of begging the question.<

According to the Wikipedia definition, an implicit premise which *directly* entails a conclusion is begging the question. That's what you have done. Taking my definition of physical into account, you can jump straight from (2) to (4) without any of the intervening stuff about simulation.

2) [implicit premise]: Consciousness is a physical phenomenon
4) (From 2): There is more to consciousness than computation.

>but you can do so only based on empirical evidence (which, at the moment, I think actually favors (2)), not a priori.<

I'm not trying to make a positive case for CTM here, I'm just trying to explain why I don't find your anti-CTM arguments persuasive. As such, if you don't back up a premise, I'm free to reject it a priori.

What empirical evidence do you have for (2)?

>I think you are misreading Copland. He simply wants to point out that certain things that many philosophers of mind think are demonstrated or settled by Church-Turing are actually not.<

Regarding Copeland, I still maintain that he's only relevant if you want to cast doubt on brain simulation. The statements by other philosophers of mind which Copeland takes issue with are incorrect if and only if hypercomputers are physically realisable. He's right to point out the distinction, but it doesn't have much significance for this argument because hypercomputers are almost certainly not physically realisable.

>Which, as interesting as it is in itself, I find only marginally relevant to understanding consciousness.<

Having physical processes be simulable by Turing machines is crucial to the CTM debate. If this is not true, then it is possible that the brain is a hypercomputer and so impossible to simulate, implying that the CTM is likely false. I know you don't think simulating a brain would produce consciousness, but if we can't even *simulate* a brain in *principle* then that in itself is enough to kill the CTM, whether or not Searle is right about the Chinese Room.

I am in no position, technically, to comment on Copeland's treatment of accumulator machines. But his overall article hardly seems "silly." An opinion apparently shared by the reviewers who examined it before it was accepted in the SEP.

Alastair,

> Pancomputationalism and panpsychism are not mutually-exclusive perspectives... <

Yes, I know, I've added panpsychism to the long list of bizarre notions seriously entertained by Chalmers. The man's either a genius or one of the most overrated contemporary philosophers. Care to bet which way I lean?

brucexs,

> Nor could I find any reference to Searle's biological naturalism outside of the philosophical literature. <

You are correct about Chalmers' dualism, but definitely not about biological naturalism. Scientists don't use that term, but the overwhelming majority of research in neurobiology is not at all informed by computationalism, it is conducted just like any other research in biology, informed by evolutionary biology, developmental biology, genetics, and so forth.

> I think one reason is that the computational approaches have provide a very successful paradigm for advancing scientific research <

As in the abysmal failure of strong AI? (I'm not talking about Deep Blue or Watson here, as interesting as those are.) Even Chalmers agrees, by the way, that we currently have no idea of how consciousness emerges from neural activity. None at all. And I doubt that thinking of the mind as software will get us very far. (Yes, neural nets have provided insights into learning, but somewhat limited. Just like genetic algorithms have provided some insights into genetic evolution, but not as much as actual molecular genetics.)

> But I do think it would be circular to try to use the argument if you did not have other reasons to think the mind was computational. <

>Scientists don't use that term, but the overwhelming majority of research in neurobiology is not at all informed by computationalism

You certainly know more about the field than I do. Would it be more accurate to say that cognitive science, that is scientific studies of how the mind functions and how it relates to the brain, are still almost entirely based on the computational approach?

>As in the abysmal failure of strong AI?<

If my previous comment is correct, then it has not changed the paradigm. Some of the original thinkers on Strong AI were clearly over-optimistic and much too simplistic about the types of computational models that would work.

Interestingly enough, this week's Economist magazine has a non-technical summary of some of the big projects being funded to look at computationally simulating part of the brain. It is available free at the magazine site for now.

Should have done a bit of homework before publishing the above. Here is a quote from the SEP article on Cognitive Science:

" The central hypothesis of cognitive science is that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures. While there is much disagreement about the nature of the representations and computations that constitute thinking, the central hypothesis is general enough to encompass the current range of thinking in cognitive science, including connectionist theories which model thinking using artificial neural networks."

To be fair, he does mention that other approaches are used in linguistics and anthropology-based studies.

The author of the article does appear to have a computationalist bent, but I would hope the above quote is still a fair summary of mainstream scientific thinking.

>The man's either a genius or one of the most overrated contemporary philosophers. Care to bet which way I lean?<

Chalmers is indeed a ridiculously overrated philosopher (I want to add Colin McGinn, our new persona non grata in the analytic community, here as an equally overrated philosopher) who places far too much import in his own intuition.

> I was talking as understanding here as synonymous with self-consciousness, i.e. I am looking for a machine that can convince me that it doesn't just provide the correct output, but is aware of doing so.

Depending on what you mean by "convince me" that would be a behavioral test.

I am surely not saying that something does not exist because we cannot test. But if, for example, the claim is that a computer that passes the best behavioral test we can come up with still does not think because there is some untestable extra that we humans have but the computer doesn't, that pretty much sounds like begging the question to me, or perhaps moving the goalposts, depending on where you think the conversation started.

One might reasonably consider the burden of proof on the side of the skeptic in this case: "If you think that there is still a difference, please demonstrate it. Until you can come up with that evidence, the best tentative explanation of what we observe is that the computer can think."

> I think you are missing my point: as explained in previous posts, I am inclined toward biological naturalism, which means that I don’t think that thinking is just a matter of computation. It also requires the right stuff. And whether silicon is right remains to be seen, though there are very good reasons to think not. Hypercomputation is a distraction, and I’m sorry I mentioned it.

If the issue is the right stuff, and this is a matter of _physical_ nature, not behavioral nature, then we're back into the realm of vitalism. Maybe a nouveau vitalism, where the "right stuff" is "genuine atoms" and not "behaviorally identical non-atoms", for all relevant behaviors, etc., but vitalism nonetheless. What is this right stuff, and why is it right? You've mentioned meanings in another context as being the all-important component (or one of them?), but that doesn't answer the question, it just pushes it back a level.

> I never understood this sort of attitude. We think, communicate and learn through words and concepts. Without them we literally wouldn’t be having this (or any other) conversation.

It's an attitude that comes from observing language evolution. Thousands upon thousands of new words are invented every year for all sorts of things, most of them silly. Most of the concepts they name were never properly concretized before, never given names (tho probably thought, in some primordial form). Badonkadonk, cloud computing, tweeting, etc. Words _really are_ cheap, as are concepts. But that's not at all inconsistent with the position you're taking here. And surely you've encountered this low cost before. After all, every philosopher has read, if not used, such locutions as red_0, red_1, red_2, ... for three different notions of what the word "red" means, and so forth. Nevermind all the countless namings for on-the-spot inventions for the sake of a thought experiment, etc. And it's not as if the word computer is an ancient and holy thing. As you yourself point out, it already underwent a rather extreme shift in meaning around Turing's time, from meaning "person that does computations" to "machine that does computations". But perhaps this is the wrong avenue of discussion.

> As I pointed out before, being simulable have nothing whatsoever to do with this discussion. Once more, to use Alex phrasing, you can simulate a hurricane, but you ain’t getting wet.

Indeed. In your "debate" with Yudkowsky (which he really seemed to take seriously, I felt), you mentioned this, and also the sim sugar example. But Yudkowsky's question, and surely others' question, remains: what makes a mind more like a hurricane or sugar, and less like (a recording of?) a symphony or (a PDF of?) a book?

> If that is the case, please stop calling it a computational theory of mind and implying that it is a big deal. Because you’ll also automatically have a computational theory of rocks, etc.

It's probably true that we ought to abandon the whole term. Except perhaps (and this is a big perhaps) precisely for the hurricane and sugar import you mention. Regardless of the definition, the distinction between real rocks and sim rocks will exist, so while you may have a theory of rocks as computers, the theory is not that rocks are just rock-like computations but rather, rocks are rock-like computations performed more or less directly by a physical system (i.e. it's made of atoms doing the atom-y parts of the computation, etc.). The CTM, at the very least, will always bring with it a claim that minds are _any_ mind-like computations, and no particular "substrate", nor any particular closeness of the "computing" parts to the "computed" parts, is necessary. That, I think, is the real import of the CTM, that minds really aren't like sugar or hurricanes, they're not substrate dependent. "Computation" may become all encompassing, because you can view everything as performing computations, but it's not all-explaining. There is a lot of space between "the rock is a computer performing rock-y computations" and "the rock is nothing other than a computer performing rock-y computations".

Now look, I'm not endorsing pan-computationalism. It does seem silly to me to use a special term like "computer" to mean nothing more than "thing". Perfectly valid -- after all, we already have a word that means "thing", namely "thing" itself -- but useless as an addition to our toolkit for thinking about the world. But as far as I can tell, no one has a definition that is both widely accepted and usably precise without also being absurdly so (of the heap = 1000 grains sort of absurdity). The fact that everyone has a subtly different notion of what the word "ought" to mean, heck, the fact that we even ask that question at all, indicates that we're engaging in something quite useless, I think. This is why I mentioned the fortunately low cost of words. Why not just abandon the word entirely, and instead invent some new words with a meaning that can't help but be agreed on because it's been invented, and then we can ask do minds do _this_ newly labeled concept, or _that_ newly labeled concept, and so on. Maybe it was a mistake for Turing to have coopted the word "compute" to (conjecturally?) mean "doable by some Turing Machine". Maybe instead he should've said "shmompute", and then merely conjectured that all computable things are also shmomputable. But should we persist in this lexical conservatism, and its attendant silly debates? Why don't we talk about the substance behind the words, in all of its nuanced, philosopher-specific senses, instead of arguing over these seven letters? Because right now it seems like most of the debates aren't over "is mind like this or like that?" but rather over "can we use the word 'computer'?".

Vitalism is the theory that the origin and phenomena of life are dependent on a force or principle distinct from purely chemical or physical forces. That is not a necessary condition for biological naturalism; in fact (and Massimo can correct me), BN implies that consciousness/subjective experience is not at all distinct from chemical or physical forces, and most importantly from biology, and I agree.

I do see a connection between vitalism and biological naturalism. Vitalism was the idea that there is some substance, some vital essence (elan vital) that is responsible for life, and so makes living tissues fundamentally different from other material.

We now know that this is false, and life is instead a high-level emergent phenomenon that arises when ordinary matter is arranged in such a way that it can reproduce itself and evolve. It is these phenomena of reproduction and evolution, etc that determines what is alive and not any particular substance.

But this necessarily means that life is substrate independent. If we could find a completely different set of substances or chemicals which would exhibit the same properties of reproduction and evolution, then we would have every reason to call these substances alive, even though they might have nothing in common with terrestrial biology at a low level.

Indeed, I would even propose that virtual entities which evolve and reproduce in a simulation can also be considered to be alive.

To me, the assumption that there is something special about neurons that enables us to be conscious is directly analogous to vitalism. Whatever this special property is, it might as well be called elan psychique.

computational |ˌkämpyo͝oˈtāSHənl|
adjective
of, relating to, or using computers: the computational analysis of English | computational power.
• of or relating to the process of mathematical calculation.

The brain however evolved to have and serve a much different purpose, or set of purposes, than our computers, which have in turn "evolved" to serve our brains. What continues to baffle me, and not just me but philosophers such as Mary Midgley and others that Massimo deliberately omits, is that he can't seem to see that purposes are causative, and make the essential differences in the workability of any functional apparatus, whether it be a living mechanism or one presumably dead from its mechanical start.

The brain, and the mind that represents these operational purposes, is not a computing system, it's a survival system. It doesn't operate, and doesn't need to operate, by mathematical computation; it uses and has evolved the use of symbolic algorithms. But yes, it did invent the mathematical systems that its computers, which it then invented to use those systems, now run in service of the purposes of the mind. Which did not exchange its purposes with those of its computer systems in the process, even though anyone observing this discussion would tend to think the minds involved were now operating under that impression. Even though Massimo is claiming that the mind is not fully characterized by computation, he can't seem to tell us how its character must differ.

He doesn't see that minds, for example, must continually form their own working hypotheses before they can compute a thing at all. These hypotheses are their experimental goals, and solving hypothetical goal-seeking problems are their purposes. Done so by experimental trial and error, which is the province of all of nature's intelligence. You know, whatever ability our minds, by their inherent nature, can acquire by trial and error to apply knowledge and skills. In short, its purpose. But not its computer's purpose.

I note that Massimo once said, re Yudkowsky and Less Wrong, that: "I think I know why LW contributors are fixated with algorithms: because they also tend to embrace the idea that human minds might one day be “uploaded” into computers, in turn based on the idea that human consciousness is a type of computation that can be described by an algorithm. Of course, they have no particularly good reason to think so, and as I’ve argued in the past, that sort of thinking amounts to a type of crypto-dualism that simply doesn’t take seriously consciousness as a biological phenomenon. But that’s another story."

One that should come back to haunt him, because it's the biological thinking process that can be described by an algorithm, not just the consciousness that gives access to the data that the process uses to feed its 4-dimensional symbolic patterning apparatus, operated by a who-what-when-where-and-how learning and predictive algorithmic feedback process. For which a computational mechanism, consciously aware or not, hasn't yet been found that either needs or wants to redesign itself to do.

Only H2O serves the same purposes as water, but water found itself in service of a multitude of purposes that H and O could not have served until they were intelligently bound together by nature's trial and error mechanistic systems that we call forces. Which had produced both H and O for undoubtedly separate purposes earlier in sequential time.

> But Yudkowsky's question, and surely others' question, remains: what makes a mind more like a hurricane or sugar, and less like (a recording of?) a symphony or (a PDF of?) a book?

Perhaps sugar is not the best example, but a hurricane gets close: the mind is a process, not a thing. Your mind is the process of you thinking. There are then different scenarios:

If you want to have _a_ mind on a computer, the question is merely whether there is something extra in a biological brain that is not found in a standard computer. Ultimately one would merely have to identify what that extra is and build a different type of computer that has it, problem solved. (And until a test comes up I am agnostic about whether some such extra exists.)

If you want to have a _human_ mind on a computer, and neglecting the feasibility of simulating a human, the question is whether the simulation of a thing is the thing. And the problem is, a human mind is not like the text in a book or the sounds in a symphony that can simply be copied. Those are perhaps at best comparable to memories, which can lie there without anything interesting happening to them. The mind is an activity, more like an orchestra playing the symphony. Even if you can simulate the orchestra, there is no actual concert going on.

Finally, if you want _your_ mind in the computer, there is the added problem that your mind is the process of you thinking, not the process of some machine thinking next to you even if it starts its own hypothetical mind with your knowledge. But I cannot assume that you believe in mind uploading, so that is rather irrelevant here.

I see what you're saying, but from your example it is obvious that what you consider essential to the concept of fire is that it causes chemical reactions and heat to affect physical objects. Given this, fire, like photosynthesis, is a physical process.

However, in certain other types of processes, such as computations, elections, peer review, etc., you don't care about what happens to atoms; you care about getting an appropriate result. These are not physical processes.

So which kind of process is the human mind?

Functionally, it's certainly not a physical process. All we care about is getting an appropriate output for the input from the senses. We don't care what happens to the atoms in the brain as long as the output is correct.

There's also the aspect of subjective consciousness, which is more problematic. But unless you think that phenomenal consciousness is something that directly affects atoms, then it's still not clear that consciousness is a physical property.

1. Thinking, learning, remembering, deciding are also physical processes. They change the structure of our brain.

2. Perhaps you should step back and ask yourself, what is being cared about and who does the caring?

If you ask me to compute something or a computer to compute something and you are only interested in the results of the computation, then you are right.

If you kill me and build a robot that can functionally replace me, *I* certainly would care about that (until I am dead, of course).

An election is very much a physical process. If you suggest to replace the actual voters with a computer that decides who gets elected you cannot seriously suggest that there is no important difference. If you merely replace the medium by which the will of the voters is transmitted, well, that is another matter. (Note however that my country of origin, for example, has outlawed the use of voting machines to keep the process transparent.)

3. Again, I am not saying I'm sure that the right type of computer could not be conscious. I am merely saying that a simulation of a human would not have a mind like a real human does because a simulation is not the real thing. A simulated city is not a city. A simulated war is not a war. A simulated mouse is not a mouse. Even if you get a good model of mouse behavior, there is no mouse. It's a simulation. That should not be hard to grasp.

What about a simulated Apple ][? If I start to run an Apple ][ simulator, very strictly speaking I have not created an Apple ][ out of nothing, but it is quite true that everything that matters about the Apple ][ (i.e. the ability to run Apple ][ software) is duplicated. Some people are hoping that the mind is more similar to an Apple ][ than to a city, a war, or a mouse. I'm not saying that it is (I think we simply don't know at this point), but it's quite a reasonable idea.
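The emulation point above can be made concrete with a toy sketch. The three-instruction machine below is invented purely for illustration (it is not real Apple ][ / 6502 code): the idea is that the "software" behaves identically on any host that implements its instruction set faithfully, which is all that "running an Apple ][" requires.

```python
# Toy illustration of emulation: a minimal interpreter for a made-up
# 3-instruction machine. Any host that runs this interpreter reproduces
# the toy machine's behaviour exactly, even though no such machine
# physically exists.

def run(program, x):
    """Execute a list of (op, arg) instructions on accumulator x."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        elif op == "neg":
            x = -x
    return x

# The "software" is just this list; it behaves the same on any substrate
# that implements the three operations faithfully.
prog = [("add", 3), ("mul", 2), ("neg", None)]
print(run(prog, 4))  # prints -14
```

Whether a mind is more like `prog` (pure pattern, fully duplicated by any faithful host) or more like a mouse is exactly the open question in the comment above.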

It's a reasonable idea because, to our best knowledge, neurons are information-processing devices. If you replace a neuron in the brain with a small computer, as long as that small computer mimics the same electronic pulses, other neurons wouldn't know it's not a real neuron. The brain will function as before. You can't plug a simulated Brooklyn into the rest of NYC. You can't plug a simulated Pearl Harbor into the rest of WWII. But you can with the brain.

Now it's true that there are many non-electronic factors in the operation of the brain. Maybe the small computer needs to synthesize real transmitters and other molecules. But maybe all that biochemical stuff will turn out to be unimportant. We don't know. Scientists have plugged circuits into the neural systems of lobsters and other animals and have had some success at duplicating some of the neural functions. Maybe that will apply to consciousness, but maybe not. I don't see any reason to dismiss either possibility. I am a neurophysiologist. I like my real neurons and I am not placing any bets on a simulated mind. I think it wouldn't work, but that is just a hunch.
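The neuron-replacement intuition can be sketched in a few lines. Both "neurons" below are toy stand-ins (a bare threshold model, not real neurophysiology): the point is only that two units with different internals but identical input-output behaviour are indistinguishable to anything downstream that sees only their outputs.

```python
# Sketch of functional replacement: a "biological" unit and an
# "electronic" replacement with different internals but the same
# input-output mapping. Downstream units seeing only the spikes
# cannot tell them apart.

THRESHOLD = 1.0

def bio_neuron(inputs):
    # "Biological" unit: fires iff summed input exceeds threshold.
    return 1 if sum(inputs) > THRESHOLD else 0

def chip_neuron(inputs):
    # "Electronic" replacement: different internals, same I/O mapping.
    total = 0.0
    for i in inputs:
        total += i
    return int(total > THRESHOLD)

# Same input, same output in every case: functionally indistinguishable.
cases = [[0.2, 0.3], [0.7, 0.6], [1.1], [0.0]]
assert all(bio_neuron(c) == chip_neuron(c) for c in cases)
```

The empirical question raised in the comment, whether "all that biochemical stuff" can really be abstracted away to an I/O mapping like this, is exactly what the sketch leaves open.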

>1. Thinking, learning, remembering, deciding are also physical processes. They change the structure of our brain.<

Well, everything that happens is a physical process by that definition.

My point is that the changes in the brain structure are not ultimately what make these processes what they are. They're not what we care about. What matters is the behaviour these changes allow. If you changed the substrate completely you would still call these processes genuine thinking (at least in the sense of information processing, if not consciousness), learning, remembering, deciding.

This is unlike the virtual fire in your example, because what you care about if there is a fire in your apartment is the physical damage it can do. This is why you would not call a virtual fire genuine.

>2. Perhaps you should step back and ask yourself, what is being cared about and who does the caring?<

Whoever is using the terms. Whoever defines the terms. If you look in a dictionary definition of fire, you'll see something about heat and chemical reactions. If you look in a dictionary definition of learning, you won't see anything about changes in brain structure. If what is at stake is whether a virtual X is equivalent to a real X, you need to examine what you mean by X.

>An election is very much a physical process. If you suggest to replace the actual voters with a computer that decides who gets elected you cannot seriously suggest that there is no important difference.<

You miss my point. What I'm saying is that in an election you don't care which atoms get shuffled about and how. What you care about is that the will of the people is reflected in the choice of the elected official. A change to electronic voting is more in the spirit of what I was talking about, but you could also imagine simply changing the material the ballot papers are made of. The point is that no physical substance is important to the logic of what's going on.

Sure, you can say that these things only exist insofar as they are physically instantiated in some way (e.g. software being instantiated on a physical CD or the memory of a computer), but what makes each of these things interesting as phenomena in their own right has nothing to do with the particular atoms that make them up in their various manifestations. A CD has nothing in common with voltages in a RAM chip at a physical level, even if both contain the same software.

And if I'm right, the same kind of thing is true of consciousness.

Just as you lose nothing vital to a novel when you change format from printed to ebook, you would lose nothing vital to a mind if you moved to an electronic substrate, because in this view what makes a mind a mind has nothing to do with its physical makeup and everything to do with its "pattern".
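The CD-versus-RAM point above can be shown in a few lines. The byte string is a hypothetical stand-in for a program image: the same "pattern" survives a move between physically unrelated media, which is all the substrate-independence claim requires.

```python
# Minimal illustration of "pattern, not substrate": the same software
# (here, a byte string) held on two physically different media, an
# in-memory buffer and a file on disk, is the same program byte for
# byte, even though the physical carriers share nothing.
import os
import tempfile

software = b"\x01\x02\x03payload"   # hypothetical stand-in for a program image

# Substrate 1: volatile memory.
in_memory = bytes(software)

# Substrate 2: persistent storage via the filesystem.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(software)
with open(path, "rb") as f:
    on_disk = f.read()
os.remove(path)

assert in_memory == on_disk  # identical pattern, different physics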

Anyway, as I ("The universe is made of code!") see it, as this philosophical discussion goes on, workers in AC (ACers) will plod along making new code and will make a conscious, free-willing robot some day, and we will learn something in the process. (Or not.)

>free-willing robot<

Why was I under the impression you did not believe in free will?

>conscious...robot<

Why would we want a conscious robot? Surely we want a rule-following robot with a prodigious, flexible rule-set coupled to advanced sensors, all run by a very powerful computer that learns quickly, modifying its rule-set as it goes along.

I sometimes wonder whether the motivations for the vociferous defence of the CTM and pancomputationalism are entirely rational. I can't help but feel that lurking under all this is an "I want to be uploaded" desire, and that the need for the CTM ("everything thus needs to be computation") to be true is therefore paramount. Why is it so important to people to hang onto the CTM? Has it not been fairly fruitless so far with respect to strong AI development and cognitive science? Anyone care to address this?

IMHO biological naturalism is a much more elegant and supportable position.

I for one have no expectation that mind uploading will take place during my lifetime, and am agnostic on whether it will ever be achieved.

I'm only concerned with the principles in question, and my motivation is largely a morbid fascination with how other apparently rational and intelligent people can hold views which are so radically opposed to what seems perfectly self-evident to me.

It seems that one side of this debate is deluded, and I really want to figure out which side and why.

My understanding is that many of the leading proponents of an information- (and information processing-) based approach to physics see information as physical. The bits or qubits are always 'embodied' in actual physical processes, albeit that these processes are understood at a deep level in terms of the processing of information. (There are close parallels between information theory and thermodynamics.)

So I'm not sure that such a view leads to Platonism. Seeing physical processes as algorithmic (and scientific theories as predictive algorithms) seems to me a genuinely interesting perspective: but it may well be that there is no way actual physical processes can be perfectly simulated (or predicted).

Coditrons (hypothetical particles or quasiparticles) would be made of probabilistic logic gates and be like the hardware implementation of cells in a CA. I'm not sure about the regularity of the topology of how the coditrons (cells) are distributed [ en.wikipedia.org/wiki/Category:Space-filling_polyhedra ]. It could be irregular.

I still think your coditrons idea is nonsense, but what you've just said about topology and distribution made a little bit more sense to me. It reminds me of Stephen Wolfram's conjecture that the universe is a cellular automaton, but I think it suffers from the same issues in accounting for relativity etc.

Are you familiar with Wolfram's hypothesis, and what do you think of it?

> I know, but you are still saying that there is something fundamentally similar between being a rock and being a brain. To me that's close to absurd, and at any rate, entirely uninformative.

I think you're unaware of the distinctions between kinds of computing devices (handwaving begins). The Chomsky hierarchy is a classification of the "power" of computing systems. A rock and a brain can both be computing devices, but their respective powers are very different.

The rock essentially just performs the identity function; a puddle performs more complicated chemical reactions (a finite state machine); a plant could be a pushdown automaton; a human brain is a Turing machine. (End handwaving.)

Both the rock and the brain have a physical substrate, but the rock's structure only gives f(x) -> x, while the brain's structure can perform (finite) Turing machine calculations. The fact that they are all computing devices is an interesting observation, but the true interest is how the structure relates to the "power" of the respective computing devices.
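That power ordering can be sketched roughly in code. The recognisers below are toy examples invented for illustration (not models of actual rocks, puddles, or plants): identity, a three-state finite-state machine, and a counter standing in for a pushdown stack.

```python
# Rough sketch of the "power" comparison. The rock computes only the
# identity; a finite-state machine can recognise regular patterns
# (e.g. strings ending in 'ab'); balanced brackets need pushdown
# behaviour, which no finite-state machine can provide.

def rock(x):
    return x  # f(x) -> x: the identity function

def fsm_ends_in_ab(s):
    # 3-state machine: state 0 = neutral, 1 = just saw 'a', 2 = saw 'ab'.
    state = 0
    for ch in s:
        if state in (0, 2):
            state = 1 if ch == "a" else 0
        else:  # state == 1
            state = 2 if ch == "b" else (1 if ch == "a" else 0)
    return state == 2

def pda_balanced(s):
    # Pushdown behaviour: a counter acts as the stack for one bracket type.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0
```

The rock's structure supports only the first function; richer structure is what buys the later ones, which is the point about structure and "power" above.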

> Yes, I know, I've added panpsychism to the long list of bizarre notions seriously entertained by Chalmers. The man's either a genius or one of the most overrated contemporary philosophers. Care to bet which way I lean? <

By the way, Dennett is overrated too. Interestingly enough, he presupposed panpsychism when he attempted to explain consciousness.

"But, as we have seen, the point of view of a conscious observer is not identical to, but a sophisticated descendent of, the primordial points of view of the first replicators who divided their worlds into good and bad. (After all, even plants have POINTS OF VIEW in this primordial sense.)" (emphasis mine)(source: pg. 176, "Consciousness Explained" by Daniel Dennett)

Dennett is great, IMHO. I'd stand by pretty much anything he's said, but I wouldn't always say it in the same way.

In particular, Dennett is not a panpsychist. He's not attributing consciousness to plants, etc, he's talking about a point of view, which is not the same thing at all.

In particular, he's saying that the point of view of a conscious observer is a sophisticated descendent of the points of views of unconscious entities. He's not calling those early replicators conscious.

> In particular, Dennett is not a panpsychist. He's not attributing consciousness to plants, etc, he's talking about a point of view, which is not the same thing at all.

In particular, he's saying that the point of view of a conscious observer is a sophisticated descendent of the points of views of unconscious entities. He's not calling those early replicators conscious. <

When Einstein imagines what the universe would look like from the point of view of a photon, he's not imagining that photons are conscious.

When Dawkins explains that selfish genes have evolved to encourage outcomes that are beneficial from the point of view of the gene, he's not imagining that genes are conscious.

When Wilson describes how an ant behaves in response to local conditions as they appear from its point of view, he is not presupposing that ants are conscious.

When a software developer explains to a customer how a backend system will not be able to identify the user when no authentication data is available from its point of view, that developer is not assuming that his software is conscious.

Disagreeable,

>A "point of view" presupposes consciousness... No it doesn't. This is a misunderstanding of Dennett's intent.<

A point of view presupposes a viewer who is conscious. In your examples the viewers are conscious third parties. In Dennett there is no conscious third party, as in your examples. So I think Alistair's point stands.

As to Dennett being overrated, anyone who can write Consciousness Explained and leave the subject completely unexplained hardly deserves any rating.

Okay, so if Dennett is speaking figuratively (not literally) here, then the "point of view of a conscious observer" CANNOT be a sophisticated descendent of the primordial "points of view of the first replicators." Why? Because the first replicators do not LITERALLY have points of view. As such, Dennett has failed miserably to explain how consciousness (a point of view) can spontaneously arise from something that has no point of view (i.e. is not conscious).

I can kind of see where you're coming from in the case of Einstein. He's imagining what the universe would look like to him if he were moving at the speed of light.

In the other cases it's really not clear to me that we're talking about a hypothetical observer at all.

If I'm playing Texas Hold'em against a poker bot, I have to consider how my odds of having a good hand look from the point of view of that bot. If you want to say that this means I am putting myself hypothetically in the place of the poker bot, then OK, but you might as well say that Dennett is putting himself hypothetically in the position of an unconscious replicator.

@Alastair

>Okay, so if Dennett is speaking figuratively (not literally) here,<

Firstly, consciousness has nothing to do with whether a point of view is literal or not. A literal point of view is a description of a position in space from which we consider how the world would visually appear. A conscious blind person has no literal point of view, they have a figurative point of view. An unconscious camera has a literal point of view.

For both primordial replicators and conscious humans, the point of view Dennett is discussing is figurative, because he's not talking about visual information exclusively. The figurative point of view he's talking about is the available information about the environment and the courses of action that would lead to increased reproduction.

Dennett (like all philosophical naturalists, including Massimo and me) does indeed think that consciousness can over time evolve and so arise spontaneously from something that has no consciousness, but the bit about "point of view" is a red herring and he is not a panpsychist.

@Disagreeable,

>philosophical naturalists, include Massimo and me) does[do] indeed think that consciousness can over time evolve and so arise spontaneously from something that has no consciousness<

Actually I thought it was biological naturalist. Nitpicking aside, I thought this was a good place to interject a Catholic perspective (it might surprise you).

Our perspective is this. God intended that the world/Universe should evolve/develop according to the laws of nature because God created the laws of nature for this purpose. Therefore the Catholic point of view is that all observable phenomena within the Universe should be explainable by science in terms of the laws of nature and this includes consciousness.

Therefore there cannot be any contradiction between religion and science since science is revealing the methods by which God works. Any apparent contradiction is a man-made confusion. This is why the Catholic Church created the Pontifical Academy of Sciences. We believe that the study of science is rather like studying the mind of God and therefore should be encouraged and supported. We believe that natural, scientific explanations will be found for all observable phenomena.

In that sense Catholics are philosophical/biological naturalists. There is no supernatural, God works through the natural world. Now that should give you pause :)

I think you've missed mine: that the camera has no ability to decide what to view or to react to what it views accordingly. It has no reason to be a viewer. We supply its ability, we cause and/or program it to react, and we consciously select what it has no consciousness of viewing. Available information is meaningless if the meaning is not available to the viewer. Remember that you referred to information "about the environment." But to the camera, the information is about figuratively nothing, and is therefore not at all informative literally.

Biological naturalism is the specific view that consciousness is necessarily the product of biological brains. This is Massimo's perspective regarding consciousness. I am not a biological naturalist.

Philosophical naturalism is the broader view that everything that happens does so in accordance with physical law, that there is no supernatural, no God, etc. Massimo and I are both philosophical naturalists.

>I thought this was a good place to interject a Catholic perspective(it might surprise you).<

I doubt it. I was raised Catholic but grew to reject it as nonsense at an early stage.

Overall, I regard the Catholic view with respect to science as being a whole lot more rational than evangelical Protestant views, for example. I still think it's problematic, however, as it postulates the existence of a God with very specific characteristics (infinite wisdom, perfect love, omniscience, omnipotence, a strange need to expiate the inherited sins of humanity by the torture and murder of his son) for which there is no evidence.

And, contrary to your assertion that scientific explanations will be found for all observable phenomena, the church still endorses the concept of miracles, which is not compatible with a philosophical naturalist viewpoint.

So the Catholic's god created the universe, its laws, and its freedom to evolve itself? Why would we then need to be accountable to that god, which has caused us to evolve unaccountably to our creator in the process?

It's so silly to keep on with the proposition that the universe must have had a beginning, and that this beginning must therefore have had a creator. It's the argument that, because it had to have had a beginning at some point, the universe was made from nothing by a something.

@Baron P

Yup, you missed my point. My whole point was that the camera has nothing akin to consciousness, but it does have a literal point of view. Points of view have nothing to do with consciousness.

@Alastair Paisley"Okay, then Dennett has completely FAILED to explain how a sentient living organism evolved"I'm not sure that he was even trying to explain this, and certainly the two sentences you quoted were not intended to provide a complete history of the evolution of consciousness.

But it's not hard to see the broad strokes of how unconsciousness could morph into consciousness over time if the CTM is right.

Consciousness is a property of certain extremely sophisticated algorithms. Just as evolution can drive increasing complexity in all biological systems, so the nervous systems of certain creatures have become more sophisticated over time. At a certain point, we would call them conscious, but there is no sharp threshold when we cross from unconscious to conscious.

A human being is more conscious than a dog. A dog is more conscious than a frog. A frog is more conscious than a fly (if you would grant a fly any small degree of consciousness). Much below this level of sophistication and it's hard to find anything you could plausibly call conscious at all.

Well, that God might perhaps have an intent, a purpose for you, or it might be a remote, observer God, or perhaps an experimentalist God and we are his laboratory, you take your pick or none at all.

Let's not derail the conversation. My interjection was an aside as an effort at being informative. It was not intended to create a shooting gallery where every angry atheist can take pot shots at the Catholic foolish enough to step into view.

@Disagreeable, "Points of view have nothing to do with consciousness."Since I've completely disagreed with that, I haven't missed it, have I?Do you disagree that for the concept of a view to be meaningful, there's a presumption that there's something in the view's vicinity with the ability to view it.And if so, is it the camera that has the ability to view or its creator? Because if the camera is simply an object that reflects a view, then all things in the vicinity of the view are viewers. Making the concept, as it stands, meaningless.

Peter,

Why do you assume I'm an angry atheist? Why not imagine I'm a serene agnostic, free to point out that, from a logical perspective, each of your imagined gods exists by virtue of a logical contradiction?

The quotation was cited from Dennett's book entitled "Consciousness Explained." I've read it. In it, he endeavored to furnish the reader with a materialistic explanation of consciousness. No such explanation was forthcoming.

> Consciousness is a property of certain extremely sophisticated algorithms. Just as evolution can drive increasing complexity in all biological systems, so the nervous systems of certain creatures have become more sophisticated over time. At a certain point, we would call them conscious, but there is no sharp threshold when we cross from unconscious to conscious. <

Consciousness (i.e. AWARENESS) is binary. Either something is experiencing subjective awareness or it is not.

> A human being is more conscious than a dog. <

I disagree. Both human beings and dogs experience "awareness." (A human being may have different experiences than a dog. But the point is that they both have experiences.)

As I see it, you're conflating "access-consciousness" (the technical term for information processing in the philosophy of mind) with "phenomenal-consciousness" (the technical term for subjective experience in the philosophy of mind).

"Complexity" has nothing whatsoever to do with "p-consciousness" (i.e. awareness). My current personal computer is exponentially more complex than my first personal computer. But I have absolutely no reason to believe that my current personal computer is having some kind of inner experience. Apparently, you would have us believe that it is.

@Baron PThere doesn't have to be a viewer for there to be a point of view. A point of view is only a point in space in the context of considering a visual field. Perhaps it only has meaning when there is a viewer, but I'm not asserting that points of view always have meaning in this sense.

@Peter

>But I am. Reluctantly, because the CTM is very desirable from a Catholic point of view. Why? Well that should be obvious!<

Absolutely. No issues there. As you are coming from a religious perspective, I actually have fewer problems with your biological naturalism than I do with Massimo's, as I think the CTM is the only option consistent with philosophical naturalism. If one is spiritual, one can suppose that it is the spirit which gives life to consciousness.

>Why not moderate your tone in the interests of a productive dialogue?<

I apologise if I have given offense. For what it's worth, I assume you are a moral, intelligent, decent, honest individual whom I would be glad to count as a friend.

However I do feel that religious views tend to get undue respect and as such I tend not to couch my opinion of religious views in dishonestly diplomatic phrases. While I do think you are rational, I also regard religiously motivated views as irrational.

But then, everyone has irrational views, myself included.

>Really? Please explain. I am eager to see a clear explanation of this morphing from unconsciousness to consciousness. Perhaps it is even testable?<

I said broad strokes. I gave my account in my previous post. It's just like any other increase in complexity over time. If you don't have a problem with how brains became more complex then there's nothing else to explain.

>Which algorithms would this be? Please explain. I would love to see these algorithms!<

Algorithms which correspond to what's going on in conscious brains, for instance. I hope you don't expect me to give a detailed description of the most sophisticated algorithms in the universe in a blog post comment, do you? I can't, not least because they're beyond my comprehension. But, for a high-level description, I'd stand by Dennett's Consciousness Explained.

@Disagreeable,"Perhaps it only has meaning when there is a viewer, but I'm not asserting that points of view always have meaning in this sense."OK, but one would think that if they could have, they should have.

I know, I've read it too. What I said was that I'm not sure Dennett is claiming to explain in any detail how consciousness evolved. Instead, he claims to explain what it is and how it works.

And I think he does, but I think the explanation is unsatisfactory to you because it is too unintuitive.

Critics have described the book as "Consciousness Explained Away", and in some sense I think this criticism is apt, however I still stand by it as I think "explaining away" is exactly what consciousness needs. I suspect that the concept of consciousness is incoherent and not distinct from the simulation of consciousness.

>Consciousness (i.e. AWARENESS) is binary. Either something is experiencing subjective awareness or it is not.<

This position is untenable if you deny the CTM. Consider a thought experiment where parts of your brain are gradually replaced by perfect electronic functional analogs. If you don't hold with the CTM, then you must imagine that your consciousness is being diminished as more and more of your brain is replaced. There can be no sharp transition.

But even if you do hold with the CTM, then it's far from clear that either something is aware or not. Where do you get this conclusion from? It certainly seems to me that there are times when I'm more or less on autopilot and times when I'm hyper-aware of my own self and my surroundings. When falling asleep or waking up I don't usually experience a sharp switch from consciousness to unconsciousness or vice versa but a gradual transition where it's not clear if I'm fully conscious or not.

I note that you regard dogs as conscious but are silent on the subject of frogs or flies. Could this be because of your uncertainty over which side of the binary threshold they are on? If it's such a sharp dividing line then why the uncertainty?

In general, I would be very suspicious of any such binary thresholds in complex biological organisms. I think they very rarely, if ever, exist.

I have no end of appetite for debate on the existence of God, and would have great interest in pursuing it with you.

As this is not an appropriate venue for that discussion, I would invite you to give me a link to the argument which you find most compelling for the existence of God. I will write up my response to this argument on my blog, and we can discuss it there if you wish.

> Critics have described the book as "Consciousness Explained Away", and in some sense I think this criticism is apt, however I still stand by it as I think "explaining away" is exactly what consciousness needs. I suspect that the concept of consciousness is incoherent and not distinct from the simulation of consciousness. <

It is not possible to have an intelligent discussion with someone who is sincerely denying his or her own subjectivity.

> When falling asleep or waking up I don't usually experience a sharp switch from consciousness to unconsciousness or vice versa but a gradual transition where it's not clear if I'm fully conscious or not. <

I believe I am ALWAYS having some kind of experience (asleep or awake). (Do you believe your "awareness" ceases to be aware while you're asleep? Interesting. But even if you do, my argument still stands. Consciousness is binary. You're either aware or you're not.)

> I note that you regard dogs as conscious but are silent on the subject of frogs or flies. Could this be because of your uncertainty over which side of the binary threshold they are on? If it's such a sharp dividing line then why the uncertainty? <

Frogs and flies have some kind of inner experience.

> In general, I would be very suspicious of any such binary thresholds in complex biological organisms. I think they very rarely, if ever, exist. <

I'm very suspicious of any argument for artificial consciousness that invokes "complexity" as the determining factor.

>It is not possible to have an intelligent discussion with someone who is sincerely denying his or her own subjectivity. <

It's not that I deny my subjectivity. It's that I deny there is any difference between "real" subjectivity and "simulated" subjectivity.

>I believe I am ALWAYS having some kind of experience (asleep or awake).<

Right. *Some* kind of experience. Of greater or lesser vividness. My view is that as experience dulls to the point of near-oblivion, it's a stretch to say that you are conscious.

>Frogs and flies have some kind of inner experience. <

Again, *some* kind of inner experience. In the case of flies, a very dim one. Again, perhaps a stretch to call it consciousness. What about simpler organisms? What about single neurons? Bacteria? Where do you draw the line? At what point is that magic threshold crossed?

And what about the thought experiment where your brain is slowly, bit by bit, converted into an electronic analogue? Where would consciousness switch off and what would that entail?

> My view is that as experience dulls to the point of near-oblivion, it's a stretch to say that you are conscious. <

Near-oblivion is not oblivion. (Dreamless sleep is actually a state of pure awareness.)

You're making a distinction based on degree, not definition. And once again, we come back to my original point. Dennett's "materialism" actually implies panpsychism.

"But, as we have seen, the point of view of a conscious observer is not identical to, but a sophisticated descendent of, the primordial points of view of the first replicators who divided their worlds into good and bad. (After all, even plants have points of view in this primordial sense.)" (source: pg. 176, "Consciousness Explained" by Daniel Dennett)

Translation: the more complicated the information processing, the more conscious the information-processing system (or "stimulus-response system") is.

> So Alastair, do you see such a producer of this predetermined range of systems presenting an optional example of an indeterministic one? How it would determine a computable indeterminacy is my question. As well as why. Or is information "truly fundamental" for no reason? <

Consciousness is the "producer." The deterministic/indeterministic event is the "uncaused cause" (a.k.a. free will). This is "how" consciousness becomes aware of itself. "Why?" Answer: to know itself (The "how" and "why" are ultimately one and the same.)

So free will has been a deterministic product? Nice, if how and why have been determined to be the same. The question then becomes, did why come before how, or was it determined to come afterwards by a reasonably ultimate determiner.

> So free will has been a deterministic product? Nice, if how and why have been determined to be the same. The question then becomes, did why come before how, or was it determined to come afterwards by a reasonably ultimate determiner. <

The "uncaused cause" is nonlocal (not located in space and time). (The ultimate "efficient cause" and the ultimate "final cause" are one and the same because they transcend space and time.)

Alastair, your understanding of a fallacy is limited. There is nothing unsound about comparing what appear to be the only options on the basis of their logical possibilities. Especially when compared to the rational unsoundness of an uncaused cause.

> Your understanding of a fallacy is limited. There is nothing unsound about comparing what appear to be the only options on the basis of their logical possibilities. Especially when compared to the rational unsoundness of an uncaused cause. <

There are only two options - an infinite regress or an uncaused cause. You have chosen the former; I have chosen the latter.

An infinite regress is concerned with a truth that depends on a previous truth. The existence of a present something doesn't depend on a previous truth - the fact is rather that the past is only an inference of its prior presence, and we can only infer that the past had always had that presence, since we can't infer by our logical systems that the present at some point in its eventful sequence, was not present. We can however make illogical assumptions that everything must have had a beginning, or even more illogical, that its causation was a magically emergent process.

> An infinite regress is concerned with a truth that depends on a previous truth. The existence of a present something doesn't depend on a previous truth - the fact is rather that the past is only an inference of its prior presence, and we can only infer that the past had always had that presence, since we can't infer by our logical systems that the present at some point in its eventful sequence, was not present. We can however make illogical assumptions that everything must have had a beginning, or even more illogical, that its causation was a magically emergent process. <

My previous post still stands:

There are only two options - an infinite regress or an uncaused cause. You have chosen the former; I have chosen the latter.

That was only a 'reasonable' belief until the Big Bang was discovered.

That shocking discovery has led to incredible logical convolutions as some try to preserve their metaphysical preconceptions. For example the 'something from nothing' arguments of Krauss and Hawking. They, in effect, concede there is no infinite regress then claim the uncaused cause is 'nothing'. It is hard to keep a straight face when reading that argument.

Krauss at least has conceded there was a functional something there before the bang. And I don't care for the term "infinite regress" in any case. Time is the measurement of change in whatever area you're measuring from that in any case is always now. Regress implies a former series of nows that go back to the endlessness of time. But if there's only a continuously changing present, there wasn't any backward time to go to. If the sequence of change goes in any direction, it's only forward.

Baron, your argument seems to imply that time does not have a special status. That runs contrary to the spirit of Einstein's work. Sean Carroll has even maintained that time is the most basic dimension of all.

I would suggest that your argument, that there is only continuously changing present, is an artefact of consciousness. Of course you might be arguing that is all time is, an artefact of consciousness.

I have always thought that Neil Turok/Paul Steinhardt's bouncing or cyclic universe would be most attractive for atheists because then you could more easily claim there always was something. You would then have an infinite regress of bouncing universes, side stepping the something-from-nothing problem that confused Krauss.

Mind you, I don't subscribe to that belief. I only mention it because it seems to be a more logical position for atheists. However it can easily be accommodated in a theistic framework and from time to time I veer towards that belief. My thinking also seems to be a cyclic universe :)

You should read Endless Universe by Turok/Steinhardt. They give a good explanation that seems entirely plausible.

>His cosmology allows for the idea that there has always been something, namely a multiverse that's constantly spawning universes<

Now that seems implausible. There has always been nothing, continually creating something. What is the source of this vast amount of energy? What is the source of the laws of nature that control this process? I dread to think what would happen when this dense foam of continuously created, expanding universes bump up against each other, as they must inevitably do all the time.

We all speculate and that is a good thing because it is an innovative process that can lead to the birth of good, testable ideas.

But I draw the line at scientists who clothe their speculations as good theory which conveniently are forever beyond empirical validation. Is that science? Or is that faith in the service of metaphysical prejudices? They are amateur and uninformed philosophers who think they can cloak their crude speculations with the imprimatur of their scientific work. It is a con job.

>You should read Endless Universe by Turok/Steinhardt. They give a good explanation that seems entirely plausible.<

I'll take your word for it that they have some explanation for the accelerating expansion of the universe. Still, while it might be possible, the idea of a big crunch seems less likely than the opinion of most cosmologists who seem to think the universe will expand forever.

>Now that seems implausible. There has always been nothing, continually creating something. <

Firstly, I'm not a big fan of Krauss, especially when he claims to have explained how the universe can arise from nothing. I agree with his theist critics that the quantum foam from which the universe arose is not nothing. There is no explanation forthcoming from Krauss about where the physical laws came from.

However the idea of this quantum foam/physical laws existing forever, continually creating universes is no more ridiculous than the idea that the universe has existed forever, expanding and contracting. With the big bounce, you still haven't accounted for where the physical laws come from.

>I dread to think what would happen when this dense foam of continuously created, expanding universes bump up against each other, as they must inevitably do all the time.<

I think that's a naive visualisation of what's happening. I forget the details, but you could imagine for instance that the universes are accelerating away from each other faster than they are expanding due to the expansion of the meta-space between them.

In any case all this is not happening in the three-dimensional space we are familiar with, but in some higher-dimensional manifold. You could perhaps visualise each universe as an expanding circle on a sheet of paper, and visualise the multiverse as a stack of such sheets. The universes will never collide because they are not expanding towards each other at all.

I'm clearly talking out of my behind here, but Krauss isn't. He has a solid foundation for the hypotheses he presents (hypotheses which I am probably entirely misrepresenting).

As a theist I don't have a horse in this race since theism easily accommodates either model or the single universe model, though theism also tends towards a parsimonious explanation. I do think however that science should be science and not cloak speculative philosophy as science.

Paul Davies, some years ago, wrote an Op-Ed piece for the NY Times, pointing out that some scientists were now also guilty of acts of faith in their scientific assumptions. The amusing part was the outraged howls of protest that went up from the likes of Krauss, Carroll, etc. Clearly he touched a raw nerve.

We need to clearly separate out assumptions, speculation, philosophy and science so that we don't label them science and lend them false authority. It is this guise of false authority that makes the enterprise so unpleasant.

However, I'm not sure that Krauss's thinking defies Occam's razor. Turok/Steinhardt needed to come up with some unconfirmed justification for how the big crunch could happen despite the fact that the universe appears to be expanding ever faster.

In many ways I think that solutions that propose infinities are simpler than those which don't. This is unintuitive but I explain my thoughts on my blog.

@Peter:

> Baron, your argument seems to imply that time does not have a special status. That runs contrary to the spirit of Einstein's work. Sean Carroll has even maintained that time is the most basic dimension of all. <

As a measure of change, time IS the most basic dimension. The problem is that it's not linear, and thus it's a rather ephemeral concept, like causality, also something that actually comes at you like a vast unmeasurable web.

Wow, this discussion is going on at a pretty good pace, and in all sorts of directions. Not sure any further input from me is necessary, but...

Alex,

> if, for example, the claim is that a computer that passes the best behavioral test we can come up with still does not think because there is some untestable extra that we humans have but the computer doesn't, that pretty much sounds like begging the question to me, or perhaps moving the goalposts, depending on where you think the conversation started. <

Agreed, there may be a behavioral test (or battery of tests) beyond which it would be just as unreasonable to deny that a computer is conscious as to deny that any human outside ourselves is. But I'm pretty sure that's not the Turing test, especially as currently implemented.

Darryl,

> If the issue is the right stuff, and this is a matter of _physical_ nature, not behavioral nature, then we're back into the realm of vitalism <

No, we are not. Unless you think chemistry is vitalistic. We know that different atoms, molecules, etc. have different chemical-physical properties, and that some types of molecules can do what others can't. That's the basis for biological naturalism, not vitalism.

> Words _really are_ cheap, as are concepts <

Once more: without a clear understanding of what we are talking about (i.e., an agreement on what words mean) there is no discussion, about anything.

> Yudkowsky's question, and surely others' question, remains: what makes a mind more like a hurricane or sugar, and less like (a recording of?) a symphony or (a PDF of?) a book? <

A mind is a biological (i.e., physical) process, a symphony isn't. That's like asking what makes breathing more like sugar. Well, the fact that it is done by lungs. Also, what Alex said.

> That, I think, is the real import of the CTM, that minds really aren't like sugar or hurricanes, they're not substrate dependent <

Yes, I know. And my position is that there isn't much of a good reason to think so, and a number of good reasons to think not.

> Why not just abandon the word entirely, and instead invent some new words <

That won't do because the disagreement is about the concepts identified by particular words, not on the words themselves. No matter what you call it, the idea that minds are substrate independent is, I think, likely to be wrong.

> to me the MUH is not highly speculative but evidently true (and I have arguments to back this up). I think it follows quite naturally from the conjuction of mathematical platonism, the CTM and naturalism, all of which I regard as obviously correct. <

I forgot what the MUH is. But to me naturalism is pretty obviously correct, Platonism is intriguing but probably a bit too far fetched, and CTM is very likely false. So...

> Even you think there is something fundamentally similar, in that they are both physical objects made of atoms. The only difference physically is that one is hugely more complex than the other. <

I knew you were going to say that, but consider this: yes, both rocks and brains are physical objects, so both are subject to the laws of physics. But physics by itself tells you very little of interest about brains specifically. You need biology. It is in a similar sense that — assuming it makes sense to say that everything computes (I don’t think it does, but for the sake of argument) — we are still getting nowhere regarding consciousness unless we bring in, you guessed it!, biology.

> The assumption is that it's something about the organisation and complexity of the algorithm. That by itself doesn't help us understand the brain much better <

That’s a gross understatement. To continue the parallel above, it would be like saying that the difference between rocks and brains is that you just need to add atoms, a lot of atoms...

> an implicit premise which *directly* entails a conclusion is begging the question. That's what you have done. <

No, I haven’t. But I see that I can’t convince you, so I’ll drop it.

> What empirical evidence do you have for (2)? <

The same one I have for claiming that breathing is a physical phenomenon: both are done by physical-biological objects.

> I know you don't think simulating a brain would produce consciousness, but if we can't even *simulate* a brain in *principle* then that in itself is enough to kill the CTM <

True, I just don’t need to go that far.

Jeffrey,

> Someone (Massimo) who confesses to be ignorant of the theory of computation would do better to learn it, instead of insisting that his pre-scientific view of the mind is the right one. <

I admit my limitations and try to learn from others. Speaking of which, you may want to learn a bit of philosophy of language, given your pre-scientific ideas about semantics and syntax.

It's an intriguing idea, very convincing to me (not least because I came up with it independently as a result of thinking deeply about the implications of the CTM, fine tuning and the origins of the universe), and it might be worth an article of yours some day if you're looking for something to write about.

Unfortunately it's probably a non-starter if you're deeply committed to an anti-CTM position.

>But physics by itself tells you very little of interest about brains specifically. You need biology.<

Absolutely. No disagreement there. Certainly lessons learned in biology can greatly enhance our ability to create synthetic solutions for our problems - flapping flight, super-strong materials, even the concept of evolution are all ideas which have been studied by biologists but have found engineering uses in other non-biological spheres. Where we differ is that I see no reason to suppose that consciousness is any different.

>No, I haven’t. But I see that I can’t convince you, so I’ll drop it.<

And, as it's unproductive, I'll try to avoid accusing you of begging the question.

But can you at least see from my point of view that it's not terribly persuasive when you assert "a simulation of X is not X" in order to prove that consciousness is not a computation, when you seem to agree that this assertion does not apply to computations?

> The bits or qubits are always 'embodied' in actual physical processes, albeit that these processes are understood at a deep level in terms of the processing of information. ... So I'm not sure that such a view leads to Platonism <

That’s certainly sensible, but no, there are a good number of physicists and philosophers of mind who do think that the basic “stuff” of the world is no “stuff” at all. Hence the term ontic structural realism that sometimes identifies this view.

Sharkey,

> A rock and a brain can both be computing devices, but their respective powers are much different. <

See my response to Disagreeable above for why even if this were true it is largely irrelevant to the debate at hand.

> The rock essentially just performs the identity function <

You know, no offense, but this sort of language reminds me of the medieval Scholastics. “Performs the identity function” seems a wholly unnecessarily convoluted way of saying that a rock is a rock.

brucexs,

> The author of the article does appear to have a computationalist bent, but I would hope the above quote is still a fair summary of mainstream scientific thinking. <

I think that’s probably a good characterization of part of what is known as cognitive science, specifically the part that is closer to computational science. But neurobiology, brain developmental biology, brain molecular biology and brain evolutionary biology — as far as I can see — make zero use of the computer analogy (the partial exception is some misguided soul in evolutionary psychology).

Massimo wrote: "But neurobiology, brain developmental biology, brain molecular biology and brain evolutionary biology — as far as I can see — make zero use of the computer analogy (the partial exception is some misguided soul in evolutionary psychology)."

Massimo is simply wrong. Pick up any issue of Journal of Neuroscience or Neuron and you will see the computer analogy immediately. Let's pick up the current issue of Neuron, for example. "Spatial Segregation of Adaptation and Predictive Sensitization in Retinal Ganglion Cells". The abstract talks about an inferential model and calculations of prior probabilities. This should look familiar to anyone with a computer science background (especially those who have studied machine learning and signal processing).

Neuroscience is a very large field. Massimo very conveniently left out the parts of neuroscience that do use computer analogies. Neurophysiology is very computational. Let's say you have to grab an apple you see in your visual field. The brain uses subtle differences between the images coming from the left and the right eye to estimate how far away the apple is. That is a computational problem. No amount of molecular biology, developmental biology, or evolutionary biology will answer that question. What physiologists and computational neuroscientists are doing is figuring out how information is processed to solve these computational problems. The theories and results are not expressed like a C program or a symbolic processing program as some of the theorists in the 80's envisioned, but they are computational nonetheless. Even at the single-neuron level, the opening of channels and the change of membrane potentials are expressed in a framework of information processing (see "The Biophysics of Computation" by Christof Koch, for example).
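The disparity-to-depth estimate mentioned above can be illustrated with a toy calculation. This is the idealized pinhole-camera relation only; the function name, numbers and single-formula treatment are my own illustration, not anything from the neuroscience literature, and real visual systems solve a far harder version (finding corresponding points, handling noise):

```python
# Toy stereo depth estimate: depth = focal_length * baseline / disparity.
# An idealized pinhole-camera relation, used here only to show that
# "how far away is the apple?" is a well-posed computational problem.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate distance (in meters) to a point seen from two viewpoints."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Eyes ~6.5 cm apart, an assumed 'focal length' of 800 pixels, and an
# apple shifted 10 px between the left and right images:
distance = depth_from_disparity(800, 0.065, 10)
print(round(distance, 2))  # 5.2 (meters)
```

Note the inverse relation: the smaller the disparity between the two images, the farther away the object.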

This is not to say that everything the brain does is computational. This does not mean that you can upload your mind. But computation is integral to neuroscience. There is no branch of neuroscience that makes no use of the computer analogy. The only caveat is that the computer analogy used in neuroscience is not necessarily similar to the architecture of the PCs we are using today. The architecture of the PC is one among many forms of computers that are studied in computer science. Computer scientists talk about analog computers, kernel machines, Boltzmann machines, quantum machines... they are all computation. If you limit your definition of neuroscience to just molecular neurobiology, and if you limit your definition of computation to just mean Turing machines, then yes, Massimo's statement is not entirely wrong. But what's the point of using these super narrow definitions that do not map to what other people call computation and neuroscience? I have no idea.

I suspect one difference between machine computation and biological computation is the relative ability to change your computational algorithms, your initial programming in other words, through learning from your own experience. Also, calculators and brains obviously do solve many of the same problems at much the same levels of difficulty, but that doesn't mean the methodology of their processing systems must be the same. Although if they weren't, I admittedly wouldn't be able to tell you where they're different.

I agree that neural computation is much more fault tolerant and adaptive than the computer programs that we are familiar with. But I think the key insight is to do away with the concept of programs. The concept of a program does not have a straightforward correspondence in the brain. Computation in the brain is specified in a distributed way by the physical structures of neuronal tissues. There is no "code" to read out (this is also why I am not optimistic about mind uploading).

Some people (Massimo?) might say the concept of programless computation is so far removed from Turing machines that it no longer counts as computation. I think that is way too conservative. When you speak to Siri on an iPhone, there is no program in Siri that recognizes your voicing of "call mom". The pattern is learned from your interactions with Siri. This is also how face recognition in iPhoto works. This is also how (many parts of) Google works. This type of computation has been studied by computer scientists since Turing's day, and it is only becoming more popular. It is simply a more general form of computation.
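The "learned rather than programmed" point can be made concrete with a minimal nearest-neighbour classifier. This is a toy sketch of my own (it is certainly not how Siri works): no line of code mentions a rule for any particular category; the recognition behaviour is fixed entirely by the stored examples.

```python
# Minimal 1-nearest-neighbour classifier: the "program" contains no rule
# for recognizing any particular category; behaviour comes entirely from
# the stored training examples, loosely analogous to learned neural structure.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(examples, point):
    """examples: list of (feature_vector, label) pairs."""
    _, label = min(examples, key=lambda ex: distance(ex[0], point))
    return label

# Made-up 2-D "acoustic features" for two spoken commands:
training = [
    ((1.0, 1.2), "call mom"),
    ((0.9, 1.0), "call mom"),
    ((4.1, 3.8), "play music"),
    ((4.3, 4.0), "play music"),
]

print(classify(training, (1.1, 1.1)))  # call mom
print(classify(training, (4.0, 3.9)))  # play music
```

Changing the training data changes what the system recognizes without touching a single line of code, which is the sense in which such computation is "programless."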

I've a joint MSc from UCD in computer science and cognitive science, by which I mean that I've done courses in both AI techniques and post-cognitive cognitive science. See http://postcog.ucd.ie/ for details on the kind of things covered. From what I learned there it would seem quite clear that the only thing computers are good for w.r.t. minds, brains and behaviour is modelling and simulation.

> “Performs the identity function” seems a wholly unnecessarily convoluted way of saying that a rock is a rock.

You're giving me a hard time about "identity function" when your own blog uses "unlimited pancomputationalism" non-ironically? You say you want to learn about areas outside your expertise, how's about you try a little harder to respect the ideas?

The identity function is the trivial computation. It's not interesting per se, but it's a type of computation, and shows the distinction between computers can be made at a level that still classifies rocks and people differently, while identifying both as computational devices.
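The identity-function point can be put in code (my own illustrative framing, not anything the commenter wrote): the identity function is a perfectly well-defined, if trivial, computation, and devices can then be distinguished by the class of functions they can realize rather than by whether they compute at all.

```python
# The identity function is the trivial computation: output equals input.
# It is roughly the most a rock could be said to "compute."
def identity(x):
    return x

# A slightly less trivial computation over the same inputs; a device that
# can realize this distinguishes inputs the identity function cannot.
def parity(n):
    return n % 2 == 0

# Both are computations; they differ in what they can discriminate.
assert identity(42) == 42
assert parity(42) is True
assert parity(7) is False
```

On this framing, rocks and brains can both be classified as computational devices while still being sharply distinguished by their computational power.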

What is ultimately so disappointing about Biological Naturalism (a la Searle) is that it really isn't much of an improvement over the computational theory of mind. The problem with the computational theory of mind is that we can't see how computer programs of any kind can create consciousness at all. The problem is not solved by biological naturalism. We still can't see how the biochemical and biophysical properties of neural tissue can create consciousness. To our best knowledge, brain tissues are not radically different from other biological tissues. All we find in neurons are membrane potentials, electrical fields, magnetic fields, ion concentrations... etc. If any of these physical properties were related to consciousness, we should know something by now. For biological naturalism to work, we would have to expect neural tissues to have properties that are completely unknown to modern physics. Now that doesn't make the idea wrong, but it does make it highly speculative. It is dangerously close to dualism.

While I agree with you that biological naturalism is not helpful, I have come to realise lately that the use of dualism as a pejorative is a bit unfair.

I agree that Cartesian dualism is rubbish. I don't for a minute think that there is a disembodied spirit which somehow interacts with the brain to control what we do.

But that's not the only kind of dualism. In fact, I think the CTM leads to dualism of a kind, and that should not be an automatic fail the way most people seem to think it is.

The dualism I'm talking about is the distinction between hardware and software, between air molecules and a sound wave passing through them, between the atoms making up the codons in a particular physical chromosome and the gene they represent.

In this view, there are two kinds of things: physical objects and abstract objects. Abstract objects are often physically perceived as patterns in the physical world, but they don't literally physically exist.

I think the CTM leads to the view that the mind is essentially such a pattern, but though this might be described as dualism of a sort I think it's entirely different from Cartesian dualism and not obviously wrong.

>But you'd agree that patterns of the brain are contingent on the chemical and biological interactions of very specific types of matter. Without such material, then no such patterns.<

I wouldn't agree actually. An isomorphic pattern could be represented in a computer where the bits that make it up are not material but logical objects in a computer program.

As long as the pattern behaves in the same way, then the substrate doesn't matter. A sound wave can pass from air to water and it's still the same wave. A sound wave can be simulated on a computer program, and while you wouldn't say it's a *sound* wave (which implies that it's a physical vibration), I would say it's still a wave (a pattern which propagates through a medium).

>What is ultimately so disappointing about Biological Naturalism (a la Searle) is that it really isn't much of an improvement over the computational theory of mind.

I'd amplify this by saying that biological naturalism, as Searle applies the ideas to the study of mind and brain, seems to be scientifically sterile compared to computational models (generalized as noted by many posters).

So I hope that at some point Massimo expands on why he is sympathetic to the idea and on what he thinks its empirical content is when applied to the understanding of mind and how it relates to the brain.

As to your specific point, my understanding is that Searle is in agreement with one of the conventional answers to the question: once we understand enough about how brains (and the right kind of machines) produce consciousness, the problem will seem irrelevant, as with the concept of vitalism.

> A sound wave can pass from air to water and it's still the same wave. A sound wave can be simulated on a computer program, and while you wouldn't say it's a *sound* wave (which implies that it's a physical vibration), I would say it's still a wave (a pattern which propagates through a medium). <

I'd say this is where we diverge; the fact that it's a wave in air as opposed to one in water is of importance to me.

>I'd say this is where we diverge; the fact that it's a wave in air as opposed to one in water is of importance to me.<

Ok, but even so, the atoms that compose it one second are not the same atoms that compose it the next. In fact it's not really composed of atoms at all. It's just a self-propagating pattern of molecule displacement.

So why do you think the wave does not continue when it moves from air to water but you appear to accept that it does continue when it moves from *those* air molecules to *these* air molecules?

If the CTM is true, then the mind is such a pattern. I don't claim to have proved this, but I don't see how you can rule it out as a possibility.
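The "same pattern, different substrate" intuition in the exchange above can be sketched as a toy simulation (entirely my own illustration, and a drastic simplification of a real wave): a pulse shape moves along a line of cells, the cells carrying it change from step to step, yet the shape itself is conserved.

```python
# Toy illustration: a "wave" as a pattern propagating through a medium.
# The pulse shape is conserved even though the cells carrying it change,
# just as a sound wave persists while its constituent molecules change.

def step(medium):
    """Shift the pattern one cell to the right (a crude propagation rule)."""
    return [medium[-1]] + medium[:-1]

pulse = [0, 3, 7, 3, 0, 0, 0, 0]   # the pattern at t = 0
later = step(step(pulse))          # the pattern at t = 2

def shape(medium):
    """The substrate-independent 'identity' of the wave: its nonzero profile."""
    return [v for v in medium if v != 0]

# Different cells now hold the pulse, but the shape itself is unchanged:
print(shape(pulse) == shape(later))  # True: same wave, different "molecules"
```

The list of cells stands in for the substrate (air, water, or bits in a simulation); what persists across steps, and across substrates, is only the pattern, which is the analogy being drawn to the mind.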

Disagreeable, common ground is always to be found once the mist of semantic confusion is dispelled.

I take your point that fundamentally there are two levels. As a professional systems developer who has developed many large enterprise systems I struggle endlessly with the problem of how one could create consciousness from software. I cannot even begin to discern a hint of the principle by which this might be done. People talk about it somehow being an emergent property that arises from increasing complexity and connectionism.

The kindest thing I can say in reply is that is elaborate hand waving predicated on the magic incantation 'emergence', stirred in with a liberal dash of wishful thinking.

I want it to be possible and yet I see a fundamental barrier that makes it impossible. Think of it this way. I have never seen or heard a musical instrument. In fact I don't even know what one is. I then learn how to read music scores. I read about classical music, its history and descriptions of the great concert halls. I read a music score note by note and comprehend it analytically. You now pass me another music score and tell me this is a far superior musical work. I also read it note by note. Do you think I could possibly comprehend the superiority of one musical work over the other? There is a level of experience that the note-by-note reading cannot ever convey. Computation is note-by-note reading.

This is physics so I'm out of my knowledge area, but I would say patterns are precisely made up of, or contingent on, atoms and their energies. The pattern you see in the air of the energetic interactions of its molecules is different to the one you see in water, I should think.

I'm actually beginning to lose sight of why this notion of patterns is important. Would you contend, for example by way of thought experiment, that everything valuable about my subjective experience can be captured by an abstract pattern (algorithms) distinct from the pattern of interactions of the substrate, i.e. my biology?

I don't claim to know in detail how consciousness emerges, but then I don't know how a brain performs its information processing either.

And yet, unless you suppose that the universe is uncomputable, it must be possible to make a Turing machine perform the same information processing as a brain.

So far so good, but I haven't established that this machine would be conscious.

But if the machine would have all the same behaviour as a human brain, then it will claim to be conscious and indeed actually believe it is conscious.

(Please don't argue that unconscious systems can't have beliefs - let's just say they can represent propositions, ascribe truth values to those propositions and behave as if those propositions are true)

Every event that happens in your brain will happen in a machine that implements the same information processing. Every question you can ask yourself to test whether you are really conscious will have a direct analogue in a machine doing the same thing. If you were not a biological intelligence but a machine intelligence you wouldn't necessarily even be aware of it! This leaves open the possibility that this is what's going on - that real consciousness is just simulated consciousness, but since there is no such thing as real consciousness in this case we might as well just call simulated consciousness the real thing.

So even though I don't know how a brain processes information in detail, for me the obvious fact that it can is equivalent to the proposition that a Turing machine can, and since I don't see any reason to suppose that consciousness is anything but a (potentially illusory) property of that computation then I don't think there's any reason to doubt that such a machine would be conscious in just the same way that we are. In my view, assuming the contrary can lead only to supernaturalism.

By the way, your analogy to the analysis of music is interesting. I would say it is highly unlikely that you could identify the superior piece of music because it is highly unlikely that you could acquire the necessary expertise without having heard music.

I think a lot of this is because of instinctive hard-wired responses to certain auditory patterns, responses which are only poorly understood. If they were perfectly understood then we might be able to design a function which would rate particular pieces of music.

But that seems unlikely because taste is subjective anyway. It's very unusual that one can say that one piece of music really is better than another. So much of that is informed by culture and associations with one's own life.

But as a proponent of CTM I do believe that such a function is in principle realisable with respect to any one individual's taste preferences. If we took a scan of your brain and simulated it as we played the two pieces of music to it, we might be able to detect which was preferable - and this is how one might in principle rate music without ever having heard any.

>patterns are precisely made up of or contingent on atoms and their energies. The pattern you see in the air of the energetic interactions of its molecules is different to those you see in water, I should think.<

No doubt. But the particular pattern you see in the air at one moment is going to be different to the pattern you see the next. Something is conserved however. That something is the analogy to the mind.

>Would you contend, for example by way of thought experiment, that everything valuable about my subjective experience can be captured by an abstract pattern (algorithms) distinct from the pattern of interactions of the substrate, i.e. my biology?<

Yes, although when I say distinct I would mean so only in a substrate neutral way. The pattern of your mind is in some way isomorphic to the structure of your brain, so not completely distinct.

I think the mind is analogous to software, so it's distinct from the interactions of the substrate in the same way that the logic of a software algorithm is distinct from the arrangement of electrons in a computer's memory.

Most CTM proponents would say that the algorithm has to be physically instantiated and running for you to be conscious. I'm a mathematical platonist, however, so my views are a little more unusual and perhaps this isn't the place to get into that.

This I think is where you need to attack or clarify CTM's account for me to accept it:

The way I see it is human brains are machines that are doing something akin to information processing, i.e. interacting with its environment (which in turn will affect its "processing"). Is it even remotely possible that, for the machine to do the same thing as a human brain in terms of subjective experience, it would have to be in effect like an animal brain, and in turn pretty much a human brain?

That of course is not to say that the human mammalian brain is the only way to instantiate a conscious brain.

--I think the mind is analogous to software, so it's distinct from the interactions of the substrate in the same way that the logic of a software algorithm is distinct from the arrangement of electrons in a computer's memory.--

The way of describing the mind or software is distinct from the hardware, but when you have a computer system running on the machine it's all electrons moving around the place.

I agree with the CTM account. I might explain or interpret it a little differently than some who seek to avoid accusations of dualism at all costs.

>The way I see it is human brains are machines that are doing something akin to information processing, i.e. interacting with its environment (which in turn will affect its "processing").<

I agree. But if what they're doing is information processing then this job could be accomplished by any other machine as long as information is processed in the same way (i.e. same outputs for the same inputs, equivalent logical steps along the way).
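To make "same outputs for the same inputs" concrete, here is a toy sketch in Python (nothing here models a brain; the two sort routines are just illustrative stand-ins for two machines that realise the same function by different internal means):

```python
# Two different implementations of the same computation. If they
# produce the same outputs for the same inputs, they realise the
# same function, regardless of how each works internally.

def sort_by_insertion(xs):
    """Insertion sort: builds the result one element at a time."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def sort_by_merging(xs):
    """Merge sort: splits, recurses, and merges."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Different internal steps, identical input-output behaviour:
for case in ([3, 1, 2], [], [5, 5, 1], list(range(10, 0, -1))):
    assert sort_by_insertion(case) == sort_by_merging(case)
```

The point of the analogy is that at the level of the realised function, the two are interchangeable, even though a step-by-step trace of each would look completely different.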

>Is it even remotely possible that, for the machine to do the same thing as a human brain in terms of subjective experience, it would have to be in effect like an animal brain, and in turn pretty much a human brain?<

I don't see how it is possible that the machine would have to be physically like an animal brain. It may need to be logically like an animal brain, in terms of the structure of the flow of information through it. The notion that it would have to physically be biological is as strange to me as the proposition that a certain algorithm can only be implemented in Java.

>The way of describing the mind or software is distinct from the hardware but when you have a computer system running on the machine it's all electrons moving around the place.<

Agreed. But if you run two different programs, at a physical level you're seeing pretty much the same thing happening. Electrons moving around the place.

In order to capture the difference you have to analyse what's happening at a higher level and recognise the logical patterns the electron flows represent. These same logical patterns could be transferred to a completely different substrate and you'd be left with the same computation you started with.
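A minimal sketch of that substrate-transfer claim, again with toy stand-ins (a two-state parity machine, nothing brain-like): the same logical pattern realised in two different "substrates".

```python
# The same logical pattern - a machine tracking whether it has
# seen an even number of 1s - realised in two different ways.

# Substrate A: a transition table held in a dictionary.
TABLE = {("even", "1"): "odd", ("even", "0"): "even",
         ("odd", "1"): "even", ("odd", "0"): "odd"}

def run_table(bits):
    state = "even"
    for b in bits:
        state = TABLE[(state, b)]
    return state

# Substrate B: the same pattern encoded as arithmetic on a count.
def run_arithmetic(bits):
    ones = sum(1 for b in bits if b == "1")
    return "even" if ones % 2 == 0 else "odd"

# The realisations differ completely; the computation they
# instantiate is the same:
for bits in ("", "1", "1101", "0000", "111"):
    assert run_table(bits) == run_arithmetic(bits)
```

Which physical events carry out the pattern is different in each case; what is conserved across the transfer is the higher-level logical structure, which is the analogue of the mind in the software analogy.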

It comes off to me like you're conflating descriptions with what is actually going on, i.e. software code versus binaries, accounts of subjective experience with actual brain metabolism, or metabolism full stop.

>i.e. software code versus binaries<

I'd actually say the software code is irrelevant. What is important is the logical structure represented by the code and compiled to the binaries. If you have the same algorithm implemented in Java or C++, I'd say it's the same algorithm.

>It comes off to me like you're conflating descriptions with what is actually going on<

I could just as easily say to you that you're conflating descriptions of the low-level physical interactions of atoms with what's actually going on.

The difference is only in which level of description is relevant. CTM holds that it's high level. Biological naturalism holds that it's low level.

But that's getting into pretty hairy metaphysical territory. If you just take it back to an example such as the perfect simulation of a brain which believes it is conscious and professes to have a conscious experience, you should be able to see where I'm coming from. If you allow that such a simulation is possible, then you allow that it's possible to believe you are conscious and behave as if you're conscious without actually being conscious.

But if this is your belief then you have no grounds for thinking that you or anybody else is not such a pseudo-conscious automaton. It's unintuitive, but I personally think that this absolutely demolishes the notion of "real" consciousness as a coherent concept. Simulated consciousness *is* consciousness.

Disagreeable, part of the problem is that I cannot see why we even need consciousness at all. What does consciousness offer that a highly programmed robot with a huge, learnable, modifiable rule-set cannot offer?

Largely answered below, but I'd argue that if the robot is as capable as we are, it's likely to consider itself to be conscious, as it would have the same abilities that we have that lead us to consider ourselves to be conscious.

I'm guessing this was intended as a response to my comment about PhDs who would disagree with you.

I'd have a great deal of interest in hearing why your experiences studying in UCD lead you to doubt the CTM.

However, simply citing your experience and qualifications isn't a very persuasive argument because there are other people with similar or even greater qualifications who disagree with you (and of course many who agree with you).

Oh you're Irish? You hang out with Derek Bridge? The head of the course is a guy called Fred Cummins. He describes himself thus: "I am an unreformed anti-representationalist and I think in terms of dynamic systems theory." I found him very persuasive. At this moment I couldn't effectively condense what I learned, but I was just recommending looking him up.

The post you've responded to was originally a response to scitation, where he says:

I think this is an extraordinary claim and quite controversial. I'd like to know how he came to this conclusion, because I wouldn't - but that could be from a position of ignorance.

In this way it is like every large enterprise computer system I have developed. My systems did not need consciousness. If they displayed consciousness I would have been extremely disconcerted and worried and would have promptly reverted to the earlier version.

Given your definition, why do we need consciousness at all? This would seem to be a wasteful and unnecessary capability.

@Peter We'd need a definition of consciousness for that question to be answerable. Broadly, the assumption is that it's adaptive.

If you break consciousness down to its influence on behaviour, you'd be including stuff like:

General intelligence: allows us to adapt to novel situations and to plan and react accordingly
Qualia: allow us to distinguish different sensations from each other
Introspection: assists in planning and self-understanding
A sense of personal identity: assists in pursuing long-term goals
Emotion: helps to flexibly drive adaptive behaviour and manage choices when goals are in conflict

That kind of stuff. Your enterprise computer systems don't need this, or at least they can get by without them.

Essentially you are saying a program can be written that creates consciousness. This consciousness is in turn programmed to exhibit certain properties. If you can do all that, why not program those properties directly and skip the consciousness part? I still fail to see what additional property consciousness offers.

No, I'm saying that consciousness is essentially the conjunction of these abilities/properties. If you were able to program all of those properties directly, so that the program was every bit as capable as us, it would probably perceive itself as conscious.

This may not be persuasive to you as a religious person, but I would expect anyone committed to philosophical naturalism to agree. It seems very unlikely that we have evolved consciousness if it really is a separate thing from what our information processing abilities allow us to do.

If on the other hand you believe in God, then you can just say that we evolved all our abilities but it is God that gave us consciousness (and that consciousness has a supernatural element).

Disagreeable, >...but it is God that gave us consciousness (and that consciousness has a supernatural element)<

No, I don't believe that at all. God created us through the laws of nature so every facility we have is explainable through evolution and related laws of nature.

There will be natural explanations for everything for the simple reason that the laws of nature are the means that God realises his purpose. Science is religion's best friend, it reveals how God did his work.

Since God created the laws of nature, no other conclusion is possible (if you believe in God).