I was sentence-by-sentence restating the post to which I was replying. Which sentence did I restate incorrectly?

Everything except for the first, basically.

Quote:

Originally Posted by wolfpup

Only if you believe Chalmers. I proposed above that novel emergent properties can develop in the interconnections and/or states of lower-level components that were not found in any form in the components themselves.

Given the lower level properties and their arrangement, do the emergent properties follow necessarily? If I fix the base facts, do I fix the emergent facts, or are there more facts left to fix?

If the former, you don't get the sort of emergence you claim, because then, the emergent facts follow from the base facts---and for every emergent fact, you can state the precise way it follows from the base facts. Thus, that's the story you need to at least provide some plausibility argument for in order to have your claim of the emergence of consciousness be contentful.

If the latter, then the base facts don't determine the emergent facts. Then, of course, consciousness just might come 'round at some point. Somehow, for basically no reason. But also, it's no longer the case that fixing the physical facts ('particles, fields, and their arrangement') fixes all the facts, and physicalism is wrong.

It's not a matter of believing Chalmers or not. He's not stating this out of the blue; he's just clearly articulating what the options are. There's no middle ground. You either eat the cake, or keep it.

Quote:

I note that you're avoiding my cited quote about the important role of computational theory in cognitive science.

I've never doubted the role of computational theory in cognitive science. The problem is just that the model doesn't imply the character of the thing modeled, so computationalism just amounts to a category error. Just because you can model the solar system with an orrery doesn't mean gravity works via wires and gears.

And really, you shouldn't start accusing others of avoiding things.

Quote:

Also, Dreyfus developed a very bad reputation in the AI and cognitive science communities early on that he was never able to shake, despite apparently some level of vindication of a few of his ideas in later years. I can tell you first-hand that contempt for Dreyfus is still prevalent in those communities.

And that's supposed to be arguing for what, exactly? Because computer science people have sour grapes with Dreyfus, you can't criticize computationalism...? Seriously, I can't figure out what your aim is in bringing this up again and again.

Quote:

What is that computation, you ask? Let's ask a hypothetical intelligent alien who happens to know nothing about number systems. The alien's correct answer would be: it's performing the computation that produces the described pattern of lights in response to switch inputs.

OK, so it's actually just the physical evolution that you want to call 'computation' for some reason. And me computing sums using the device isn't computation. In that case, as I pointed out, you're not defending computationalism, but identity theory physicalism. Combined with your notion of strong-but-not-really-that-strong emergence, you don't really have any consistent position to offer at all.

Quote:

"You could do it by computer" as a synonym for "trivial" sounds like something Dreyfus would have said!

Well, thanks!

Quote:

Of course "surprise" by itself isn't a criterion for much of anything, but surprise in the sense that properties that we explicitly denied could emerge from certain processes, like high intelligence or strong problem-solving skills, if and when they actually emerge does mean that we have to re-evaluate our beliefs and assumptions.

It means we were wrong, that's it. It's hard to see how chess, or Jeopardy, translates into simple Boolean logical rules. But the fact that you can do it by computer simply demonstrates that it does; that deciding what next move to make in a game of chess is equivalent to tracking through some ungodly huge Boolean formula that one could explicitly write down.

Nobody can be in any doubt about that. There's no mystery about how chess-playing emerges in computers, and that's precisely because we understand how the lower-level properties lead to the chess-playing behavior. Getting a computer to play chess means we understand how that sort of behavior comes about, whereas even if a computer were conscious, we still wouldn't have the faintest clue about how consciousness comes about. We have no idea how consciousness reduces to some Boolean formula, some program, some algorithm. Blithely positing that 'it emerges' simply sweeps our ignorance under the rug, when the only approach that has any hope of getting us ahead on the problem is meeting it head-on.
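To make that concrete, here's a toy sketch (my own made-up three-bit 'game', obviously nothing to do with actual chess): any finite decision rule, however it's written, can be flattened into an explicit Boolean formula in disjunctive normal form, and the two provably agree on every input.

```python
from itertools import product

def move(a, b, c):
    """Some arbitrary next-move rule, written as ordinary code."""
    return (a and not c) or (b and c)

# Flatten the same rule into a disjunctive normal form: one AND-term
# for each input combination on which the rule says True.
terms = [(a, b, c) for a, b, c in product([0, 1], repeat=3) if move(a, b, c)]

def formula(a, b, c):
    """The explicit Boolean formula: an OR over the listed AND-terms."""
    bits = (a, b, c)
    return any(all(bits[i] == t[i] for i in range(3)) for t in terms)

# The written-down formula and the original rule agree everywhere.
print(all(bool(move(*x)) == formula(*x) for x in product([0, 1], repeat=3)))  # True
```

For chess the formula would be ungodly huge, as said above, but nothing about the construction changes.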

And seriously, the fact that you have repeatedly refrained from responding to the fact that your argument disproves your own mind seems telling to me - specifically, it tells me that you have no refutation, but you're carrying on anyway because you'd rather keep arguing; arguing is fun.

Last edited by begbert2; 05-22-2019 at 01:04 PM.
Reason: dammit what is with the typos today? I'm going to lunch.

Every computer with an operating system has at least some form of "self-awareness" in at least the most basic, literal sense of the term. The whole point of an operating system is to keep track of and manage what the computer itself is doing.

That is really stretching the meaning of the concept, though. The operating system is an automoron that provides an environment for useful processes to run in. It really has zero understanding of what the processes are doing, only that they have specific needs that have to be met and that shit has to be cleaned up when they gracefully exit or explode all over the place.

It is more analogous to involuntary bodily functions like the beating heart, filtering liver and rippling GI tract that keep the biological creature operative. Self-awareness seems to be some sort of instinct-based adjunct to the simple or complex neurological function of the beast, and it is not even clear whether it enhances that functionality in any way other than to supply impetus to continue.

And seriously, the fact that you have repeatedly refrained from responding to the fact that your argument disproves your own mind seems telling to me

I didn't, because there is no way, shape or form in which anybody could interpret my argument as having any such implication.

So fine, one last time:

Computationalism requires that, in order to give rise to a mind M, a brain B needs to implement a certain computation CM.

A brain is a physical system.

A computation is a formal object that we can take to be, without loss of generality, defined by a partial function over natural numbers (a definition that is equivalent to the one via Turing machines).

B must implement CM in an objective, mind-independent way, as otherwise, whether or not B implements CM depends on whether B implements CM, and we lapse into vicious circularity.

Binary addition is such a function. So is my function f'.

In order for B to implement CM, it must more broadly be possible for a given physical system D to implement some certain computation C.

Implementing a computation C by means of D entails that one can use D to compute C (just as, for example, one uses a pocket calculator to calculate, or a chess computer to compute chess moves).

It is possible to specify a system D such that one can use it to implement f (binary addition) and f'.

f and f' are distinct computations (follows from the definition of computation above: they're different functions over the natural numbers).

The difference between D implementing f and D implementing f' is one of interpretation: the states of D, or aspects thereof, are interpreted as symbolic vehicles whose semantic content pertains to the formal specification of either f or f'.

The process applied to use D as implementing both f and f' is completely general, and can be performed on every system claimed to perform some computation C in order to use it to compute C'.

Interpretation is a mental faculty (as in, a faculty our minds do have, not a property that necessarily only minds can have).

Whether a system D implements a computation is thus dependent on mental faculties.

All mental faculties, on computationalism, are due to some computation CM.

Whether D implements a computation is thus due to the particulars of CM.

By specification, whether B implements CM is thus due to the particulars of CM.

Hence, computationalism lapses into vicious circularity.

There. I hope that makes things clear. It should, in particular, be obvious now that nothing about the impossibility of minds in general is implied; merely that minds, which clearly do exist, are not computational in nature, that is, are not produced by the brain implementing a certain computation.
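For the skeptical, the interpretation step can even be written down mechanically. This is only a toy sketch of my box-of-switches-and-lights example (the table and helper names are just for illustration): one fixed physical input-output table, read under two different symbol mappings, yields two distinct functions.

```python
# One fixed physical behavior: switch levels -> light levels, wired as a
# 1-bit adder. This table never changes; only its reading does.
physical_table = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),  # lights give (carry, sum)
}

def read_as(table, decode):
    """Interpret raw switch/light levels as symbols via a mapping."""
    return {tuple(decode[s] for s in inp): tuple(decode[s] for s in out)
            for inp, out in table.items()}

# Interpretation 1: high level means '1' -- the device computes binary addition f.
f = read_as(physical_table, {0: 0, 1: 1})

# Interpretation 2: high level means '0' -- the very same device computes
# a different function f' over the same numbers.
f_prime = read_as(physical_table, {0: 1, 1: 0})

print(f[(1, 1)])        # (1, 0): 1 + 1 = 10 in binary
print(f_prime[(1, 1)])  # (1, 1): a different value, same physics
print(f == f_prime)     # False: two distinct computations, one device
```

Nothing in the device picks out which of the two readings is the 'real' one; that is exactly the work the interpretation does.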

Now, I believe you won't give up on the 'but CM just interprets itself'-issue so easily. In that case, I have good news for you, because you can prove the existence of god!

For, god is an omnipotent being. Omnipotence entails the possibility to create gods. Consequently, god may just create him/hir/herself. And that's it!

Logically, this is perfectly analogous to CM interpreting B as computing CM.

It doesn't appear possible with the technology we currently have. We have really kick-ass computers, but they are not self-aware. So, there is no reason to assume that your "consciousness" would survive death. Even if we had the technology necessary to convert RNA based memory into 1's and 0's, it would just be data. It wouldn't be a person.

"Ghost in a Shell" had science approaching it in a different way. They perfected something they called a "Cerebral Salvage", which was implanting a human brain in a completely artificial body and integrating them so that they worked together just like our brains and organic bodies work together.

That is really stretching the meaning of the concept, though. The operating system is an automoron that provides an environment for useful processes to run in. It really has zero understanding of what the processes are doing, only that they have specific needs that have to be met and that shit has to be cleaned up when they gracefully exit or explode all over the place.

It is more analogous to involuntary bodily functions like the beating heart, filtering liver and rippling GI tract that keep the biological creature operative. Self-awareness seems to be some sort of instinct-based adjunct to the simple or complex neurological function of the beast, and it is not even clear whether it enhances that functionality in any way other than to supply impetus to continue.

Well, I never said computers were very smart.

It really comes down to what is actually meant by self-awareness - I generally think of it as meaning that the entity in question is continuously aware of its ongoing calculative processes, and has the capability to interpret and make decisions based on them. The sticking point, of course, is the "aware" part; we humans have an 'in the driver's seat' perspective on the world that we're hard-pressed to explain or even describe. I find myself wondering whether even a simple computer program that assesses inputs, outputs, and its own data storage might, from its own perspective, have a 'driver's seat view'. I mean, we do program them to - they have a main execution loop that is constantly running back and forth through its event handlers and dealing with inputs as they come. The only thing lacking is that the main loop doesn't have access to the data handled by the event handlers - but at what level would the 'driver's seat' manifest? One could theorize that the self-contained execution loop gives rise to the 'driver's seat view' but that the 'view' includes the data handlers as well.
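A skeleton of the sort of program I mean (all names mine, purely illustrative): a main loop that does nothing but pull inputs and dispatch them to handlers, while the handlers are the only code that touches the stored data.

```python
from collections import deque

# Pending inputs and the program's own data storage.
events = deque(["sensor:light", "sensor:sound", "quit"])
state = {"log": []}

def handle(event, state):
    """An event handler: assess the input, update stored data."""
    state["log"].append(event)
    return event != "quit"  # keep running unless told to stop

def main_loop(events, state):
    """The self-contained execution loop, running through its handlers."""
    while events:
        if not handle(events.popleft(), state):
            break

main_loop(events, state)
print(state["log"])  # ['sensor:light', 'sensor:sound', 'quit']
```

Note that `main_loop` itself never looks inside `state` - only the handlers do - which is exactly the partitioning in question.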

This is, of course, all frivolous theorizing, but one thing that stands out to me is that I've never heard any other explanation for where or how that 'driver's seat view' originates. I mean sure, there's bullshit about souls, but that just kicks the can down the street - how do souls do it, mechanically speaking? Whether material or spiritual, the process is being carried out somehow.

And now, on to attempting to see if HMHW's explanation makes some sense to poor old simple me, and to see if it accomplishes the literally impossible and draws a distinction between the function of a physical brain and its functionally exact copy.

And now, on to attempting to see if HMHW's explanation makes some sense to poor old simple me, and to see if it accomplishes the literally impossible and draws a distinction between the function of a physical brain and its functionally exact copy.

The way you frame this already presupposes a speculative and contentious particular view of consciousness, known as functionalism. But functionalism, while it has a large following, is hardly the only game in town when it comes to consciousness; even the straightforwardly physicalist account isn't functionalist. So on most current ideas of how consciousness works, that it's the function of a physical brain is simply false.

I think that both you and wolfpup should take a look at Integrated Information Theory, which posits that consciousness is either due to or constituted by the information distributed among the parts of a system in a certain way. In a simplified sense, integrated information can be thought of as the amount of information the whole of a system has over and above that which is locked in its parts. IIT then postulates that this is what consciousness either correlates with or comes down to.
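As a crude numerical toy (this is plain mutual information, a stand-in for IIT's actual phi measure, which is far more involved), the 'over and above the parts' idea looks like this for two binary units:

```python
from math import log2

def entropy(dist):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

x = {0: 0.5, 1: 0.5}  # marginal distribution of unit 1
y = {0: 0.5, 1: 0.5}  # marginal distribution of unit 2

# Perfectly correlated units: knowing one fixes the other.
joint_corr = {(0, 0): 0.5, (1, 1): 0.5}
print(entropy(x) + entropy(y) - entropy(joint_corr))   # 1.0 bit beyond the parts

# Independent units: the whole is exactly the sum of its parts.
joint_indep = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(entropy(x) + entropy(y) - entropy(joint_indep))  # 0.0
```

The correlated system carries a bit of information that isn't locked in either part alone; the independent one carries none, and that surplus is the sort of quantity IIT builds on.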

Now, I think in doing so, IIT is massively begging the question. But that doesn't matter right now. What's important is that IIT gives a scientifically respectable reductive account of consciousness---it shows us exactly how (what it purports to amount to) consciousness emerges from the simpler parts, and gives us objective criteria for its presence. So, IIT provides the sort of story that's needed to make wolfpup's emergence reasonable.

Furthermore, on IIT, consciousness isn't a functional property of the brain. It's a property relating to the correlations between the brain's parts, or regions. So functionalism isn't really the sort of obvious truth begbert2 seems to take it to be, either.

More interestingly, perhaps, is that on IIT, computationalism comes out straightforwardly false. While a brain has a high degree of integrated information, the typical computer, due to its modular architecture, has very little to none of it. So a computer implementing the simulation of a brain won't lead to conscious experience, even if the brain it simulates does.

This suffices to falsify many of the seemingly 'obvious' claims in this thread and shows that other options are possible.

The difference between D implementing f and D implementing f' is one of interpretation: the states of D, or aspects thereof, are interpreted as symbolic vehicles whose semantic content pertains to the formal specification of either f or f'.

This is nonsense. I'm a computer programmer, and there is a literal, practical, real difference between implementing f and f'. This is true regardless of whether the outputs are the same for a given input. An obvious example is sorting; there are numerous different algorithms all of which produce the same output: a sorted list. Quicksort, Mergesort, Insertion Sort, Bubble Sort, Stoogesort. However, they differ internally quite a bit, and operate quite differently despite producing the same output, going through states that are entirely and objectively distinct from one another. The differences between them are NOT merely a matter of 'interpretation'.
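A sketch of the point (using two simple sorts instead of Quicksort for brevity; the trace-recording helpers are mine): same input, same output, objectively different sequences of internal states.

```python
def bubble_sort(xs):
    """Bubble sort, recording every intermediate list state."""
    xs = list(xs)
    trace = [tuple(xs)]
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                trace.append(tuple(xs))
    return xs, trace

def selection_sort(xs):
    """Selection sort, recording every intermediate list state."""
    xs = list(xs)
    trace = [tuple(xs)]
    for i in range(len(xs)):
        m = min(range(i, len(xs)), key=lambda k: xs[k])
        if m != i:
            xs[i], xs[m] = xs[m], xs[i]
            trace.append(tuple(xs))
    return xs, trace

data = [3, 1, 4, 1, 5]
out_a, trace_a = bubble_sort(data)
out_b, trace_b = selection_sort(data)
print(out_a == out_b)      # True: identical output for identical input
print(trace_a == trace_b)  # False: the machines pass through different states
```

The traces differ even on this five-element list, and no 'interpretation' is needed to tell them apart - you can read the difference off the memory states.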

Quote:

Originally Posted by Half Man Half Wit

By specification, whether B implements CM is thus due to the particulars of CM.

Hence, computationalism lapses into vicious circularity.

I'm not seeing any circularity, vicious or not. You're basically saying that if you reach into a bag of red and blue balls and happen to pull out the red one, the fact that it's red (as opposed to being blue) is due to the fact it's a red one. It's not a particularly helpful observation, but it's not circular in any sort of problematic way.

Or to put it in terms of computational processes: if you have two physically identical machines, one running Quicksort and one running Stoogesort, then the only thing that makes the Quicksort one the Quicksort one is the fact that it's running Quicksort. It is distinct from the Stoogesort one (among other things, it's a lot faster), but the difference in what it is is indeed a result of it being what it is.

None of this is self-contradictory.

And honestly, even if you were right about computationalism being impossible (which you're not), that wouldn't mean that brains can't be copied to computers - it would just mean that it's equally impossible for existing computers to be computational! Which would mean that existing computers are still exactly as capable of supporting minds as brains are, because neither of them is doing the thing that's impossible. Which makes sense, because existing computers aren't impossible, and thus if you prove that something is impossible to do, they're not doing it.

Quote:

Originally Posted by Half Man Half Wit

There. I hope that makes things clear. It should, in particular, be obvious now that nothing about the impossible of minds in general is implied; merely, that minds, which clearly do exist, are not computational in nature, that is, are not produced by the brain implementing a certain computation.

Your argument (if it worked, which it doesn't) doesn't just disprove minds being computational - there's nothing about your argument that's specific to mind. It just proves that nothing is computational! Not brains, not computers, nothing! (If the argument worked, which it doesn't.)

Quote:

Originally Posted by Half Man Half Wit

Now, I believe you won't give up on the 'but CM just interprets itself'-issue so easily. In that case, I have good news for you, because you can prove the existence of god!

For, god is an omnipotent being. Omnipotence entails the possibility to create gods. Consequently, god may just create him/hir/herself. And that's it!

Logically, this is perfectly analogous to CM interpreting B as computing CM.

It isn't equivalent at all, of course. Your god argument is trying to pull itself up by its bootstraps; the god in question doesn't even exist until it exercises its powers. The brainstate, on the other hand, does exist. Cognition doesn't predate the brainstate or cause the brainstate; the brainstate exists in the physical or simulated realm, supported by the physical or simulated physics of the world it exists in. Cognition, on the other hand, is the byproduct of the ongoing brain state - the same way that sound is the byproduct of an ongoing vibration in a speaker's membranes.

None of this is analogous to nothing creating something from nothing, none of it is circular, and none of it is self-contradictory.

It's referring to various studies that show you can take RNA from an animal that has learned a task, inject it into a different animal (same species) and successfully test for the trained task in the second animal.

There are other studies that don't specifically say anything about RNA, but have shown the ability of isolated individual Purkinje cells to learn temporal sequences, which means our biology is using multiple methods for learning.

The way you frame this already presupposes a speculative and contentious particular view of consciousness, known as functionalism. But functionalism, while it has a large following, is hardly the only game in town when it comes to consciousness; even the straightforwardly physicalist account isn't functionalist. So on most current ideas of how consciousness works, that it's the function of a physical brain is simply false.

I think that both you and wolfpup should take a look at Integrated Information Theory, which posits that consciousness is either due to or constituted by the information distributed among the parts of a system in a certain way. In a simplified sense, integrated information can be thought of as the amount of information the whole of a system has over and above that which is locked in its parts. IIT then postulates that this is what consciousness either correlates with or comes down to.

Now, I think in doing so, IIT is massively begging the question. But that doesn't matter right now. What's important is that IIT gives a scientifically respectable reductive account of consciousness---it shows us exactly how (what it purports to amount to) consciousness emerges from the simpler parts, and gives us objective criteria for its presence. So, IIT provides the sort of story that's needed to make wolfpup's emergence reasonable.

Furthermore, on IIT, consciousness isn't a functional property of the brain. It's a property relating to the correlations between the brain's parts, or regions. So functionalism isn't really the sort of obvious truth begbert2 seems to take it to be, either.

More interestingly, perhaps, is that on IIT, computationalism comes out straightforwardly false. While a brain has a high degree of integrated information, the typical computer, due to its modular architecture, has very little to none of it. So a computer implementing the simulation of a brain won't lead to conscious experience, even if the brain it simulates does.

This suffices to falsify many of the seemingly 'obvious' claims in this thread and shows that other options are possible.

By "functional" I just meant "it works at all". A non-functional brain is one that doesn't have a mind in it.

Don't try to impute to me any particular cognitive model. All my position requires are the following three facts:

That's it. That's all I need to prove that it's theoretically possible to emulate minds - you may have to emulate everything in the world with even a tangential physical effect on the brain (which could include the entire universe!), but that's still theoretically possible, and thus you provably can theoretically emulate minds on a computer.

Proven, that is, unless you can disprove one of the three premises. And note that I don't care how the brains create minds. It doesn't matter. I don't care. It's utterly irrelevant. All that matters to me is that brains produce minds at all.

* Actually I don't even require the minds to exist entirely in the physical realm - they just have to exist in some realm that has reasonably consistent mechanistic rules. Which isn't really much to ask, because nothing can possibly work or even consistently exist in a realm without reasonably consistent rules.

It's referring to various studies that show you can take RNA from an animal that has learned a task, inject it into a different animal (same species) and successfully test for the trained task in the second animal.

There are other studies that don't specifically say anything about RNA, but have shown the ability of isolated individual Purkinje cells to learn temporal sequences, which means our biology is using multiple methods for learning.

Note: this is all distinct from the "normal" learning process that includes epigenetic changes to produce proteins that maintain the synapse.

This is nonsense. I'm a computer programmer, and there is a literal, practical, real difference between implementing f and f'.

Then it should be easy for you to point out what those differences are in the example I gave, where the physical system is completely identical in each case, and yet, can be taken to implement both f and f'.

Quote:

I'm not seeing any circularity, vicious or not. You're basically saying that if you reach into a bag of red and blue balls and happen to pull out the red one, the fact that it's red (as opposed to being blue) is due to the fact it's a red one.

No. My claim is, a system only computes once it is interpreted as computing. Computationalism says that only computations can interpret things (or exercise any other mental faculty). Thus, in order to compute, some computation must interpret a system as computing. But this is a vicious circle: if physical system P1 must be interpreted as computing C1 by system P2, and the only way it can do so is by implementing C2, then P2 must first be interpreted by P3 via implementing C3 as computing C2, and so forth.

Obviously, P1 cannot interpret P1 as implementing C1, as to do so, it would have to already be implementing C1 to do the interpreting.

That's it. That's all I need to prove that it's theoretically possible to emulate minds - you may have to emulate everything in the world with even a tangential physical effect on the brain (which could include the entire universe!), but that's still theoretically possible, and thus you provably can theoretically emulate minds on a computer.

Proven, that is, unless you can disprove one of the three premises.

There are lots of hidden premises in your stance, but for now, I leave you with the fact that IIT is a theory satisfying your three premises on which it is nevertheless false that computation can emulate minds.

Then it should be easy for you to point out what those differences are in the example I gave, where the physical system is completely identical in each case, and yet, can be taken to implement both f and f'.

I don't know what you mean by "identical". If you mean "identical at the level of which electron is moving down which wire at the same time", then you can't be implementing both f and f' at the same time and having them both producing the same single output. It's literally impossible. It's sort of like trying to pat your head and rub your belly button at the same time with the same hand. Or with two hands that are doing exactly the same actions on the same identical body, as the case may be.

Does your argument rely on something impossible?

Seriously, when we talk about calculations being a 'black box' where you can't tell what's going on inside it, it doesn't mean that the internal calculations are experiencing multiple different program states simultaneously like Schrödinger's cat. That's bizarre.

Quote:

Originally Posted by Half Man Half Wit

No. My claim is, a system only computes once it is interpreted as computing.

Quote:

Originally Posted by Half Man Half Wit

Minds are the only sorts of things we know of that are capable of interpretation, so yes, there's a little bit (you know, its key point) about my argument that's specific to mind.

So what you're saying is that if everyone who knows about a computer dies, the computer magically stops working, because its functioning depended on the "interpretation" (which only human brains can do) of someone who knew about it.

There are lots of hidden premises in your stance, but for now, I leave you with the fact that IIT is a theory satisfying your three premises on which it is nevertheless false that computation can emulate minds.

That's less a disproof of my argument and more a disproof of IIT - or your interpretation of IIT, as the case may be.

One could theorize that the self-contained execution loop gives rise to the 'driver's seat view' but that the 'view' includes the data handlers as well.

If there is a “driver's seat view”, it would have to be the program's top-level process that handles major data traffic control. Except, even that process is just making decisions on abstract symbols that it receives from lower-level processes, without any real understanding of what they mean, apart from what the application parametrics tell it about how to handle specific data. An entire program that seems so useful and sometimes even collaborative is just a massive rat's nest of Half Man Half Wit's box-of-switches-and-lights with all the switches and lights obfuscated into nanoscale silicon and metal traces. You can make it vast, but the fundamental structure is no different.

Quote:

This is, of course, all frivolous theorizing, but one thing that stands out to me is that I've never heard any other explanation for where or how that 'driver's seat view' originates. I mean sure, there's bullshit about souls, but that just kicks the can down the street - how do souls do it, mechanically speaking? Whether material or spiritual, the process is being carried out somehow.

I put forth already that the most sensible view is that this elusive thing is merely a manifestation of the survival instinct that is hardwired into critters (of which we are). It would explain the whole "immortal soul" concept, as in, “I don't want to die, so it makes me happy to believe in an ectoplasmic part of me that will persist forever in the Elysian Fields (or Asphodel, or Og forbid, Tartarus)”, and so far, I have yet to hear a more satisfying explanation.

Quote:

And now, on to attempting to see if HMHW's explanation makes some sense to poor old simple me, and to see if it accomplishes the literally impossible and draws a distinction between the function of a physical brain and its functionally exact copy.

We might find out once we are able to actually accomplish that, but as has been said, the function of a program is defined by externalities. Two exact copies of the same logic-processing system will only function exactly the same way for exactly the same set of inputs. Consider that the human brain is subjected to a constant, roiling stew of hormonal inputs, and unless you can replicate the biochemistry with precision, there will be distinctions. Not to mention the ever-present survival instinct.

When the word "mind" or "cognition" is used, is it assumed that consciousness is always included within those terms, or is "mind" and "cognition" ever assumed to be just the processing less conscious experience?

If there is a “driver's seat view”, it would have to be the program's top-level process that handles major data traffic control. Except, even that process is just making decisions on abstract symbols that it receives from lower-level processes, without any real understanding of what they mean, apart from what the application parametrics tell it about how to handle specific data. An entire program that seems so useful and sometimes even collaborative is just a massive rat's nest of Half Man Half Wit's box-of-switches-and-lights with all the switches and lights obfuscated into nanoscale silicon and metal traces. You can make it vast, but the fundamental structure is no different.

Our current programs keep things partitioned and isolated for simplicity and maintainability. That's not strictly necessary - you could have a program's main loop running through the input, processing, storage-updating, and output phases directly each time.

Or you could note that the thread(s) of execution do run through all those parts each time, constantly, moving in and out of all the layers and back again. It's really a question of how (and where) the 'seat of consciousness' manifests, and what it 'feels like' from the inside - and whether the actual implementation being partitioned interferes with that at all. (Assuming it manifests in the first place, of course.)

Quote:

Originally Posted by eschereal

I put forth already that the most sensible view is that this elusive thing is merely a manifestation of the survival instinct that is hardwired into critters (of which we are). It would explain the whole "immortal soul" concept, as in, “I don't want to die, so it makes me happy to believe in an ectoplasmic part of me that will persist forever in the Elysian Fields (or Asphodel, or Og forbid, Tartarus)”, and so far, I have yet to hear a more satisfying explanation.

But how does "survival instinct" actually "manifest", physically speaking? Some part of the entity constantly checking the senses for threats, conferring with the memory about the threats, and prompting reactions to the threats?

Survival is just a goal. A goal that is conducive to there being future generations, yes, but the thing that has the goal is what I'm curious about. The thing that has the awareness of the situation to see threats and avoid them. How does it work?

We might find out once we are able to actually accomplish that, but as has been said, the function of a program is defined by externalities. Two exact copies of the same logic-processing system will only function exactly the same way for exactly the same set of inputs. Consider that the human brain is subjected to a constant, roiling stew of hormonal inputs, and unless you can replicate the biochemistry with precision, there will be distinctions. Not to mention the ever-present survival instinct.

If you're going to emulate minds in the computer you have two choices: let the minds observe and react to the computer's inputs directly, or build a virtual "Matrix" (referring to the movie) for them to exist within. I'm thinking the Matrix approach would be way more pleasant for them - computers themselves tend to be completely hamstrung with regard to input and output. Very 'I Have No Speaker and I Must Scream' kind of thing. Plus of course the Matrix approach lines up nicely with the simplest, most brainless way to go about emulating minds - emulate the entire freaking room the person's in, and get the brain (and mind) as a bonus. Expanding that to include a massive multiplayer world for them to walk around in is just a problem of scale.

Once you've got your Matrix for the minds to live in, the minds ought to be able to get all the inputs they're accustomed to just fine, presuming you designed the Matrix properly and with sufficient detail. Of course, the whole goal here was to let the minds live forever, so once you have things all properly uploaded you'll probably want to tweak a few things rather than accurately simulating the ravages of time and all. This would of course cause the minds in the simulation to diverge in thought and action from the originals, but really, wasn't that the point all along?
169

When the word "mind" or "cognition" is used, is it assumed that consciousness is always included within those terms, or are "mind" and "cognition" ever assumed to be just the processing, minus conscious experience?

I can't speak for anyone else, but I believe the terms all refer to the "seat of consciousness" - the "I" in "I think therefore I am". When we look out at the world through our eyes, the mind/cognition/consciousness is the thing doing the looking.
170

I can't speak for anyone else, but I believe the terms all refer to the "seat of consciousness" - the "I" in "I think therefore I am". When we look out at the world through our eyes, the mind/cognition/consciousness is the thing doing the looking.

But it seems like we could have a brain that successfully models the environment and produces appropriate behavior without having conscious experience.

I never liked the whole zombie argument before, but I think I see what that argument is getting at.
171

But it seems like we could have a brain that successfully models the environment and produces appropriate behavior without having conscious experience.

I never liked the whole zombie argument before, but I think I see what that argument is getting at.

I'm of the personal opinion that it's incoherent to think that full human reaction and interaction can be achieved by a so-called "zombie" - the mere act of observing the world, interpreting the world, and reacting to the world in an ongoing and consistent way requires that the entity be aware of its environs, itself, and its opinions and memories, in the same way that a car needs some sort of engine to run. (Could be a hemi, could be a hamster wheel, but it's gotta be something.)
172

I'm of the personal opinion that it's incoherent to think that full human reaction and interaction can be achieved by a so-called "zombie" - the mere act of observing the world, interpreting the world, and reacting to the world in an ongoing and consistent way requires that the entity be aware of its environs, itself, and its opinions and memories, in the same way that a car needs some sort of engine to run. (Could be a hemi, could be a hamster wheel, but it's gotta be something.)

I just remembered why I didn't like the zombie idea: it requires the person to be identical except for experience.

But what I was picturing in my mind was that the operational attributes of the brain (e.g. modeling the environment, making decisions towards a goal, etc.) can be independent of the conscious experience, if the conscious experience is just a layer above that maybe only influences deciding on the goal (for example).

Responding to your post:
There are examples where people seem to operate correctly in their environment but don't have awareness (e.g. sleep walking).
173

I just remembered why I didn't like the zombie idea: it requires the person to be identical except for experience.

But what I was picturing in my mind was that the operational attributes of the brain (e.g. modeling the environment, making decisions towards a goal, etc.) can be independent of the conscious experience, if the conscious experience is just a layer above that maybe only influences deciding on the goal (for example).

I dunno, I feel like my conscious influence on my actions is pretty strong. I don't feel like a passive passenger in my own head (though I suppose my subconscious could just be pretending to let me lead, like my mom does with my dad).

Quote:

Originally Posted by RaftPeople

Responding to your post:
There are examples where people seem to operate correctly in their environment but don't have awareness (e.g. sleep walking).

There's pretty good reason to think that the human brain has two actively running processes (at least) which occasionally compete for control. And apparently if you mess with the brain physically you can split it down the middle and get two different cognitions operating in the brain at once!

I dunno, I feel like my conscious influence on my actions is pretty strong. I don't feel like a passive passenger in my own head (though I suppose my subconscious could just be pretending to let me lead, like my mom does with my dad).

The more I ponder this stuff, the more I question that assumption. When I look at my behavior patterns and the behavior patterns of people I know, it sure looks like we are operating on machinery that is significantly and consistently driven by the same patterns over and over.

Even though it does seem like we can react to our environment and make choices, it sure seems like that choice process is much more of a formula highly constrained by the machinery (due to nature+nurture), as opposed to an open ended consciousness based selection process.

The only reason I would make a different decision (than the ones I typically make) is if there was an explicit input identifying the fact that a non-standard decision is being targeted/tested and therefore I should choose that path to prove it can be done (based on my internal motivation that would push me towards even evaluating that condition).
175

The more I ponder this stuff, the more I question that assumption. When I look at my behavior patterns and the behavior patterns of people I know, it sure looks like we are operating on machinery that is significantly and consistently driven by the same patterns over and over.

Even though it does seem like we can react to our environment and make choices, it sure seems like that choice process is much more of a formula highly constrained by the machinery (due to nature+nurture), as opposed to an open ended consciousness based selection process.

The only reason I would make a different decision (than the ones I typically make) is if there was an explicit input identifying the fact that a non-standard decision is being targeted/tested and therefore I should choose that path to prove it can be done (based on my internal motivation that would push me towards even evaluating that condition).

You say that like it doesn't make perfect logical sense for people to fall into patterns. In actual fact I consciously choose to maintain my patterns, because I like the same stuff today as I did yesterday.
176

You say that like it doesn't make perfect logical sense for people to fall into patterns. In actual fact I consciously choose to maintain my patterns, because I like the same stuff today as I did yesterday.

I'm not passing a value judgement on it, I'm just saying that my analysis is leading me to believe that the patterns that drive us seem like they are stronger and further below the conscious level than I previously assumed.
177

I'm not passing a value judgement on it, I'm just saying that my analysis is leading me to believe that the patterns that drive us seem like they are stronger and further below the conscious level than I previously assumed.

I guess my only real response is, brains be complicated, yo!

Though, to stay on-topic, complicated doesn't mean uncopyable. It just means that when we replicate all the neuron-states and chemical soups, we might not know what they're doing even as we move them over.

One fun (though debatably ethical) thing we could do once we were emulating the brains would be to sic an AI on them and have it make multiple copies of the emulated brains and selectively tweak/remove physical elements and compare ongoing behavior afterward. We could find out which aspects of our brain's physicality are necessary for proper mind function real quick. (Hmm, removing all the blood had an adverse effect. Guess we needed that. Next test!)

Given enough whittling we might be able to emulate a mind without emulating the whole brain - just the parts and processes that actually matter. (With the brain matter's physical weight or the skull enclosing it being possibly superfluous factors that could be removed, for example.) In this way we might be able to emulate minds more efficiently than fully emulating all the physical matter in the vicinity.
178

Given the lower level properties and their arrangement, do the emergent properties follow necessarily? If I fix the base facts, do I fix the emergent facts, or are there more facts left to fix?

If the former, you don't get the sort of emergence you claim, because then, the emergent facts follow from the base facts---and for every emergent fact, you can state the precise way it follows from the base facts. Thus, that's the story you need to at least provide some plausibility argument for in order to have your claim of the emergence of consciousness be contentful.

If the latter, then the base facts don't determine the emergent facts. Then, of course, consciousness just might come 'round at some point. Somehow, for basically no reason. But also, it's no longer the case that fixing the physical facts ('particles, fields, and their arrangement') fixes all the facts, and physicalism is wrong.

It's not a matter of believing Chalmers or not. He's not stating this out of the blue; he's just clearly articulating what the options are. There's no middle ground. You either eat the cake, or keep it.

If an artificial neural network like the ones used in AlphaGo starts from essentially zero skill and proceeds to play chess or Go at a championship level after a period of deep learning, where in the original base configuration can you find any "Go-like" strategy knowledge? Trivially, the game rules are built into the program, and trivially, one might guess that building neural connections might lead to some interesting phenomena, but what could you possibly see in the components in their initial state that could lead you to make confident predictions about skill at that particular game?

It would be fair to ask, of course, what the developers saw in it, and why they built it that way. The answer is that they saw only a general-purpose learning mechanism, not something that bore any of the specific primordial traits of what they hoped to achieve. Just the same way as they built a massively parallel general-purpose computer system to run it on. What actually came together was something qualitatively new, and something that many had believed was at least another decade away.

Quote:

Originally Posted by Half Man Half Wit

I've never doubted the role of computational theory in cognitive science. The problem is just that the model doesn't imply the character of the thing modeled, so computationalism just amounts to a category error. Just because you can model the solar system with an orrery doesn't mean gravity works via wires and gears.

(Emphasis mine.) Oh, my. You absolutely certainly have done exactly that, many many times throughout this thread:

Well, I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can't itself be computational. https://boards.straightdope.com/sdmb...2&postcount=34

The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it's just that the rest of the world is a bit slower to catch up with the second one. https://boards.straightdope.com/sdmb...0&postcount=59

Ignoring the incorrect assertions about Putnam that I dispelled earlier, waffling over "category errors" is disingenuous and meaningless here. The position of CTM isn't that computational theories help us understand the mind in some vague abstract sense; the position of CTM is that the brain performs computations, period, full stop -- as in the basic premise that intentional cognitive processes are literally syntactic operations on symbols. This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that "the paradigm of machine computation" became, over a thirty-year period, a "deep and far-reaching" theory in cognitive science, supporting Fodor's statement that it's hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact -- such as what you appear to be doing -- is not worth a serious discussion.

Quote:

Originally Posted by Half Man Half Wit

And that's supposed to be arguing for what, exactly? Because computer science people have sour grapes with Dreyfus, you can't criticize computationalism...? Seriously, I can't figure out what your aim is in bringing this up again and again.

It's a response to your accusation that "the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them". No, it isn't. Jerry Fodor was widely regarded as one of the founders of modern cognitive science, or at least of many of its foundational new ideas in the past half-century. Dreyfus wasn't the founder of anything. I asked someone about Dreyfus some years ago, someone who I can say without exaggeration is one of the principal theorists in cognitive science today. I can't give any details without betraying privacy and confidentiality, but I will say this: he knew Dreyfus, and had argumentative encounters with him in the academic media. His charitable view was that Dreyfus was a sort of congenial uncle figure, "good-hearted but not very bright".

Quote:

Originally Posted by Half Man Half Wit

OK, so it's actually just the physical evolution that you want to call 'computation' for some reason. And me computing sums using the device isn't computation. In that case, as I pointed out, you're not defending computationalism, but identity theory physicalism. Combined with your notion of strong-but-not-really-that-strong emergence, you don't really have any consistent position to offer at all.

I've been absolutely consistent that computationalism and physicalism are not at odds, and I disagree with your premise that they are. Nor do I believe, for the reasons already indicated, that Chalmers' view that what he calls "strong emergence" need be at odds with physicalism. My evolution on this topic is that I'm doubtful that there's much meaningful distinction between "weak" and "strong" emergence.

Quote:

Originally Posted by Half Man Half Wit

It means we were wrong, that's it. It's hard to see how chess, or Jeopardy, translated to simple Boolean logical rules. But the fact that you can do it by computer simply demonstrates that it does; that what next move to make in a game of chess is equivalent to tracking through some ungodly-huge Boolean formula that one could explicitly write down.

It's worse than that. Even talking about some Boolean formula or algorithm to emulate consciousness is silly because it isn't even a behavior, it's an assertion of self-reflection. The truth of the assertion may or may not become evident after observing actual behaviors. My guess, again, is that the issue is less profound than it's made out to be. It wouldn't surprise me if at some point in the future, generalized strong AI will assert that it is conscious, and we just won't believe it, or will pretend it's a "different kind" of consciousness.
179

But how does "survival instinct" actually "manifest", physically speaking? Some part of the entity constantly checking the senses for threats, conferring with the memory about the threats, and prompting reactions to the threats?

Survival is just a goal. A goal that is conducive to there being future generations, yes, but the thing that has the goal is what I'm curious about. The thing that has the awareness of the situation to see threats and avoid them. How does it work?

How? Well first we have to find out what it is. How do candles work? You have to understand several “what” vectors in order to arrive at a useful answer.

I suspect self-awareness/the survival instinct are analogous to something like hair: it is there, many of us are rather fond of it, it has its uses, but it does not actually do anything. It is an interesting feature.

But a feature of what? Hair is a simple thing that is a result of follicular activity. Self-awareness seems to be a rather complex feature that probably arises from disparate sources (some most likely chemical), and may not be localized (just like hair).

The point is, it is not evident that it actually does anything. Kind of like data tables, which, in and of themselves, are not active (in the way that program code is active), but our mental processes take note of them and adjust their results to account for them.

So, would self-awareness be a valuable feature for intelligent machines? Perhaps. Then again, maybe not. If we just want them to do what we need them to do, strict functionality might be the preferable design strategy. Unless uncovering the underlying nature of self-awareness is the research goal, in which case, they are probably best confined to a laboratory setting.
180

Will it be held against me if I don't have time right this moment to read the whole thread but want to ask a question that might have been answered already? Here goes: define exactly what you mean by downloading my consciousness, or I simply have no way to reply.

I don't know what you mean by "identical". If you mean "identical at the level of which electron is moving down which wire at the same time", then you can't be implementing both f and f' at the same time and having them both producing the same single output. It's literally impossible.

And yet, it's literally what happens, as I showed by example! Of course, that example appears to be, unfortunately, invisible to you, or else I can't really explain your stubborn refusal to just go look at it.

Quote:

So what you're saying is that if everyone who knows about a computer dies the computer magically stops working because it depended on the "interpretation" (which only human brains can do) knowing about it to function.

Again, none of this has any relation to my argument.

Quote:

Originally Posted by begbert2

That's less a disproof of my argument and more a disproof of IIT - or your interpretation of IIT, as the case may be.

Well, it at least demonstrates admirable confidence that you think you understand IIT better than its founders, with nothing but a cursory glance!

Regardless, this is not my interpretation of IIT, but one of its core points. Take it from Christof Koch:

Quote:

If I build a perfect software model of the brain, it would never be conscious, but a specially designed machine that mimics the brain could be?

Correct. This theory clearly says that a digital simulation would not be conscious, which is strikingly different from the dominant functionalist belief of 99 percent of people at MIT or philosophers like Daniel Dennett. They all say, once you simulate everything, nothing else is required, and it’s going to be conscious.

Quote:

Originally Posted by wolfpup

If an artificial neural network like the ones used in AlphaGo starts from essentially zero skill and proceeds to play chess or Go at a championship level after a period of deep learning, where in the original base configuration can you find any "Go-like" strategy knowledge?

Nowhere, but given the base configuration, it's completely clear how one could teach it Go, which will change its base configuration (altering neuronal connection weights via backpropagation or some similar process), and after which the new base configuration in every case forms a sufficient explanation of AlphaGo's Go-playing capacity, without any need to appeal to emergence whatsoever.

You can exactly prove what sort of tasks a neural network is capable of learning (arbitrary functions, basically), you can, at every instant, tell exactly what happens at the fundamental level in order for that learning to happen, and you can tell exactly how it performs at a given task solely by considering its low-level functioning. This is an exact counter-example to the claims you're making. For a good explanation of the process, you might be interested in the 'Deep Learning'-series on 3Blue1Brown.
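The point that every step of a network's learning is explicable at the lowest level can be made concrete with a deliberately tiny illustration - a one-weight "network" taught f(x) = 3x by gradient descent. This is an invented toy, nothing like AlphaGo's actual architecture, but the principle is the same: the base configuration contains no skill, and every increment of skill is just a weight nudged against an error gradient:

```python
# A one-weight "network" trained by gradient descent to learn f(x) = 3x.
# Nothing "3x-like" is present in the initial weight, yet every step of
# the learning is fully explained at the lowest level.

def train(samples, lr=0.05, epochs=200):
    w = 0.0                             # "base configuration": zero skill
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x                # forward pass
            grad = 2 * (pred - y) * x   # d(squared error)/dw
            w -= lr * grad              # the whole of "learning"
    return w

w = train([(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)])
```

After training, `w` sits very close to 3.0, and at no point was anything involved beyond arithmetic on the weight - which is the sense in which the final configuration sufficiently explains the acquired capacity.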

The people who built AlphaGo didn't just do it for fun, to see how it would do. They knew exactly what was going to happen---qualitatively, although I'd guess they weren't exactly sure how good it was going to get---and the fact that they could have this confidence stands in direct opposition to your claims. Nobody just builds a machine to see what's going to happen; they build one precisely because their understanding of the components allows them to say so with pretty good confidence. Sure, there's no absolute guarantee---but as you said, surprise isn't a sufficient criterion for emergence. Sometimes a bridge breaks down to the surprise of everybody; that doesn't entail that bridges have any novel qualitative features over and above those of bricks, cement, and steel.

Quote:

(Emphasis mine.) Oh, my. You absolutely certainly have done exactly that, many many times throughout this thread:

I have also emphasized that I think computational modeling is effective and valuable, which you conveniently missed. The one claim that I take issue with is that the brain is wholly computational in nature, and that, in particular, consciousness is a product of computation. That's however a claim that cognitive science can proceed without making, and whose truth has no bearing on its successes so far.

You've missed it at least two times now, and you'll probably miss it a third time, but again: an orrery is a helpful instrument to model the solar system, and one might get it into one's head that the solar system itself must be some giant world-machine run on springs and gears; but the usefulness of the orrery is completely independent of the falsity of that assertion.

Quote:

This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that "the paradigm of machine computation" became, over a thirty-year period, a "deep and far-reaching" theory in cognitive science, supporting Fodor's statement that it's hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact -- such as what you appear to be doing -- is not worth a serious discussion.

Even Fodor, as you noted, didn't believe the mind is wholly computational. That's the whole point of The Mind Doesn't Work That Way: Computational theory is a 'large fragment' of the truth, but doesn't suffice to tell the whole story (in particular, abduction, I think, is one thing Fodor thinks the mind can do that computers can't). So leaning on Fodor to support your assertion that computationalism is 'the only game in town' isn't a great strategy.

Quote:

It's a response to your accusation that "the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them". No, it isn't. Jerry Fodor was widely regarded as one of the founders of modern cognitive science, or at least of many of its foundational new ideas in the past half-century. Dreyfus wasn't the founder of anything. I asked someone about Dreyfus some years ago, someone who I can say without exaggeration is one of the principal theorists in cognitive science today. I can't give any details without betraying privacy and confidentiality, but I will say this: he knew Dreyfus, and had argumentative encounters with him in the academic media. His charitable view was that Dreyfus was a sort of congenial uncle figure, "good-hearted but not very bright".

I'm not going to defend Dreyfus here, but you have been given a seriously one-sided view. Dreyfus was an early proponent of embodied cognition views that are increasingly gaining popularity, and much of his critique on GOFAI is now even acknowledged by AI researchers to have largely been on point.

Quote:

I've been absolutely consistent that computationalism and physicalism are not at odds, and I disagree with your premise that they are.

It's not a premise, it's a conclusion. I've asked you a couple of questions about the connection between the fundamental-level facts and the emergent facts in my previous post, which you neglected to answer, because the answers expose what should really be clear by now: either, the emergent properties are entailed by the base properties---then, the sort of emergence you claim consciousness to have doesn't exist, and I'm right in arguing that you owe us some sort of inkling as to how consciousness may emerge from lower-level facts, as for instance given in IIT (see my example above). Or, there is genuine novelty in emergence; but then, specifying the fundamental, physical facts doesn't suffice to fix all the facts about the universe, and thus, physicalism is wrong.

Which is a separate issue from the fact that such emergence, obviously, doesn't occur in computers, which are the poster children of reducibility.

I'm still waiting for you to tell me what my example system computes, by the way. I mean, this is usually a question with a simple answer, or so I'm told: a calculator computes arithmetical functions; a chess computer chess moves. So why's it so hard in this case? Because, of course, using the same standard you use in everyday computations will force you to admit that it's just as right to say that the device computes f as that it computes f'. And there, to any reasonable degree, the story ends.
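For concreteness, the f-versus-f' point can be rendered as a toy in code. The lookup table and the two lamp encodings below are invented stand-ins, not Half Man Half Wit's exact example, but they show the shape of the argument: one fixed physical input-output behavior, two incompatible readings, two different functions:

```python
# One fixed "physical" device: each pair of switch settings always
# yields the same pair of lamp states.
device = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

# Interpretation 1: read the lamps as a binary number with the left
# lamp as the high bit. Under this reading the device computes
# f(a, b) = a + b.
def f(a, b):
    hi, lo = device[(a, b)]
    return 2 * hi + lo

# Interpretation 2: read the left lamp as the low bit instead. Same
# physics, same lamp flashes, but now a different function f'.
def f_prime(a, b):
    lo, hi = device[(a, b)]
    return 2 * hi + lo
```

Nothing in the table favors one reading over the other; "it computes f" and "it computes f'" describe exactly the same physical behavior - which is the disputed question above.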
182

I'd figure that an interrupt system is more likely. If you touch a hot stove, I don't think your brain is polling your nerve endings. They interrupt your thoughts - while also causing involuntary actions, just as interrupts can do.
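The polling-versus-interrupt contrast can be sketched in software with a plain callback standing in for a hardware interrupt (all class and method names here are invented for the illustration):

```python
# A software stand-in for an interrupt: the "brain" registers a handler
# and never polls; when the event fires, control jumps to the handler.

class NerveEnding:
    def __init__(self):
        self.handler = None

    def register(self, handler):      # "wire up the interrupt line"
        self.handler = handler

    def touch_hot_stove(self):        # the event fires
        if self.handler:
            self.handler("pain!")

reactions = []
nerve = NerveEnding()
nerve.register(reactions.append)      # no polling loop anywhere...
nerve.touch_hot_stove()               # ...the signal interrupts instead
```

The polling alternative would be a loop asking the nerve "anything yet?" on every pass - which is exactly what the post above doubts the brain is doing.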
184

More interestingly, perhaps, is that on IIT, computationalism comes out straightforwardly false. While a brain has a high degree of integrated information, the typical computer, due to its modular architecture, has very little to none of it. So a computer implementing the simulation of a brain won't lead to conscious experience, even if the brain it simulates does.

Computers have modular functional units. So does the brain. But not necessarily modular information. In fact memory hierarchies are designed so that their hierarchical nature is invisible to the program, except perhaps as regards access time. Thus, information integration inside a modular computer system is not necessarily modular.
185

Computers have modular functional units. So does the brain. But not necessarily modular information. In fact memory hierarchies are designed so that their hierarchical nature is invisible to the program, except perhaps as regards access time. Thus, information integration inside a modular computer system is not necessarily modular.

I'm not a proponent of IIT, but, as far as I understand it, the issue is: you can calculate (approximately, at least) the integrated information between the parts of a physical system (actually, what to consider a system's 'parts' is only determined, via a minimization, upon calculating---essentially, you split the system up such that the integrated information is minimal), essentially calculating a quantity that's somewhat similar to the mutual information. For a brain, that gets you a high number, for a computer, a low one.

So in that sense, the integrated information is a physical quantity that's present in a system, but not, in general, in a system that's simulating that system---just like the mass of a black hole isn't present in a simulation of said black hole (which I gather is rather a good thing).
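The "somewhat similar to mutual information" quantity mentioned above can be computed for a toy system. To be clear, this is plain mutual information between two halves of a joint distribution, not IIT's actual phi (which involves a minimization over partitions and cause-effect structure); it only illustrates why correlated, integrated parts score high and independent, modular parts score zero:

```python
# Mutual information between two halves of a system, from a joint
# distribution given as {(state_a, state_b): probability}.
from math import log2
from collections import defaultdict

def mutual_information(joint):
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():   # marginals of each half
        pa[a] += p
        pb[b] += p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Perfectly correlated halves ("integrated"): 1 bit.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# Independent halves ("modular"): 0 bits.
modular = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
```

On this measure a tightly coupled system scores high and a cleanly modular one scores near zero - the rough shape of the brain-versus-computer claim above.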
186

In an effort to keep it brief, I'm omitting those points where I would simply be repeating myself.

Quote:

Originally Posted by Half Man Half Wit

I have also emphasized that I think computational modeling is effective and valuable, which you conveniently missed. The one claim that I take issue with is that the brain is wholly computational in nature, and that, in particular, consciousness is a product of computation. That's however a claim that cognitive science can proceed without making, and whose truth has no bearing on its successes so far.

You've missed it at least two times now, and you'll probably miss it a third time, but again: an orrery is a helpful instrument to model the solar system, and one might get it into one's head that the solar system itself must be some giant world-machine run on springs and gears; but the usefulness of the orrery is completely independent of the falsity of that assertion.

Only when challenged on it are you now offering creative re-interpretations. But perhaps you'd like to take on the creative challenge of re-interpreting what you meant by CTM having been "dismantled".

Quote:

Originally Posted by Half Man Half Wit

Even Fodor, as you noted, didn't believe the mind is wholly computational. That's the whole point of The Mind Doesn't Work That Way: Computational theory is a 'large fragment' of the truth, but doesn't suffice to tell the whole story (in particular, abduction, I think, is one thing Fodor thinks the mind can do that computers can't). So leaning on Fodor to support your assertion that computationalism is 'the only game in town' isn't a great strategy.

"Wholly computational" was manifestly never my claim, and I was clear on that from the beginning. And if it had been, I'd certainly never lean on Fodor for support, as he was one of the more outspoken skeptics about its incompleteness, despite his foundational role in bringing it to the forefront of the field.

Quote:

Originally Posted by Half Man Half Wit

I'm still waiting for you to tell me what my example system computes, by the way. I mean, this is usually a question with a simple answer, or so I'm told: a calculator computes arithmetical functions; a chess computer chess moves. So why's it so hard in this case? Because, of course, using the same standard you use in everyday computations will force you to admit that it's just as right to say that the device computes f as that it computes f'. And there, to any reasonable degree, the story ends.

A Turing machine starts with a tape containing 0110011. When it's done the tape contains 0100010. What computation did it just perform?

My answer is that it's one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition -- even if I don't know what the algorithm is.

Your answer would appear to be that it's not a computation at all until it's been subjectively understood by you and assigned a name. I think Turing would disagree.

I think the core of the problem here is that you're confusing "computation" with "algorithm". But as Turing so astutely showed, the question of what a "computation" is, in the most fundamental sense, is quite a different question from asking what class of problem is being solved by the computation.
187

In an effort to keep it brief, I'm omitting those points where I would simply be repeating myself.

You should have instead kept it longer in an effort to reply to the points you keep omitting...

Quote:

This can ONLY be interpreted as "no cognitive processes at all can be computational", since ANY such computation would, according to your claim, require an external interpretive agent. If true, that would invalidate CTM in its entirety.

The computational theory of mind is the statement that computation is all the brain does, and, in particular, that consciousness is computational. This, I indeed have shown to be in error.

That does in no way imply that no process that goes on in the brain is computational. I've been careful (to no avail, it seems) to point out that my argument threatens solely the interpretational abilities of minds: they can't be computational. Using these interpretational powers, it becomes possible to assign definite computations to systems---after all, I use the device from my example to compute, say, sums.

Furthermore, even systems that aren't computational themselves may be amenable to computational modeling---just as well as systems that aren't made of springs and gears may be modeled by systems that are, like an orrery, but I suspect where these words are, you just see a blank space.

Quote:

Only when challenged on it are you now offering creative re-interpretations. But perhaps you'd like to take on the creative challenge of re-interpreting what you meant by CTM having been "dismantled".

I hold consistently to the same position I did from the beginning: computational modeling of the brain is useful and tells us much about it, but the mind is not itself computational. I have been very clear about this. Take my very first post in this thread:

Quote:

Originally Posted by Half Man Half Wit

But if that's so, then computation can't be what underlies consciousness: if there's no fact of the matter regarding what mind a given system computes unless it is interpreted as implementing the right computation, then whatever does that interpreting can't itself be computational, as otherwise, we would have a vicious regress---needing ever higher-level interpretational agencies to fix the computation at the lower level. But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can't be realized via computation, and thus are, on the whole, not computational entities.

There, I clearly state that whatever realizes the mind's interpretational capacity can't be computational, and thus, minds can't be computational on the whole. That doesn't entail that nothing about minds can be computational. That would be silly: I have just now computed that 1 + 4 equals 5, for instance.

Also, I have been clear that my arguments don't invalidate the utility of computational modeling:

Quote:

Originally Posted by Half Man Half Wit

Also, none of this threatens the possibility or utility of computational modeling. This is again just confusing the map for the territory. That you can use an orrery to model the solar system doesn't in any way either imply or require that the solar system is made of wires and gears, and likewise, that you can model (aspects of) the brain computationally doesn't imply that the brain is a computer.

Quote:

Originally Posted by wolfpup

"Wholly computational" was manifestly never my claim, and I was clear on that from the beginning.

Then why take issue with my claim of demonstrating a non-computational ability of the mind?

Quote:

A Turing machine starts with a tape containing 0110011. When it's done the tape contains 0100010. What computation did it just perform?

As such, the question is underdetermined: there are infinitely many computations that take 0110011 to 0100010. This isn't a computation; it's rather an execution trace.

But of course, I know what you mean to argue. So let's specify a computation in full: say, the Turing machine has an input set consisting of all seven-bit strings and, to provide an output, traverses them right to left, replacing each block '11' it encounters with '10'. Thus, it produces '0100010' from '0110011', or '1000000' from '1111111', or '0001000' from '0001100'.
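For concreteness, that rewriting rule can be sketched in Python (the function name and string representation are my own choices, not part of any formal specification; the scan proceeds right to left, so replacements may overlap):

```python
def rewrite(bits: str) -> str:
    """Scan the string right to left, replacing each '11' block
    encountered with '10'; the scan continues leftward over the
    already-modified string, so replacements can cascade."""
    s = list(bits)
    # Examine each adjacent pair, starting from the rightmost one.
    for i in range(len(s) - 2, -1, -1):
        if s[i] == '1' and s[i + 1] == '1':
            s[i + 1] = '0'  # '11' -> '10': clear the right bit
    return ''.join(s)

# The three examples from the text:
print(rewrite('0110011'))  # 0100010
print(rewrite('1111111'))  # 1000000
print(rewrite('0001100'))  # 0001000
```

Note that '1111111' only comes out as '1000000' because the scan re-examines pairs it has just modified; a single non-overlapping pass would give a different answer.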

This is indeed a fully formally specified, completely determinate computation. You'll note it's of exactly the same kind of thing as my functions f and f'. So why does a Turing machine execute a definite computation?

Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like '1' and '0' (the binary digits themselves, rather than the numerals).

But that's no longer true for a physical system. A physical system doesn't manipulate '1' and '0', it manipulates physical properties (say, voltage levels) that we take to stand for or represent '1' or '0'. It's here that the ambiguity comes in.

If you were to build the Turing machine from your example, then all it could do would be to write 1 or 0 (now the numerals, not the binary digits) onto its tape. Anybody familiar with the Arabic numerals could then grasp that these ink-blots-on-paper are supposed to mean '1' and '0' (the binary digits, again). But somebody who grew up on a Twin Earth identical to ours except that 1 means '0' and 0 means '1' would, with equal claim to being correct, take the Turing machine to implement a wholly different computation; namely, one in which every '00' block is replaced by '01'.
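The dual reading can even be checked mechanically. The sketch below (my own encoding, assuming the right-to-left overlapping replacement described above) verifies that Twin Earth's '00'→'01' description and the relabelled '11'→'10' description agree on every seven-bit tape:

```python
from itertools import product

def rewrite_11_to_10(bits):
    # Earth reading: scan right to left, replacing each '11' block with '10'.
    s = list(bits)
    for i in range(len(s) - 2, -1, -1):
        if s[i] == '1' and s[i + 1] == '1':
            s[i + 1] = '0'
    return ''.join(s)

def rewrite_00_to_01(bits):
    # Twin Earth reading of the very same marks: each '00' block becomes '01'.
    s = list(bits)
    for i in range(len(s) - 2, -1, -1):
        if s[i] == '0' and s[i + 1] == '0':
            s[i + 1] = '1'
    return ''.join(s)

def swap(bits):
    # Relabel the marks: what Earth calls '1', Twin Earth calls '0'.
    return ''.join('0' if b == '1' else '1' for b in bits)

# One physical behaviour, two computations: for every 7-bit tape, the
# Twin Earth description agrees with relabelling the Earth description's
# input and output.
for tape in (''.join(p) for p in product('01', repeat=7)):
    assert rewrite_00_to_01(swap(tape)) == swap(rewrite_11_to_10(tape))
print("Both readings fit the same physical behaviour on all 128 tapes.")
```

Nothing in the machine's behaviour privileges one of the two descriptions; the relabelling is exact.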

That's why I keep asking (and also, why you keep neglecting to answer): what computation is implemented by my example device? You're backed into a corner where you'd either have to answer that it's a computation taking switch-states to lamp-states, in which case the notion of computation collapses to that of physical evolution, or agree with me that it can be equally well taken to implement f and f'.

Although I note that you seem to have shifted your stance here somewhat---or perhaps, it hasn't been entirely clear from the beginning: you've both argued that the two computations are the same (which amounts to accepting they're both valid descriptions of the system, just equivalent ones, which starkly conflicts with you singling out a function of the same class as individuated computation in this post), and that multiple interpretations become, for reasons vaguely tied to 'emergence', less likely with increasing complexity. So which is it, now?

Perhaps for one last time, let me try and make my main point clear in a different way. Symbols don't intimate their meanings on their own. Just being given a symbol, or even a set of symbols with their relations, doesn't suffice to figure out what they mean. This is what the Chinese Room actually establishes (it fails to establish that the mind isn't computational): given a set of symbols (in Chinese), and rules for manipulating these, it's in principle possible to hold a competent conversation; but it's not possible to get at what the symbols mean, in any way, shape, or form.

Why is that the case? Because there's not just one thing they could mean. That must be the case, otherwise, we could just search through meanings until we find the right one. But it just isn't the case that symbols and rules to manipulate them, and relationships they stand in, determine what the symbols mean.

But it's in what their physical properties mean that physical systems connect to abstract computation. Nothing else can be right; computations aren't physical objects, and the only relation between the physical and the abstract is one of reference. So just the way you can't learn Chinese from manipulating Chinese letters according to rules, you can't fix what computation a system performs by merely having it manipulate physical properties, or objects, according to rules. An interpretation of what those properties or objects mean, what abstracta they refer to, is indispensable.

But this reference will always introduce ambiguity. And hence, there is no unambiguous, objective computation associated with a physical system absent it being interpreted as implementing said computation.
188

A Turing machine starts with a tape containing 0110011. When it's done the tape contains 0100010. What computation did it just perform?

My answer is that it's one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition -- even if I don't know what the algorithm is.

What if that same transformation was performed by HMHW's light switch box - would that be considered a computation?

Is that transformation always a computation regardless of the nature of the machinery that performed it?
190

And yet, it's literally what happens, as I showed by example! Of course, that example appears to be, unfortunately, invisible to you, or else I can't really explain your stubborn refusal to just go look at it.

If you're talking about your examples in posts 18 and 93, then you literally don't know what you're talking about. Your back-assward argument is that given a closed calculation machine, a given input, and the output from it, you won't be able to unambiguously infer the internal process of the machine. This much is true. From that you make the wild leap to the claim that there isn't an unambiguous process inside the machine. This is stupid. How can I put this more clearly? Oh yeah. That's incredibly stupid.

If you have a closed calculation machine, a so-called "black box", the black box has internal workings that in fact have an unambiguous process they follow and an unambiguous internal state, whether or not you know what it is. We know this to be the case because that's how things in the real world work. And your knowledge of the internal processes is irrelevant to their existence. Or put another incredibly obvious way, brains worked long before you thought to wonder how they did.

So. Now. Consider this unambiguous process and unambiguous internal state. Because these things actually exist, once you have a black box process in a specific state, it will proceed to its conclusion in a specific way, based on the deterministic processes inside advancing it from one unambiguous internal state to the next. The dominos will fall in a specific order, each caused by the previous domino hitting them. And if you rewound time and ran it again, or made an exact copy and ran it simultaneously from the same starting state, it would proceed in exactly the same way, following the same steps to the same result.

(Unless the system is designed to introduce randomity into the result, that is, but that's a distracting side case that's irrelevant to the discussion at hand. The process is still the same unambiguous process even if randomity perturbs it in a different direction with a different result. And I'm quite certain based on observation that randomity has a negligible effect on cognition anyway.)

So. While you think that your example includes a heisenberg uncertainty machine that has schroedinger's internals which simultaneously implement f and f' and thus hold varying internal states at the same time, in actual, non-delusional fact if you have a specific deterministic machine that has a specific internal state that means that it *must* be in the middle of implementing either f or f', and not both. This remains true regardless of the fact that you can't tell which it's doing from eyeballing the output. Obviously.

Your argument is entirely reliant on the counterfactual and extremely silly idea that things can't exist without you knowing about them. Sorry, no. The black box is entirely capable of having a specified internal process (be it f, f', or something else) without consulting you for your approval.

Quote:

Originally Posted by Half Man Half Wit

Again, none of this has any relation to my argument.

Your argument is that the "interpretation" of an outside observer has some kind of magical impact on the function of a calculative process, despite there being no possible way that the interpreter's observation of the outside of the black box can impact what's going on inside it.

Or at least that's what you've been repeatedly saying your argument is. I can only work with what you give me.

Quote:

Originally Posted by Half Man Half Wit

Well, it at least demonstrates admirable confidence that you think you understand IIT better than its founders, with nothing but a cursory glance!

Regardless, this is not my interpretation of IIT, but one of its core points. Take it from Christoph Koch:

I note that while he baldly asserts that simulations can't be conscious, the only reason he gives for this is that physical matter is magic. He even admits that if you built an emulation of the conscious thing it would behave in exactly the same way, with the same internal causal processes (the same f, in other words), and despite functioning and behaving exactly the same as the original it would still be a philosophical zombie because reasons.

He then goes on to insist that he's not saying that consciousness is a magic soul, before clarifying that he's saying that physical matter has a magic soul that's called 'consciousness'.

I'm sure he's a very smart fellow, but loads and loads of smart fellows believe in magic and souls and gods. Smart doesn't mean you can't have ideological beliefs that color and distort your thinking.

So yeah. To the degree that IIT claims that physical matter has magical soul-inducing magic when arranged in the correct pattern to invoke the incantation, I understand it better than it does, because I recognize silliness when I see it. You think I'm misstating his position? First you have to "build the computer in the appropriate way, like a neuromorphic computer"... and then consciousness is magically summoned from within the physical matter as a result! But if you build the neuromorphic computer inside a simulation "it will be black inside", specifically because it doesn't have physical matter causing the magic.

So take heart! You're not the only person making stupid nonsensical arguments. You're not alone in this world!
191

Will it be held against me if I don't have time right this moment to read the whole thread but want to ask a question that might have been answered already? Here goes: define exactly what you mean by downloading my consciousness, or I simply have no way to reply.

Thanks.

Can't speak for anyone else, but I interpret it as creating a process inside a computer that has the same thoughts and memories and beliefs and such as you do, and has a separate conscious awareness of its reality from the one you have. Since he has all the same memories as you, he will quite naturally think he is you, until somebody convinces him otherwise.

What it's not, is the digitizing of the whole physical person a la Tron. I mean you could do that, but it really just amounts to destroying the original person as part of the process of scanning them for the information to make the digital copy. (And putting that copy in a fancy sci-fi outfit in the process.) Of course the Tron scenario allows for some confusion/obfuscation of whether some kind of immortal soul left over from the now-disintegrated person somehow locates and attaches itself to the simulated digital avatar, which honestly just seems rife with implementation problems. (Especially since they were just trying to copy an apple.)
193

What if that same transformation was performed by HMHW's light switch box - would that be considered a computation?

Is that transformation always a computation regardless of the nature of the machinery that performed it?

It depends on what question you're asking. If you're concerned about whether a device is Turing equivalent, you need to understand what it's actually doing. But when computation is viewed simply as the output of a black box, it's always reducible to the mapping of a set of input symbols to a set of output symbols. So I take the view that any black box that deterministically produces such a mapping for all combinations of inputs has to be regarded as ipso facto computationally equivalent to any other that produces the same mapping, without reference to what's going on inside it. Of course, the mechanisms involved may be trivial, like a simple table lookup, which may not provide any insights into the nature of computation and may not be Turing equivalent.
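That black-box criterion is easy to make concrete. In the sketch below (the names and the toy 2-bit adder are my own, purely illustrative), a box that "really" adds and a bare lookup table count as the same computation, because they induce the same input-to-output mapping:

```python
from itertools import product

def adder_circuit(a, b):
    # One box: actually adds two 2-bit numbers (values 0..3).
    return a + b

# Another box: a bare lookup table, with no arithmetic happening
# when it is consulted.
LOOKUP = {(a, b): a + b for a, b in product(range(4), repeat=2)}

def adder_lookup(a, b):
    return LOOKUP[(a, b)]

# Under the black-box criterion, these are the same computation: they
# agree on every input, whatever goes on inside.
assert all(adder_circuit(a, b) == adder_lookup(a, b)
           for a, b in product(range(4), repeat=2))
print("equivalent on all inputs")
```

The equivalence is established purely by exhaustive comparison of the mappings; nothing about the internals enters into it.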
194

I'm not a proponent of IIT, but, as far as I understand it, the issue is: you can calculate (approximately, at least) the integrated information between the parts of a physical system (actually, what counts as a system's 'parts' is only determined, via a minimization, upon calculating---essentially, you split the system up such that the integrated information is minimal), essentially calculating a quantity somewhat similar to the mutual information. For a brain, that gets you a high number; for a computer, a low one.

So in that sense, the integrated information is a physical quantity that's present in a system, but not, in general, in a system that's simulating that system---just like the mass of a black hole isn't present in a simulation of said black hole (which I gather is rather a good thing).

I think it would be hard to calculate this number based only on the architecture of a computer, without considering what it was running.
Information, unlike mass, must be present in a simulation of something with that information. If you simulate telephone traffic, say, you don't need switch hardware, but you do need the contents of the calls. This is simulation, not modeling, where you can describe the traffic mathematically without the contents of the calls.
That's information, not integrated information of course. I did read at your link but found it less than interesting.
195

The computational theory of mind is the statement that computation is all the brain does, and, in particular, that consciousness is computational. This, I indeed have shown to be in error.

This is flat-out wrong, as evidenced by Fodor's statements that CTM is an indispensable theory explaining many aspects of cognition, while he never imagined that anyone would take it to be a complete description of everything the mind does. Your characterization of the computational theory of mind fundamentally misrepresents how CTM has been defined and applied in cognitive science. And CTM doesn't even attempt to address consciousness, regarding it as an ill-defined problem. I've provided my own speculations about it, and those you're free to disagree with, but when you make arguments that mischaracterize what CTM means in cognitive science you can expect to be corrected.

Quote:

Originally Posted by Half Man Half Wit

That does in no way imply that no process that goes on in the brain is computational. I've been careful (to no avail, it seems) to point out that my argument threatens solely the interpretational abilities of minds: they can't be computational. Using these interpretational powers, it becomes possible to assign definite computations to systems---after all, I use the device from my example to compute, say, sums.

Furthermore, even systems that aren't computational themselves may be amenable to computational modeling---just as well as systems that aren't made of springs and gears may be modeled by systems that are, like an orrery, but I suspect where these words are, you just see a blank space.

I hold consistently to the same position I did from the beginning: computational modeling of the brain is useful and tells us much about it, but the mind is not itself computational. I have been very clear about this. Take my very first post in this thread:

There, I clearly state that whatever realizes the mind's interpretational capacity can't be computational, and thus, minds can't be computational on the whole. That doesn't entail that nothing about minds can be computational. That would be silly: I have just now computed that 1 + 4 equals 5, for instance.

Also, I have been clear that my arguments don't invalidate the utility of computational modeling:

(emphasis mine)
I see a "blank space" where you provide your ruminations about CTM being somehow related to "computational modeling" because it's so egregiously wrong. Please note the following commentary from the Stanford Encyclopedia of Philosophy. They refer to the family of views I'm talking about here as classical CTM, or CCTM, to distinguish them from things like connectionist descriptions. CCTM is precisely what Putnam initially proposed and was then further developed into a mainstream theory at the forefront of cognitive science by Fodor (bolding mine):

According to CCTM, the mind is a computational system similar in important respects to a Turing machine ... CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system.
https://plato.stanford.edu/entries/c.../#ClaComTheMin

It then goes on to describe Fodor's particular variant of CCTM:

Fodor (1975, 1981, 1987, 1990, 1994, 2008) advocates a version of CCTM that accommodates systematicity and productivity much more satisfactorily [than Putnam's original formulation]. He shifts attention to the symbols manipulated during Turing-style computation.

This is of course exactly correct. The prevalent view of CTM that was first advanced by Fodor and then became mainstream is that many cognitive processes consist of syntactic operations on symbols in just the manner of a Turing machine or a digital computer, and he further advanced the idea that these operations are a kind of "language of thought", sometimes called "mentalese". The proposition is that there is a literal correspondence with the operation of a computer program, and it has no relationship to your suggestions of "modeling" or of doing arithmetic in your head.

Quote:

Originally Posted by Half Man Half Wit

Then why take issue with my claim of demonstrating a non-computational ability of the mind?

Because it's wrong, for the reason cited above.

Quote:

Originally Posted by Half Man Half Wit

As such, the question is underdetermined: there are infinitely many computations that take 0110011 to 0100010. This isn't a computation, it's rather an execution trace.

But of course, I know what you mean to argue. So let's specify a computation in full: say, the Turing machine has an input set consisting of all seven-bit strings and, to provide an output, traverses them right to left, replacing each block '11' it encounters with '10'. Thus, it produces '0100010' from '0110011', or '1000000' from '1111111', or '0001000' from '0001100'.

This is indeed a fully formally specified, completely determinate computation. You'll note it's of exactly the same kind of thing as my functions f and f'. So why does a Turing machine execute a definite computation?

Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like '1' and '0' (the binary digits themselves, rather than the numerals).

But that's no longer true for a physical system. A physical system doesn't manipulate '1' and '0', it manipulates physical properties (say, voltage levels) that we take to stand for or represent '1' or '0'. It's here that the ambiguity comes in.

I appreciate the effort you made to once again detail your argument, but I find the view that there is some kind of fundamental difference between an abstract Turing machine and a physical one because the former manipulates abstract symbols and the latter manipulates physical representations to be incoherent. They are exactly the same. The Turing machine defines precisely what computation is, independent of what the symbols might actually mean, provided only that there is a consistent interpretation (any consistent interpretation!) of the semantics.

Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". The question of what a "computation" is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue, but a class-of-problem issue.
196

It depends on what question you're asking. If you're concerned about whether a device is Turing equivalent, you need to understand what it's actually doing. But when computation is viewed simply as the output of a black box, it's always reducible to the mapping of a set of input symbols to a set of output symbols. So I take the view that any black box that deterministically produces such a mapping for all combinations of inputs has to be regarded as ipso facto computationally equivalent to any other that produces the same mapping, without reference to what's going on inside it. Of course, the mechanisms involved may be trivial, like a simple table lookup, that may not provide any insights into the nature of computation and may not be Turing equivalent.

In the real world this is an important problem. We need to prove that the implementation of a specification is equivalent to the specification. It turns out that this is basically impossible without being able to see the inside of the black box, even for large systems without internal memory, and practically impossible for those with memory.
Of course you have to agree on the input and output symbols, and they must be consistent across computational systems. This doesn't seem to be a requirement for HMHW's view of interpretation.
In other words, Lincoln was wrong - a horse does have five legs if you interpret the tail as a leg.
197

It depends on what question you're asking. If you're concerned about whether a device is Turing equivalent, you need to understand what it's actually doing. But when computation is viewed simply as the output of a black box, it's always reducible to the mapping of a set of input symbols to a set of output symbols. So I take the view that any black box that deterministically produces such a mapping for all combinations of inputs has to be regarded as ipso facto computationally equivalent to any other that produces the same mapping, without reference to what's going on inside it. Of course, the mechanisms involved may be trivial, like a simple table lookup, that may not provide any insights into the nature of computation and may not be Turing equivalent.

The question I'm asking is whether the nature of the machinery performing a transformation determines whether something is a computation or not, from your perspective.

It sounds like you are saying that if HMHW's box performs the transformation you listed (0110011 into 0100010) then that is considered a computation, right?

Meaning that HMHW's box may not be a Turing machine, it may just be a circuit that performs that transformation, but regardless of how it arrives at the correct answer, the transformation is considered a computation, right?
198

If you're talking about your examples in posts 18 and 93, then you literally don't know what you're talking about. Your back-assward argument is that given a closed calculation machine, a given input, and the output from it, you won't be able to unambiguously infer the internal process of the machine.

No. Not even close. I haven't said anything about internal processes at all, they've got no bearing or relevance on my argument. The argument turns on the fact that you can interpret the inputs (switches) and outputs (lights) as referring to logical states ('1' or '0') in different ways. Thus, the system realizes different functions from binary numbers to binary numbers. I made this very explicit, and frankly, I can't see how you can honestly misconstrue it as being about 'internal processes', 'black boxes' and the like.

Quote:

So. While you think that your example includes a heisenberg uncertainty machine that has schroedinger's internals which simultaneously implement f and f' and thus hold varying internal states at the same time, in actual, non-delusional fact if you have a specific deterministic machine that has a specific internal state that means that it *must* be in the middle of implementing either f or f', and not both. This remains true regardless of the fact that you can't tell which it's doing from eyeballing the output. Obviously.

OK. So, the switches are set to (down, up, up, down), and the lights are, consequently, (off, on, on). What has been computed? f(1, 2) ( = 1 + 2) = 3, or f'(2, 1) = 6? You claim this is obvious. Which one is right?

The internal wiring is wholly inconsequential; all it needs to fulfill is to make the right lights light up if the switches are flipped. There are various ways to do so, if you feel it's important, just choose any one of them.
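As a toy illustration (my own encoding; it won't reproduce the exact f' from the original example, whose table isn't restated here, but it shows how one physical episode supports two different, equally definite readings):

```python
def decode_episode(switches, lamps_out, up_is_one):
    """Read one physical episode (switch settings in, lamp states out) as
    a claim 'the device mapped (x, y) to z', under a chosen convention
    for what a raised switch / lit lamp stands for."""
    bit = lambda s: int((s == 'up') == up_is_one)   # switch -> binary digit
    lit = lambda l: int((l == 'on') == up_is_one)   # lamp   -> binary digit
    x = 2 * bit(switches[0]) + bit(switches[1])
    y = 2 * bit(switches[2]) + bit(switches[3])
    z = 4 * lit(lamps_out[0]) + 2 * lit(lamps_out[1]) + lit(lamps_out[2])
    return x, y, z

# The episode from the text: switches (down, up, up, down), lamps (off, on, on).
episode = (('down', 'up', 'up', 'down'), ('off', 'on', 'on'))

print(decode_episode(*episode, up_is_one=True))   # (1, 2, 3): read as 1 + 2 = 3
print(decode_episode(*episode, up_is_one=False))  # (2, 1, 4): a different claim
```

Under the first convention the episode reads as f(1, 2) = 3; under the flipped one, as some other function evaluated at (2, 1). The physical facts (which switches were up, which lamps lit) are identical in both cases; only the reading differs.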

Quote:

Your argument is that the "interpretation" of an outside observer has some kind of magical impact on the function of a calculative process, despite there being no possible way that the interpreter's observation of the outside of the black box can impact what's going on inside it.

Because what goes on inside has no bearing on the way the system is interpreted. You can think of it in the same way as reading a text: how it was written, by ink on paper, by pixels on a screen, by chalk on a board, has no bearing on whether you can read it, and what message gets conveyed once you do. Your language competence, however, does: where you read the word 'gift', and might expect some nice surprise, I read it as promising death and suffering, because it means 'poison' in German. In the same way---exactly the same way---one can read 'switch up' to mean '0' or '1'. And that's all there is to it.

Quote:

Or at least that's what you've been repeatedly saying your argument is. I can only work with what you give me.

Evidently not, to both our detriment.

Quote:

I note that while he baldly asserts that simulations can't be conscious, the only reason he gives for this is that physical matter is magic.

I'm not going to defend IIT here, but it's a very concrete proposal (much more concrete than anything offered in this thread so far) that's squarely rooted in the physical.

Quote:

So take heart! You're not the only person making stupid nonsensical arguments. You're not alone in this world!

Well, at least now I know it's not just my fault that my arguments seem so apparently opaque to you.

Quote:

Originally Posted by begbert2

P1: Cognition is a property or behavior that exists in the physical world.

P2: If an emulation is sufficiently detailed and complete, that emulation can exactly duplicate properties and behaviors of what it's emulating.

P3: It's possible to create a sufficiently detailed and complete emulation of the real world.

C1: It's possible to create an emulation that can exactly duplicate properties and behaviors of the real world. (P2 and P3)

C2: It's possible to create an emulation that can exactly duplicate cognition. (C1 and P1)

Premise P2 is self-evidently wrong: if an emulation could exactly duplicate every property of a system, then it wouldn't be an emulation, but merely a copy, as there would be no distinction between it and what it 'emulates'. But of course, no simulation ever has all the properties of the thing it simulates---after all, that's why we do it: we typically have more control over the simulation. For instance, black holes are, even if we could get to them, quite difficult to handle, but simulations are perfectly tame---because a simulated black hole doesn't have the mass of a real one. I can simulate black holes all the livelong day without my desk ever collapsing into the event horizon.

You'll probably want to argue that 'inside the simulation', objects are attracted by the black hole, and thus it has mass. For one, that's quite a strange thing to believe: it would entail that you could create some sort of pocket dimension, with its own physics removed from ours, merely by virtue of shuffling around a few voltages; that even though the black hole's mass has no effects in our dimension, there would suddenly exist a separate realm where mass exists, one with no connection to ours save your computer screen. In any other situation, you'd call that 'magic'.

Holding that the black hole in the simulation has mass is exactly the same thing as holding that the black hole I'm writing about has mass. The claim that computation creates consciousness is the claim that, whenever I write 'John felt a pain in his hip', there is actually a felt pain somewhere, merely by virtue of my describing it. Because that's what a simulation is: an automated description. A computation is a chain of logical steps, equivalent to an argument, performed mechanically; it's no different from writing down the same argument in text. The next step in the computation follows from the previous one in just the same way as the next line in an argument follows from the prior one.
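To make the point vivid, here's a toy sketch (all names and numbers are my own, purely illustrative): a 'black hole' as nothing but a number in a naive Newtonian integrator. The simulated mass accelerates the simulated particle, and nothing else.

```python
# Toy sketch (values hypothetical): a point mass attracting a test
# particle via Newtonian gravity. The 'mass' exists only as a number in
# the description; it exerts no pull on anything outside the program.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e31            # simulated mass: roughly ten solar masses, in kg
dt = 1.0                # time step, in seconds

pos, vel = 1.0e9        , 0.0   # particle a million km out, at rest

for _ in range(100):            # a hundred steps of Euler integration
    acc = -G * M / pos**2       # acceleration toward the mass at the origin
    vel += acc * dt
    pos += vel * dt

# The simulated particle has fallen inward ...
assert pos < 1.0e9
# ... while the desk the computer sits on feels nothing at all.
```

The variable M figures in a description and nowhere else; no scale in the room will ever register it.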

Quote:

Originally Posted by wolfpup

This is flat-out wrong, as evidenced by Fodor's statements that CTM is an indispensably essential theory explaining many aspects of cognition, while at the same time he never even imagined that anyone would take it to be a complete description of everything the mind does. Your characterization of the computational theory of mind is simply wrong in that it fundamentally misrepresents how CTM has been defined and applied in cognitive science.

Fodor indeed had heterodox views on the matter; but, while he's an important figure, computationalism isn't just what Fodor says it is. After all, it's the 'computational theory of the mind', not 'of some aspects of the mind'. Or, as your own cite from the SEP says,

Quote:

I see a "blank space" where you provide your ruminations about CTM being somehow related to "computational modeling" because it's so egregiously wrong.

I'm not saying that the CTM is related to computational modeling, I'm saying that computational modeling is useful in understanding the brain even if the mind is not wholly computational. For instance, a computational model of vision need not assume that the mind is computational to give a good description of vision.

Quote:

Because it's wrong, for the reason cited above.

If you intend this to mean that my argument is wrong just because Fodor (or his allies) don't hold to it, then that's nothing but argument from authority. You'll have to actually find a flaw in the argument to mount a successful attack.

Quote:

I appreciate the effort you made to once again detail your argument, but I find the view that there is some kind of fundamental difference between an abstract Turing machine and a physical one because the former manipulates abstract symbols and the latter manipulates physical representations to be incoherent.

There is, really, only one set of questions you need to answer: if I use my device to compute the sum of two inputs, what is the device doing? Is it computing the sum? If not, then what is it doing? If I use it to compute f', what is the device doing?

Because that's computation in the actual, real-world sense of the term, absent any half-digested SEP articles. My claim is nothing but: because I can use the system to compute f (or f'), the system computes f (or f'). There is nothing difficult about this, and it takes loops of motivated reasoning to make it into anything terribly complex or contentious.
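To pin this down, here is a minimal sketch of the sort of box at issue. The wiring and the two reading conventions are my own reconstruction of the example's numbers, so treat the details as assumed: four switches stand for two 2-bit inputs, three lights for one 3-bit output.

```python
def box(switches):
    """The physical device: four switch positions in, three lights out.
    Wired so that, under one standard reading, it adds two numbers."""
    up = [s == 'up' for s in switches]
    x = 2 * up[0] + up[1]        # first switch pair, most significant first
    y = 2 * up[2] + up[3]        # second switch pair, most significant first
    total = x + y
    return ['on' if (total >> i) & 1 else 'off' for i in (2, 1, 0)]

# Interpretation 1: 'up'/'on' is 1, most significant switch/light first.
def read_msb(pair):
    return 2 * (pair[0] == 'up') + (pair[1] == 'up')

def read_lights_msb(lights):
    return 4*(lights[0] == 'on') + 2*(lights[1] == 'on') + (lights[2] == 'on')

# Interpretation 2: same on/off convention, least significant first.
def read_lsb(pair):
    return (pair[0] == 'up') + 2 * (pair[1] == 'up')

def read_lights_lsb(lights):
    return (lights[0] == 'on') + 2*(lights[1] == 'on') + 4*(lights[2] == 'on')

state = ('down', 'up', 'up', 'down')   # the example's switch setting
lights = box(state)                    # -> ['off', 'on', 'on']

# Under interpretation 1, this episode is f(1, 2) = 3: addition.
print(read_msb(state[:2]), read_msb(state[2:]), read_lights_msb(lights))
# Under interpretation 2, the very same episode is f'(2, 1) = 6.
print(read_lsb(state[:2]), read_lsb(state[2:]), read_lights_lsb(lights))
```

The physical episode---switches set, lights lit---is the same function call either way; only the reading of switches and lights differs, and with it which function got computed.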

Quote:

Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". The question of what a "computation" is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue, but a class-of-problem issue.

I've explicitly left the algorithm that computes either function out of consideration. The computation is given by the function; the algorithm is the way the function is computed, the detailed series of steps being traversed. I can talk about a system implementing a computation without talking about the algorithm it follows. Again, that's just the everyday usage of the term: I can say that a certain program sorts objects, without even knowing whether it implements mergesort, or quicksort, or bubblesort, or what have you.

So no, the algorithm has no bearing on whether the system implements f or f'. The reinterpretation of the symbolic vehicles it uses entails that any algorithm for computing one will be transformed into one for computing the other. Where it says 'If S12 = '1' Then ...' in one, it'll say 'If S12 = '0' Then ...' in the other, with both referring to the switch being in the 'up' position, say.
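As a sketch (identifiers assumed, not from any actual implementation): the same physical test shows up as a check against '1' in the algorithm for f and against '0' in the transformed algorithm for f', with both branches keyed to the same physical state of the switch.

```python
# The same physical condition -- "switch S12 is up" -- under two codings.
def branch_I(switch_up):
    bit = 1 if switch_up else 0   # under I, 'up' is read as '1'
    if bit == 1:                  # reads: If S12 = '1' Then ...
        return 'taken'
    return 'not taken'

def branch_I_prime(switch_up):
    bit = 0 if switch_up else 1   # under I', 'up' is read as '0'
    if bit == 0:                  # reads: If S12 = '0' Then ...
        return 'taken'
    return 'not taken'

# Both branches fire on exactly the same physical switch positions:
print(branch_I(True), branch_I_prime(True))     # taken taken
print(branch_I(False), branch_I_prime(False))   # not taken not taken
```

Textually the two programs disagree ('1' versus '0'); physically they describe one and the same mechanism.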

Quote:

Originally Posted by Half Man Half Wit

No. Not even close. I haven't said anything about internal processes at all; they have no bearing or relevance on my argument. The argument turns on the fact that you can interpret the inputs (switches) and outputs (lights) as referring to logical states ('1' or '0') in different ways. Thus, the system realizes different functions from binary numbers to binary numbers. I made this very explicit, and frankly, I can't see how you can honestly misconstrue it as being about 'internal processes', 'black boxes' and the like.

Your "thus" is flat wrong and stupid. When the box in your original argument transformed its input into its output, it used a specific approach to do so. It didn't use all theoretically possible approaches; it used one. It doesn't use "different functions" to map the inputs to the result; it uses only one function. Which function? Whichever one it used. You can't tell which from the outside, but frankly, reality doesn't give a crap what you know.

You made it very explicit that your argument depends on a blatantly false assumption, and it's not misconstruing things to point out that the realities of how black boxes and internal processes function are what show that your assumption is blatantly false.

Quote:

Originally Posted by Half Man Half Wit

OK. So, the switches are set to (down, up, up, down), and the lights are, consequently, (off, on, on). What has been computed? f(1, 2) ( = 1 + 2) = 3, or f'(2, 1) = 6? You claim this is obvious. Which one is right?

I claim it's obvious that only one approach was used. Your interpretation of the result is completely irrelevant, particularly to what was going on inside the box. The inside of the box does whatever the inside of the box does, and your observation of the output and your interpretation of those observations have no effect on the box.

Quote:

Originally Posted by Half Man Half Wit

The internal wiring is wholly inconsequential; all it needs to fulfill is to make the right lights light up if the switches are flipped. There are various ways to do so, if you feel it's important, just choose any one of them.

The internal wiring of the box is entirely, controllingly important to determining how that box functions. And more importantly, it destroys your argument: the fact that the internal wiring must exist and must implement a specific function completely undermines the assumption you're relying on.

Quote:

Originally Posted by Half Man Half Wit

Because what goes on inside has no bearing on the way the system is interpreted. You can think of it in the same way as reading a text: how it was written---in ink on paper, in pixels on a screen, in chalk on a board---has no bearing on whether you can read it, and on what message gets conveyed once you do. Your language competence, however, does: where you read the word 'gift' and might expect some nice surprise, I read it as promising death and suffering, because it means 'poison' in German. In the same way---exactly the same way---one can read 'switch up' to mean '0' or '1'. And that's all there is to it.

So what? How you interpret the box's output has no bearing on the box's functionality.

Quote:

Originally Posted by Half Man Half Wit

Evidently not, to both our detriment.

I will readily concede that I don't see why you think interpretation is even slightly relevant to anything. The box itself isn't affected, and you can't prove that calculation is internally inconsistent just by eyeballing some output. (Especially not with the massive false assumption your argument seems to hinge on.)

Quote:

Originally Posted by Half Man Half Wit

I'm not going to defend IIT here, but it's a very concrete proposal (much more concrete than anything offered in this thread so far) that's squarely rooted in the physical.

Yuh-huh.

Quote:

Originally Posted by Half Man Half Wit

Well, at least now I know it's not just my fault that my arguments appear so opaque to you.

Wrongness can indeed be copied from elsewhere.

Quote:

Originally Posted by Half Man Half Wit

Premise P2 is self-evidently wrong: if an emulation could exactly duplicate every property of a system, then it wouldn't be an emulation, but merely a copy, as there would be no distinction between it and what it 'emulates'. But of course, no simulation ever has all the properties of the thing it simulates---after all, that's why we do it: we typically have more control over the simulation. For instance, black holes are, even if we could get to them, quite difficult to handle, but simulations are perfectly tame---because a simulated black hole doesn't have the mass of a real one. I can simulate black holes all the livelong day without my desk ever collapsing into the event horizon.

You'll probably want to argue that 'inside the simulation', objects are attracted by the black hole, and thus it has mass. For one, that's quite a strange thing to believe: it would entail that you could create some sort of pocket dimension, with its own physics removed from ours, merely by virtue of shuffling around a few voltages; that even though the black hole's mass has no effects in our dimension, there would suddenly exist a separate realm where mass exists, one with no connection to ours save your computer screen. In any other situation, you'd call that 'magic'.

Holding that the black hole in the simulation has mass is exactly the same thing as holding that the black hole I'm writing about has mass. The claim that computation creates consciousness is the claim that, whenever I write 'John felt a pain in his hip', there is actually a felt pain somewhere, merely by virtue of my describing it. Because that's what a simulation is: an automated description. A computation is a chain of logical steps, equivalent to an argument, performed mechanically; it's no different from writing down the same argument in text. The next step in the computation follows from the previous one in just the same way as the next line in an argument follows from the prior one.

You're really, really coming off as somebody who doesn't understand simulations here. I'm not really sure how to explain simulations in brief, so I'll just say "I accept that you think you've refuted P2, but you really, really haven't." Suffice it to say that writing "John felt a pain in his hip" is not a particularly detailed and complete emulation.
