How Many Lego Bricks to Build a Mind?

How many Lego bricks would it take to build a conscious, rational mind?

This may sound like an absurd question. Lego bricks don’t seem like the sort of thing that you could build a mind out of. (At least, I’m assuming that artificial intelligence researchers aren’t currently tinkering away in their state-of-the-art labs with a “Build Your Own Mind!” Lego set.) But the question shouldn’t, in principle, be absurd for a physicalist. There is a sense in which it should be a perfectly reasonable question to ask.

The Relevant Difference for Physicalism: Arrangement of Parts

The physicalist already believes that the mind is entirely made up of individually non-conscious, non-rational physical parts. (I am not counting panpsychism as a form of physicalism, so rest easy, Galen Strawson.) Physicalism, as I am defining it here, requires that the most basic constituents of reality are non-conscious and non-rational—not to mention without teleology (goal-directedness), without purpose, and without meaning (intentionality). Although Lego bricks and brain cells are made up of different elements on the periodic table, the atoms of those respective elements are themselves made up of the same sub-atomic particles. So both Lego bricks and brain cells, at the most basic level, are made up of identical physical parts.

The only difference between the two, assuming physicalism is true, is the arrangement of the constituent particles.

What Is It about Arrangement that Explains Consciousness and Rationality?

Suppose we could create built-to-scale models of every sub-atomic particle using Lego bricks. An accurately scaled Lego model of a single atom might end up being the size of an entire planet, or even a solar system. Could we then arrange these Lego brick atoms to form a conscious, rational Lego brain (perhaps the size of the universe)? And if not, why not?

When we imagine arranging sub-atomic particles to form an actual brain (in parallel to arranging Lego bricks to form a Lego Brain), stacking them one-by-one until the overall structure is complete, what is it about the arrangement of individual parts that could possibly be responsible for consciousness and rationality? This question becomes more formidable when you consider that consciousness is an ON/OFF type thing—you either have it or you don’t. At some point in the assembly of a brain (or a complete body—or whatever collection of parts is taken to be conscious) out of individual, unconscious subatomic parts, consciousness would suddenly appear with the addition of a single sub-atomic particle.

If we cannot say what it is about the arrangement of particles in the actual brain that causes consciousness and rationality, we cannot say with any justification that it is impossible for a certain arrangement of Lego bricks to cause consciousness and rationality to arise. Although physicalists can point to plausible reasons for why it is unlikely that any Lego model would ever become conscious or rational, they cannot rule out the possibility until they point to what, precisely, it is about the arrangement of particles in the brain that generates consciousness and rationality.

Not conscious yet? Well, rearrange them.


A Difference in Kind, or a Difference of Degree?

The Lego Brain thought experiment draws out, for me, the central problem of any form of physicalism: pointing to the arrangement of physical parts doesn’t seem, in principle, to be able to explain consciousness or rationality. Consciousness and rationality (according to a very common intuition that many people share) seem to consist of something fundamentally different from a particular arrangement of particles. The difference between conscious minds, for instance, and non-conscious physical objects appears to be a difference in kind, not just a difference in degree of complexity of the arrangement of particles.

The challenge to the physicalist, then, is to answer this question: What is it about the arrangement of individually non-conscious particles that causes consciousness and rationality? Without an answer to that question, physicalism would seem to offer no explanation at all.

I know you weren’t counting panpsychism as physicalism, but it IS physicalism. So the physicalist doesn’t require anything more than to say that consciousness is a fundamental property of matter. It seems that you are questioning emergentism in particular, but if so, it’s less misleading (to the lay reader who isn’t well read) to simply specify emergentism rather than implying that emergentism is simply another word for physicalism. Physicalism has largely been branded as equivalent to emergentism, and articles that don’t specify further propagate this misunderstanding. So there’s my initial quibble anyway. 🙂

While I consider myself to be some form of panpsychist physicalist, I still appreciate that emergentism carries support based on precedents in other scientific fields. For example, multicellular organisms emerge from a particular configuration of single cells. Likewise, brains (even ignoring consciousness for the moment) emerge from collections of single cells/neurons and their connections and activities. We can also see correlations between connectome complexity and information-processing complexity, where a particular configuration of neurons causes specific behaviors to emerge that weren’t in any meaningful sense present in the original neurons that interconnected to produce such an outcome. In physics, we also see properties like pressure and temperature emerge from sets of individual atoms that themselves have no pressure (for example). This doesn’t make pressure any less real, and it provides a robust precedent of new properties emerging from collections of simpler units that don’t possess these properties in any meaningful sense.

Life likewise emerges from seemingly non-living constituents, where the line is entirely blurred. It seems that self-replication and metabolism are the integral components of what makes things living, but nevertheless few would argue that atoms themselves are alive. So if various properties, including life itself, emerge from configurations of materials that don’t possess such characteristics, is it that far of a leap to assume that consciousness would result from similar property emergentism? I think these precedents provide enough support for it to make it plausible, and that’s all that is needed to get it off the ground. Combine this with brain pathology and other fields and we start to get a fairly robust case that indeed configurations of neurons/atoms are at the heart of the mental properties that we associate with them. As a short answer to the question “What is it about the arrangement of individually non-conscious particles that causes consciousness and rationality?”, I would say that, setting aside my actual position of panpsychism as an obvious answer, from an emergentist’s alternative point of view, they may say that it’s the ability of large sets of particles to process information (excitatory/inhibitory connections) and to synchronize their activity with one another that is central. Once these are in place, then physical representations of the information underlying the inputs start to take form, and then likely lead to consciousness and rationality and so forth. If the Legos can’t do this, then no configuration of them will ever satisfy this emergent requirement.

You raise some interesting distinctions. I would add that my Lego brick brain illustration also assumes that consciousness (let alone rationality) is a real thing. (Some argue that philosophers like Dennett imply in their writings that consciousness is an illusion–what Chalmers calls a Type-A physicalist.)

There’s a disanalogy between single cells combining to form multi-cellular organisms (and your other illustrations), and the emergence of consciousness: namely, consciousness is an ON/OFF type thing, not a degreed property. In comparing your illustration of temperature and pressure “emerging” from collections of atoms to the emergence of consciousness, it strikes me that temperature and pressure don’t “emerge” from collections of atoms. ANY collection of atoms would have temperature or pressure, whereas on the physicalist view only certain very complex arrangements of atoms would cause consciousness to emerge.

This means your comparison to life would be closest to the mark. However, the question of how life “emerges” is itself one that gets at the heart of some major metaphysical commitments. Dualists, and in particular, substance dualists, are already willing to allow for robust, immaterial realities. Once that line has been crossed, I don’t think the dualist is bound any longer by a strict physicalism in other areas. Could it be that life is also explained by teleology, as Nagel entertains in his book MIND AND COSMOS? If that were the case, then the physicalist cannot appeal to life as a counterexample.

But at the end of the day, these debates, in my mind, come down to the intuition that separates Type A and Type B physicalists: is consciousness real? Can we conceive of the same physical world that lacks consciousness? The Lego brick illustration is meant to draw out the intuition that adding one more brick to an arrangement won’t ever do the trick. Likewise for adding one more atom to an arrangement of atoms. How does that one more atom create consciousness?

Your comment on this:
“…[the emergentists] may say that it’s the ability of large sets of particles to process information (excitatory/inhibitory connections) and to synchronize their activity with one another that is central. Once these are in place, then physical representations of information underlying the inputs starts to take form and then likely leads to consciousness and rationality and so forth.”

The problem for me lies in saying “and then likely leads to consciousness and rationality.” It reminds me of the comic that Dennett used to mock dualists, in which a student is writing the solution to a problem on a board, and in the middle of the entire equation he writes “and then a miracle occurs.” The teacher responds, “I think you need to be a little more explicit in step 2.”

“There’s a disanalogy between single cells combining to form multi-cellular organisms (and your other illustrations), and the emergence of consciousness: namely, consciousness is an ON/OFF type thing, not a degreed property.”

I disagree. I think it would be better to say that consciousness is “off” or it is “on” in different degrees (and not just in a panpsychist sense). Allow me to illustrate in a couple ways that come to mind. Via a neuronal negation approach, we can see that by removing the activity of certain groups of neurons, we lose certain aspects of consciousness (for example losing the ability to recognize faces, having different levels of alertness, etc.) And generally speaking, we can see that if we keep taking away conscious faculties/features (such as can be seen when certain anesthetics take effect), we eventually lose our “autobiographical sense of self” entirely, leading to a core-level of self consciousness limited to experiencing the here and now (no “higher” conscious thought), and with further neuronal negation, eventually becoming so primitive, it constitutes some minimally conscious brain (likely to be on par with the brain of a “lower animal”, insect, etc.). So I would argue it most certainly comes in degrees including via evolution (phylogenic and ontogenic). Now as a panpsychist physicalist myself (like Strawson) I would personally say (independent of Strawson’s views) that this property of consciousness in all matter is amplified to become what we would recognizably call “consciousness” in said minimally conscious brains due to a particular configuration of this “conscious” matter. So while I would argue that consciousness comes in degrees starting with an individual atom, I find it more useful to dispense with the word “consciousness” in that sense (to eliminate confusion and not be mis-characterized as a new-age lunatic). Once a configuration of atoms is such that it constitutes a brain with some minimal degree of information processing complexity, then that’s when I start to use the term “consciousness” even though I think it applied all the way back to before an information processing brain had materialized.

“But at the end of the day, these debates, in my mind, come down to the intuition that separates Type A and Type B physicalists: is consciousness real? Can we conceive of the same physical world that lacks consciousness? The Lego brick illustration is meant to draw out the intuition that adding one more brick to an arrangement won’t ever do the trick. Likewise for adding one more atom to an arrangement of atoms. How does that one more atom create consciousness?”

I agree with the illustration insofar as it is counterintuitive (and I believe also false) that adding one more block/atom would result in some kind of binary switch (from “no consciousness” to “consciousness”). Which is why I would argue that it would actually result in a small quantitative change in degree of consciousness (as per my “degrees” points above).

” The problem for me lies in saying “and then likely leads to consciousness and rationality.” It reminds me of the comic that Dennett used to mock dualists, in which a student is writing the solution to a problem on a board, and in the middle of the entire equation he writes “and then a miracle occurs.” The teacher responds, “I think you need to be a little more explicit in step 2.” ”

Yeah, I could have simply omitted that last part (hehehe). In any case, I would agree that the “and then a miracle occurs” has to happen for the non-panpsychist physicalist. For them, since they believe that non-experiential “ultimates” somehow lead to the emergence of “experiential” phenomena, they have a large burden of proof that doesn’t seem possible to meet, nor coherent in terms of what it would entail. However, even for a panpsychist physicalist like me, I believe that what we tend to call “consciousness” (in humans for example) does in fact emerge based on information processing complexity, neural synchronization, physical/neuronal representation of sensory stimuli, etc. So that part of the argument I still buy, because it is still needed for a complete explanation under my own panpsychist physicalist views. However, if consciousness was always there in a primitive, fundamental sense (in every atom), then it never had to emerge from non-consciousness; rather, a complex, recognizable consciousness emerged from a fundamental form of consciousness. This becomes analogous to how certain configurations of matter can give rise to self-replication, information storage, and even macro-scale properties like intense gravitational fields, black holes, etc. The emergentism in my case is really just a matter of degree rather than a leap from non-experiential to experiential.

“So while I would argue that consciousness comes in degrees starting with an individual atom, I find it more useful to dispense with the word “consciousness” in that sense (to eliminate confusion and not be mis-characterized as a new-age lunatic).”

It’s too late, Lage. I’ve already dismissed you as one. 😉 I kid, of course. On a serious note, we have to ignore ad hominem attacks and just care about the truth. If someone wants to label substance dualism “occult” or “unscientific” or whatever, so be it. That’s not the important question. The important question is, Does substance dualism correspond to reality? Is it true?

As for the rest of what you say, I think we’re largely agreed… even on your claim that consciousness is a “degreed” property. In the sense that you are describing it, yes, consciousness can be “more” or “less.” But I don’t think you contradicted me in my claim that it is either there or not there, regardless of what “degree” (as you used the term) it is present in.

I don’t really understand how this could be seen as an argument against physicalism without doing an extraordinary amount of mental gymnastics.

Couple of quick points I’ll make,

1) The fact that the subatomic particles in the atoms that make up neurons and the atoms in the Lego are the same (which is true, of course) has absolutely no bearing on the higher-level (though still completely physicalist) interactions that are happening on the scale of molecules (say, neurotransmitters at a synaptic junction) and that strongly impact what the brain is doing/how it operates.

2) Just deciding that any object X can stand in for every neuron and you’d have the same type of entity is very likely untrue. The brain, through completely physicalist processes, takes inputs from the outside environment through a massive number of nerve fibers of several stripes (sight, sound, touch, etc.) and outputs a movement or decision or “behavior,” and can actively change and rearrange its neural map and connective network based on those inputs. We’re not even getting to the point where all the different cells and component parts of neurons and the massive number of molecular interactions of all shapes and sizes (again, through completely physicalist processes) contribute to what we call “consciousness.” It’s clearly an incredibly complex phenomenon that’s been honed through evolution over the eons.

3) To quote you: “they cannot rule out the possibility until they point to what, precisely, it is about the arrangement of particles in the brain that generates consciousness and rationality.”
Give it time. We’ve literally only begun to scratch the surface of how animal brains work. One question I’d pose to you is this: If we all started out as a non-conscious fetus that could be explained by pointing to a couple of cells, and this fetus continues to grow into a human eventually… where exactly does this “non-physical consciousness stuff” enter the picture? I just can’t see it.

How anyone can just say “Oh sure, just make them all Legos and it’s the exact same thing!” is somewhat puzzling to me. I mean, it’s clear that a neuron (never mind that much, much more is involved than simply the neurons) and a Lego are not anything alike (aside from fundamental particles), despite both being physical.

I can’t rule out the possibility that a certain critical mass or agglomeration of Legos could become conscious, but then again I can’t really prove I’m not a brain in a vat.

Hi Pete.
1) You’re exactly right. This criticism is fully justified. Please see my response to Angra Mainyu, point 2, above.
2) This criticism is an extension of 1. I will add, on top of my response to Angra Mainyu, that we don’t have to make the thought experiment with Lego bricks. We could do it with subatomic particles.

Suppose that you add one particle to another to another. Consciousness is an ON/OFF thing. It’s either there or it’s not. Sure, it can be minimal, as the panpsychist would claim about individual particles, but it’s still there. So, if we assume that individual particles are not conscious, there would have to be a point where adding one more particle causes the structure to go from non-conscious to conscious. What is it about that addition of a single particle that caused the structure to become conscious?

That one-by-one construction illustration is meant to draw out the intuition that consciousness is not the sort of thing that “emerges” from a certain arrangement of physical parts. That’s not the quality or nature of consciousness, which we understand “from the inside”, so to speak.

3) The “give it time” response is reasonable if it is in response to a certain type of dualist argument. If the dualist is saying, “We don’t know how an arrangement of particles could ever cause consciousness. Therefore, consciousness is not physical,” and that is ALL they are saying, I would agree that the “give it time” response is legitimate. The dualist in that case is arguing from what we do NOT know to a strong conclusion that isn’t supported. My LEGO mind-building illustration, however, is meant to draw out the intuition–that many agree with–that individual particles are not conscious, and that no amount of arranging those particles could lead to consciousness because we see consciousness to be of a different QUALITY; it’s a difference of KIND, not DEGREE.

I think the physicalist needs to at least respond to that claim with more than “Well, we just don’t know how, but that doesn’t mean it’s not physical.” The reason I think they need to respond with more is that dualists are offering positive reasons, from what we DO know, for why consciousness, intentionality, personal identity, rationality, etc., cannot POSSIBLY be physical.

Here’s an example of that form of argument, in this case from intentionality (meaning):
1. Thoughts have intrinsic meaning. (A thought about the Eiffel Tower couldn’t possibly be a thought about the Eiffel Tower if it lacked aboutness, specifically, being about the Eiffel Tower. The meaning or aboutness is intrinsic to its identity as a thought.)
2. Physical structures, processes, and objects do not have intrinsic meaning. (We can assign meanings to physical things [for example, like the black squiggles on this page], but they don’t have to have any meaning at all. A random rock on the beach has no meaning. A set of neural firings in the brain, likewise, has no meaning or aboutness. It seems like a category mistake to ask, “What is this particular pattern of neural firings about?” But even if, for sake of argument, we say that a certain set of neural firings is the thought about the Eiffel Tower, clearly it would still be the same set of neural firings even if it lacked that meaning. The meaning is not essential or intrinsic to what the set of neural firings is.)
3. Therefore, physical structures, processes or objects are not identical to thoughts. That is, thoughts are not physical things.
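For what it’s worth, the validity of this form of argument (as distinct from the truth of its premises) can be checked mechanically. Here is a minimal sketch in Lean 4; the predicate names `Thought`, `Physical`, and `HasIntrinsicMeaning` are just my own labels for the three notions in the premises, and formalizing the argument only shows that the conclusion follows if the premises are granted:

```lean
-- Premise 1: every thought has intrinsic meaning.
-- Premise 2: no physical thing has intrinsic meaning.
-- Conclusion: no thought is a physical thing.
variable {Entity : Type}

theorem thoughts_not_physical
    (Thought Physical HasIntrinsicMeaning : Entity → Prop)
    (p1 : ∀ e, Thought e → HasIntrinsicMeaning e)
    (p2 : ∀ e, Physical e → ¬ HasIntrinsicMeaning e) :
    ∀ e, Thought e → ¬ Physical e :=
  -- If e were both a thought and physical, p1 gives it intrinsic
  -- meaning while p2 denies it intrinsic meaning: contradiction.
  fun e ht hp => p2 e hp (p1 e ht)
```

The interesting philosophical work, of course, is all in whether premises 1 and 2 are true; the proof itself is a two-line application of Leibniz’s law-style reasoning.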

Thanks, Pete. I am actually in strong agreement with your line of criticism in your response. I hope that the response I gave to Angra Mainyu demonstrated that.

I’m not a physicalist (I’m undecided between panpsychism and physicalism), but I don’t think this is a problem for the physicalist, since she can reply that it’s a matter for future research what sort of arrangements result in consciousness, and Lego bricks do not appear likely candidates, because the arrangements that we do see result in consciousness involve complex combinations of different materials, not just plastic.

The physicalist can also draw the following analogies: both arrangements of Lego bricks and arrangements of things including metals are arrangements of particles, but it is not nomologically possible to build a magnet, or a working circuit, out of Lego bricks. They simply do not have the relevant properties. It’s also not possible to make self-replicating things (akin to living cells) out of Lego bricks. So, there are plenty of things it’s not nomologically possible to make out of Lego bricks. In fact, most things that work in some specific and complex way (e.g., cells, computers, phones, cars, etc.) are not doable out of Lego bricks. Just by similarity, it seems extremely improbable that it would be nomologically possible for Lego bricks to be conscious, but that’s probably also a brute fact about our universe, and no (correct and warranted) explanation will ever be available, just as no explanation of the laws of nature will ever be available.

That aside, I think a problem for the physicalist is to justify her rejection of panpsychism. Why pick the former over the latter?

Hi Angra. I’m agreed with you on both points: 1. Why does the physicalist choose physicalism over panpsychism? Why assume that fundamental reality does (or even must) lack consciousness?

2. Your comments about nomological possibility are spot on. I actually anticipated them in my original draft, but deleted my discussion for the sake of length. Here’s what I originally wrote:

“An obvious disanalogy exists between the hypothetical Lego-model atoms and real atoms. The behaviour of sub-atomic particles, as described by physical laws, is different from the behaviour of macro-objects like Lego bricks, let alone solar-system sized Lego models. Consider the speed that an earth-sized Lego electron would have to travel around a sun-sized Lego nucleus in order to match the orbits per second of an electron orbiting its nucleus. This defies physical possibility within our universe. Furthermore, even on the smallest scale, Lego bricks jostling around in a Rubbermaid container don’t possess a corresponding form of attraction and repulsion that sub-atomic particles have with each other.

This disanalogy removes the physical possibility of building a parallel, perfectly-to-scale model of the brain from Lego bricks. Yet, the disanalogy doesn’t definitively answer the original question: Could we build a conscious, rational mind from Lego bricks?”

So, even if it would be nomologically impossible to build a conscious mind from Lego bricks, the illustration puts focus on how the relevant difference between the conscious/rational and the non-conscious/non-rational is the ARRANGEMENT of non-conscious/non-rational bits.

After further consideration, I still think the physicalist might reply by drawing analogies like the ones I mentioned. Maybe there is a way around that sort of reply, but so far, I don’t see any effective one.

For example, let’s consider the difference not between conscious and non-conscious, but between living and non-living.
Living stuff is made up of the same non-living particles as non-living stuff. It appears that the relevant difference between the living/non-living (i.e., what causes stuff to be alive or not) is the arrangement and interaction of non-living bits, though we do not at this point know the specifics of that difference – it’s a matter for future research.
The physicalist might say that the case of conscious/non-conscious is similar: the relevant difference between conscious and non-conscious (i.e., what causes stuff to be conscious or not) is the arrangement and interaction of non-conscious bits, though we do not know at this point the specifics of that difference – it’s a matter for future research (either in science or philosophy or both).
The physicalist might further say: just as we can properly reckon (because of the examples we’ve encountered, making an intuitive probabilistic assessment) that it’s extremely improbable that we could (in a universe with the laws of ours) make a living structure of Lego bricks (though we don’t know the details of what causes stuff to be alive), similarly we can properly reckon it’s extremely improbable that we could (in a universe with the laws of ours) make a conscious structure of Lego bricks (though we don’t know the details of what causes stuff to be conscious).

As I mentioned, I don’t see a way around this sort of reply from the physicalist. Maybe you’d like to object to the assessment that the difference between the living and the non-living is the arrangement and interaction of non-living bits? Or do you think something else goes wrong?

You anticipated my response. I don’t think a dualist has to concede (although many would) that life is merely a combination of physical parts. Here’s what I wrote to Lage above, in response to the same analogy:

“There’s a disanalogy between single cells combining to form multi-cellular organisms (and your other illustrations), and the emergence of consciousness: namely, consciousness is an ON/OFF type thing, not a degreed property. In comparing your illustration of temperature and pressure “emerging” from collections of atoms to the emergence of consciousness, it strikes me that temperature and pressure don’t “emerge” from collections of atoms. ANY collection of atoms would have temperature or pressure, whereas on the physicalist view only certain very complex arrangements of atoms would cause consciousness to emerge.

“This means your comparison to life would be closest to the mark. However, the question of how life “emerges” is itself one that gets at the heart of some major metaphysical commitments. Dualists, and in particular, substance dualists, are already willing to allow for robust, immaterial realities. Once that line has been crossed, I don’t think the dualist is bound any longer by a strict physicalism in other areas. Could it be that life is also explained by teleology, as Nagel entertains in his book MIND AND COSMOS? If that were the case, then the physicalist cannot appeal to life as a counterexample.”

It seems to me consciousness is a property that comes in degrees as well. We can be conscious to different degrees (e.g., when under the effects of drugs, when we sleep, when we’re waking up). There might be a minimum that is on/off, but I don’t know that there is. Also, some of the properties of life also might be on/off, like being self-replicating. Computer programs can do that too, by the way – they self-replicate even if they’re made of bits that don’t.
In any case, though, I think the physicalist may reply as follows:
1. Consider a microwave oven (for example). It can be switched on/off. It emits microwaves depending on whether it’s on or off. But it’s made of bits that do not emit microwaves. And whether a device emits microwaves depends on how the bits without that property are arranged.
2. The same goes for a Tokamak reactor, or a laser, or any of a long list of devices.
In some cases at least, whether a property is there is an on/off thing (e.g., whether a device is producing electricity by means of nuclear fusion).
Why would consciousness not be like one of those?
It might be that depending on how the non-conscious bits are arranged, the whole device (whether a brain or a computer) is conscious or not.

There is still the difficulty of the justification for ruling out panpsychism, but that’s a different issue I think.

Angra, I’m curious. I would ask this of Gordon, but he bracketed the question of panpsychism not counting as physicalism, so I’ll leave him alone about this. But you say that you are on the fence between physicalism and panpsychism. But then you say that panpsychism isn’t a problem for physicalists. And then you say, to my ears rightfully so, that a physicalist would be hard pressed to reject panpsychism. So, I’m confused. What do you mean by deciding between the two theories?

FWIW, I think all physicalists who admit that there is consciousness should be panpsychists. I’m with Galen Strawson on this. And I also believe that this physicalist story is better than the dualist story on offer given that any dualist story will seemingly have to posit extra stuff that I just don’t see the need for.

I didn’t say that panpsychism isn’t a problem for physicalists. What I said is that the issue raised by Gordon does not seem to be a problem for physicalists. I think the difficulty for the physicalist is to justify her assessment that panpsychism is false, or if you like, justify that she assigns a very high probability to physicalism (considerably higher than 0.5, given that she’s a physicalist), given the option of panpsychism.
However, I think there is a similar difficulty for the panpsychist – namely, how to justify her assessment that physicalism is so improbable.
Here, I’m using “physicalism” in a way similar to the way I think Gordon is using the word; maybe on a different terminology, the contrast would be between emergentism and panpsychism, but terminology aside, I think the problem is to take sides between the two, given the information available to us. I mean, how does one go about ascertaining whether there is some basic consciousness all the way down, so to speak?
I don’t think we have empirical evidence that would significantly increase the probability of either option (I don’t know whether we ever will), but without empirical evidence deciding, we’re stuck with the priors + whatever we can get from a priori reasoning, and I don’t see how either of those would justify assessing that one of the two options is correct. Granted, maybe some physicalists or some panpsychists have some a priori arguments I haven’t seen, so that might happen, but at any rate, the physicalist or the panpsychist would require at least some a priori arguments to establish that their position is correct (unless their priors do all of the work, but I don’t think that’s proper, either).

Interesting post, as always. I think you’re right to criticize physicalism for making it unclear how exactly non-minded matter is supposed to combine to create matter which is also a mind. In this respect, I take panpsychism to sort of be a reductio ad absurdum of physicalism.

But I’m not sure this is good news for non-physicalism and/or dualism. To my thinking, dualism is in no better a position than physicalism to explain the mind. In fact, I think both dualism and physicalism share the same basic flaw: treating the mind as a sort of quasi-‘object’.

I prefer the Aristotle-Wittgenstein-Ryle tradition of taking the mind not to be a thing, which conscious entities possess, but rather a set of abilities and capacities that a being has. So a person has a mind not in virtue of having an immaterial res cogitans, nor is a person’s mind just their brain (which, implausibly, is a mind which is made up of bits of non-mind). Rather, to have a mind is to be able to *do* certain sorts of things. In this sense, having a mind is not like having a car; you don’t possess a thing. It is more akin to being recognized as having a role with certain sorts of authority and responsibility, i.e., the boss of an office.

I’m inclined to agree with your criticism in a limited sense: I don’t think a conscious entity could possess a mind. I think the conscious entity just would BE a mind. I’m inclined to reverse the order from the way you characterized dualism: you don’t have an immaterial mind. You ARE an immaterial mind. You have a body.

The question I have in response is: What is the “being” that you refer to when you say the mind is “a set of abilities and capacities that a BEING has”? Is that being an object? A purely physical object? Or an object made of irreducibly conscious bits (as in panpsychism)? Referring to a “person” or a “being” (understood to be a person) that has the capacities and abilities seems to sneak the mind in through the back door.

What the dualist is going to point to in order to justify inferring an immaterial mind (an object) are the QUALITIES of the mental that are not possessed by physical objects, and so, by extension, must be possessed by a non-physical object (or SUBSTANCE). For instance, thoughts have intrinsic meaning (intentionality)—physical objects don’t (only derived meaning).

I might be misunderstanding you and the Aristotle-Wittgenstein-Ryle view, but my question would be: What is the ‘you’, the ‘I’, the ‘being’ that has the abilities and capacities (capacities like consciousness, intentionality, rationality) on that view?

Thanks for the reply. From what you’re saying, I think you might have to say some more about why you think it should be the case that we identify the person with the mind. Certainly a certain kind of dualist would think this, but I think that this is still an example of the kind of thing I was disagreeing with, of treating the mind as an entity.

Your question was: “What is the ‘you’, the ‘I’, the ‘being’ that has the abilities and capacities (capacities like consciousness, intentionality, rationality) on that view?”

Roughly speaking, the thing with abilities and capacities to do things is the organism, the human animal. A ‘minded’ being, then, is an animal (or animal-like thing) which has the right array of abilities and capacities.

You might object that this is not helpful, because an animal is just a part of nature and no part of nature has mental properties. But what I’m saying is that it is a mistake to suppose that a *part* of the animal (say, its brain) is its mind, or that a part of that part (a neural state, maybe) is e.g. a thought. The mental properties you are talking about (being rational, being conscious, having thoughts) are properly ascribed only to the whole being, not a part of it (its mind, soul, or brain).

The only exception to that is intentionality – it doesn’t make much sense to say that an animal has intentionality; an animal itself isn’t “about” anything. But on this sort of account, what gives thoughts or other mental acts intentionality is, very roughly, a matter of their use in normative linguistic practices. But that is a story for another time!

Anyway, the only reason I raise all this is to suggest that neither physicalism nor panpsychist “physicalism” (I use scare quotes deliberately) are the only non-dualist options. In the tradition of philosophical thought I work from (pragmatism, broadly speaking), both physicalism and dualism are inside the same Cartesian framework, and we have to *start* by rejecting that framework in order to make progress in epistemology and philosophy of mind.

Thanks, Brandon. I would like to hear that “story for another time”. 😉 That is where the rub would be, I think, given my own view. However, I’d love to hear more sometime.

Your response is helpful. I have a good friend who is always on me to read Ryle and not just read ABOUT Ryle. I’ve started The Concept of Mind, but not seriously worked through it. Shameful, I know. I shouldn’t admit these things in print. I won’t say anything more now in response because I really need to contemplate the view you put forward more seriously. Maybe I’ll pick up that copy of Ryle…

That said, I like your suggestion about saying more regarding my view of why I understand us to be minds that have bodies. (I lean that way because I don’t think we are embodied creatures of necessity.) To steal your line, that will be a story for another time… as in perhaps my next post.

Thanks for breaking into my Cartesian bubble with an outside perspective.

I have recently published an ebook ‘On the Mechanisms of Consciousness: How consciousness works’ https://www.amazon.com/-/e/B01MUAZ8Z2 which I believe defines what the Lego bricks would need to do, and how they would need to be connected together architecturally. In fact, active Lego bricks would be quite a good analogy for the basic elements that are illustrated there. This is an engineer’s take on how to make something conscious, so it’s down to earth; no need for quantum fairy dust here! Would be most interested in your or others’ thoughts on this independent research. Peter Martin pjm678@hotmail.com

I think (although you no doubt meant it humorously) that mentions of things like “quantum fairy dust” or “ectoplasm” or “occult entities” are really just a way to avoid addressing the real concerns that dualists raise through disparagement. The concern is explaining reality as we observe it, and arguably the most salient feature of reality is our own consciousness, our own minds. (This is what let Descartes doubt, rightly or wrongly, everything else, aside from his own internal consciousness of existing.)

The debate over consciousness is largely a debate over how to understand what it is. The claim that it can be assembled brick by brick seems to already assume that it is a very different sort of thing from what I am talking about, which is experienced from the inside, not observed from the outside.

If you were to show me an assembled Lego android, and it appeared to act just like a human, I would think that you’d simply created a machine that cleverly mimics a conscious human. But I’m highly doubtful that an android that could act “just like” a human is even possible. Even so, it still wouldn’t explain why I feel the way I do, and how my consciousness is connected or related to the physical body I am in. (The physicalist more or less says the relation is one of identity.)

My work would say that a conscious entity can be built brick by brick, but it is only once what this implements has the right, looping, architecture that it can operate in a way that generates consciousness. To determine if something is conscious you need to look at the architecture, as there is no sure way of telling by looking at its behaviour (ie a white box rather than black box analysis is needed).

The architecture is relatively simple, but has to have the right elements. The key is that it enables a persisting (but evolving) representation of the current situation that includes the entity’s own position in state space, its options, and the consequences for it if it follows them. Also that this representation is enactable within the brain. Consciousness therefore arises from a sort of ‘tennis match’ between a persisting representation (what we currently know) and the real-time processing that enables us to both act in the world and update the representation, whilst also being determined by the representation.

All this stuff is difficult to cover in words alone (which is a bit tough for philosophers!). Pictures make it a lot clearer. In my book I define the architecture in pictures, then go back and explain where the mysterious nature creeps in (eg because representations are in the form of attentional pointers that can themselves be attended to) and how this answers the standard philosophical questions about consciousness.
Regards
Peter

…and to answer your question, it depends on what each Lego brick can do and how rich a conscious mind you want to create; but consciousness pops into existence when the information processing loops are created that enable it to use its own position in state space as if it was sensory data, and to modify the same as if it was taking a motor action. Consciousness then exists over the extent of the whole brain and over the time period that is necessary for information to flow to and fro over the whole brain – it is a phenomenon that only exists over the whole extent of the brain and once a cognitive cycle has completed. A simple device, such as a thermostat, can be made conscious, so only enough Lego bricks are needed to instantiate the architecture for this simple case; more can then be added if you want to deal with a bigger state space. https://www.amazon.com/-/e/B01MUAZ8Z2
Peter Martin

A normal thermostat is not conscious; it does not experience temperature. Once a thermostat’s real-time decision making is governed by what it knows (ie information that persists beyond one cognitive cycle – for example different operating modes), and that includes where it could go in its state space, and how good or bad that would be, and it can either communicate that or use it to change its decision, or to change what it knows, then it experiences temperature. This is because being at a current temperature is represented along with the context of how it got there, what it could do next and what the consequence would be. How rich this experience is depends on the size of state space it has. This is implementable.

Hi Peter. You talk of a thermostat “knowing” things and making “decisions.” I can’t make sense of that. However, I think you’re right that if a thermostat did possess those abilities, it would be conscious. Thoughts, beliefs, sensations, desires, volitions–these are all modes of consciousness. Your description of the thermostat, then, as “knowing” and making “decisions” seems to be just another way of describing consciousness. In other words, I read your description as saying something like, “If the thermostat was a conscious mind (knowing, communicating, deciding), then it would be conscious.” Do you think a thermostat “knows” in the same way we know things?

Thanks for your response, fascinating discussion, just a bit hard to express in a few words without pictures! I have defined an architecture that is necessary and sufficient to make something conscious, and would work for a complex or a simple entity. I am using a slightly enhanced thermostat as the simplest meaningful example I can come up with of something that can be made conscious, and I am implementing this in software at the moment.

A simple thermostat can just route ‘too hot’ to ‘heater off’ and ‘too cold’ to ‘heater on’ and does not need to be conscious. I have added a couple of different modes that the thermostat can be in, with different things it does in those modes (a normal mode and a calibration mode), and criteria for switching between the modes. These modes, once switched on, persist over time as the context for its real-time behaviour. Looping this back gives it access to its own internal states.

A valence function tells the thermostat what good looks like (pleasure or pain), and it behaves to optimise this function. This gives the possibility for it to consider alternative actions and their consequences; or to communicate with a user about what it is doing or could do.
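To make the proposal concrete, here is a minimal Python sketch of the kind of enhanced thermostat Peter describes: persisting modes, a valence function defining “what good looks like”, action selection by comparing the valence of predicted consequences, and a loop back that lets it report its own internal state. The mode names, set point, valence formula, and report format are my own illustrative assumptions, not taken from his book.

```python
TARGET = 20.0  # degrees; an assumed set point

class EnhancedThermostat:
    def __init__(self):
        # Persisting state: survives beyond one cognitive cycle and acts
        # as the context for real-time behaviour.
        self.mode = "normal"        # or "calibration"
        self.heater_on = False
        self.offset = 0.0           # correction learned in calibration mode

    def valence(self, temperature):
        # "What good looks like": closer to the target is better.
        return -abs(temperature + self.offset - TARGET)

    def step(self, temperature):
        # One cognitive cycle: the decision depends not only on the current
        # input but on the persisting mode, and the mode itself can change.
        if self.mode == "calibration":
            self.offset = TARGET - temperature   # crude recalibration
            self.mode = "normal"
        else:
            # Consider both candidate actions and pick the one whose
            # predicted consequence scores higher on the valence function.
            predicted = {True: temperature + 1.0, False: temperature - 1.0}
            self.heater_on = max(predicted,
                                 key=lambda act: self.valence(predicted[act]))
        return self.report(temperature)

    def report(self, temperature):
        # The loop back: the device can communicate its own internal state.
        return {"mode": self.mode, "heater_on": self.heater_on,
                "valence": self.valence(temperature)}
```

For example, `EnhancedThermostat().step(15.0)` turns the heater on (the predicted warmer state has higher valence), while a step at 25.0 turns it off.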

Here are my definitions of consciousness and knowing:

To be conscious of something is to know that we know it.

Therefore to understand what it means to be conscious we need to define what it is to know.

We know something if we have instantiated in our mind a mental structure that enables us to pay attention to significant inputs, discriminate different values of those inputs, and produce useful outputs in order to give us control relative to that thing.

When we know that we know (ie when we are conscious), we have instantiated in our mind a second mental structure that enables us to pay attention to the first mental structure, and produce outputs that give us control relative to it.
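Read as an architecture, the two definitions amount to nested structures: a first-order structure attends to inputs and produces outputs, and a second-order structure takes the first structure itself as its input. A tiny sketch, with class names and the toy discrimination entirely my own illustration:

```python
class FirstOrder:
    """First-order knowing: attend to an input, discriminate its value,
    and produce a useful output."""
    def __init__(self):
        self.last_input = None
        self.last_output = None

    def attend(self, signal):
        self.last_input = signal
        self.last_output = "act" if signal > 0 else "ignore"  # discriminate
        return self.last_output

class SecondOrder:
    """Knowing that we know: attend to the first structure itself, and
    produce outputs that give control relative to it."""
    def __init__(self, first):
        self.first = first

    def attend(self):
        # The first-order structure's own state is treated as input here.
        return {"saw": self.first.last_input, "did": self.first.last_output}
```

The point of the sketch is only that the second structure’s input is the first structure’s state, not the world.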

I believe that it is only by being clear about the architecture that gives rise to consciousness that we can cut through the debates and describe exactly what is actually happening in the conscious brain.
Regards
Peter Martin

Terms like “we” and “us” refer to the first-person perspective–which is the self, or the conscious self. We have to be careful how we use those terms in explaining the mind, otherwise we might unwittingly smuggle in the thing that we are trying to explain into our explanation.

The complexity of what happens in our brains is fascinating, beautiful, awe-inspiring, etc., etc., and to speak of the “architecture” of what is going on in the brain is necessary for those who want to understand the way the brain works, but if someone wants to point at a physical system of neural wiring, no matter how complex, and claim that “that right there is consciousness,”…well, then it’s at that point that I no longer think we’re talking about the same thing.

No, you’re absolutely right. Thanks for pointing that out. I came up with the illustration while thinking about adding one atom to another, over and over, until a complete brain is assembled and asking myself, “Why would the addition of one more atom cause consciousness to suddenly appear?” Same insight, but not nearly as eloquent as Block. Accuse me of plagiarism and I’ll insist it’s cryptomnesia.

Gordon,
What’s the justification for the claim that consciousness is either ON or OFF? I’m inclined to agree with Lage that there are reasons to see it more as a matter of degree, just as Sorites’ heap never suddenly pops into existence.

That consciousness is ON/OFF is simply something that is apparent upon reflection. I don’t know what justification to offer other than to point out that whatever “degree” of consciousness one supposes to exist (whether .01 degree or 1000), it would still be consciousness. Can you explain the alternative? That is, consciousness as something that isn’t ON/OFF? I think any attempt to do so would lead to absurdity. (I think panpsychism only makes sense and has traction because we first see consciousness as an ON/OFF thing.)

Hmm. Given your response I’m not sure I could provide a satisfactory explanation for how consciousness isn’t binary. I definitely favor non-essentialist conceptions and think that history has generally favored the idea that the classifications we impose on reality are often best understood as continua. Maybe it would help me understand your position if you provided your answer to the Sorites Paradox? My answer is that a heap is a concept that we associate with piles of material that are numerous and that there simply isn’t a clear delineation or any sort of Platonic essence of “heapness” which appears and disappears as the quantity changes. It seems reasonable to suppose that consciousness can be similarly understood.

We can leave it at a polite disagreement: I don’t think it’s reasonable that consciousness can be understood as non-binary. You acknowledge that you aren’t sure how to go about offering such an explanation. Likewise, I struggle to offer you a counter-explanation, other than to state simply, I think it’s evident on reflection that consciousness is an ON/OFF thing.

That said, another commenter led me to clarify: someone can be more or less conscious, but in either case, that person is still conscious. (This is like a computer screen whose brightness can be increased and decreased, but it is either displaying light (ON) or it’s not (OFF). There’s no third option.) If consciousness is there, it’s there. If it’s not, it’s not. A third option doesn’t make sense to me. What would the third option be? That is essentially what you state you’re not sure you could explain.

What I meant was that I didn’t think I could provide a satisfactory explanation within what I perceive to be an essentialist perspective. I then tried by offering an analogy with the Sorites Paradox. I really would like to try and better understand your contention and I think it would be helpful if you could try to explain why my analogy doesn’t translate, or at least offer your answer to the paradox.

Sure. I don’t think consciousness is like “heap-ness”, where it’s unclear when something becomes a heap. I think consciousness is like the computer screen that is either ON (displaying light, to whatever degree, however faint) or OFF (not displaying any light). It’s just obvious whether it’s on or off – but it’s not neither. Again, I don’t expect this to compel you to agree with me, but hopefully this illustrates my view of consciousness and why I don’t think there is any analogy between consciousness and heaps…unless, perhaps, it’s heaps of conscious particles (but I’ll leave that to the panpsychists).

OK, let’s pursue the screen analogy. Given that I’m fairly certain that my computer screen is not at maximum entropy, the atoms of which it is composed are still emitting photons when it isn’t plugged in. So what constitutes “ON” if not the emission of photons? Do we trace the analogy all the way back to the difference between a state of maximum entropy and not-maximum entropy? It seems a bit ludicrous at that point to think that we’re making some sort of analogy with the thing we call consciousness.

Just make the definition of “ON” more specific. It is ON if it is emitting photons, it is OFF when it’s not. It’s either ON or OFF. Is there a third option between emitting photons and not emitting photons? (Notice I defined “ON” as “displaying light, however faint”.)

This discussion has become a red herring. The key point is that consciousness is evidently, upon reflection, an ON/OFF thing. This conception of consciousness is coherent. Your suggestion that consciousness is non-binary is incoherent to me. You are unable to explain how that would be the case. You simply offered an example of a degreed (non-binary) property, like being a heap, and said that is what consciousness could be like. Likewise, I was simply offering a non-degreed (binary) property, like emitting light or not, and saying that that is what consciousness is like. So this disagreement comes down to an intuition of how we understand consciousness (not how accurate our analogies are). Again, I think we should agree to disagree.

I would be curious to know, though: Do you actually think that consciousness is non-binary, or are you simply playing devil’s advocate?

I don’t expect you to pursue the analogy any further but I do want to explain why I don’t think it’s a red herring. I think that you would actually have a very difficult time finding an appropriate analogy for the proposed binary nature of consciousness because when we really dig into the classifications we make for real-world objects or phenomena, we rarely (if ever) find clear delineations. Labels are, more often than not, heuristics for differentiating between two ends of a continuum, even if the middle of that continuum is sparsely populated (of which I don’t think the consciousness continuum is an example).

With regard to the existence of this thing we call consciousness – rocks and humans fall on opposite ends of the continuum but bacteria, plants, insects, lizards, etc… fall into that gray area where we have trouble differentiating between conscious and unconscious. So, no, I don’t think that consciousness is binary. I don’t see that we have any way to adequately define consciousness so as to support a binary classification, but that doesn’t mean that it can’t still be a useful concept for distinguishing between opposing ends of a continuum.

I’d like to introduce an analogy between a brain becoming conscious and a bell ringing. Replace the Lego bricks with metal blocks being welded together to make a bell. At what point can the bell be said to ring?

It would be necessary that the bell has sufficient structure and shape to resonate. If it is to ring at a particular frequency, it would need to be of a particular size and thickness. If it is to have particular overtones it is to be of a particular shape.

It still will not ring until it is struck and enough time passes for the impulse to pass across the bell and back and reinforce itself, to set up a resonant vibration. And each metal brick must interact correctly with its neighbours.

The ringing of the bell is a dynamic property of the bell as a whole that only really exists over at least one period of the vibration. Thus ‘ringing’ is a dynamic property of the whole bell, but one that can exist in something put together a block at a time.

This is a close analogy to consciousness arising in the brain, although the informational richness adds various elements.
Peter

I don’t think of minds or of consciousness in terms of having parts. A conscious mind, as I conceive of it, is not a complex thing, but rather simple, in the philosopher’s sense. The conscious mind is not divisible into parts. So I don’t think the bell analogy works. As an illustration of how your mind is simple, reflect on the question, Does my experience of touch, sight, smell, sound, and taste happen in different places in my consciousness? Or is my internal experience of being a conscious self a unified, simple whole?

Gordon
The unity of consciousness arises primarily (but not only) from the fact that at the end of each cognitive cycle (of a few tenths of a second) a single decision has to be made about what to pay attention to in the next cognitive cycle, and what action to take (ie patterns to generate) dependent on the values taken by the attended information. This decision is made over the available inputs, potentially, across the whole brain (which is why enough time must be allowed for this to have its effect – being a few tenths of a second for, say, 10 neurons connected serially). The decision is made to maximise valence, ie, crudely to maximise pleasure and minimise pain, although this is discounted for probability of success, delay to outcome, and cost.
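The discounted decision rule described here could be expressed, under assumed functional forms (the multiplicative structure and the geometric discount factor are my own illustration, not Peter’s stated formula), as:

```python
def decision_score(valence, p_success, delay, cost, discount=0.9):
    # Expected, time-discounted valence minus cost: higher probability of
    # success and shorter delay to outcome raise the score; cost lowers it.
    # The functional form is an assumption for illustration.
    return p_success * valence * (discount ** delay) - cost
```

On this sketch, an immediate, certain, free outcome scores its full valence, while an uncertain or delayed one is marked down accordingly.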

In respect of being made of parts, consciousness results from a particular style of interaction between parts. The fact that we are conscious of complex information implies that multiple parts are needed to represent it (one bit – 1 or 0 – wouldn’t be enough). As soon as you have multiple parts (neurons) holding information, you have to allow enough time for them each to contribute to a decision about what to do next, and you have to have a way of arbitrating between them. This is what turns a brain, and a conscious creature, into a thing-as-a-whole rather than a bag of bits. Physics does not have much to say about composite things, even though they do not contradict physics. Their behaviour is best represented at an appropriate level of abstraction, not the physicist’s fundamental particles (if such exist).

Consciousness is a property of a distributed but interconnected information processing system. Understanding and emulating this is a straightforward engineering activity, albeit the loops and twists in the architecture give rise to some puzzles and interesting properties. What answer do you want to questions about consciousness, if not a distributed architecture that works and the implications that flow from it?
Regards
Peter

Hi Gordon. I’ve been struggling with this article. I understand that physicalism has an explanatory problem: it cannot explain what it is about an arrangement that causes consciousness. However, I don’t know how non-physicalism can address this question any better. Strawson’s panpsychism, for example, can’t explain what it is about existence that causes consciousness. Further, representationalist theories of consciousness cannot explain what it is about representational mental states that causes consciousness, either!

Hi Argon. Thanks for the question. I’ll start my response by making a distinction: we can state that something is the case without being able to state how it is the case. For example, Newton came up with a theory which claimed that a force can act at a distance (gravity) without being able to explain how such a force worked. Some of his contemporaries mocked the mysterious and occult force that he appealed to. Nevertheless, Newton’s theory was eventually accepted because it did an elegant job of accounting simply for the phenomena we observe in the world. He proposed a hypothetical explanatory entity to explain the phenomena.

Now, you state that physicalism “cannot explain what it is about an arrangement that causes consciousness.” Granted. I claimed as much. But there is a special reason why physicalism would need to do so: physicalism is limited to the physical (or physical terms) for offering an explanation. Physicalism cannot posit explanatory entities outside of the physical realm, or outside of what is considered to be physical. Furthermore, after ruling out panpsychism as a form of physicalism (asserting for sake of argument that the fundamental “bits” of the physical world are not conscious), what can physicalism appeal to in order to explain consciousness? If I’m right, only arrangement. Physicalism, as defined here, cannot appeal to consciousness as a “brute fact”–a fact that doesn’t need to be explained. That is what panpsychism does. Or at least, it can. Panpsychism isn’t required to explain consciousness further.

But physicalism, unlike panpsychism, cannot explain consciousness as a brute fact. Consciousness has to be derived from (the more fundamental) non-conscious matter. Consciousness is a late arrival in the history of the universe on this view. If I am right, then the only thing that the physicalist can appeal to is the arrangement of physical stuff. And that is where the problem lies. The illustration of the Lego brick brain is meant to draw out our very strong intuition that there is no connection between the arrangement of physical stuff, no matter how complex that arrangement becomes, and the quality of consciousness. Physical stuff lacks that quality (by definition, according to physicalism, at the most fundamental level) and re-arranging and assembling physical parts cannot create that quality. So physicalism cannot account for consciousness. There is no room for consciousness in physicalism.

If I am right, the post only serves to illustrate the absurdity of thinking consciousness can be explained by the arrangement of physical stuff. That opens the door for a non-physicalist explanation, but it doesn’t require it. One can reject a theory without offering an alternative.

I should make a very important qualification here: I just realized that I am assuming, for the sake of my argument, that a theory which says something like “Arrange 25,000 neurons in such and such a pattern and consciousness will appear – but the consciousness itself will not be a physical thing” is not a physicalist theory. Consciousness emerging from a physical arrangement, but being something beyond the physical arrangement (i.e., a non-physical reality, as in epiphenomenalism), is not a strict physicalist theory in my view, since the theorist is positing a non-physical entity to explain consciousness. You see, then, that I am arguing against a narrow conception of physicalism that many naturalists would reject.

That said, since you asked, I think substance dualism is the best explanation for our minds. Panpsychism is an option that I reject because it runs into trouble, as far as I understand it, in accounting for the UNITY of consciousness. I experience the world as a single, simple, unified “Consciousness” or Mind. I am not many conscious bits put together, but one single conscious mind. How does panpsychism explain that?

I’ve dealt with two of three of the common objections to substance dualism, and I’m planning on very shortly responding to the third (the interaction problem, which I consider, along with William Hasker, to be one of the most overrated objections in all philosophy. The interaction problem begs the question, in many of its formulations, against substance dualism, and is really only a problem from a naturalistic worldview.)

Thank you for your response. I feel that I have missed a point. How does dualism, for instance, answer the question: “What is it about the [pairing] of individually non-conscious particles [with their dual-substance partners] that causes consciousness and rationality? Without an answer to that question, [dualism] would seem to offer no explanation at all.” I’ve modified the question with which you end your article to address dualism rather than physicalism.

I believe that the task to which you set physicalism (Physicalism must explain what it is about the arrangement of non-conscious particles that causes consciousness, or else it offers no explanation at all) is too stringent! Physicalism adequately addresses our problems just in case it explains that such-and-such arrangements are conscious arrangements.

I don’t think the task I’ve set to physicalism is too stringent. The physicalist constrains himself to that task by insisting that all of reality is physical.

The physicalist is claiming that all reality is derived from non-conscious parts. Since consciousness doesn’t appear to share any of the properties of physical stuff, the physicalist must give us reason to believe that the properties of consciousness are derived from a collection of non-conscious parts.

If I pointed to a pile of Lego bricks (or a pattern of billions of neurons firing, for that matter) and told you, “That right there is consciousness,” you would be fully justified to say, “No it’s not. That’s silly. You haven’t explained how that right there is consciousness because I can see immediately that it is nothing like what I experience from within, from a first person perspective.”

In other words, pointing to a correlation between physical activity and consciousness does not explain how consciousness is physical, especially when it is arguably self-evident that the physical activity being pointed at isn’t the consciousness itself. The physical activity doesn’t share any of the unique qualities of consciousness.

“So,” you ask, “how is dualism any more of an explanation?” Fair question.

I see the dualist explanation being something like this: “Why does consciousness appear to be a unique property unlike anything physical? Physical realities are third-person observable, and have properties like mass, charge, and location. In contrast, consciousness is not third-person observable, and to describe it as having mass, charge, or location seems like a category mistake. Why is that? Because it is not physical.”

I don’t think the dualist is obligated to offer further explanation for why or how consciousness arises, beyond saying that consciousness is not physical. Why not? Because the extra ontological category explains why physical reality is nothing like our mental reality and vice versa. That, to me, is an explanation, however minimal, that moves us one step closer to understanding what consciousness is.

If you don’t think that that is an explanation in any way, then we will just be talking at cross purposes.

“But why does the physicalist have to say more than just point to a physical story?” The problem is that physicalism, as I defined it in the original post and in my subsequent replies, must in some way hold to an identity between something physical and consciousness. Without offering the explanation of consciousness in physical terms, the physicalist hasn’t given good reason for us to believe that consciousness is physical at all.

Your claim that “physicalism adequately addresses our problems just in case it explains that such-and-such arrangements are conscious arrangements,” would make physicalism compatible with dualism, because the dualist already agrees that we can point to “such-and-such arrangements” and declare them conscious arrangements. The correlation isn’t the relevant question here. The ontological status is.

There are asymmetries between your treatment of physicalism and your treatment of dualism that I believe need to be squared away.

In your writings, you have each theory attempting to answer different questions. This should not be so: you’ve described dualism as attempting to answer the question “Why does consciousness appear to be a unique property unlike anything physical?” while you’ve described physicalism as attempting to answer the question “What is it about such-and-such arrangements that causes consciousness to arise?”

In a previous post, I modified the physicalist’s question into a question for the dualist: “What is it about the interaction between physical such-and-such and non-physical such-and-such that causes consciousness?” I’ve gathered that the dualist answer is something akin to: “The interaction between physical and non-physical stuff just is consciousness,” but this offers no better an explanation than the physicalist’s “Such-and-such arrangements of physical stuff just are consciousness.” We should ask the dualist: “Why does the interaction between physical and non-physical stuff cause consciousness rather than something else?” This is analogous to your question posed to the physicalist: “Why does the arrangement of simples cause consciousness rather than something else?”

You have argued that the physicalist has no good answer to this question. Your defense of dualism, however, has offered no good answer to the parallel question for the dualist, and failing to provide one shows that dualism provides no better an explanation than physicalism.

Another way of seeing the differences between your treatment of physicalism and your treatment of dualism comes from asking the physicalist the question that you’ve described dualism as answering, mutatis mutandis. You’ve described dualism as answering the question: “Why does consciousness appear to be a unique property unlike anything physical?” This begs the question against the physicalist, who could respond by saying “Consciousness, as something physical on my view, must be similar to something else physical, and just because we cannot recognize those similarities does not mean they are not there” or by saying “The nature of physical reality is nuanced enough to incorporate the inside-perspective that consciousness has, and so there is no conflict at all.”

In conclusion, I ask that you expand on why you’ve presented the physicalist and the dualist as answering different questions, because common sense dictates that no legitimate objection can be raised against apples for not being oranges.

Physicalism can hope to explain the behavioral aspects of consciousness (speech, motion…), because those things can be physically defined. But the most important aspect of consciousness is subjective experience (SE, in what follows), which so far has not been physically defined.

The closest things to definitions of SE are actually ostensions: they attempt to direct the audience’s attention to its own SE. So they depend on the audience having SE; what I would consider a “real definition” would not so depend.

The sort of things I’m calling ostensions: there’s such a thing as what it (SE) is like (Nagel); it’s inconceivable that SE doesn’t exist (Descartes); the more “impressionistic” descriptions of qualia (which make them examples of “what it’s like”). Those who “get” these ostensions are those who know that consciousness truly is a hard problem.

So, to those of you who would physically explain SE: first, what’s your physical definition of SE?

Here’s how I would physically explain SE (Subjective Experience). The brain physically holds a pattern that represents the information that is the content of Subjective Experience. The brain also embodies the mechanisms that read that information and translate it either into physical actions, or changes to mental contents (including those representing SE) and attention at the next cognitive cycle.

What this overall mechanism generates is a conscious entity that both holds the representation of the subjective experience and enacts it, updated on the timescale of the cognitive cycle (a few tenths of a second) and across the whole brain. At shorter timescales, or over length scales smaller than a whole brain, we find only the parts that make this new whole possible, but which are not themselves conscious.

What the conscious entity will care about, do and become is determined by what it calculates to be positive and negative for it. I call this valence.

One of the places physics doesn’t help much is with compound entities. Physics is great at fundamental quantities and relationships in the physical world. But once we are interested in the overall behaviours of compound entities, and in controlling and predicting those behaviours, physics is not wrong but is at too low a level to help much.

“The brain physically holds a pattern that represents the information that is the CONTENT OF Subjective Experience.” I emphasized “content of”, because I think that relation needs clarification (or “unpacking”, as the hipsters say).

To a non-experiencer: that info is the content of the brain, so you’re just saying SE is the brain.

Only a subjective experiencer recognizes his own SE as something which that info is “content of.” So that makes this one more ostensive definition. Not bad, really: even ostensive definitions are rare and valuable.

George

The distinction I want to make here is that in talking of ‘subjective experience’ we are implicitly partitioning what is a single brain (at the ‘hardware’ level) into two parts: 1. the information structures that represent the subjective experience, and 2. the experiencer capable of using and updating those information structures. Subjective experience arises through the interaction of the two, as a process.

I find it useful to think of subjective experience from this direction: if I had to create an entity that had a subjective experience, what would I need? I would need a representation of the information content of the experience. I would need a measure of what good and bad look like to the entity, so that there is a ‘feeling’ to it. The entity would need a representation of what actions it could take and has taken or not taken, and what the consequences might be (factually and in terms of good/bad outcomes). I would then need a cyclic mechanism to turn this representation into action and a next, updated representation.
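For concreteness, the cyclic mechanism described above could be sketched in a few lines of code. This is a toy illustration under loose assumptions, not a claim about how a brain (or any real cognitive architecture) implements it; all the names (`Entity`, `valence`, `cognitive_cycle`) are invented here for the example.

```python
# Toy sketch of Peter's cyclic mechanism. Everything here is illustrative:
# a state standing in for the "information content" of the experience,
# a valence function saying what good and bad look like to the entity,
# a set of possible actions with predicted consequences, and a cycle
# that turns the current representation into an action plus an updated one.

class Entity:
    def __init__(self, state, actions):
        self.state = state        # 1. information content of the experience
        self.actions = actions    # 3. actions the entity could take

    def valence(self, state):
        # 2. a measure of good and bad for the entity:
        # in this toy, states closer to 0 "feel" better
        return -abs(state)

    def predict(self, state, action):
        # 3. (cont.) the predicted consequence of taking an action
        return state + action

    def cognitive_cycle(self):
        # 4. turn the representation into an action and an updated representation
        best = max(self.actions,
                   key=lambda a: self.valence(self.predict(self.state, a)))
        self.state = self.predict(self.state, best)
        return best

e = Entity(state=5, actions=[-1, 0, +1])
for _ in range(5):
    e.cognitive_cycle()
print(e.state)  # prints 0: the entity has moved to its highest-valence state
```

Each pass through `cognitive_cycle` is one "cognitive cycle" in the sense above: the entity evaluates its options by valence, acts, and updates its representation for the next cycle.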

I don’t think I need anything else…. So, what’s missing? Anything you can feel, I can represent and enact.
Regards
Peter