The so-called "Hard Problem of Consciousness", i.e. the problem of explaining why we have subjective experiences, is not a hard problem, given what we know about the structure and function of brains, and the relation of mind and brain.

We know that mental functions are isomorphic to brain functions. When photons hit our retinas, they cause the photoreceptor cells to emit chemical and electrical signals, which are carried along the optic nerve into the occipital cortex, exciting the cells to which they are connected, which in turn emit further chemical and electrical signals.

We can compare such a reaction to the reaction that e.g. a sunflower has. Like sentient beings, sunflowers have parts that are specially affected by the incidence of light, which trigger chemical changes that result in the motion of the flower towards the light source. But, unlike in sunflowers, in humans the excitement of the retinal cells doesn't feed directly into the motor cells. Instead, the signal is passed into a network of other cells, which have been trained by a lifetime of similar signals to respond differently to different types of signals.

These networks are layered and looped so that there are parts of the brain that react to specific types of activity in other parts of the brain. As the retinal cells respond to light, the cells in the occipital lobe respond to the excitement of retinal cells. The frontal lobe responds to more macro-level responses, reacting to the reaction in various lobes, and exciting or suppressing reactions elsewhere in the brain as a result. What's happening here is that the brain is literally wired to react to its own activity. Just as the retina responds to light, other parts of the brain respond to the responses to the responses to the responses to light.

This is the basis of subjective experience. Our subjective experience is our brain observing itself. The qualia of "red" is the inside view of a brain observing its own reaction to the incidence of light of a certain wavelength on a retinal cell, as well as to other sensory inputs, network activity, and pre-trained weightings. We should expect that a network designed to react to its own activity has a subjective experience, because, by hypothesis, it has a quasi-sensory relationship to its inner workings.

This is intended to be a high level sketch, because it's a high-level question. We don't know how neural networks solve problems, but we understand why we don't know and we expect that the result will be too complicated to fully understand. When a neural network beats us in Go, we can describe the network, we can build it, but it involves too many 'cells' with too many interconnections and weightings to fully express why the network chooses the move it does. Similarly, explaining the qualia of "red" as an expression of each cell and each connection and each weighting involved will be impossible, and yet we can see that it must be so.

Conscious experience is the brain literally experiencing itself, as we know it does. Being a brain wired to experience itself just is being conscious.

As far as I can tell you have just regressed the problem, not solved it. You start with photochemical reactions: particles hitting particles, causing cascades of effects. No noticing is taking place. But later, in the brain, some neurons or chemical conglomerations notice. Suddenly there is noticing. When in fact we could just have the retinal activity leading to internal brain cascades with no noticing, no observing, just effects. You introduce observing and seem to assert that it simply happens because the action is internal to the brain, rather than the more interactive kind of interaction (the outside causes impinging on the outer parts of the nervous system in the eye); but once the action is internal to an organism/nervous system, observing arises. But there is no explanation. You simply assume that once the causation is between facets of the internal organism, experiencing will arise.

But, in fact, in a physicalist universe everything is internal and external. There is just matter. There is no reason, yet demonstrated, why causation inside an arbitrarily distinguished portion of all these causes and effects (the inside of the brain) should have observation, and not all the others outside of bodies, or in the interactions between the retina and photons, or between photons outside bodies.

Karpel Tunnel wrote:But, in fact, in a physicalist universe everything is internal and external. There is just matter. There is no reason, yet demonstrated, why causation inside an arbitrarily distinguished portion of all these causes and effects (the inside of the brain) should have observation, and not all the others outside of bodies, or in the interactions between the retina and photons, or between photons outside bodies.

And then there's the part I tend to focus more on. The part where the eye sees something that triggers intellectual and/or emotional and/or primal reactions in the brains of those all along the moral and political spectrum.

The brain/mind is tasked not only with describing what in fact is there to see but with whether it ought not to have been there at all.

Reactions that revolve around the consequences of what we think we see and why we think it is either rational or irrational that it be there to be seen at all.

Those "subjective experiences" that appear considerably more problematic.

He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

I don't disagree with the mechanisms Carleas is describing, but I came to the same conclusion as Karpel.

This is why I still tend towards the notion that mind precedes body (matter and the physical). For all of the sense and utility that we can make of conceiving things in terms of matter, as the objects of our experience, they never seem to "make that leap" to being the subject of themselves as well. Yet all matter and energy are experienced in terms of mind first and foremost - even if they are subsequently treated as the fundamental substance that inversely causes the mind (that perceived them in the first place).

In fact, I go further and say that "The Hard Problem" is not a valid problem at all because it is simply a misconception founded in the above inversion of cause and consequence.

Further reasoning for this is that for the human mind to be convinced of an explanation, it must be shown a link between something and something else that is of a different kind. This is why dictionary definitions that reference the word being defined are invalid, and why any tautologous explanation is not seen to be adding any further knowledge. In the same way, how is it possible for the mind to have itself explained to itself in a satisfactory way? If you explain mind in terms of body, this is seen as valid because body is of a different "kind" to mind, but it isn't explaining mind in terms of mind. But if we did explain mind in terms of mind, it would be seen as tautologous, adding nothing and invalid.

I concede that my language is sloppy, but then this is a sloppy area and language is going to let us down. At some point between "light hitting a photoreceptor" and "functioning brain experiencing the qualia 'red'", we get something we would call observation. That will be true for any explanation that accepts mind-brain identity. There may not be a sharp line between observation and not-observation as we abstract up to the whole brain, and that is not necessarily a defeater for a theory of consciousness.

However, some of the sloppiness is of course my own; I will attempt to tighten that up:

I did not mean to suggest that merely by entering the skull a causal chain becomes conscious. I mean to talk about subjective experience as isomorphic to function, i.e. mind state-transitions correspond to brain state-transitions. Cabining such function inside a brain is neither necessary nor sufficient for that isomorphism; it's incidental. Allow me to clarify this point.

Consider again the sunflower, and compare it to a rock (a piece of graphite, say). We can see that light hitting the sunflower has an effect different from light hitting the rock. In particular, the light that hits the rock imparts some energy in the form of heat, which is diffused uniformly through the rock. Sufficient light will result in a phase change or other reaction. By contrast, in the sunflower, upon being hit with the light a chemical cascade is initiated, in which energy from other sources is consumed and directed such that the sunflower moves. These reactions are different in kind. We can nitpick how exactly we want to define or express this difference, but I will take it at face value for our purposes here. Furthermore, the reaction of a photoreceptor cell in the eye is similar to the reaction of a photoreceptor cell in the sunflower (although the cell in the eye is more specialized, in the difference-in-kind between rock and flower the eye cell is clearly on the side of the flower). This seems like a non-arbitrary distinction between some portions of the causes and effects and the totality. Yes, there is a cause and effect relationship between the rock and the light, but it is different-in-kind from that between the flower/retinal cell and the light.

Brains are effectively networks of this latter type of causal connection. The causal relation between the light and the photoreceptor cell is similar to the causal relation between the photoreceptor cell and the neural cells with which it is connected. One conceptual building block I'm using is chains of these causal connections. But these chains aren't only neurons in series, but also in parallel: each neuron passes a signal to many other neurons, and these subsequent neurons may be interconnected, including with neurons earlier in the chain.

In brain architecture, we can identify more or less discrete subnetworks composed of such chains, e.g. the occipital lobe. The occipital lobe consists of many millions of these chains, all trained to parse the signals from the photoreceptors into information about the world as represented by the light that strikes the retina.

Consciousness enters the picture each time some part of the network is causally influenced by a different part of the network, such that the former part is trained to recognize patterns within the latter part. When this occurs, the former part is "observing" the latter part, in the same sense that the occipital lobe is "observing" patterns in the retinal photoreceptors. It's pattern matching, in the same way that AlphaGo pattern-matches on the arrangement of playing pieces on a Go board.

Consciousness is the mental experience of observing mental experience, which is what we would expect a system that is wired to pattern-match to patterns in its own pattern-matching to report. At lower levels, the network pattern-matches on photoreceptor cells firing. At high levels, other parts of the network pattern-match to collections of neurons firing in the photoreceptor-pattern-matching area. This layering continues, with collections of cells reacting to collections of cells reacting to collections of cells etc. This self-observation within the network is isomorphic to the self-observation of conscious experience.
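The layered "reacting to reactions" structure described above can be sketched as a toy cascade in which each layer's input is the previous layer's activity rather than the external stimulus. This is a minimal illustrative sketch, not a model of real neural wiring; the weights and threshold are made-up toy values:

```python
# Toy sketch of layered "reacting to reactions": each layer
# pattern-matches on the ACTIVITY of the layer below it, not on the
# original stimulus. All weights and thresholds are made-up toy values.

def layer(weights, inputs, threshold=0.5):
    """Each unit fires (1) iff its weighted input sum exceeds the threshold."""
    return [1 if sum(w * x for w, x in zip(row, inputs)) > threshold else 0
            for row in weights]

light = [1, 0, 1, 1]                # "retina": raw light / no-light input

w1 = [[0.4, 0.0, 0.4, 0.0],         # first layer reacts to the light itself
      [0.0, 0.3, 0.0, 0.3]]
a1 = layer(w1, light)               # -> [1, 0]

w2 = [[0.6, 0.6]]                   # second layer reacts to the FIRST
a2 = layer(w2, a1)                  # LAYER'S activity -> [1]

print(a1, a2)
```

The point of the sketch is purely structural: a2 is a reaction to a reaction, which is the kind of self-directed sensitivity the argument relies on.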

And again, this is all distinct from the rock because the causal chain isn't merely energy from light diffusing through this causal cascade, but the light starting a causal cascade that uses separate energy, and indeed depends on excluding diffusive energy (most often by residing inside a solid sphere of bone).

From this rough sketch, we need only abstract up to emotional or intellectual reactions, where the layers of self-referential causation permit highly abstracted patterns to be recognized, e.g. (in a super gross simplification) "ideas" made of "words" made of "sounds" made of "eardrum vibrations".

Silhouette wrote:This is why I still tend towards the notion that mind precedes body (matter and the physical).

This does not seem to fit with the observable ways in which purely 'body' causes can affect mind. For example, brain damage changes not only the intensity of mind, but the contents and fine-grained functioning. That makes sense if mind is just the working of the brain, but not if mind precedes the brain.

Carleas wrote:Consciousness enters the picture each time some part of the network is causally influenced by a different part of the network, such that the former part is trained to recognize patterns within the latter part. When this occurs, the former part is "observing" the latter part, in the same sense that the occipital lobe is "observing" patterns in the retinal photoreceptors. It's pattern matching, in the same way that AlphaGo pattern-matches on the arrangement of playing pieces on a Go board.

Consciousness is the mental experience of observing mental experience, which is what we would expect a system that is wired to pattern-match to patterns in its own pattern-matching to report. At lower levels, the network pattern-matches on photoreceptor cells firing. At high levels, other parts of the network pattern-match to collections of neurons firing in the photoreceptor-pattern-matching area. This layering continues, with collections of cells reacting to collections of cells reacting to collections of cells etc. This self-observation within the network is isomorphic to the self-observation of conscious experience.

And again, this is all distinct from the rock because the causal chain isn't merely energy from light diffusing through this causal cascade, but the light starting a causal cascade that uses separate energy, and indeed depends on excluding diffusive energy (most often by residing inside a solid sphere of bone).

From this rough sketch, we need only abstract up to emotional or intellectual reactions, where the layers of self-referential causation permit highly abstracted patterns to be recognized, e.g. (in a super gross simplification) "ideas" made of "words" made of "sounds" made of "eardrum vibrations".

To me the blue is all fine, but then I see no justification in the reddened portions above for a creeping in of assumed consciousness. More complex 'phototropism' happens in the brain, but why 'experiencing' should arise, I don't think you've justified. Computer programs can recognize patterns. Machines can do this. Basically any physiological homeostasis, including the sunflower's, is recognizing patterns. Are there gradations of consciousness? How do we know where consciousness arises in complexity? How do we know consciousness is in any way tied to complexity or self-relation? We lack a test for consciousness. We only have tests for behavior or, more neutrally, reactions. How do we know which reactions, including the stone's, have some facet of 'experiencing' in them or not?

And we are heavily biased to view entities like us as conscious or more likely to be conscious. But we have no way of knowing if this is correct. The work in plant intelligence, decision-making, etc. that is now seeping into mainstream science is a sign that some of that bias is wearing off, just as the biases against animal intelligence and consciousness were deeply entrenched - one could say in a faith based religious way - well into the second half of the 20th century.

My aim here is to tie the outside description of the brain (neurons, photoreceptors, networks) to the inside description of consciousness. So that first section you highlight in red is a description of what the experience of consciousness is, rather than something that follows from my argument. My intent there is to frame consciousness in a way that makes the mapping to brain function plausible. "The experience of experiencing" (a trimmed and, as I mean it, equivalent version of the first red section) seems both a reasonable description of consciousness, and a reasonable description of a network that is trained on itself.

When we look at the brain, we see a network configured to receive information about the external world and identify patterns in that information, and also to receive information about its own operations as it does so. When we look at our own conscious experience, we feel ourselves feeling ourselves feeling the world. The subjective "experience of experiencing" and the objective "pattern-matching on pattern-matching" are two descriptions of the same process.

(The second red section is more to what I took to be iambiguous's point, i.e. that the more abstract parts of experience are more difficult to explain. More abstraction only requires more layers of network. And if my clarification of the first red section is successful, I think it follows that more self-experience entails more abstraction.)

The test of my position here would be to create more and more sophisticated artificial minds that function in basically this way. AlphaGo and its successors are a significant breakthrough in this direction. There is a temptation to compare them to DeepBlue, but they operate in importantly different ways. DeepBlue was spectacular because it was able to read out so many moves ahead, which was novel at the time (though it's less than 1/10th as powerful as a modern smartphone). But humans aren't very good at reading ahead, certainly not compared to computers; that's not how humans play. Rather, humans look for patterns, they abstract based on experience. And that's what AlphaZero does. It still analyzes a ton of moves, but many fewer than other engines (1/875 from this not-super-awesome source). Reinforcement learning and neural networks are modeled on the human brain and surpass humans in very human endeavors.
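The scale of that difference can be sketched with back-of-the-envelope arithmetic. The figures below are hypothetical illustrative values (a chess-like branching factor and a made-up pruning width), not DeepBlue's or AlphaZero's actual numbers:

```python
# Toy illustration of the search-budget difference described above:
# exhaustive lookahead explores roughly branching**depth positions,
# while a pattern-based (policy-guided) search keeps only the top-k
# candidate moves at each ply. All values here are hypothetical.

branching, depth, top_k = 35, 6, 3   # made-up chess-like values

exhaustive = branching ** depth      # brute-force reading ahead
policy_pruned = top_k ** depth       # pattern-matching narrows each ply

print(exhaustive, policy_pruned, exhaustive // policy_pruned)
```

Even in this crude sketch, pruning each ply down to a few pattern-suggested candidates cuts the search space by a factor in the millions, which is the qualitative point being made about pattern recognition versus raw lookahead.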

If I'm right, this should be the field that results in an artificial general intelligence for which the consensus is that it's conscious. And such an advance should not be too far off.

Reducing mind to body is as easy as employing the old type/token distinction. We've only observed so many brain states, but we think there are so many more (maybe infinite in number) that are possible, so we assume that behavior and brain states can supervene on each other, or however you want to put it. But then someone comes along and says, "but there are 2 behaviors that match the 1 brain state that we can observe", and then you can either appeal to the whole idea that there are more states that are theoretically necessary and therefore must exist (but you may get accused of speculating beyond what you can observe), or you can say that the brain states are tokens and the behaviors are types or whatever, and use a bit of the old set theory to settle up your reductive theory of mind. Then you can say, "hey man, I'm not saying I've solved the mind-body problem, I'm just saying I've concluded that the best way to discuss them is by referencing the physical observable stuff and framing it as having a 1:1 correlation with the non-physical stuff."

Silhouette wrote:This is why I still tend towards the notion that mind precedes body (matter and the physical).

This does not seem to fit with the observable ways in which purely 'body' causes can affect mind. For example, brain damage changes not only the intensity of mind, but the contents and fine-grained functioning. That makes sense if mind is just the working of the brain, but not if mind precedes the brain.

I would still say that it does.

Reasoning: how is it possible to know that the brain has been damaged without a mind to observe it? Certainly, a damaged brain can directly result in a damaged mind, but given that the brain is a product of the mind, it's still the mind being damaged causing a damaged mind. The middleman "brain" is a part of an observing mind that seems to directly represent a mind; it's a subset of mind that isn't actually a fundamental substance in itself (matter), but it is thought of as a substance that can be treated as fundamental for utility's sake (even though it isn't). Given what I said about human understanding requiring the object of understanding to differ in kind from the subject (to avoid tautology), it's necessary for utility's sake for us to treat the material conception in this way in order for us to attempt understanding of the mind "by proxy".

This may bring to mind thoughts of trees falling in forests when nobody is around, but as a matter of epistemology, the matter of a tree cannot be known to have fallen until a corresponding conception of a tree in someone's mind has occurred to confirm it. This is aside from the ontological question of whether "it actually makes a sound" if it falls. But even to the ontological question, I make the same argument: "the reality" of matter independent of an observer is the same product of utility, initially founded in mind and subsequently inverted in the mind such that the "reality" of matter precedes the reality of mind. And who can blame us for thinking in this way when evidence looks so much like things are going on even when nobody is around to perceive it? But does the quality of utility override the process that necessarily occurs before utility is even conceived?

I'm still trying to get my head around your argument; maybe it's my fault for not being able to, or maybe my version is the correct reasoning for why there is no hard problem. Or perhaps, as you were hinting, the only hard part of the problem is the language to explain it, or the lack of it.

But humans aren't very good at reading ahead, certainly not compared to computers; that's not how humans play. Rather, humans look for patterns, they abstract based on experience. And that's what AlphaZero does.

Being good at reading/looking ahead can be measured in the either/or world. If you wish to achieve one or another goal and that involves calculating what you imagine will unfold given all the possible variables involved, you either achieve that goal or you don't. Or you achieve it faster and more fully than others.

But what machine intelligence is any closer to "thinking ahead" regarding whether the goal can be construed as encompassing good or encompassing evil?

It would seem that only to the extent that the human brain/mind/consciousness is in itself just nature's most sophisticated machine would that part become moot.

Carleas wrote:Ah, I think I see the gap in my argument that you're pointing out.

My aim here is to tie the outside description of the brain (neurons, photoreceptors, networks) to the inside description of consciousness. So that first section you highlight in red is a description of what the experience of consciousness is, rather than something that follows from my argument. My intent there is to frame consciousness in a way that makes the mapping to brain function plausible. "The experience of experiencing" (a trimmed and, as I mean it, equivalent version of the first red section) seems both a reasonable description of consciousness, and a reasonable description of a network that is trained on itself.

I sort of understand. Perhaps it would be good to ask you: how is what you are arguing unique among mind-body unity arguments? If it is. I feel like I am missing something, but perhaps I interpreted the title as indicating that you'd found a new angle, even if simply new relative to the ones you've read.

That said: I don't think the phrase "the experience of experiencing" is useful and/or makes sense. If we are experiencing, in other words that second part of the phrase, and then experience this experiencing, this cannot be an explanation of that first experiencing that we then notice. Now I assume you meant two different things by "experience" and "experiencing" in that sentence. We would be experiencing the reactions and effects in our brains. But I now see you are trying to make a model that is plausible, which is different from making an argument that X is the case.

When we look at the brain, we see a network configured to receive information about the external world and identify patterns in that information, and also to receive information about its own operations as it does so.

Or one could, from a physicalist point of view, consider this description excessive. There is nothing receiving information. Rather, the brain is a very complicated, effective kind of pachinko machine, and when causes hit this incredibly complicated pachinko machine, it reacts in specific determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that's an after-the-fact interpretation. (This is not my position, but I think it is entailed by physicalism, which your posts seem to fit within.)

If I'm right, this should be the field that results in an artificial general intelligence for which the consensus is that it's conscious. And such an advance should not be too far off.

I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, toward assuming it is the exception.

I don't think you are solving the hard problem; you are just presenting a practical model of intelligence similar to ours and suggesting that this will lead to effective AIs. I agree. However, the hard problem is not that...

The hard problem of consciousness is the problem of explaining how and why sentient organisms have qualia or phenomenal experiences—how and why it is that some internal states are felt states, such as heat or pain, rather than unfelt states, as in a thermostat or a toaster.

And, like most formulations of the hard problem, this one assumes we know what is not conscious, despite there being a complete lack of a scientific test for consciousness. All we have are tests for behavior/reaction.

There is one test that I coincidentally read, which consists of the following and is quite recent.

Points of light are impinged at various intervals upon the eye using a multi-colored scheme, consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.

The crux of the relevance of other considerations consists in the finding that it takes the repetition of the incidental light exposures exactly half through before a color change is reported by the study. The light change I believe results in a shift to green.

Does this not point to a quantifiable relevance to qualify a tool with which to measure internal and external effects to variable input of visual change, which relates inner and outer sources of experience?

Meno_ wrote:There is one test that I coincidentally read, which consists of the following and is quite recent.

Points of light are impinged at various intervals upon the eye using a multi-colored scheme, consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.

The crux of the relevance of other considerations consists in the finding that it takes the repetition of the incidental light exposures exactly half through before a color change is reported by the study. The light change I believe results in a shift to green.

Does this not point to a quantifiable relevance to qualify a tool with which to measure internal and external effects to variable input of visual change, which relates inner and outer sources of experience?

If so , can this be a model of measurement in a more general study?

I am not sure I understand the test. It seems to me that they can measure reactions. They see a reaction. Well, even a stone will react to light. What we cannot test is whether someone experienced something. And then this test seems to be for beings with retinas and we've pretty much already decided that creatures with retinas are conscious.

Meno_ wrote:There is one test that I coincidentally read, which consists of the following and is quite recent.

Points of light are impinged at various intervals upon the eye using a multi-colored scheme, consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.

The crux of the relevance of other considerations consists in the finding that it takes the repetition of the incidental light exposures exactly half through before a color change is reported by the study. The light change I believe results in a shift to green.

Does this not point to a quantifiable relevance to qualify a tool with which to measure internal and external effects to variable input of visual change, which relates inner and outer sources of experience?

If so , can this be a model of measurement in a more general study?

I am not sure I understand the test. It seems to me that they can measure reactions. They see a reaction. Well, even a stone will react to light. What we cannot test is whether someone experienced something. And then this test seems to be for beings with retinas and we've pretty much already decided that creatures with retinas are conscious.

In this case, the difference is that the reaction was reported by the test subject, versus, in the case of the stone, merely observed by the test giver.

The test subject reported perceived changes he experienced, connecting the test with both the qualitative and quantifiable factors.

I think that does meet the criteria for a test relevant to the problem.

Ierrellus wrote:Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?

One would need to address this question to the neurological community. And I suspect they would conclude that much is known now that was not known then.

But the hardest part about grappling with "Hard Problem of Consciousness" is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.

Here of course some invoke God. But for others that just provokes another Hard Question: How and why and when and where did God come into existence?


In this case, the difference is that the reaction was reported by the test subject, versus, in the case of the stone, merely observed by the test giver.

The test subject reported perceived changes he experienced, connecting the test with both the qualitative and quantifiable factors.

I think that does meet the criteria for a test relevant to the problem.

I don't get it. How does the test demonstrate a lack of consciousness as opposed to a lack of the ability to report what one has experienced? IOW, how would it demonstrate that an animal, plant, or rock is not conscious, rather than simply that it does not report its experience?

Ierrellus wrote:Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?

I think consciousness is actually rather simple. But it is complicated to explain how it arises, especially in a physicalist paradigm.

Mr Reasonable wrote:Reducing mind to body is as easy as employing the old type/token distinction.

I'm not 100% sure I follow your argument here. What are the 2 behaviors that match 1 brain state? I would reject that possibility as "speculation beyond what [we] can observe": the very idea that the same brain state could be observed to occur twice seems wrong.

I'm more open to the idea of 2 brain states matching the same behavior. Mental processes can probably be instantiated in different medium (e.g. in brains and silicon), which would mean that two very different physical brain configurations can produce the same output.
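The multiple-realizability point here can be sketched in code, purely as a toy illustration (the function names and the addition example are invented for this post, not drawn from anything above): two structurally very different implementations can produce identical outputs for every input, so an outside observer who only sees behavior cannot tell the "substrates" apart.

```python
# Two structurally different "substrates" computing the same function:
# indistinguishable by behavior, very different internally.

def add_iterative(a: int, b: int) -> int:
    """Adds by repeated increment, like counting on fingers."""
    result = a
    for _ in range(b):
        result += 1
    return result

def add_bitwise(a: int, b: int) -> int:
    """Adds with bitwise XOR and carry propagation, like an adder circuit."""
    while b:
        carry = a & b   # positions where both have a 1 generate a carry
        a = a ^ b       # sum without carries
        b = carry << 1  # shift carries into place for the next round
    return a

# Same input/output mapping (for non-negative ints), so an external
# observer of behavior alone cannot distinguish the two mechanisms.
print(add_iterative(3, 4) == add_bitwise(3, 4))  # True
```

If mental processes are individuated by their input/output behavior, these would count as "the same process" despite sharing no internal structure, which is the sense in which two different brain (or silicon) configurations could produce the same output.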

But maybe I'm getting gummed up in language again. When you say 'behaviors', do you mean macro behaviors, e.g. picking up an object? Or brain "behaviors", e.g. pathway xyz being activated?

Silhouette wrote:[G]iven that the brain is a product of the mind, it's still the mind being damaged causing a damaged mind.

Is this just solipsism?

One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn't totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren't we just pushing back the question? Let's say we accept that everything is mind. We still have the mind things of the sub-type mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the subtype not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.

How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There's a difference between saying that my car's backup sensor only delivers 1s and 0s and saying that there's no garage wall behind me. I'd argue that the most coherent description of the world is one that isn't dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.

I guess I do think utility is meaningful. I say "this is mind and this isn't", and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren't exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.

iambiguous wrote:[W]hat machine intelligence is any closer to "thinking ahead" regarding whether the goal can be construed as encompassing good or encompassing evil?

As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions... I am indeed taking the "human brain/mind/consciousness [to be] in itself just nature's most sophisticated machine".

Karpel Tunnel wrote:how is what you are arguing unique to mind body unity arguments?

I am not sure what other type of arguments you intend here. Are there non-mind/body arguments that compare the structures of brains to the subjective experience of being a brain? Are there non-unity arguments that posit that two seemingly distinct things are in fact the same thing?

Karpel Tunnel wrote:I assume you meant two different things by experience and experiencing in that sentence.

I don't think I do, but I am open to arguments otherwise. I mean 'experience' in this context in a non-mental sense, e.g. "during an earthquake, tall buildings experience significant physical stresses." There's absolutely a danger of equivocating, i.e. assuming that rocks and humans both 'experience' things, and concluding that that experience is the same thing. That isn't my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon 'experiencing a photon'. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the part of AlphaGo that evaluates positions.

If I'm right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself ... experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.
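As a toy illustration only (not a claim about real neural architecture; every function name and number is invented for the example), the layered "experiencing" described here, where each level reacts to the activity of the level below it, can be sketched as a chain of observers:

```python
# Toy model of layered "experiencing": each layer's state is a
# reaction to the activity of the layer below it, and the top
# layer reacts to the pattern of the system's own lower-level activity.

def retina(photon_wavelength: float) -> float:
    """First layer: reacts directly to the external stimulus."""
    return 1.0 if 620 <= photon_wavelength <= 750 else 0.0  # "red" band

def occipital(retinal_activity: float) -> float:
    """Second layer: reacts to the first layer's reaction."""
    return retinal_activity * 0.5

def frontal(lower_activities: list[float]) -> float:
    """Third layer: reacts to the pattern of lower-level reactions."""
    return sum(lower_activities) / len(lower_activities)

r = retina(680.0)    # reaction to the stimulus
o = occipital(r)     # reaction to the reaction
f = frontal([r, o])  # reaction to the pattern of reactions
print(r, o, f)       # 1.0 0.5 0.75
```

The point of the sketch is only structural: the top-level value is not a reaction to the photon but a reaction to the system's own reactions, which is the sense of "the brain experiencing itself experiencing itself" above.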

Karpel Tunnel wrote:I now see you are trying to make a model that is plausible which is different from making an argument that X is the case.

I disagree. If we were perfectly informed, only the truth would be plausible.

Karpel Tunnel wrote:There is nothing receiving information. Rather a very complicated, effective kind of pachinko machine is the brain, and when causes hit this incredibly complicated pachinko machine, the machine reacts in specific determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that's an after the fact interpretation.

That's a bit like saying that the words I'm writing are really just collections of letters. And so they are, but that doesn't prevent them from being words.

Information is a fuzzy term, which can be used to refer to single gas molecules transferring energy ("information about their speed"), up to very abstract things like what we might find in the Stanford Encyclopedia of Philosophy page on physicalism ("information about the history of physicalism"). I don't think either usage is at odds with physicalism.

Karpel Tunnel wrote:I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, to assuming it is the exception....And like most formulations of the hard problem here it is assumed they know what is not conscious despite there being a complete lack of a scientific test for consciousness. All we have is tests for behavior/reaction.

Given that we can't ever experience someone else's consciousness directly, we need to treat "acting like someone is conscious" and "being conscious" as roughly the same thing. I am assuming that other humans are conscious, and that rocks aren't. I would also assume that anything that can robustly pass the Turing Test would also be conscious.

If we don't make these assumptions, the hard problem is very different: the question would become "why do I have qualia", since I am the only consciousness I can directly confirm.

Karpel Tunnel wrote:I don't think you are solving the hard problem, you are just presenting a practical model of intelligence similar to ours and suggesting that this will lead to effective AIs.

Having "a practical model of intelligence similar to ours" must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right? If we're not ready to say that, we need to establish why we're willing to accept all these similarly intelligent humans as conscious without better evidence.

iambiguous wrote:But the hardest part about grappling with "Hard Problem of Consciousness" is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.

This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Gödel's Incompleteness Theorem. i.e., as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so "all that can be known about a mind" can never be known by a single mind. If the Hard Problem is just Incompleteness (and that's not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.

iambiguous wrote:[W]hat machine intelligence is any closer to "thinking ahead" regarding whether the goal can be construed as encompassing good or encompassing evil?

As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions... I am indeed taking the "human brain/mind/consciousness [to be] in itself just nature's most sophisticated machine".

Then it would seem to become a matter of how far one takes this. Taking it all the way, this very exchange might be but an inherent manifestation of that which can only be. As would be the distinction between abstractions used to describe phenomenal interactions and the interactions themselves. They happen. They could not not have happened.

Next post. Next set of dominoes.

iambiguous wrote:But the hardest part about grappling with "Hard Problem of Consciousness" is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.

Carleas wrote: This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Gödel's Incompleteness Theorem. i.e., as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so "all that can be known about a mind" can never be known by a single mind. If the Hard Problem is just Incompleteness (and that's not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.

But how would one go about proving the problem is impossible to solve?

Instead, from my frame of mind, the truly hard problems of consciousness revolve around the question "how ought one to live?" in a world of conflicting goods.

For example: To build or not to build Trump's wall.

Taking an existential leap to autonomy.


Carleas wrote:Having "a practical model of intelligence similar to ours" must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right?

No, because that's just function. We would know it could function, in general, like us. Does Deep Blue have some limited experiencing? I would guess consensus is no and further we cannot know. Just because we make something that can function like us in many facets of our intelligence does not mean it is experiencing. It might be. It might not be.

If we're not ready to say that, we need to establish why we're willing to accept all these similarly intelligent humans as conscious without better evidence.

Yes, physicalists who consider all contact mediated and interpreted should have that concern. And some do.

Silhouette wrote:[G]iven that the brain is a product of the mind, it's still the mind being damaged causing a damaged mind.

Is this just solipsism?

One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn't totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren't we just pushing back the question? Let's say we accept that everything is mind. We still have the mind things of the sub-type mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the subtype not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.

How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There's a difference between saying that my car's backup sensor only delivers 1s and 0s and saying that there's no garage wall behind me. I'd argue that the most coherent description of the world is one that isn't dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.

I guess I do think utility is meaningful. I say "this is mind and this isn't", and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren't exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.

It's not necessarily Solipsism, but it is Idealism.

It certainly seems like minds have no overlap: one's consciousness doesn't overlap with others', which could easily lead to Solipsism. But minds do appear to communicate with one another. The question is whether the separation borders perfectly, or if there's a gap and some intermediary substance that allows the connection (since overlap is out of the question). The latter seems unfalsifiable, and the former seems a little convenient, but given the problems with all other suggestions about fundamental substance, the suspiciously convenient seems to be the least contradictory regardless of any seeming lack of probability. Unless of course your argument is perfectly fine and I'm just missing it - which seems more likely than the convenience of what I'm suggesting.

Is it too hocus pocus to say there is no reality in terms of matter behind the impressions? They're unfalsifiable after all, as useful as they are to propose. Could it be that this reality only occurs once minds connect (without overlapping and with nothing in between) like bubbles? But of course these bubbles burst eventually... Life as some fancy bubble machine, eh? Haha. Can you really propose a coherent description of the world without mind? Is that what you're attempting?

Carleas wrote:I don't think I do, but I am open to arguments otherwise. I mean 'experience' in this context in a non-mental sense, e.g. "during an earthquake, tall buildings experience significant physical stresses." There's absolutely a danger of equivocating, i.e. assuming that rocks and humans both 'experience' things, and concluding that that experience is the same thing. That isn't my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon 'experiencing a photon'. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the part of AlphaGo that evaluates positions.

If I'm right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself ... experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.

Either there is an equivocation on the idea of 'inside' above, or there is a lack of justification for the physical definition of inside justifying the arising of subjective experience.

What is 'inside' to matter?

The equivocation: just because some complicated interaction is happening 'inside' is not enough to say there will be the 'inside' of subjectivity. Inside the ocean, there are ecosystems of complicated interaction. But the inside of the oceanness does not lead to this being conscious - not in most physicalist models.

The lack of justification: why does matter start being an experiencer because causal interactions are inside? Why would topology lead to there being an experiencer? I don't see the argument; I see a simple flat statement that consciousness is happening when it is inside.

For some reason interconnection inside something leads to matter not just engaging in certain processes, causal chains, but there arises a noticing. I see nothing explaining why this noticing arises. That to me is the hard problem. Not cognition, but awareness.

Another way to put it is this: sure, things within an organism affect each other and can produce responses. But this can happen without an experiencer - in motors that have feedback for homeostasis, for instance. Would you argue that there are the beginnings of consciousness in those motors? I can see saying this is using information from one part of a thing to modify processes in another. But I see nothing explaining an experiencer. Cognition, even, should not be confused with awareness.

Carleas, it's either way. But I mean picking-up-objects kinds of behaviors. Maybe someone says there are more of those than brain states, or that there are more brain states than those, but either way you just think of the 2 categories in terms of sets and sort them out that way. So even if it's practically impossible to take snapshots of 2 identical brain states, as well as practically impossible to observe the level of nuance necessary to account for all possible brain states, you can resolve the language by talking about them in terms of types of states and types of behaviors. Basically, you just generalize a bit to be able to reduce one to the other. Then if you want you can talk about how this behavior goes with that brain state, or vice versa.

This is basically what the pharmaceutical companies want to do. It's all just chemical states of the brain! That's why little Johnny won't stop abusing animals and taking drugs.

I see where it gets tricky when we refer to mind and body and start reducing mind to body in ways that reference the brain as if it weren't part of the body. But the same move you make to reduce behaviors like picking up objects to brain states - generalizing over brain states or behaviors enough to be able to identify them with one another - is the move you'd make to reduce "mind" to brain.

This is what religious people don't want you to do, because then you don't need God's will or any of that stuff to explain what's being observed. If the mind, or the soul, is separate from the body/brain, then there's magic out there that can be used to appeal to all sorts of nonsense. But if you can match behavior with brain states, and then explain that all we can know about the mind/soul is whatever we can observe by looking at the effects it has on the brain, then you can throw it all out and just look at the brain, since that's where all your observable shit is anyway.
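The set-based generalization described here - collect token-level observations, then group state tokens into types by the behavior type they accompany - can be sketched as follows (all state and behavior names are invented toy data, not real neuroscience):

```python
# Token-level observations: each observed brain-state token paired
# with the behavior token it accompanied (invented toy data).
observations = [
    ("state_a1", "reaches_for_cup"),
    ("state_a2", "reaches_for_cup"),  # different state token, same behavior type
    ("state_b1", "withdraws_hand"),
]

# Generalize tokens into types: every state token that accompanies
# the same behavior type falls into one state type. This is the
# "generalize a bit to reduce one to the other" move.
state_types: dict[str, set[str]] = {}
for state, behavior in observations:
    state_types.setdefault(behavior, set()).add(state)

# Type-level identity: one behavior type <-> one set of state tokens,
# even though no two state tokens were ever observed to be identical.
print(sorted(state_types["reaches_for_cup"]))  # ['state_a1', 'state_a2']
```

The design choice worth noting is that the mapping never requires two identical brain-state tokens, only a criterion for lumping tokens into a type - which is exactly the concession in the post above about snapshots being practically impossible.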
