This is the second of four posts about key ideas from my book The Shadow of Consciousness. This one looks at how the brain points at things, and how that could provide a basis for handling intentionality, meaning and relevance.

Intentionality is the quality of being about things, possessed by our thoughts, desires, beliefs and (clue’s in the name) our intentions. In a slightly different way intentionality is also a property of books, symbols, signs and pointers. There are many theories out there about how it works; most, in my view, have some appeal, but none looks like the full story.

Several of the existing theories touch on a handy notion of natural meaning proposed by H. P. Grice. Natural meaning is essentially just the noticeable implication of things. Those spots mean measles; those massed dark clouds mean rain. If we regard this kind of ‘meaning’ as the wild, undeveloped form of intentionality we might be able to go on to suggest how the full-blown kind might be built out of it; how we get to non-natural meaning, the kind we generally use to communicate with and the kind most important to consciousness.

My proposal is that we regard natural meaning as a kind of pointing, and that pointing, in turn, is the recognition of a higher-level entity that links the pointer with the target. Seeing dark clouds and feeling raindrops on your head are two parts of the recognisable over-arching entity of a rain-storm. Spots are just part of the larger entity of measles. So our basic ability to deal with meanings is simply a consequence of our ability to recognise things at different levels.

Looking at it that way, it’s easy enough to see how we could build derived intentionality, the sort that words and symbols have; the difference is just that the higher-level entities we need to link everything up are artificial, supplied by convention or shared understanding: the words of a language, the conventions of a map. Clouds and water on my head are linked by the natural phenomenon of rain: the word ‘rain’ and water on my head are linked by the prodigious vocabulary table of the language. We can imagine how such conventions might grow up through something akin to a game of charades; I use a truncated version of a digging gesture to invite my neighbour to help with a hole: he gets it because he recognises that my hand movements could be part of the larger entity of digging. After a while the grunt I usually do at the same time becomes enough to convey the notion of digging.
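The ‘vocabulary table’ idea can be put in toy computational terms (everything below is purely illustrative, not a claim about how brains or languages actually work): natural meaning links sign and target through a real phenomenon, while derived meaning links them through an artificial table of conventions.

```python
# A toy sketch: a linking entity connects a sign to its target.
# For natural meaning the link is a real phenomenon (the rain-storm);
# for derived meaning it is a conventional lookup table.

natural_links = {
    "dark clouds": "rain-storm",
    "water on head": "rain-storm",
    "spots": "measles",
}

# The conventional 'vocabulary table' of a tiny, hypothetical language.
vocabulary = {
    "rain": "rain-storm",
    "grunt": "digging",
}

def linked(sign, target, tables):
    """A sign 'points at' a target if some linking entity connects them."""
    return any(table.get(sign) == target for table in tables)

print(linked("dark clouds", "rain-storm", [natural_links]))  # natural meaning
print(linked("rain", "rain-storm", [vocabulary]))            # derived meaning
```

The structural point is only that the two cases differ in what supplies the link, not in the shape of the linking itself.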

External communication is useful, but this faculty of recognising wholes for parts and parts for wholes enables me to support more ambitious cognitive processes too, and make a bid for the original (aka ‘intrinsic’) intentionality that characterises my own thoughts, desires and beliefs. I start off with simple behaviour patterns in which recognising an object stimulates the appropriate behaviour; now I can put together much more complex stuff. I recognise an apple; but instead of just eating it, I recognise the higher entity of an apple tree; from there I recognise the long cycle of tree growth, then the early part in which a seed hits the ground; and from there I recognise that the apple in my hand could yield the pips required, which are recognisably part of a planting operation I could undertake myself…

So I am able to respond, not just to immediate stimuli, but to think about future apples that don’t even exist yet and shape my behaviour towards them. Plans that come out of this kind of process can properly be called intentional (I thought about what I was doing) and the fact that they seem to start with my thoughts, not simply with external stimuli, is what justifies our sense of responsibility and free will. In my example there’s still an external apple that starts the chain of thought, but I could have been ruminating for hours and the actions that result might have no simple relationship to any recent external stimulus.

We can move things up another notch if I begin, as it were, to grunt internally. From the digging grunt and similar easy starts, I can put together a reasonable kind of language which works not only on my friends, but on me, if I silently recognise the digging grunt and use it to pose to myself the concept of excavation.

There’s more. In effect, when I think, I am moving through the forest of hierarchical relationships subserved by recognition. This forest has an interesting property. Although it is disorderly and extremely complex, it automatically arranges things so that things I perceive as connected in any way are indeed linked. This means it serves me as a kind of relevance space, where the things I may need to think about are naturally grouped and linked. This helps explain how the human brain is so good at dealing with the inexhaustible: it naturally (not infallibly) tends to keep the most salient things close.

In the end then, human style thought and human style consciousness (or at any rate the Easy Problem kind) seem to be a large and remarkably effective re-purposing of our basic faculty of recognition. By moving from parts to whole to other parts and then to other wholes, I can move through a conceptual space in a uniquely detached but effective way.

That’s a very compressed version of thoughts that probably need a more gentle introduction, but I hope it makes some sense. On to haecceity!


Posted by Peter on April 12, 2015 at 2:01 pm.

91 Comments

1. Jochen says:

Very good, I’m not too late for installment no. 2! I’ve been looking forward to this.

However, I must admit that this part is probably what I had the most trouble with in your account. If I understand you correctly, the spots point to measles by virtue of being a part, in some sense, of the larger entity ‘measles’. I think that’s a very interesting notion, sort of a ‘meaning by synecdoche’. But I’m not sure I understand how this can be used to ‘ground’ meaning without at least courting circularity.

‘Measles’, as an entity, seems to me most aptly described as the sum of its parts, in a generalized sense, encompassing its symptoms, progress, prognosis, infectiousness, etc.; but then, attempting to ground the meaning of any of these parts in the larger entity, which is ultimately defined in terms of these very parts, doesn’t seem to get us anywhere.

More simply, consider an entity C, with parts A and B. A and B together form C, and, again if I understand you right, A points to, hence means, C. But then, A means C only by virtue of C being A and B, and B means likewise C, so ultimately, we end up at A again, and the whole thing just circles round and round.
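In toy terms (the symbols are of course just illustrative), the worry is that expanding A’s meaning via C simply leads back to A:

```python
# Sketch of the circularity worry: C is defined as the collection
# {A, B}; A 'means' C and B 'means' C. Grounding A's meaning by
# expansion never bottoms out — the chain cycles back through A.

means = {"A": "C", "B": "C"}          # part -> whole ('points at')
parts = {"C": ["A", "B"]}             # whole -> its defining parts

def ground(symbol, depth, seen=None):
    """Expand a symbol's meaning; stop if we revisit a symbol (a cycle)."""
    seen = (seen or []) + [symbol]
    if seen.count(symbol) > 1:
        return seen                   # we've come round in a circle
    if depth == 0:
        return seen
    if symbol in means:
        return ground(means[symbol], depth - 1, seen)
    for p in parts.get(symbol, []):
        return ground(p, depth - 1, seen)
    return seen

print(ground("A", 10))  # ['A', 'C', 'A'] — the grounding chain cycles
```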

Then, there’s also the matter of there being a sort of ‘further fact’ involved in the constitution of this ‘pointing’ relationship, and using it to ground meaning. A points at C by virtue of being part of C, but if all intentional content of some agent is given by such pointing relationships, then the fact that ‘A points at C’ would, in order to be available to the agent, need to be pointed at in some way, itself—but then, this just regresses. Spots would not point to measles if I didn’t know that they are a symptom of measles—so it must both be the case, in order for the spots to point in the right way, that they form a part of the entity ‘measles’, and that it is known (say, to me) that they are such a part. Merely ‘being part of’ in this way seems not to be enough, then.

A final worry is that your account of meaning is, essentially, relational, and mere relation underdetermines things, in a sense. The relation you consider is ‘being part of’, which is really the most basic kind of relation—every other relation can be formulated in terms of being part of some set. But a relation doesn’t suffice for reconstructing the relata that stand to each other in that relation.

Consider the basic example of chirality: the fingers of the left hand stand to each other in the same set of relations as the fingers of the right hand do; nevertheless, both are different (consider that you could change one into the other by performing a rotation through a four-dimensional space, just as you could change a flat cardboard cut-out of a right hand into that of a left hand by turning it over in three-dimensional space: the relations haven’t changed, but the object has).

More generally, for any set of objects standing to each other in a given relation, one can find (innumerable) sets of different objects standing to each other in another relation in an isomorphic way. (This is known as ‘Newman’s problem’ in accounts of structural realism.)

Consider the relations ‘distance’ and ‘age’: A being farther from B than C can be modelled by x being older than y, and z being older than y but younger than x. Both sets fulfil a relation in an equivalent way, and if all you know about that relation is that the entities stand to one another in that relation, you can’t tell one from the other.
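The point can be checked mechanically (a small illustrative sketch): the two relational structures are isomorphic, so nothing in the relation alone distinguishes places from people.

```python
# Jochen's underdetermination point, sketched: a 'farther-than'
# structure over places and an 'older-than' structure over people can
# be isomorphic, so knowing only the relational pattern cannot tell
# you which domain you are looking at.

# A relation encoded as ordered pairs: (x, y) means "x R y".
distance_rel = {("A", "C"), ("C", "B")}   # A farther than C, C farther than B
age_rel      = {("x", "z"), ("z", "y")}   # x older than z, z older than y

def isomorphic_under(mapping, rel1, rel2):
    """Does the given renaming carry rel1 exactly onto rel2?"""
    return {(mapping[a], mapping[b]) for a, b in rel1} == rel2

print(isomorphic_under({"A": "x", "C": "z", "B": "y"}, distance_rel, age_rel))
# True — structurally identical, though places and people differ entirely
```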

So it seems to me that your account could only ever give us the relational structure of things, without giving us the things themselves—but in conscious experience, it seems to me that the intentional object is precisely that upon which the structure supervenes.

Good criticisms, Jochen, well justified and very much to the point. However, I think what you’re basically saying is that the proposed mechanism of ‘pointing’ doesn’t provide a coherent method which is capable of yielding the required results. That is correct; after all, the fact that we can arbitrarily make a ‘whole’ out of any two or more items (we can make a ‘pair’ out of anything) means that the whole thing collapses into incoherence once we try to apply it systematically.

But it’s not systematic and there can’t be a method, in the sense of an algorithm or defined procedure – that’s explicitly part of the case. The whole thing works on the basis of a faculty of recognition which is necessarily rule-free.

I think a lot of people will find that very hard to accept. Many will take it that if there are no proposed rules, there’s no explanation being offered. You could fairly characterise me as presenting a radically sceptical case, of saying that ‘it just works and we can never describe how or why’. If explication requires that I give a set of rules, then indeed I can’t explicate.

But I think reflection suggests that cognition really does work by anomic recognition, and in fact that so long as we accept the unavoidable limitations the process can be described and explained, perhaps even reproduced.

3. Jochen says:

Well, I guess the obvious route to take here would be to note that whatever process is behind recognition, it should possess some minimum degree of reliability, and that’s difficult to create with something anomic; conversely, whatever has some degree of reliability, also seems lawful in some sense.

But let’s not go down the obvious route! I think there is, in fact, a notion of ‘anomic’ processes that need not be mysterious in any fundamental way, and that’s again tied up with the notion of computation. Take a random sequence of digits: it is, in a sense, anomic—there is no law according to which it is generated; otherwise, this law would serve to eliminate the randomness, by making each digit exactly predictable (in fact, a common notion of mathematical randomness, derived from algorithmic information theory, rests on a formalized version of exactly this intuition). Thus, a random sequence is anomic in this sense, and any faculty involving such a sequence would be likewise anomic.
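Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a rough feel for the intuition (an illustrative sketch, using compressed length as a crude proxy for description length):

```python
# A lawful sequence is generated by a short rule and compresses well;
# a random sequence admits no shorter description and barely
# compresses at all — a practical stand-in for algorithmic randomness.
import random
import zlib

lawful = ("0123456789" * 100).encode()           # 1000 bytes from a tiny rule
random.seed(0)
rnd = bytes(random.randrange(256) for _ in range(1000))  # 1000 'lawless' bytes

print(len(zlib.compress(lawful)))  # small: the generating rule is recoverable
print(len(zlib.compress(rnd)))     # near (or above) 1000: incompressible
```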

But the notion of randomness is relative—one could imagine agents to whom a given random sequence appears perfectly lawful. I already gave one example of such an agent in my post on the last installment of this series: an agent outfitted with an oracle for the halting problem (and thus, having powers exceeding that of any Turing machine equivalent) would trivially be able to predict the digits of any sequence encoding the answers to the question ‘does program p halt?’, while to any Turing equivalent, this sequence would be random and anomic.

And in fact, having access to such a sequence can actually enable faculties that go beyond those of any computational process—I sketched how in the first response to the post ‘Quanta and Qualia’ (I tried to make this a clickable link, but for some reason, this apparently prohibited the post from showing up).

However, the only way this would appear anomic to us would be if our minds, or at least their explanatory faculties, were limited to the algorithmic, which I take you to deny… In this case, I think, one could make a rational case that there are capacities of the mind (and of the world more generally) inexplicable to the mind, and hence, anomic; recognition might be one of them (though I have my doubts: I don’t think that the easy problems really call for such exotica; they only need to be called in when the going gets really hard!).

I think there is promise here, in that basic indicators (neuronal states with natural meaning in Grice’s sense) are likely the biological scaffolding upon which more complex cognitive abilities are built. Dretske made a career out of trying to show how this could be done, and his books are full of great insights and ideas (especially ‘Knowledge and the Flow of Information’).

Primitive recognition is basically activation of such neuronal indicators which, as you suggest, are very closely linked to behavior in simple organisms.

More complex nervous systems have many many more layers of processing between the early indicators (e.g., in the retina) and the behavior producing neurons that synapse onto muscles.

It’s in those middle parts that things get really complicated. 🙂

Paul Churchland has basically made a career out of seeing how much water he can squeeze out of treating recognition/classification as a basic operation, and the results are pretty impressive.

5. Hunt says:

Non-rule-based processes do kind of stick in the craw of those with the engineering mindset, which means most of us, since we’ve been conditioned to think that way. It kind of brings to mind a black-box style formulation, irreducible, or at least one that is organically complex beyond our ken. For one reason or another, this brings to mind things like holographic memory, which is distributed and not easily reduced to rules, though accessible through mathematics. I note that evolution often favors rule-governed systems. Returning again to the discovery of DNA and biological inheritance, it turned out to be almost absurdly mechanistic and rule-based. DNA, its transcription and translation, is strangely reminiscent of a Turing machine, with its tape and reading head. And it didn’t have to be that way. Inheritance could have been conveyed by some amorphous process, vaguely defined by gooey protein chemistry, as was hypothesized at the time. But it wasn’t. At its core life is a discrete information process. Evolution tends to favor rules.
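For what it’s worth, the rule-table character of translation is easy to exhibit (the codon assignments below are from the standard genetic code; the rest is just an illustrative sketch, with only a fragment of the full 64-codon table):

```python
# mRNA-to-protein translation follows a fixed, discrete lookup table —
# read three bases at a time, Turing-tape style, until a stop codon.
CODON_TABLE = {
    "AUG": "Met",                       # also the start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Translate an mRNA string, codon by codon, until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "Stop":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGUUAA"))  # ['Met', 'Phe', 'Gly']
```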

7. Jochen says:

Peter, a question, if I may (and of course, anybody else feel free to deposit your two cents!): it occurred to me that we might have a difference on what ‘anomic’ and ‘rule-governed’ mean. So, would you consider something like a neural network that performs facial recognition after having been trained on some appropriate dataset to be, basically, anomic, or rule-governed?

I suppose I could see a case for the former, in that in a sense, there’s no exactly specifiable rule regarding which face gets sorted in one group, versus which goes into another. There’s no real explanation (not a simple, appreciable one anyway) for why the network does what it does. If the task were to, say, order faces according to whether they look similar to Richard Nixon’s, then in a sense even after it has performed this ordering, we wouldn’t really know why a particular face got assigned a particular degree of similarity, which a rule-based approach should probably give us (say, by specifying criteria such as distance of eyes, size of nose, height of ears, general shiftiness etc.).
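A minimal sketch of that opacity (the ‘faces’ here are just made-up feature vectors, nothing to do with any real recognition system): the system yields an ordering by similarity without any human-readable criteria such as eye distance or nose size.

```python
# Rank toy 'faces' by similarity to a reference vector. The ordering
# emerges from a bare numeric distance over features; no articulable
# rule ('eyes this far apart', 'nose this size') is ever stated.
import math

reference = [0.9, 0.2, 0.7]                     # hypothetical feature vector
faces = {
    "face1": [0.8, 0.3, 0.6],
    "face2": [0.1, 0.9, 0.2],
    "face3": [0.9, 0.1, 0.8],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

ranking = sorted(faces, key=lambda f: distance(faces[f], reference))
print(ranking)  # ['face3', 'face1', 'face2'] — most to least similar
```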

Additionally, if you consider this to be rule-free, is it the right kind of rule-freeness (anomicity…?) to underlie the sort of processes you have in mind?

9. Jochen says:

Yes—after all, neural networks are computationally universal, and hence, as strong as Turing machines; in particular, any given neural network can be instantiated on a Turing machine, which would then perfectly reproduce its performance regarding, say, face-recognition. Since I’d view Turing machines (and by extension, their equivalents) as paradigmatically rule-driven, I had in mind a notion that places the anomic either on some higher rung of the arithmetical hierarchy, or maybe even off that hierarchy altogether, if that makes sense.

In principle, such an instantiation could give you the rules and reasons for why a given face is sorted into a particular group, although those would be formulated in a highly abstract fashion, perhaps not pertaining to faces and their features in any obvious way; so maybe one might say that those aren’t ‘the right (kind?) of rules/reasons’. I need to think about this some more…

10. Hunt says:

Arnold,
“Evolution doesn’t work by rules. Evolution works by biophysical *mechanisms* which we can *describe* by rules.”

I don’t mean how evolution works; I mean what evolution produces, which is structure and rule-driven systems, right down to postal-system-like molecular routing using protein tags. I’m not trying to convince anyone of biological teleology or intelligent design. I think there are perfectly valid naturalistic explanations, but it can’t be denied that evolution seems to generate very design-like systems, including layered hierarchy and rule-driven structured mechanisms. The fundamental biological unit, the cell, is a tiny little machine. It’s rules and structure all the way down.

11. Hunt says:

Hunt: “The fundamental biological unit, the cell, is a tiny little machine. It’s rules and structure all the way down.”

There is surely structure and dynamics in the cell, but there are no rules to be found in the cell. The only place that rules are found is in the propositional mechanisms of an evolved brain, a brain such as yours.

Jochen and Peter #7-9
Thanks for persevering, and apologies for disappearing (busy times).
With respect to my long ramblings on the previous installment (Inexhaustibility) you have finally managed to clear up most of my doubts.

Peter, I’ll try to stretch your clarification a little further, we may end up agreeing after all:
If you answer “Yes” to Jochen’s question in #7, then we might conclude that also my disagreement is just a matter of definitions.

The (simplified) computational systems that we call (artificial) neural networks are indeed rule-based and computational by definition. So if you call what they are able to do “anomic”, “not rule-based” and “not algorithmic” it follows that:
a. your definitions of “anomic”, “not rule-based” and “not algorithmic” are confusing to say the least.
But it also means that:
b. once one accepts/clarifies your definitions, your argument starts looking very solid to me.
Thus, my own instinct urges me to try digging into the source of confusion, to see if something valuable is hidden therein (or, more honestly: I wish to discuss what I think hides there because it looks valuable to me).
You may know exactly where I’m going, as I’ve tried to explain my position already. What I would agree with, and what I think is a very strong argument, solid on philosophical grounds and very promising on the empirical side (it has promise on both explanatory and technological fronts), is:

Key aspects of what brains do, what you call “recognition” and “pointing”, are radically difficult to model/understand in algorithmic terms. So much so that it seems very reasonable, almost undeniable, to assume they may be non-computational.
However, we have:
1. strong theoretical reasons to believe this can’t be the case. [because it should be possible to simulate any physical mechanism to any desired degree of accuracy and because we have reason to believe that brains are very complex physical mechanisms]
2. we also have somewhat preliminary, but well established and convincing examples of “designed”, physical and entirely algorithmic systems that are able to perform in-silico recognition and pointing.

Therefore, we can conclude the following (provisionally, but with good confidence): despite intuition and the (suspiciously abstract) “proof” provided by Penrose, the proposition above is false. Thus, we (you, actually) explain why understanding consciousness is so difficult: it requires understanding recognition, intentionality and meaning, and realising that, despite all appearances, they can be, and indeed are, implemented in Turing-computable ways.
The difficulty in this restricted sub-domain comes from the deceptive state of appearances. Which is of course a direct consequence of our own cognitive limitations, and therefore should please Scott as well as me.

Do we agree on the above? I’ll be surprised if we do, but I’m guessing it’s worth checking.

15. Charles Wolverton says:

I’ve pursued the dictionary meaning of various “nom” words, and it appears that they are routinely misused. Eg, as I understand it, Davidson’s anomalous monism has to do with the absence of natural laws whereas according to dictionary.com, anomic means without standards, ie, man-made rules. Also, “laws” in the legal sense are really man-made rules whereas in the scientific sense they are not. Which suggests some distinction other than “rule” vs “law” would be preferable.

Better yet, one can avoid ill-defined jargon and instead spell out in detail what idea one is trying to capture – almost always less time-consuming than having to explain subsequently what one really meant by an ambiguous word.

After a long and tiresome week, the honest answer is “I don’t know”. I can try a tentative answer, with the caveat that I might be making little sense. Please let me know either way, I’ll try to clarify when my brain feels more trustworthy.
I see two points where hidden arguments/premises may be creeping in, one is my own shortcoming, the other may be coming from you.

My own shortcut comes from the word “simulation”. The unwritten premise is that, (P0) in very theoretical terms*, one could model the brain as a computer that processes input (sensory information from the peripheral nervous system) and produces output (motor signals).
Thus, if we perfectly (as in “with enough precision”) simulate the brain functions in a computer and replace a brain with such a computer, no-one “by definition” will notice the difference, not even the original person (now cyborg). That’s because the simulation is ex hypothesi perfectly equivalent (“enough precision”): the resulting cyborg will tell us, and genuinely believe, that “No, nothing happened. I’m still me”. If that weren’t the case, the “enough precision” premise would be false (the outputs would differ, there would be no functional equivalence).
This allows us to conclude that if all the brain functions can be “simulated” in a computer, then whatever happens in the brain is likely to be computational.
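The functional-equivalence premise can be illustrated in miniature (a toy mapping, obviously nothing like a brain): two implementations that agree on every input are indistinguishable by behavioural probing, whatever their internals.

```python
# If a replacement maps every input to the same output as the
# original, no behavioural probe can tell them apart — regardless of
# how differently the mapping is 'implemented' inside.

def original(x):          # stands in for some input-output mapping
    return (x * 2 + 1) % 7

LOOKUP = {x: original(x) for x in range(7)}

def replacement(x):       # a different implementation: a bare lookup table
    return LOOKUP[x % 7]

# Probe with many inputs: the two are behaviourally identical.
print(all(original(x) == replacement(x) for x in range(100)))  # True
```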

Your own possible source of confusion comes from the “implementation” word. It is indeed theoretically possible that the brain “implements” computational functions via some awkward non-computational thing (I don’t think I can use the word “mechanism” here). Therefore, if the thing is nevertheless only implementing computational functions, we would be theoretically able to perfectly simulate it, but still lack the definitive proof that its internal implementation is in fact computational. True, but irrelevant, in my view. We should be happy with the simpler explanation: the implementation is computational. We are left with nothing else to explain, so we should stop there (without adding odd, non computational, suspiciously metaphysical ingredients).

Do let me know if the above isn’t helping…

*I’ve said this before but it may be worth repeating: the assumption that we can neatly separate brain from input-output channels, without making arbitrary decisions about where to draw the line, is probably false. I don’t think this caveat invalidates my argument, though.

17. Jochen says:

I think that Arnold’s making a good point (if I understand it correctly): rules are abstract, structural entities; in effect, they are what gets ‘lifted off’ some process, in order to transfer them to some different process, which can hence be used as a model of the first process. This sets up a correspondence between the states and temporal evolution of both processes, such that we can use the second one (over which we have, presumably, more control) in order to draw conclusions about the first. This is, basically, what happens in scientific modelling, or computer simulations.

If, now, the properties of the original system are exhausted by the structural, then we should expect the two processes to be identical in every respect (that matters). Conversely, any structural property should be freely transferable from one process/system to its model. Such structural properties are for instance the relational ones: if one, for instance, models a planetary system, then e.g. the ratios of distances between the planets will be the same as the ratios of distances of, I don’t know, foam balls one uses to represent the planets.
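For instance (a toy sketch; the orbital distances are real mean values, the model scale is arbitrary): the ratio of distances survives the move from planets to foam balls, while everything intrinsic does not.

```python
# A scale model preserves relational (structural) properties such as
# ratios of distances, even though everything intrinsic — size, mass,
# material — is completely different.

planets = {"Mercury": 57.9, "Venus": 108.2, "Earth": 149.6}   # 10^6 km
model   = {k: v * 0.001 for k, v in planets.items()}          # foam balls, metres

ratio_real  = planets["Venus"] / planets["Mercury"]
ratio_model = model["Venus"] / model["Mercury"]
print(abs(ratio_real - ratio_model) < 1e-9)  # True: the ratio carries over
```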

But if there are some non-structural, ‘intrinsic’ properties, then it’s at least not obvious that they make it across to the model/simulation. The classic example is that simulated rain isn’t wet, or that you can’t burn your fingers on the recording of a fireplace. But I think this is something of a level confusion: if you were in the simulation yourself, you might well experience getting wet, if such a thing is possible; but certainly, asserting that it’s not is mere question begging.

So, bottom line: a mere instantiation of the same rules doesn’t necessarily make a simulation the same (kind of) thing—it might, but at least I can’t see any argument that it’s necessarily the case.

Certainly, there’s a strong intuition for simulatability: any thing in the world is known to us only as a mapping from causal inputs to causal outputs. If one were to replace some part of the world with something that reacts to the same causal probes in exactly the same way, how’d you tell the difference? But this seems to be just the kind of thing—a mapping from inputs to outputs—one can implement computationally. But still, it might be (and I happen to believe) that this alone only gives you the structural properties, without the intrinsic ones—the form without the content, if you will.

That’s not necessarily in conflict with the idea that a ‘RoboMe’ could be built that would react, in every way, exactly as I do, and that would be conscious—because in doing so, it invariably needs contact with the external world in the same way as I do, and that’s, I think, where the intrinsic properties come from, via sense data (ultimately in the form of nonalgorithmicity present in the real world). It’s not the computation alone that gives rise to consciousness, but the computation plus (or maybe ‘acting on’, ‘manipulating’ in the same way that a computational control circuit can control the noncomputational property ‘heat’) the real world resources.

Sorry for rambling; please disregard if this doesn’t seem to make any sense…

18. Tom Crispin says:

In discussions of pointing and pointers it is worthwhile to keep in mind the experience of computer programming: de-referencing a NULL pointer is an invalid operation – like dividing by zero only worse.

“The present King of France is bald” is conventionally not FALSE but meaningless, unless one supports (say)
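In, say, Python terms (an illustrative sketch), de-referencing a null reference yields neither True nor False — it raises, which is closer to ‘meaningless’ than to FALSE:

```python
# Tom's pointer analogy: a reference that points at nothing cannot be
# 'de-referenced' into a truth value at all — the operation fails,
# rather than returning False.

present_king_of_france = None   # there is no present King of France

try:
    bald = present_king_of_france.is_bald   # de-referencing a null reference
except AttributeError:
    bald = "meaningless"                    # not a truth value at all

print(bald)  # meaningless
```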

No, we don’t agree, I’m afraid. I think I’ve given a misleading impression and perhaps I should not have assented to Jochen’s proposition.

To take it from the top, I assert that the brain does its thing by pointing (nothing to do with pointers in computer code, which are a bit different, though I was pleased to have the Duke of Anjou introducing a bit of aristocratic tone into the discussion 🙂 ). In my system, pointing does all the work that in older theories would have been done by association, and more besides.
In literal pointing, I extend my finger; people recognise the line through space which my finger is aligning with, and then they spot another thing aligned with the same line. The assumption that I’m pointing at something interesting allows them to pick out the right thing, but actually there’s an inexhaustible ambiguity in pointing: I might not be pointing at the picture on the screen, but at a tiny fly in the air between me and the screen; or at the screen itself, or at the wall behind it, or Westphalia, or an asteroid, or Betelgeuse: or some anonymous volume of space at some point.
The kind of thing the brain does is generally a much wider form of the same thing in which we’re not using lines through space but any kind of higher level entity; again the choice of entity here is limitless. The recognition of these entities and their point is not computable because you cannot list or define the options in advance. The brain copes with this by using a kind of pattern matching, presumably underpinned by neural networks, in which nothing is set up in advance as a pattern, but an indefinitely large number of latent possibilities can become salient patterns when they receive an appropriate stimulus.
Now, I think you might come back and say, ah but the brain is a physical thing and we can computationally simulate anything physical to any required degree of fidelity. Not, of course, including perfect fidelity. Simulations necessarily leave something out, or if you prefer, they simulate only the properties which the simulator has judged relevant. But with this kind of pattern-matching you’re never going to be able to tell in advance which potential patterns are functionally relevant, and if you model only a finite set you’re going to run into problems – quite quickly, I think.
Now you may go a step further and say, well dammit then, I’ll simulate the motion of every molecule, and then I’ll catch all the potential patterns. I don’t know whether that is even theoretically possible, but if it is, then yes, that might do it; but I’d argue that doesn’t make it a computational process in any useful sense.
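One way to make the ‘can’t list the options in advance’ point vivid (a back-of-envelope sketch, with made-up scene elements): if any grouping of items can count as a ‘whole’, the candidate patterns grow exponentially with the size of the scene.

```python
# If any collection of items can count as a 'whole', the latent
# candidate patterns over even a tiny scene outrun any precomputed list.
items = ["finger", "fly", "screen", "wall", "Westphalia", "Betelgeuse"]

n = len(items)
candidate_wholes = 2 ** n - 1     # every non-empty grouping of the items
print(candidate_wholes)           # 63 — and real scenes are vastly larger

# For a scene with just 100 distinguishable elements:
print(2 ** 100 - 1 > 10 ** 30)    # True: enumerating in advance is hopeless
```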

20. Jochen says:

Well, then I have a follow-up question: What would you consider to define a computational process “in a useful sense”? To me, computational is simply anything that can be performed by some Turing machine, but to be honest, I don’t really see anything in your description that doesn’t fit that bill—in particular, novelty and inexhaustibility aren’t in-principle obstacles for a computational realisation, and you don’t need in any sense to be able to characterise all possible circumstances that the program might meet in order to guarantee its performance (within reasonable limits; humans themselves are far from perfect performers).

This seems to be a critique that could be directed at GOFAI-style implementations, things like expert systems, where you indeed basically went and had to hard-code all possible situations into giant if-then-else code blocks. But modern implementations aren’t necessarily limited in the same way. Take again the example of facial recognition: you don’t need to specify all possible faces in advance, but only some limited training set; performance of the recognition algorithm won’t be perfect, but neither is human performance, and I’m not aware of any in-principle argument that says that an automated recognition process can’t match human-level performance (indeed, I seem to vaguely remember a recent claim that there now exists recognition software that outperforms humans, but usually with this sort of thing, terms and conditions apply).

And the general problem of AI is not too different: the machine receives some input data, which can be phrased in terms of some pattern of zeroes and ones; its task is to produce an appropriate output pattern, which can then, e.g., be construed in terms of commands to actuators or the like. An output pattern might be appropriate when it minimizes harm to the agent, for instance. Determining which output patterns to produce in response to a certain input is, then, in effect just a pattern classification problem, which will be very challenging in practice, but does not seem to introduce any in-principle insurmountable obstacles.

Indeed, the AIXI agent I mentioned in the previous comment thread is known to be capable of asymptotically matching the performance of the ideal agent for any given such problem. Of course, it is in itself noncomputable, but there are computable approximations, and their performance reaches AIXI’s in the limit of infinite computing power. Thus, it is at least not obvious to me that something with the computing capacity of a human brain, running some approximation of AIXI, wouldn’t perform as well as a human being when faced with novel environmental situations.
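The flavour of simplicity-weighted prediction behind the Solomonoff/AIXI family can be sketched like this (not AIXI itself, which is uncomputable; the hypotheses and their description lengths are hand-picked for illustration):

```python
# A crude sketch of simplicity-weighted prediction in the spirit of the
# Solomonoff/AIXI family. Real constructions enumerate all programs;
# here we hand-pick four rules with invented description lengths.

hypotheses = [
    # (description length in bits, predictor: history -> next bit)
    (2, lambda h: 0),                      # "always 0"
    (2, lambda h: 1),                      # "always 1"
    (4, lambda h: h[-1] if h else 0),      # "repeat last bit"
    (5, lambda h: 1 - h[-1] if h else 0),  # "alternate"
]

def predict(history):
    """Mix hypothesis predictions, weighting each by 2**-length and
    keeping only hypotheses consistent with the observed history."""
    weights = {0: 0.0, 1: 0.0}
    for length, rule in hypotheses:
        # consistency check: the rule must reproduce the history so far
        if all(rule(history[:i]) == b for i, b in enumerate(history)):
            weights[rule(history)] += 2.0 ** -length
    return max(weights, key=weights.get)

print(predict([0, 1, 0, 1]))  # only "alternate" survives -> predicts 0
```

The agent never needs the novel situation listed in advance; it needs only a prior over rules and a consistency check.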

21. Hunt says:

I guess I still don’t get what others seem to be considering the rule-based/anomic distinction. If the distinction is that rules mean a formal propositional system and another system that follows them, isn’t this kind of sneaking in something a bit Cartesian theater-ish? A rule system would always have to exclude the system that follows the rules, whereas if you just integrate the two, you would simply have a system that operates according to rules, which a cell certainly does. (When a cell’s operation strays too far from normal, it becomes cancerous.) A useful conception might be that a non-rule based system would be one for which no set of rules could be listed short of a description of the system itself. In effect, there would be no way to algorithmically compress the operation of the system any further; it must be described in rote fashion.
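This criterion can be made concrete, very roughly, by using compressed length as a computable stand-in for Kolmogorov complexity (a crude proxy only):

```python
# Incompressibility as a criterion for "non-rule-based": if no description
# shorter than the system itself exists, compression should gain nothing.
import random
import zlib

def compressibility(description: bytes) -> float:
    """Ratio of compressed to raw length; near (or above) 1.0 means the
    description cannot be replaced by a shorter rule-listing."""
    return len(zlib.compress(description)) / len(description)

rule_governed = bytes([i % 7 for i in range(10_000)])        # periodic: a short rule exists
random.seed(0)
rote = bytes(random.randrange(256) for _ in range(10_000))   # effectively incompressible

print(compressibility(rule_governed) < 0.1)   # True
print(compressibility(rote) > 0.9)            # True: near-rote description
```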

22. Sergio says:

Peter,
I suspect we are hitting some communication barrier: like Jochen (#20) I don’t see anything in what you describe that can’t be reasonably approximated algorithmically.

I keep banging on about this point because the implications, in terms of the possibility of generating very significant explanatory bridges between disciplines, seem very concrete to me.

You are tackling the most abstract level (the philosophical domain of pure abstractions) and talk of:
– Recognition, which can be described as a kind of fuzzy pattern recognition, paired with generative algorithms that are able to generate new patterns/labels to accommodate novel patterns.
– Pointing, which I see as practically identical to a prevalent interpretation at the psychological level of explanation (below).
Furthermore, the two together (it’s actually three mechanisms: new model generation (1), pattern recognition (2) and pointing (3)) then have remarkable bridging potential to theories at the neural level, and these are significantly complemented by artificial neural networks.

I’ll try to proceed in order.

We know that artificial neural networks (even if they radically simplify what real neurons do) can learn to classify new patterns, and that they will apply their classifications to new stimuli in a fuzzy way. Furthermore, the approaches in the Bayesian brain family may eventually be able to plug in your “recognition” concept precisely because they also lump together model generation and pattern matching (1 & 2), so it’s actually very easy to pair your high-level descriptions with biology-based theories and existing artificial implementations that exhibit very similar behaviours. This isn’t really news, and we discussed connected details at length in the past few months.
Things become very interesting when you introduce pointing.
At the psychological level, there is a very related, and pretty popular (uncontroversial?) concept: associative memory. See for example:
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in cognitive sciences, 14(10), 435-440.
What psychologists describe is that the act of recognising makes it easier for related concepts to surface as well: not in a neatly organised way (concept A brings concept B to mind, each and every time), but by making a vast family of related “concepts” more available, a little more likely to spring to mind (depending on what happens next). Thus, the brain is kind of pointing in (more than) one direction; to say it with your own words, it up-regulates “an indefinitely large number of latent possibilities [which] can become salient patterns when they receive an appropriate stimulus” (e.g. they get “recognised” themselves). Bingo!
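This up-regulation can be sketched as one step of spreading activation over a hypothetical association graph (all strengths invented):

```python
# A minimal spreading-activation sketch of associative memory:
# recognising one concept up-regulates many related ones, each a
# little more likely to surface given the next stimulus.

associations = {  # hypothetical association strengths
    "clouds": {"rain": 0.6, "sky": 0.5, "umbrella": 0.3},
    "rain":   {"umbrella": 0.5, "wet": 0.6, "clouds": 0.4},
}

def recognise(concept, activation, strength=1.0, decay=0.5, floor=0.05):
    """Boost the recognised concept, then spread weaker activation to
    its associates (one decay step here; real models iterate)."""
    activation[concept] = activation.get(concept, 0.0) + strength
    for neighbour, w in associations.get(concept, {}).items():
        boost = strength * decay * w
        if boost >= floor:
            activation[neighbour] = activation.get(neighbour, 0.0) + boost
    return activation

state = recognise("clouds", {})
# Many latent possibilities are now slightly more salient:
print(sorted(state, key=state.get, reverse=True))
```

Nothing here needs the full set of associations spelled out in advance; each new recognition just re-weights whatever the graph currently holds.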

Thus we have: a purely conceptual theory (yours) which, in an effort to explain consciousness as we experience it, proposes a minimal set of mechanisms that would need to be performed by our brains. This theory can be solidified further by translating it into pre-existing, and at least partially empirically validated, psychological theories. In turn, these psychological theories allow us to start looking for actual neural patterns: they provide the interpretation key to figure out what the neural signals we measure may actually be doing. Your own contribution adds the possibility of relating a “pattern of neural activity” all the way up to what we experience. Yes, of course, it doesn’t prove that the whole conceptual architecture is correct, but it fills a vacuum without the need to stretch our imaginations too far; it is somewhat validated by the lower-level sciences, and in turn it adds more credibility to the underlying empirical theories. Of course, the whole thing depends on whether the patterns of neural activity we measure will indeed turn out to be sufficiently interpretable in the light of such a multilevel theory, but at the very least we can say that we are not completely clueless, and that’s big news, if you ask me.

However, you are (perhaps out of sheer prudence, but more likely because of your familiarity with philosophy and your relative unfamiliarity with neuroscience) actively saying “No, no, all of the above is not viable, not even in principle, because the mechanisms I propose are anomic and address inexhaustible possibilities”. If you are right, then neurobiology has no chance of bringing definitive answers. On the other hand, I say that the mechanisms you propose do look anomic, but of course, they are “mechanisms”, so they do obey some set of identifiable (and still not fully identified) rules. I also say that the known properties of associative memory, paired with the astronomical number of potential associations made possible by the unimaginable complexity of the connectome (not to mention that neural connections change all the time), map very well onto your suggestions of how pointing allows us to tame inexhaustibility.

One key point remains to be addressed; you touch on it when you anticipate me saying “well dammit then, I’ll simulate the motion of every molecule, and then I’ll catch all the potential patterns”, and Hunt points to it directly:

A useful conception might be that a non-rule based system would be one for which no set of rules could be listed short of a description of the system itself.

This is a very important issue in practice. Do we have any idea of what “level of precision” would be necessary to create a “good enough” simulation? The short answer is “No”. But the considerations I’m making here suggest that we may be able to go a long way by simulating “just” neural activity, without bothering with single molecules. That said, I don’t think we should jump to this conclusion, or at least, we shouldn’t conclude that a full-brain simulation is within reach (I’ve tackled this in previous comments, so I won’t repeat my argument); the crucial consideration is that the limitations are practical, not theoretical.

The other thing we haven’t even touched on is whether all this theory helps with the hard problem: I’ve almost finished your book, so I still don’t know whether you think it does, but I sense that we’ll find more reasons to disagree! I’m sure you will get to this point here on CE as well, so I’m happy to wait.

PS Peter, Jochen, Arnold, Hunt, Scott and all: what a pleasure it is to read your comments and discuss with you all. I feel privileged (with special thanks to Peter for making it happen).

23. Jochen says:

Hunt: “A useful conception might be that a non-rule based system would be one for which no set of rules could be listed short of a description of the system itself.”

But this is true of any computationally universal system—in order to find out what such a system does, there is (in general) no shortcut to explicit simulation (or just letting the system perform its evolution).
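Rule 110 is a concrete example: a provably universal cellular automaton for which, as far as anyone knows, the only general way to find the state after n steps is to run all n steps:

```python
# Rule 110, a computationally universal cellular automaton: no known
# closed form predicts its state at step n without simulating n steps.

def rule110_step(cells):
    """One synchronous update; cells is a tuple of 0/1 with fixed 0 borders."""
    padded = (0,) + cells + (0,)
    table = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
             (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    return tuple(table[padded[i-1], padded[i], padded[i+1]]
                 for i in range(1, len(padded) - 1))

cells = (0,) * 20 + (1,)   # a single live cell on the right
for _ in range(10):        # no shortcut: simulate step by step
    cells = rule110_step(cells)
print(sum(cells))          # number of live cells after 10 steps
```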

Ah wait, I think I’ve misunderstood: do you mean something like a system for which there is no other way of describing its evolution than just listing all the states it assumes? If so, then I’d tend to agree (and I was groping for a similar notion in my post #3 in this thread).

Sergio, I’m likewise very happy about the discussion on here; and I’ll second the special thanks to Peter—I’ve been a longtime avid reader of CE, and I’ve had basically every visit repaid with some new idea or insight.

Sergio, you write
Key aspects of what brains do, what you call “recognition” and “pointing” are radically difficult to model/understand in algorithmic terms. So much so, that it seems very reasonable, almost undeniable, to assume they may be not-computational.
Are you sure you didn’t mean “non-computable”?

24. ihtio says:

The distinction between “computational” and “computable” should be made very clear. Most physical systems may be computable but are not computational. For example, enzymes that cut complex sugars into glucose and other simple carbohydrates can be simulated (thus they can be said to be computable), but they don’t perform computations, thus they are not computational.

Doesn’t the ease of changing over to the grunt suggest how insubstantial meaning is – that it’s not a thing, but more like a reflex (like the way we blink if someone swings a fist at our face, even when we are behind bullet proof glass)? A particular behaviour in us that we extend from or even work around (like the blink)?

26. Jochen says:

Ihtio, whether a system is computational—in the sense that it performs some computation—is not a property of the system itself, but of the way it is interpreted. A system performs a computation if there is a mapping between the states it traverses in its evolution and the logical states of some computation; and such a mapping (at least for systems with a given minimum complexity) always exists. That is, every system can be viewed as instantiating any given computation, as famously argued by Hilary Putnam (in ‘Representation and Reality’).

The key point here is that the mapping between the states of the physical system and those of the computation is an arbitrary one, and such a mapping exists as long as the two sets of states are of the same cardinality. Typically, we use certain systems, like view screens, loudspeakers and other peripherals, that represent the states of the computing system in such a way that the mapping is ‘obvious’ to us, but the mapping is still a necessity in order to meaningfully claim that the system performs computation. Indeed, your own example of a simple biological process is actually used as a computational substrate in DNA-based computing.
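The arbitrariness can be shown in a few lines: pair any sequence of distinct ‘physical’ states with the logical states of a chosen computation, after the fact (the states and the computation below are, of course, invented):

```python
# Putnam-style sketch: any sequence of distinct physical states can be
# mapped onto the logical states of a chosen computation, so the "rock"
# below counts as computing 2 + 3 under a suitably chosen interpretation.

physical_trace = ["state_a", "state_b", "state_c"]   # arbitrary distinct states

# Logical states of the computation "2 + 3": load, add, result.
logical_trace = [("load", 2), ("add", 3), ("result", 5)]

# The implementation mapping is built after the fact, purely by pairing:
interpretation = dict(zip(physical_trace, logical_trace))

# Under that mapping, the rock's final state "means" the answer:
print(interpretation[physical_trace[-1]])  # ('result', 5)
```

Nothing about the physical trace itself favours this pairing over any other of the same length, which is exactly the problem.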

The arbitrariness and external nature of this mapping is, in my opinion, the best argument against a computational theory of the mind: no system computes in and of itself; that is, there is no computation that could be associated to a given physical system in a preferential way (other than the preferences dictated by our own specific set of sense organs). Thus, computation is not an objective property of a system. But then, in what sense could our brains be said to ‘compute’ our minds? What fixes the interpretation? It seems that any such theory either smuggles in some kind of homunculus that ‘fixes’ this interpretation, or it must accept the consequence that in fact every conceivable computation is performed by every physical system—meaning that if there is a program that instantiates your mind, then it runs simultaneously on every rock, every chair, every planetary system; and thus, that it is not in any way tied to your brain.

27. Callan says:

What ‘fixes’ interpretation is that the computation can’t take input from that process of computation (can’t see the bits moving, can’t see the synapses charging and firing) – thus to work with this question make it generates a simplistic model. The inability to have anything more than a simple model ‘fixes’ the model in place.

28. Jochen says:

Callan:
“What ‘fixes’ interpretation is that the computation can’t take input from that process of computation (can’t see the bits moving, can’t see the synapses charging and firing) – thus to work with this question make it generates a simplistic model. The inability to have anything more than a simple model ‘fixes’ the model in place.”

Sorry, but I’m afraid I’m too dense for this. Could you rephrase it? You’re starting with some computation, which ‘can’t see the bits moving’. But what computation is this? As I explained, any given computation can be associated with some evolving physical system, by choosing the right implementation relation. So are you saying that that computation is selected that can’t take input from itself? And what is the question you want to ‘work with’?

The term “computation” once meant calculation (of numbers, of mathematical functions defined on numbers), then manipulation of arbitrary symbols. Now, if we say that computation is “a mapping between the states it traverses in its evolution and the logical states”, then the whole concept becomes essentially meaningless. Everything can be said to be computation: an apple tree doesn’t grow apples, but computes apples from sunlight, water, minerals, etc. A meteor doesn’t create a crater on Earth; rather, the crater is computed from the Earth, the meteor, and some other stuff.

Or, the term “computation” is used when in fact the term “process” should be used. There would be much less confusion as there would not be any connotations with “information processing”, computers, symbols, etc.

ihtio #24
The first two paragraphs of Jochen #26 are all I need by way of reply: the distinction between “computational” and “computable” is, at face value, non-existent. Jochen’s third paragraph is more interesting.

The arbitrariness and external nature of this mapping is, in my opinion, the best argument against a computational theory of the mind: no system computes in and of itself; that is, there is no computation that could be associated to a given physical system in a preferential way (other than the preferences dictated by our own specific set of sense organs). Thus, computation is not an objective property of a system.

Jochen, your take is interesting, because it’s the arbitrariness itself that, for me, leads to the exact opposite conclusion: it is the best argument for a computational theory of the mind. When we try to explain/understand the brain, the mind and the relation between the two, our aim is to make whatever brains/minds do understandable to us. Thus, the computational metaphor seems apt:
1. we can (almost neatly) see the brain/mind as what’s between input and output.
2. we can easily (this time pretty neatly) interpret certain neural activity (membrane potential changes, synaptic activity) as signals, signal transmissions and signal transformations.
3. our theoretical understanding of computation is pretty mature, well developed, and arguably one of the most useful theories we’ve ever created.
Thus, the computational metaphor seems to be a very promising way to make brain/mind functions tractable; it may not be the best of all possible approaches, but it really looks like the best approach we currently know of. I would also add that this stance underlies all of the neuro-disciplines: they all make the assumption that the best way we have to interpret the mechanisms we see is in terms of signals, signal transmissions and signal transformations. As a little challenge: the orthodox way to understand what synapses do is to “transmit a signal to the post-synaptic neuron”; if we wanted to ditch the computational metaphor, how else would we describe the functionality of synapses? I am far too embedded in the computational frame of mind, so I am utterly unable to propose any answer, but it would be great if one of us has something intelligible to offer!
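To make the signal-transmission reading of a synapse concrete, here is a minimal leaky integrate-and-fire sketch (all parameters invented; real neurons are, as noted below, far messier):

```python
# A minimal leaky integrate-and-fire sketch of the orthodox reading:
# presynaptic spikes inject current through a weighted synapse, the
# membrane potential leaks, and a threshold crossing emits a spike.

def run_lif(spike_train, weight=0.6, leak=0.8, threshold=1.0):
    """Return output spike times for a train of input spikes (0/1)."""
    v, out = 0.0, []
    for t, spike in enumerate(spike_train):
        v = leak * v + weight * spike   # leaky integration of the signal
        if v >= threshold:
            out.append(t)               # the neuron "transmits" onward
            v = 0.0                     # reset after firing
    return out

print(run_lif([1, 1, 0, 0, 1, 1, 1]))  # fires at t=1 and t=5
```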

If I were to propose a critique of the computational approach, it would be very different. For me, the real problem lies with the theoretical purity of our understanding of computation. Our computers are designed specifically to be physical implementations of lofty abstractions; they are carefully engineered to make their physicality irrelevant, as long as they don’t break down. Neurons and all biological substrates are quite the opposite: messy, stochastic, and probably reliant on redundancy, concurrency and parallelism.

From the evolutionary perspective, this messiness makes perfect sense:
a. on the implementation side, evolution is a blind and very imperfect designer. It produces things that work (most of the time), not elegantly organised implementations of abstract concepts.
b. since brains are in the business of understanding (modelling?) the outside world, in order to produce useful behaviours, they need to accommodate the messiness of the outside world. It is no surprise that their inner workings reflect the chaotic nature of outside contingencies – they need to. Plus, in chaos theory the amount of noise/variability of a system is frequently scale-independent, so it seems reasonable to me to expect that the “inside workings” may be able to parallel the noise/variability of the outside world in meaningful/exploitable ways (I’ll stop the wild speculations here, before I make more of a fool of myself).

Thus, for me, it is possible that the computational metaphor is hiding one of the things we ought to understand: how the internal messiness manages to produce reliably useful outputs (behaviours). The role of chaos and of what we normally dismiss as noise may be crucial (I believe it is), but as far as I know, we can’t neatly translate chaos and noise into the language/theory of Turing machines. I see this as a challenge for the next generations, though. I’d be happy if we manage to make some progress via the computational metaphor for now.
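One way to see how messy, stochastic components can still yield reliably useful output is redundancy plus majority pooling (flip probabilities and unit counts are arbitrary here):

```python
# Sketch: each noisy "unit" misfires often, but redundancy plus a
# majority vote makes the ensemble dependable.
import random

def noisy_unit(signal, flip_prob, rng):
    """A single unreliable component: sometimes inverts its input."""
    return 1 - signal if rng.random() < flip_prob else signal

def redundant_vote(signal, n_units=101, flip_prob=0.3, rng=None):
    """Pool many unreliable copies of the same signal by majority."""
    rng = rng or random.Random(0)
    votes = sum(noisy_unit(signal, flip_prob, rng) for _ in range(n_units))
    return 1 if votes > n_units // 2 else 0

rng = random.Random(42)
trials = [redundant_vote(1, rng=rng) for _ in range(200)]
print(sum(trials))  # nearly every trial recovers the signal
```

With 101 units each wrong 30% of the time, the chance of the majority being wrong is vanishingly small; reliability emerges from the pooling, not from any single clean component.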

Finally, Jochen poses the question “what fixes the interpretation?” Callan answered something that I myself struggle to understand. My take is as follows (with apologies for the sketchiness; the full argument is about 12K words):
Metacognition, acting on memory, fixes the interpretation. We evaluate what stimuli are relevant to us and in what way they are relevant. This evaluation drives what gets stored in memory, so we can then re-evaluate our first evaluations in hindsight (smell the Bayesianism?). When we perform this second operation, the secondary effect is that the stored evaluation now appears to be a given, its inherent arbitrariness/subjectiveness is lost along the way (and doesn’t matter in terms of survival/useful outputs, because we only care about what stuff means to us).
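Reduced to a single Bayesian hindsight update, the idea looks like this (priors, likelihoods and the stored item are invented for illustration):

```python
# A toy of the metacognition-on-memory idea: a first-pass relevance
# estimate is stored, then re-evaluated in hindsight as outcomes come
# in; the revised value soon reads as a given, its subjectivity hidden.

def bayes_update(prior, likelihood_if_relevant, likelihood_if_not):
    """P(relevant | outcome) from P(relevant) and outcome likelihoods."""
    joint_rel = prior * likelihood_if_relevant
    joint_not = (1 - prior) * likelihood_if_not
    return joint_rel / (joint_rel + joint_not)

memory = {"rustling_sound": 0.2}    # first evaluation: probably irrelevant

# Hindsight: the sound preceded a predator appearing (likely if relevant).
memory["rustling_sound"] = bayes_update(memory["rustling_sound"], 0.9, 0.1)
print(round(memory["rustling_sound"], 3))  # 0.692: re-evaluated upward
```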

32. Jochen says:

Ihtio, I feel you: the whole thing certainly poses the danger of trivializing the notion of computation. The problem, however, is coming up with something better: as you say, computation means to compute the answer to some problem, say, an arithmetical one. To do so, you need three things: the problem statement, the implementational mapping between the states of some system and the states passed through during the computation, and that physical system itself. But it’s difficult to see whether, and how, we should curtail the type of mapping—because, given those three things, you can perform the computation—that is, after the physical system has arrived at some final state (if it ever does so), you will know the answer to your problem, no matter what the system was, and what form the interpretational mapping took. To me, the only reasonable interpretation of this is that the system has, indeed, performed the computation. (Otherwise, you’d be asserting that you arrived at the answer of some computational problem without doing the computation, which seems kind of absurd to me.)

Sergio, let’s turn the problem around. I give you some physical system—say, a desktop computer without any peripherals attached, or a rock, or a brain, or anything else. The question is: what computation, if any, does that system perform? And that question simply does not have a unique answer. For all three, we can fix some interpretation under which it continually computes the sum of all prime numbers; but likewise, we could for each of these systems claim that it instantiates the computation giving rise to your mind (should there be such a thing). It’s basically like translating a code for which you have no reference at all: it could mean anything.

That’s what the arbitrariness entails, and I think it’s unacceptable for yielding a theory of the mind, since there needs to be some external factor fixing the interpretation; but then, the question just becomes what kind of external entity conceivably could fulfill this role. It couldn’t be a computational one, since it would suffer from the same problem. Thus, your move of appealing to metacognition only kicks the problem up one rung: certainly some higher-level program can interpret input and output of a lower level one, thus fixing its interpretation; but then, the lower level program is interpreted only with respect to a particular interpretation of the higher level process—and what supplies that?

Note that this does not mean giving up on computational metaphors for the mind, nor does it entail giving up describing the physical processes between neurons, etc., in signal-processing terms: I can likewise describe the underlying hardware of a computer in signal-processing terms while being wholly ignorant of the program instantiated by the hardware. In a sense, you could have a computer carrying out the same kind of physical processes, while plugging in two different kinds of screens results in two different computations being displayed. In other words, you can say that neuron A sends a signal to neuron B, but you can’t say uniquely what this signal means.

33. Jochen says:

Arnold, to me, a mechanism produces a change in the physical world, while computation takes place in an abstract, conceptual space. The result of a mechanism might be providing power to move a car, or hoisting a weight to some given height, while the result of a computation is perhaps an answer to some problem. Computations are shuffling around symbols, while mechanisms are the concrete physical instantiations of things that may, under the appropriate interpretation, underlie those symbols. At least that’s what comes to mind without having thought about it in depth…

I see the distinction between computation and mechanism as one of generality: the term “mechanism” is much more general. We can say that any multistep process is a mechanism. Computation, on the other hand, is a type of mechanism, understood in an abstract way. Computation in general is the shuffling and transforming of symbols, whatever they may be.
For example, the cutting of a carbohydrate by an enzyme is a mechanism, but not a computation.

Jochen,

indeed it is hard not to trivialize the meaning of the term “computation”. If we assign too general a scope to the term, then we will end up talking about kids computing pirate ships out of Lego blocks, the Sun computing the amount of water in the air (from oceans, rivers), etc. “Computation” is a term that has a very good definition in computer science and information theory. Any attempt at using it outside of those domains should be cautious and include a justification for doing so.

The rest of your post, directed to Sergio, seems to be a general question about the meaning of “meaning”. And as yet, the same problem could be applied to the words we use, to mental “images” from our dreams, to our thoughts and emotions, etc.

It seems to me that mechanisms are always physical-dynamic structures, never abstract. Computation, it seems, is a symbolic, rule-governed, abstract description of events that might be physical or purely symbolic.

36. Jochen says:

Yes, that’s basically my take, too. Computation comes in only through interpreting some mechanism in an appropriate way; hence, trying to explain that which does the interpreting in terms of computation would seem to be putting Descartes before the horse (sorry).

But what computation is this? As I explained, any given computation can be associated with some evolving physical system, by choosing the right implementation relation.

Whatever one you want. I presume the interest of others is in what’s going on in human skulls.

Your idea of computation being everywhere doesn’t preclude a particular focus on a particular amount of (bio)matter.

So are you saying that that computation is selected that can’t take input from itself?

Not selected – I assume we’re talking about skull contents. It’s just a property of the processing in there (or so goes the hypothesis I’m putting forward).

And what is the question you want to ‘work with’?

Well it’s more a matter of Darwinism – if the organism doesn’t deal with certain questions, it dies/ceases to breed or continue.

Some of those questions hinge on the nature of the processing system itself. Assume that if it doesn’t have even a simplistic idea of what it is (or doesn’t even have a false but effective idea of what it is), it is more likely to die. Thus later generations tend to develop such an idea (as those who didn’t ceased to live or to breed successive generations).

What would you consider to define a computational process “in a useful sense”…

One where the fact that it’s computational is relevant. I’m talking there about simulating the motion of every molecule in a brain: I doubt that’s even possible, but even if it were, the fact that it’s being done with a computer ceases to be interesting. Consider that we could (maybe!) simulate the motion of every molecule in a tennis match. That wouldn’t prove that tennis is essentially computational. We wouldn’t say ‘Aha, all tennis matches are clearly running a tennis algorithm’. That’s one problem with the supposed universality of computation.

…you don’t need to specify all possible faces in advance…

No, but that’s only because you’re getting the software to write up its own list of features to check during an initial stage. I don’t see a big difference in principle.

Generally, can I say that although I realise we were bound to end up on the old chestnut of ‘is the brain a computer?’, what it isn’t is less interesting to me than what it is, and my main point is to offer a positive account of how the thing works.

40. Jochen says:

Ihtio #37:
The problem remains: if I give you a black box, connected to which there are a screen, a keyboard, maybe a mouse—i.e. some input/output peripherals. Putting an appropriately formulated problem to the box via the input devices, you receive some appropriate output. You may think of this as being perfectly analogous to an ordinary desktop computer. Why should it matter what’s going on in the box to determine whether that system performs computation?

To me, computation is operationally defined in roughly the above way: to some input, a system supplies an output equivalent to that which an appropriate Turing machine would provide. But in the black box, there could be anything—silicon chips, neuronal networks, DNA, a slime mold exploring a labyrinth, even miniature people playing tennis. All you need is for the input peripherals to set the system up appropriately, and for the output to translate its states using the right implementation relation. Why should the substrate used for the computation matter at all?
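This operational definition implies substrate independence, which a trivial sketch makes vivid: two ‘boxes’ with entirely different insides but identical input/output behaviour (the substrates here are arithmetic versus a pure lookup table):

```python
# Two black boxes, operationally the same computation on their shared
# domain, despite entirely different internals.

def box_arithmetic(x, y):
    return x + y                       # "silicon chip" substrate

LOOKUP = {(x, y): x + y for x in range(4) for y in range(4)}

def box_lookup(x, y):
    return LOOKUP[(x, y)]              # "slime mould" substrate: pure table

# Operationally indistinguishable on their shared domain:
print(all(box_arithmetic(x, y) == box_lookup(x, y)
          for x in range(4) for y in range(4)))  # True
```

On the operational reading, nothing observable at the peripherals distinguishes the two, so the substrate drops out of the definition.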

Callan #38:
I think we’re talking past each other somewhat. The problem I see is that, given a physical system, there appears to be no way to associate a computation to it and say, ‘this is the computation performed by the system’. Thus, if there is some computation a brain performs, then one could equally well claim it is that corresponding to your mind, as to mine, as to something else entirely—say, just tallying up the sum of all prime numbers. There’s nothing about the physical state of the brain that privileges any of these interpretations. Conversely, any physical system can be considered to implement any given computation, under the right interpretation.

This is just the same problem as that of deciphering a page of coded text, without any additional hints—it could mean absolutely anything, because the meaning is not in the ciphertext, it is decided by the mind that—using the appropriate key—decodes it. Likewise, the meaning of any hardware process does not inhere in that process, but is imbued upon it only once it is interpreted as carrying out a certain computation. That’s why ultimately I think the ‘mind as software, brain as hardware’-metaphor is misleading: the only way to associate a given software, a given computation actually being performed to the physical process carried out by the brain is by appealing to a preexisting mind in order to fix the interpretation.

Peter #39:
But what we would say if we were to carry out that computation of all the molecules in a tennis match is that clearly, tennis is computable; likewise, if we were to carry out a simulation of a brain and (by whatever means) were able to convince ourselves that this simulation possessed consciousness, we would then know that consciousness, too, is computable, and that thus, in principle, the strong AI program could be carried out.

As for the face classification I keep harping on about, it’s to me just the kind of thing that you seem to be claiming no computer can do—in principle, the set of all possible faces is inexhaustible, yet nevertheless, we know it can be classified using computational means. Extending this to the general problem of classifying all conceivable external situations, in order to find appropriate actions, seems to me to be nothing but a (steep!) increase in quantity, but not in the quality of the problem.

And I do empathize with your aim to find a positive account rather than just talking about the—probably infinitely many—things that consciousness is not; but it’s, some distractions notwithstanding, still the attempt to understand your account that I’m engaged in. In particular, I’m still not at all sure about what your notion of ‘anomic’ means, whether you believe that there are computable processes that are anomic, for instance, or whether you’re ultimately arguing for some sort of fundamental noncomputability of the mind. Both of which are interesting, but, to me, crucially different points of view.

Also, in the absence of any clear positive account (you may have reached one, but I haven’t yet been so lucky!), I think it’s not the worst strategy to emulate the sculptor, chipping away at all the marble he knows is not going to be part of his statue… 😉

what we would say if we were to carry out that computation of all the molecules in a tennis match is that clearly, tennis is computable…

But it isn’t. Running a computation of all the molecules in a tennis match tells us nothing about what tennis is; and if we conclude that tennis matches get their tennishood because they are running a tennis algorithm, we’re just wrong. You can’t define tennis in molecular terms.

I don’t know enough about cutting-edge facial recognition technology to exclude the possibility that some network-based systems are doing something interesting. But the ones I know about seem to me to reduce ‘face space’ to a manageable domain before addressing it.

the only way to associate a given software, a given computation actually being performed to the physical process carried out by the brain is by appealing to a preexisting mind in order to fix the interpretation.

I think that’s just going half way with the idea I gave. I’ve described a system, with limited info on itself, using a rough idea of itself in order to answer survival questions that involve the thing it is and how it behaves.

Currently you’ve looked at that and said the computation attributed there has to be applied by a pre-existing mind.

You haven't applied the idea, hypothetically, to yourself (as viewer) as well. Doing so would resolve such an issue, as your 'computation' would be explained by the 'rough idea of itself/others'. In applying the idea only halfway, to what is observed and not to yourself, yes, you're left with a pre-existing mind to account for, rather than a second system holding a rough idea of itself so it can survive.

The problem in communicating this is probably due to recursion. I.e., 'stepping back' to look at something, yet the thing actually to be looked at steps back with you. Sure, it seems like a pre-existing mind needs to have the interpretation fixed.

Then maybe you read this and try, by stepping back, to see the subject AND yourself as performing computation – but again it’ll seem like the interpretation needs to be fixed.

That's because when you step back, the idea 'it needs to be fixed' is going with you. You need to step back from that as well. The recursion of this can happen time and again – keep stepping back to look at where you stepped back from before, and still the idea of fixing could travel back with you, instead of coming under the notion of it simply being a rough idea a system has in order to not die out. But an idea that survives the recursion isn't thereby true; it simply means you're carrying along something unexamined that frames your thoughts.

Well, you 'need to' step back only in as much as you want to grasp the model. Even if the model is all an elaborate fiction and not true*, it's what you need to do to grasp the fiction of it.

* Note on ‘true’: ‘True’ being a rough idea that keeps an organism/a system alive/reproducing.

43. Jochen says:

Peter:
“Running a computation of all the molecules in a tennis match tells us nothing about what tennis is; and if we conclude that tennis matches get their tennishood because they are running a tennis algorithm, we’re just wrong. You can’t define tennis in molecular terms.”

It tells us one very important thing about tennis, which I think is the thing on which the whole discussion turns: once you have the computation down pat, you know that there are no further facts to be fixed in order to create something that has tennishood. Doing the same thing for a mind would be a conclusive vindication of computationalism.

It doesn’t matter that there’s no (obvious) reduction of the rules of tennis to the simulated movements of molecules; that we produce a game of tennis simulating only those movements seems, to me, to directly imply that in principle, such a reduction is possible—because that’s exactly what the computer does. In Chalmers’ terms, once God fixed the computational facts, there were no more facts left for him to fix—everything else follows from there.

Callan:
Then I would like to issue the same problem I posed to Ihtio to you: I give you a black box, with in it, some physical matter undergoing some evolution. You have complete access to the states this matter traverses, and (should you in your experimentation damage the original copy, or if it just runs its course, or to test hypotheses about its behaviour, etc) access to a limitless number of copies of the box. What does it compute? What program does it run?

Equivalently, I give you a sheet of ciphertext. What does it say? How would you find out, without even any hint at the possible key?

You seem to be saying that the computation could, somehow, look at itself, to fix the interpretation. But this presupposes an interpretation: as a computational process that has the power of looking at itself. I could equally well produce an interpretation that assigns the computation of successive prime numbers to the black box, which doesn’t have that property. It’s a vicious homunculus regress.

Besides, even if there were an interpretation that has the right properties, I can just as well assign that interpretation to any other given physical system, as long as its state space has the right cardinality. That is, not only would my brain give rise to my conscious experience, it would likewise give rise to yours, or to any possible conscious experience; and not only my brain would do this, but so would any tree, any geological formation, any planetary system, virtually anything in the physical world, depending on some details of how, exactly, you slice the state space. This radical version of panpsychism would, amongst other unpalatable consequences, for instance imply that with an overwhelming probability, none of our experiences are veridical—I would be much more likely to be, say, a dreaming tree than the entity I take myself to be.

Jochen #40:“The problem remains: if I give you a black box, connected to which there are a screen, a keyboard, maybe a mouse—i.e. some input/output peripherals. Putting an appropriately formulated problem to the box via the input devices, you receive some appropriate output. You may think of this as being perfectly analogous to an ordinary desktop computer. Why should it matter what’s going on in the box to determine whether that system performs computation?”
I’m not sure I understand the question 🙂
Computation is characterized by “what is going on in the box”, so obviously it is important.

“To me, computation is operationally defined in roughly the above way: to some input, a system supplies an output equivalent to that which an appropriate Turing machine would provide. But in the black box, there could be anything—silicon chips, neuronal networks, DNA, a slime mold exploring a labyrinth, even miniature people playing tennis. All you need is for the input peripherals to set the system up appropriately, and for the output to translate its states using the right implementation relation. Why should the substrate used for the computation matter at all?”
In fact, computation is understood as a series of steps of some logical operations.
“Why should the substrate used for the computation matter at all?” But matter to whom? I linked the articles from Nautilus to show that there are many systems that can be turned into computers, but most systems aren’t computers and some systems could be used as computers, but they would be very slow (e.g. chemical reactions). But in general terms, substrate can be abstracted away. So what? 🙂

You ask Callan – in #43 – something similar:“I give you a black box, with in it, some physical matter undergoing some evolution. You have complete access to the states this matter traverses, and (should you in your experimentation damage the original copy, or if it just runs its course, or to test hypotheses about its behaviour, etc) access to a limitless number of copies of the box. What does it compute? What program does it run?”
My answer is this: we first define representation of symbols (meaning of items, e.g. high voltage – 1, low voltage – 0) and operations that can be performed on them (simple logic gates: NOT, AND, OR, XOR). We build structures that perform these operations and wire them together. We provide input. The machine then proceeds. It stops. We read out the output. The program the machine is running is defined by symbols and rules of transformation of those symbols. Knowing the code of the program (a series of operations) and input data, we know what the machine computes. We know this, because we defined all the elements ourselves.
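Ihtio's recipe can be sketched in a few lines. This is only an illustrative toy (my own, not anything from the thread): we fix an interpretation of symbols, define the primitive gates, wire them into a half-adder, provide input, and read out the output—and we know what it computes because we defined all the elements ourselves.

```python
# Toy sketch: interpretation fixed as True = high voltage = 1.

def NOT(a): return not a
def AND(a, b): return a and b
def OR(a, b): return a or b
def XOR(a, b): return a != b

def half_adder(a, b):
    """Wire the gates together: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

# Provide input, let the machine proceed, read out the output.
s, c = half_adder(True, True)   # computing 1 + 1
print(int(s), int(c))           # sum bit 0, carry bit 1
```

The point of the example is exactly Ihtio's: the machine's output is meaningful only because we stipulated the symbol encoding and the transformation rules in advance.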

45. Jochen says:

“In fact, computation is understood as a series of steps of some logical operations.”
Well, computation can be formalized in a number of ways. One would be partial recursive functions; from this point of view, anything that implements such a function deserves to be called a computation. Thus, anything that gives an appropriate output, given some input, indeed ought to be considered a computer.

I mean, honestly: if you put the problem ‘3 * 8’ to my hypothetical box, and received as an answer ‘24’, would you really want to maintain that it has only computed this answer if what went on inside the box follows some pre-defined model? And what would you call the process by which it arrives at the answer without computing it?

But I had indeed in mind something much more in accord with your definition. Consider, as in Putnam’s original version of the argument, a finite state automaton—that is, a machine that has some set of states, with transition rules between them. Then, the claim is that for any physical system with a set of states of sufficient cardinality, an implementation relation can be found such that the system implements the FSA.

Just very briefly, if the sequence of states in the computation is s3 -> s1 -> s2, then let’s consider a physical system which is in state p1 at time t1, in state p2 at time t2, and in state p3 at time t3. Fix the interpretational mapping to be f: (p1 -> s3, p2 -> s1, p3 -> s2). Given this mapping and the physical system—say, with the mapping implemented using a screen—the system implements the computation. Moreover, this is exactly the sense in which our computers implement some computation: every computer is a finite state automaton, and every computation just involves some cycle through a subset of these states, which are implemented in the hardware as some particular memory-configuration plus the instantaneous state of various logic units.

In this way, the computation—understood, as you said, as ‘a series of steps of some logical operations’—is mapped to some arbitrary physical system—given the system, and the interpretation, the computation can be readily read off, the same way it is read off (typically by something like a view screen) from our computational hardware.
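The mapping Jochen describes can be written out in a few lines. This is my own toy illustration of the point, not code from the thread: given nothing but a physical system traversing distinct states p1, p2, p3, the interpretational mapping f alone makes it 'implement' the computation s3 -> s1 -> s2.

```python
# The physical system just traverses three distinguishable states.
physical_trajectory = ["p1", "p2", "p3"]

# The interpretational mapping f from the comment: the 'view screen'.
f = {"p1": "s3", "p2": "s1", "p3": "s2"}

# Reading the physical states through f yields the desired computation.
computation = [f[p] for p in physical_trajectory]
print(computation)   # ['s3', 's1', 's2']
```

Nothing about the trajectory itself favours this f over any other relabelling, which is exactly the worry: the 'computation' lives in the mapping, not in the physics.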

But anyway, I sense that we’re probably not going to make headway on this matter. I don’t really want to clog up Peter’s comment section with going back and forth on this, so I hope you’ll forgive me if I don’t continue this conversational thread beyond this post, unless you bring up some key new argument that forces me to reconsider my opinion. Barring this, I’ll propose to agree to disagree on the matter?

Jochen, I don’t agree that we should disagree, because in fact we do agree 🙂

Indeed for many systems one can interpret its behavior as implementation of some program. So I agree with you.
The only things I wanted to make clear is that
– post-hoc interpretations such as these don’t make much sense, because they lead to meaninglessness of the term computation. Of course in a practical sense, not some deep ontological sense. So arbitrariness has to be thrown out of the window just so we can meaningfully talk about computation and computers.
– just because we could force ourselves to see everything as computations doesn’t mean that we can feed the whole world with our virtual computed bread.

Today machines are better at identifying faces than humans. Does that mean that they recognize faces? Do humans “compute faces”?
My position is that recognition is a lawful process (non-random), and we should – at least in principle – find some approximate laws, rules, mechanisms, algorithms that describe how people recognize things. Then we could implement this stuff on computers and see how good our approximations are. However, just as we can’t feed the starving people with our virtual apples we can’t, without good argument, say that our virtual brains recognize for example faces, in the same sense as humans do. That is how I could possibly interpret Peter’s ideas on the subject.

Part of the answer comes from who is asking. If questions come from nowhere, they can't be answered; that seems a pretty unfair scenario, and honestly I think it's impossible for questions to come from nowhere, so it's not that relevant. So who is asking? That's part of the examination: you can't ask about the mind yet somehow stand outside the examination. To do so is to presuppose something.

You seem to be saying that the computation could, somehow, look at itself, to fix the interpretation. But this presupposes an interpretation: as a computational process that has the power of looking at itself. I could equally well produce an interpretation that assigns the computation of successive prime numbers to the black box, which doesn’t have that property. It’s a vicious homunculus regress.

You could as much produce an interpretation of a car that dives and propels itself underwater too. I don’t know why you would if the subject between us had been about cars, just cars, no particular divergence from the subject of standard cars.

And there is no presupposing anyway – if I set up one automatic door so its sensor 'looks' at another automatic door for its area of detecting movement, you know it just works at a physical level and requires no presupposing of 'looking' or 'detection' as anything beyond the physical.

If I make a circle of such automatic doors, I’m not presupposing it can ‘look at itself’ – it just can. In the purely physical sense.

This is where you apply the model of using a rough idea of self to survive to yourself as viewer as well – to show how ‘look at itself’ is a rough idea. It’s not proving anything to take my words like ‘look’ and try and treat them as if I’m using words in more than just the purely physical sense. I’m just using ‘look’ as a rough idea.

Besides, even if there were an interpretation that has the right properties, I can just as well assign that interpretation to any other given physical system, as long as its state space has the right cardinality. That is, not only would my brain give rise to my conscious experience, it would likewise give rise to yours, or to any possible conscious experience; and not only my brain would do this, but so would any tree, any geological formation, any planetary system, virtually anything in the physical world, depending on some details of how, exactly, you slice the state space.

Sure. And I'm just saying it's the physical structures, collected over billions of years of evolution, that are slicing that state space you refer to. Sure, if evolution had somehow led to rocks having behaviour as diverse as ours, you might very well be that rock over there.

That said, it's not panpsychism – the patterns of life that evolved have not changed that rock over there. You're just in between two ideas for the moment, and in freefall as a consequence: the idea shifts 'consciousness' from its old home, but without a new particular home for it, you're attributing 'consciousness' to everything as an in-between failsafe (and so reading it as panpsychism), not me. That's the freefall.

To humour the hypothesis I have, just park the idea of consciousness in a series of automatic doors, arranged by natural selection to have their sensors 'looking' at each other, getting more complex so as to exploit a complex environment, enough to eventually become a plague species on their planet.

48. Jochen says:

Callan, as I’ve said in my comment to Ihtio, I’m just about done with this topic; I don’t really think we’re going to get anywhere, sorry. But just for trying to get clear on what I’m trying to say for one last time, consider the following: Basically, the question is ‘How does consciousness arise from a physical substrate, from neurons signalling other neurons, from whatever goes on in our wetware?’. And to this, the computationalist answers: ‘By computation! All that goes on inside the wetware is analogous to the processes in the hardware of a computer, thus running an algorithm, a program, which *is* the mind.’

So let’s look at your automatic door example. What does it compute? Well, it’s not difficult to show that it could compute absolutely anything at all: the physical state of the system is given by n doors, m of which are open; thus, there are n choose m possible states for the system. Different states are connected by transition rules: the state of the door at step t+1 is determined by the state of its neighbours at step t. The starting configuration—the input, if you will—is given by a particular pattern of open and closed doors; from there, the transition rules cycle through some sequence of states, until either some end configuration is reached (corresponding to an output state) or (if the number of doors is finite) the system repeats itself.

So, which computation is this system implementing? And the answer is, simply, there's no way to tell. The reason for this is that you could set up the implementation relation in such a way as to implement any conceivable finite state automaton (with not more than n choose m states, if we want to limit ourselves to this 'natural' graining of the state space, or with any number of states at all, if we don't restrict ourselves like that). Let's consider two FSAs as an example, without loss of generality: one which has two states, s1 and s2, and the other which has three states, r1, r2, and r3. The evolution of FSA1 is simply s1->s2->s1->s2…, while the evolution of FSA2 is r1->r3->r2->r1…

The system then, initialized to configuration c1, cycles through its states, as dictated by its transition rule: c1->c2->c3->c4->…, until it either finishes at some cn (say, when all doors are shut), or cycles back to c1. Now, which of the two FSAs does it implement? And well, this depends entirely on how we choose to interpret its states. One interpretation might be: f1: c1–>s1, c2–>s2, c3–>s1, c4–>s2, etc. So, you attach a viewscreen to the set of doors that implements this mapping: what does any user see? The evolution of FSA1. But likewise, one could attach a viewscreen implementing f2: c1–>r1, c2–>r3, c3–>r2, c4–>r1, and so on. Then, the user would see the evolution of FSA2!

This is, in a nutshell, what all of our computers do: the interpretational mapping is somewhat more complicated, taking configuration of electrical signals to a grid of pixels such that one configuration of signals elicits some particular configuration of pixels lighting up in a specified way, but essentially, this description is a completely general account of how computation works: you have a physical system cycling through some states, and an interpretational mapping assigning meaning to these states.

But not so fast, you might want to say. What mapping? For whose benefit? I have a set of doors, some of which might either be open or closed; this is all I need: based on which doors are open, which are closed, the system enacts behaviours—no need for any abstract level, it’s all right here. Some environmental stimuli induce a configuration of doors, other doors open and close, the system backreacts in a way that its evolutionary history has shown is likely to be minimally harmful. There’s no need for any interpretational mapping to some FSA or whatever!

And I’d say, fine. You’re completely right on that. But look at what question brought us here: we have, in broad strokes, the outline of how human neuroanatomy produces behaviour—this is the analogue of the description of the environment opening and closing doors, and the configuration of opened and closed doors eliciting behaviours. But the question was: so, how does consciousness come into play? Because if that’s all that goes on, then there’d be no need for something of that sort at all. We’re straight on the hardware level, here: physical states eliciting physical actions. The computation metaphor was supposed to bring us away from this, to show us how all this leads, via the right software, the right program being run, to the emergence of mind. But it turns out it hasn’t brought us any closer to that goal.

And the reason is, simply, that if you want to say that *this* brain computes *this* mind, you presume that there is an objective answer to what a given physical system computes—but there simply isn’t; every answer to this question is ineluctably subjective, and thus, the account assumes the very subjectivity it set out to explain at the outset.

And I guess this is as clean as I can get it. Either you’re just talking physiological behaviours—doors looking at one another and opening themselves accordingly; organisms who manipulate environmental stimuli to produce behaviours. Or, you’re talking about a certain computation associated to a physical operation—but that you only can do from a subjective viewpoint, making the metaphor useless for the explanation of subjectivity. The physiological account fails to get to the level of abstraction at which consciousness operates, only accounting for its behavioural correlates; and the computational account fails to pick out one computation that actually is being performed from all the myriads of computations that might be performed (with equal rights), or else, assumes what it seeks to explain from the outset.

If you're interested to see this argument developed more formally, you can take a look at Chalmers' 'Does a Rock Implement Every Finite State Automaton?', found here: http://consc.net/papers/rock.html
He tries to find a counterargument (unsuccessfully, in my view—no matter what ‘state-transition conditionals’ the physical implementation fulfills, what matters is that in the end, with the right implementation relation—the right view screen—and the physical system in hand, you’ll arrive at the outcome of whatever computation you wanted to perform), but his presentation of the argument itself is the best I’ve found for free on the web.

But the question was: so, how does consciousness come into play? Because if that’s all that goes on, then there’d be no need for something of that sort at all.

Why? However you define consciousness (and there are a ton of differing definitions from people who all say they have it, but can't agree on what it is).

If it, whatever it is, helped an organism survive, why are you saying there’s no need for it at all?

Your commitment that it isn’t needed because…you say so – I can’t buy into that as a legitimate refutation.

The computation metaphor was supposed to bring us away from this

Who promised this?

to show us how all this leads, via the right software, the right program being run, to the emergence of mind. But it turns out it hasn’t brought us any closer to that goal.

Doesn't this seem unreasonable of you? Let's say we were talking about AI and the possibility of it: it's like you not accepting AI is possible at all unless we've gone and invented the darn thing and explained every logic gate in it that gives rise to the AI. To say 'I don't think AI can exist at all unless you go invent one' is a failure to speculate on the subject. It's unreasonable to enter into a speculative conversation unwilling to speculate.

Yes, every little logic gate of the human mind has not been explained here. But the underlying principle is that the processor cannot detect all parts of itself – to put it simply, computers wouldn't admit to being computers. You can look at that as if that's what the things over there, those computer things, are doing if you want. Or you can speculate and apply it to yourself: that you are a computer which is unable (from raw senses, as it has few in relation to itself) to admit it is a computer. (Note: not that I like the word 'computer' being used here – it's too simple. But it pretty much works, at least for speculation.) What would it be like if you were in that position? That your sense of consciousness came from being unable to detect that you are a computer, from lack of information. Sure, it's a speculative question, but we're not about to invent AI here or describe every logic gate in the human brain.

Think about the edge of your vision. Can you see the edge of your vision? What if I asked ‘how does the edge of my vision arise?’. When it doesn’t arise – it’s a lack of information the brain (your brain, my brain) has. It’s not arising, it’s a fall of information.

So what if consciousness doesn't so much arise, but comes from a computer with a lack of information?

The edge of your vision certainly ‘arises’ from the subjective experience. But from objective examination, it’d simply be from a fall in information. The computer can’t see where it can’t see anymore. That’s why vision suddenly just cuts off.

and the computational account fails to pick out one computation that actually is being performed from all the myriads of computations that might be performed (with equal rights), or else, assumes what it seeks to explain from the outset.

Part of why I engaged with you is that your account is so close (or at least so close to one speculative idea). Yes, nothing is picked out from anything else. Having brains doesn't make us the special creatures of the universe. There's lots of computation in the universe; we're just more of it, intermingled with the rest. You can see that your idea of 'consciousness' involves expecting us to come out special somehow, since you make that a requirement and base your refusal on it. Yet your very refusal explains things in naturalistic terms quite neatly. We aren't special. It's more matter – matter interacting or computing, whichever way you want to put it.

On the paper, he seems to be arguing with an argument from someone else that I can't relate to at all. Clearly a rock won't let me post this post to the internet, or I would have just gotten a rock rather than paying money for a computer. At least from my biased example as an organism trying to get its needs or even wants in life, a computer is clearly different from a rock. I'm sorry, putting up an argument against Putnam's case is like putting up an argument against the existence of Zeus – I'm not arguing for Zeus or for Putnam's thing.

If you’ve been putting me in the same basket as someone who says a regular rock can do the same functions as a $$$$ computer – well, that’s just a category error rather than a dismissal.

If you were an amnesiac and I tried to tell you who you were, would your lack of feeling you were that person mean I am wrong? Or does your lack of feeling come from lack of information?

In a way we're all amnesiacs, having forgotten what we are the product of.

51. Jochen says:

Sci, you’re very welcome. However, I can’t claim to find it ‘obvious’ that a computationalist paradigm is insufficient for explaining the mind—in fact, it was quite a hard won realization to me, since I believed (and still believe, in fact) that it would have offered the simplest route towards a natural explanation of the mind. But of course, nature is under no obligation to honor my preconceptions of simplicity, and by now I’m thinking that things are quite a bit more subtle than my erstwhile naive ideas.

Callan:
I think you’re misrepresenting my stance somewhat. I’m not saying ‘I can’t see how computation would give rise to mind, so I believe it can’t’. I’m also far from saying that consciousness isn’t needed, I’m saying that it plays no role in explaining behaviour from physical stimuli in the model that you provide, and that hence, the model leaves something out.

It’s true that I can’t see how mind could emerge from computation—but I can’t see how mind could emerge from anything, yet, barring some sort of irreducible mysticism, it evidently does. Also, I’m quite used to working with things I can’t imagine, since my day to day work is in quantum theory. 😉

What I’m doing is providing an argument that runs counter to the computational functionalist’s intuition, and that seems, conclusively to my mind, to render this stance indefensible. Barring a response to the argument, the burden of proof is then in the computationalist’s court, showing either that the argument is flawed, or that it doesn’t preclude computationalism after all.

Briefly, the computationalist is committed to the following position:
1) There exists a program, C, such that anything that runs this program is conscious.
2) Brains run program C.
3) Hence, by virtue of running program C, brains produce conscious experience.

Putnam’s argument undercuts this by establishing that there is no objective way in which proposition 2) is true—what program a physical system runs is not a property of this physical system. He shows this by providing an explicit mapping that allows you to ‘decode’ any program at all from any physical system. If it were thus true that program C suffices for conscious experience, we would have to either conclude that every physical object is conscious in every possible way, since it implements C in every possible way, or that the physical state of some object is not, in fact, enough to fix what computation is occurring.

You're confused regarding the suggestion that a rock could compute in any meaningful way; indeed, a rock wouldn't let you post to the internet. But neither would a computer, or a microprocessor, without the right input/output peripherals! The role of these peripherals is to translate the state of the computer's core into something human-intelligible. In fact, a computer's processor is nothing but a kind of rock, an (extraordinarily pure and carefully machined) tiny bit of silicon, whose special properties make it very easy to keep track of its states. You could substitute an ordinary rock, albeit such a computer would be orders of magnitude more complex and expensive, and infeasible with present technology, needing to continually monitor the state of the rock at a molecular level—feasible in the case where one can use autocatalytic reactions to amplify the molecular state to the macroscopic level, as in, for instance, DNA computation.

Try imagining the sequence of states the CPU traverses (together with auxiliary devices, like memory, e.g.) laid out on a long strip of paper, say in the form of signal diagrams: it'd be completely meaningless garbage to you. But in comes the peripheral machinery of your desktop device, which translates this meaningless garbage into something you can readily understand—say, patterns of lit pixels on a screen. Note that this is, in principle, no more intrinsically understandable than the original; it's just, by dint of our specific neurophysiological makeup, a code we happen to intrinsically understand (with how we do this being precisely the question that brought us all here, or at least one part of it).

So, imagine both the sequence of states of the CPU and the sequence of pixel patterns laid out next to one another: there you have, basically, a frozen picture of the steps of the computation, one after another. Some state of the computer is associated with some pixel pattern, which you can readily recognize as being meaningful. But what’s important now is to realize that there’s nothing special about this association: there are innumerably many different patterns, and different entities, that could be associated with the states of the computer, in particular those that have a one-to-one association with the sets of pixel patterns.

So, you could just as well associate a stream of pixel patterns with the color of all pixels reversed: say, if you’re watching a movie on your computer, you would then get to see the negative. This is equally well what the computer ‘computes’ as the original movie. Or, if you have the computer execute the calculation ‘3 + 5 = 8’, the screen could show, by a trivial transformation of multiplying by 2, just as well ‘6 + 10 = 16’. So, which of the two is the computer computing? Which calculation actually occurs? And of course, there is no way to pick one. And in fact, there are infinitely many more possibilities.
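The '3 + 5 = 8' versus '6 + 10 = 16' case can be made explicit. This is my own toy sketch of the point, not code from the thread: the same internal state, read through two equally valid output mappings, displays two different calculations.

```python
# What the hardware 'holds' after the run: three numbers.
internal_state = (3, 5, 8)

# Two rival readout mappings: the usual one, and a doubling one.
identity = lambda n: n
doubled  = lambda n: 2 * n

def readout(state, f):
    """Translate the internal state to the screen via mapping f."""
    a, b, c = (f(n) for n in state)
    return f"{a} + {b} = {c}"

print(readout(internal_state, identity))   # 3 + 5 = 8
print(readout(internal_state, doubled))    # 6 + 10 = 16
```

Both displayed equations are arithmetically correct, so nothing in the internal state alone privileges one readout over the other.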

Compare this to a computer allegedly instantiating a human mind, with particular qualia, etc. The mental states are, in some way, associated to the underlying physical states. Let’s again imagine both laid out side by side, on the left, a picture of the neuronal activation of the brain, and on the right, represented in whatever appropriate way, the phenomenology of the mind being ‘computed’.

Now, again, if the association between brain states and states of the mind is that between hardware and software, the mental states accompanying the neurophysiological ones are just one possibility of associating a computation with its underlying hardware. We could equally well invert some part of the phenomenological spectrum, leading to a kind of ‘inverted qualia’ phenomenology. We could imagine wilder transformations, with the resulting phenomenology unrecognizably different.

Any of these computations can be attached to the physical brain state, with none being preferred—that is, having nothing but the brain state, you could not say whether it was instantiating program C, with your phenomenology, or program C’, with the inverted qualia. And what’s more, any of these phenomenologies can be attached to any physical system, provided it traverses enough states, which can basically always be ensured by a sufficient fine-graining of the state space.

It really boils down to this question: given nothing but the states of the computer, nothing but, say, their transcript rolled out on the floor before you, what computation is performed by the computer? Which is the same question as: given a text in code, what does it mean?

If this question can be solved, then computationalism becomes viable. If it is not solvable, then computationalism is not viable, because there is no way to say ‘this brain computes program C’ in an objective way. (But of course, this question is equivalent to the cryptographic security of the one-time pad, so we know it’s unsolvable, as one-time pads can’t be cracked.)
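The one-time-pad analogy can be spelled out in a few lines (a toy sketch; the messages and keys are of course made up):

```python
# The same ciphertext 'decrypts' to entirely different plaintexts under
# different keys, so the ciphertext alone cannot fix what was said --
# just as, on this argument, the state transcript alone cannot fix
# which computation was performed.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

ciphertext = xor(b"ATTACK", bytes([7, 13, 42, 99, 5, 81]))

# For ANY candidate plaintext of the same length, there exists a key
# under which the ciphertext reads as exactly that plaintext.
key_1 = xor(ciphertext, b"ATTACK")
key_2 = xor(ciphertext, b"RETIRE")

assert xor(ciphertext, key_1) == b"ATTACK"
assert xor(ciphertext, key_2) == b"RETIRE"
```

This is exactly why the one-time pad is information-theoretically secure: without the key (the interpretation), every equal-length plaintext is an equally valid reading.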

52. Charles Wolverton says:

This allows us to conclude that if all the brain functions can be “simulated” in a computer, then whatever happens in the brain is likely to be computational.

That just repeats the logical argument that I’m questioning.

Then you go on to assume that a computational implementation is simpler than a non-computational implementation and make an Occam’s Razor argument in favor of the former. First, what’s the measure of complexity and can you justify that assumption? Second, even if the assumption were correct, it isn’t clear to me that the OR argument applies. As I understand OR, it applies to hypotheses – fewer is better. But here we’re simply comparing implementations. Computational algorithms can be tailored by humans to reduce “complexity” in some sense, but why should we assume that brains evolved with that objective? Using, for example, energy conservation as a measure, less is arguably better, but it seems that one would still need to justify an assumption that brains evolved with minimizing that measure as an objective.

I’m not arguing either way, just questioning that the answer is as obvious as you seem to be (or after all the intervening anti-computational comments, perhaps had been) assuming.

Thanks for the link! It is extremely weird that I haven’t known about this argument before. Hmmm…

Jochen & Callan,
It seems that computationalism could be valuable in studying behavior. Having “computations” linked with effectors (e.g. limbs) we can easily imagine that such an approach could yield some simple solutions to some problems.
Of course, there is no way that this same approach could be applied to consciousness.

Daniel Dennett seems to be saying that once we solve the most relevant “easy problems” we will see that the “hard problem” or the “hard question” simply dissolves.
Similarly we could ask “what is life?” or “what makes matter alive?”. Of course we now know this, but only because many simpler questions were answered. Maybe the problem of consciousness will face the same fate.

Alternatively, we could entertain the possibility that the metaphysics that we (sometimes even unknowingly) use to understand the world may need to be replaced. See for example Process Philosophy and Alfred North Whitehead in particular.

Back to topic:
I really can’t see how one could apply the idea of computation to Peter’s notions of recognition and pointing (and how can we understand abstract concepts with pointing?) as they are used in explaining consciousness.

I’m also far from saying that consciousness isn’t needed; I’m saying that it plays no role in explaining behaviour from physical stimuli in the model that you provide

How do you know it plays no role? You seem pretty adamant on that and I don’t know why. What’s your metric for ‘consciousness was included in that description’? Obviously just saying ‘you didn’t explain it’ over and over isn’t by itself a credible refutation.

You could substitute an ordinary rock, albeit such a computer would be orders of magnitude more complex and expensive, and infeasible with present technology, needing to continually monitor the state of the rock at a molecular level—feasible in the case where one can use autocatalytic reactions to amplify the molecular state to the macroscopic level, as in, for instance, DNA computation.

I’d say no, you can’t substitute a rock. At best your ‘monitoring’ systems are essentially wired to do all the things the computer from before did, but the person who made them pretended to themselves it all comes from the rock while making it. Rocks all the way down, for that maker.

No, you can’t substitute a regular rock to do a semiconductor’s job. That’s just physics. It’s really nothing to do with that Putnam argument.

I think what you want in your refusal is a series of dominoes (ones that can be mechanically reset by the fall of other dominoes that can also be mechanically reset), and to question whether that could have ‘consciousness’ running on it. This static rock stuff is misplaced – you want the examples to involve physical interaction if you want to aim at my argument. But this is a lot like my automatic doors example, anyway.

This ‘and a monitoring system reads the rock’ move just handballs the physical interaction to the monitoring system without acknowledging it. That doesn’t mean the rock does anything; it just means you’ve shifted the physical interactions to something you’re not talking about.

Note that this is, in principle, no more intrinsically understandable than the original; it’s just, by dint of our specific neurophysiological makeup, a code we happen to intrinsically understand

I don’t think I intrinsically understand, per se. I misread things all the time.

Going back to the model I describe: just as you can’t see where your vision runs out, it would often fail to see where it does not process something properly. But lacking any capacity to detect its failure to process, it would treat it as an intrinsic understanding. Unless it engaged doubt instead (however you implement doubt as an algorithm).

At the very least, I think ‘intrinsic understanding’ is part of the illusion. I’m sure there are many cognitive science studies showing various failures to understand texts out there. Enough for me to continue to feel doubt about intrinsic understanding.

If you feel no doubt about it, then at least an accomplishment of our dialogue is to have discussed that commitment and my own differing commitment to various amounts of doubt on the matter.

Or, if you have the computer execute the calculation ‘3 + 5 = 8’, the screen could show, by a trivial transformation of multiplying by 2, just as well ‘6 + 10 = 16’. So, which of the two is the computer computing? Which calculation actually occurs? And of course, there is no way to pick one.

Why is there no way to determine that?

Do you just mean with regular human senses? I could agree with that being largely true – but you make my point for me – your own human senses are not good enough to detect what you are. Thus you, along with the rest of the species, make up some kind of self-model (because it was needed to survive)… and regardless of how inaccurate the model, if it helped you survive, that model would be reinforced.

Consider the cognitive science experiment where people were told to hold a joystick under a glass panel and manipulate it – but actually they held one just a little lower, and the visible hand was a fake one mimicking the real hand; eventually the scientists had it move counter to the user’s arm. Subjects literally reported that their hand had become ‘possessed’.

That’s how easily fooled you can be about yourself. Another example is a man whose corpus callosum (the connection between the brain hemispheres) had been cut as treatment for epilepsy. A scientist would hold up a card saying ‘please get up and walk over here’ that only one eye could see. Then he’d be asked why he got up – he’d reply ‘I wanted to get a coke’. One part making up rationalisations for what the other part did.

Can you see the beginning of a trail of rationalisations being treated as self-understanding?

You ask me to humour doubt about which calculation the computer is running – I’m equally just asking you to humour doubt about what ‘consciousness’ is doing.

the mental states accompanying the neurophysiological ones are just one possibility of associating a computation to its underlying hardware. We could equally as well invert some part of the phenomenological spectrum, leading to kind of ‘inverted qualia’ phenomenology.

Why would you do that? How is messing with your interpretation proof of something?

And what’s more, any of these phenomenologies can be attached to any physical system, provided it traverses enough states, which can basically always be ensured by a sufficient fine-graining of the state space.

As I said above, you’re just brushing the processor that does the actual physical interaction to the side, by calling it a ‘monitor’
You “See, even this rock ‘processes’!”
Me “Say, isn’t the way you measure how it ‘processes’ by that big device over there?”
You “Yes”
Me “And doesn’t it seem to have all the components of a computer?”
You “Yes”
Me “And the computer really seems to be making up results from a static entity – the computer’s doing all the active physical interactions.”
You “I guess…”

I’m sorry, it just seems like sleight of hand – a kind of Mechanical Turk, except making it look like a rock is playing chess when really it’s a computer playing, tucked underneath a blanket with ‘rock monitor’ draped over it.

It really boils down to this question: given nothing but the states of the computer, nothing but, say, their transcript rolled out on the floor before you, what computation is performed by the computer? Which is the same question as: given a text in code, what does it mean?

Consider if these computers (with manipulator appendages) and humans were on an island. They all have to survive in the wild. As it turns out, over generations, the computers survive quite well and the humans die out.

Who would ‘fix’ the interpretation of computation then?

Would it matter that, by your measure, no one is there to fix it? Yet the computers and their physical interactions keep happening and in as much, keep surviving. All without the need of a fix from their old dealers.

What if… our ‘fixing’ of interpretation is really just our guess and estimate of something, done as a means of attempting to continue to survive?

What if the humans on the island would have survived if they could have just admitted that, instead of kept on insisting they ‘fixed’ the interpretation?

Oh, same for text in code – if it were attached to a machine that needed to survive in the wild, it’d essentially be a computer anyway. And if the code was one that survived and bred, it’d continue. If not, the code would vanish (to dust).

It’s part of the way we’re built to think our interpretation is somehow significant (after all, we’re so close to our interpretation that you’re bound to suffer Stockholm syndrome to some degree). But if something can keep merrily living on after your species has died, how significant was your interpretation and its ‘fixing’, after all?

56. Jochen says:

Callan:
“How do you know it plays no role? You seem pretty adamant on that and I don’t know why. What’s your metric for ‘consciousness was included in that description’?”

Quite simply that I need to mention the conscious state of the organism in order to explain its behaviour. Otherwise, you just get a chain of cause and effect that need not be accompanied by any subjective experience at all—like what happens in your automatic door model. In other words, you get something that might just as well be a zombie, and act indistinguishably. Hence, consciousness plays no role.

“I’d say no, you can’t substitute a rock. At best your ‘monitoring’ systems are essentially wired to do all the things the computer from before did, but the person who made them pretended to themselves it all comes from the rock while making it.”

But if you took away the rock, then the computer would cease to function—it is solely the evolution of the rock through its physical states—say, by weathering, chemical erosion, whatever (I never intended for the rock to be static; I was always very clear about the physical system undergoing some evolution!)—that powers the computation. Without the rock, the computer is just a dead, inert machine.

And yes, you are right that there are a lot of complex operations being performed by the interpretative machinery; Scott Aaronson has even tried to mount a defense of computationalism using computational complexity: basically, if the complexity of reducing the problem you want to compute to the evolution of the physical system is equivalent to that of computing the problem ‘directly’, then the system does not do any relevant computational work. And I’m ready to concede as much, thus bringing down the number of potential interpretations. However, this does not suffice to fix the interpretation: there are still innumerably many trivial maps that change one computation into another (take just the ‘inverted qualia’ example).

“At the very least, I think ‘intrinsic understanding’ is part of the illusion.”
That may very well be the case, but like all ‘illusionist’ accounts of conscious experience, it merely shifts the problem around: from explaining why I have the conscious experience I seem to have, to explaining why I only think that I do. Take the dictum at the top of this page: “If the conscious self is an illusion—who is it that’s being fooled?”

The experiential fact is that I take myself to have exactly *this* experience, not *that* experience (and most of all, not no experience at all). I may be deceived—though I am not sure what deception means in this context: what’s the difference between having a headache, and merely believing yourself to have a headache?—but still, the explanandum remains that of me taking myself to have *this* rather than *that* experience.

Computationalism tells me that the explanation is that my brain runs computation C. But there is simply no fact of the matter regarding what computation my brain runs; therefore, in particular, it can’t be said to be true that my brain runs computation C.

“Why is there no way to determine that?

Do you just mean with regular human senses?”

No, I mean in principle: the computer, physically, is just shuffling signals around according to certain rules, or, more abstractly, traverses a sequence of internal states. That certain of these signals correspond to, say, numerals is pure, arbitrary convention. From simply knowing the signals, without knowing the convention, it is logically impossible to deduce what we decreed them to mean; and without such a decree of meaning, they simply mean nothing.
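This ‘meaning by convention’ point has a mundane everyday analogue in how computers store data (a standard illustration, not specific to this discussion): the very same four bytes decode to wholly different values depending on which type convention we bring to them.

```python
import struct

raw = b"\x00\x00\x80?"  # four bytes, as they might sit in memory

# Read under the IEEE-754 float convention (little-endian).
as_float = struct.unpack("<f", raw)[0]

# Read under the 32-bit integer convention (little-endian).
as_int = struct.unpack("<i", raw)[0]

# The bytes themselves decree neither reading; the convention does.
assert as_float == 1.0
assert as_int == 1065353216
```

Nothing in the memory cells marks them as 'a float' or 'an integer'; the decree of meaning lives entirely in the reading convention.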

Again, my old, still unmet, challenge: I give you a box, promising you that inside, there is a computer running a program. How do you find out which one?

“Why would you do that? How is messing with your interpretation proof of something?”

Simply because it shows that there is no implication from the state of my brain, or the sequence of states it traverses, to the computation it performs.

Maybe this is easier for you to understand if you take the other direction. Say you are running some end-user software on a computer, for instance, a word processor. Let’s say it is written for operating system X, running on processor architecture Y. Do you, therefore, know that you are using a system running operating system X on processor architecture Y?

No, of course not. You might well be sitting at a machine using processor architecture Z (say, a RISC instead of an x86 processor), with an appropriately modified operating system X running on it. Or your machine might be running a wholly different operating system W, with X running in a virtual machine (say, a Windows machine running an OS X emulation, or a Linux box running Wine). That is: from within the program, there is no way of getting at the underlying hardware. This is a simple consequence of computational universality, the fact that any universal Turing machine can emulate any other TM.
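The point that a program cannot see its own substrate can be caricatured in a few lines (a toy sketch; `emulate` stands in, very loosely, for a virtual machine layer):

```python
# A computation run 'directly' and the same computation run through an
# extra interpretive layer produce identical outputs, so nothing in the
# outputs reveals which substrate did the work.

def program(x: int) -> int:
    # the 'software': some fixed computation
    return x * x + 1

def emulate(fn, x: int) -> int:
    # a crude stand-in for an emulation layer: the same behaviour,
    # reproduced through an additional level of indirection
    value = None
    for op, arg in [("load", x), ("apply", fn)]:
        if op == "load":
            value = arg
        else:
            value = arg(value)
    return value

# From the program's point of view, the two 'substrates' are
# indistinguishable.
assert all(program(x) == emulate(program, x) for x in range(10))
```

A real emulator is enormously more elaborate, but the moral is the same: identical input-output behaviour, arbitrarily different underlying machinery.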

So: simply because you’re running program XY does not mean that the underlying hardware architecture is of some particular kind. Knowing the computation does not entail knowing the hardware it runs on. Do you follow me this far?

But the same thing holds the other way around. Just because you know the hardware, doesn’t mean you know what software it runs. But then, you cannot claim that a given brain runs program C, and thus, has (or takes itself to have) *this* particular phenomenal experience, since you could, with equal justification, claim that it runs program C*, and has *that* particular conscious experience. Thus, the idea that a brain runs a program takes us no step closer to an explanation of how it gives rise to conscious experience.

“Consider if these computers (with manipulator appendages) and humans were on an island. They all have to survive in the wild. As it turns out, over generations, the computers survive quite well and the humans die out.

Who would ‘fix’ the interpretation of computation then?”

Well, nobody—and no one ever has. But of course, there’s also no reason to think that those computers are conscious… This is just what I’m saying: all of their behaviour can be explained simply by chains of cause and effect, with no need for any experience creeping up alongside. They could just as well be zombies.

@All, including but not limited to Charles #52
I’ve been silent for three reasons:
1. I have too much to say, which makes it difficult to produce an ordered response and feels dangerously close to hijacking CE for my own purposes.
2. Your discussion is full of food for thought, feeding into 1 and ensuring that whatever response I had in mind was always a tad outdated.
3. Observing the discussion without interfering was very interesting, but you’ve also done most of the work for me, at least this is what it looks like now, with what Jochen #56 is saying.

Result: I am still taking my time to think, digest, and synthesise your arguments and my reactions. This discussion is not over, but before resuming it (and after having cleared my mind) I’ll check with Peter. He might be glad to host the next round on CE, or might not, and I wouldn’t dare post a 3000-word essay (or more??) as a comment here without checking with him first.

I can only confirm that this thread has been one of the best intellectual experiences I’ve had in recent times. I think everyone is contributing something useful and hope I’m not the only exception!

The Z word. But…isn’t it a bit like being Luke when he finds out Vader is his father? Just a rejection of cold fact?

If consciousness turns out to be no more than a chain of cause and effect that each cause-and-effect unit could not see as such, isn’t that explaining consciousness? I mean, I’m detailing a hypothetical here, but what’s wrong with a hypothetical where consciousness gets reduced to a cause-and-effect series – each cause-and-effect unit certainly gives out signals that report its own pinhole sensory feedback on the world. And that’s as close as you get to subjectivity.

I really don’t like the Z word. But, how is ‘that can’t be right because it’s a zombie’ any kind of valid refutation? If it’s a zombie, then it is.

Remember, if you insisted I prove dragons were in the hills, but let’s say dragons don’t exist – I’d have to ignore your proving question and actually explain away dragons. Or explain them away to some genuinely existing counterpart, like komodo dragons or something.

Surely, at least hypothetically, ‘consciousness’ can fall under that as well – a question of proving something that doesn’t exist? Maybe that’s why it’s so hard to explain or prove, just like dragons are.

But if you took away the rock, then the computer would cease to function—it is solely the evolution of the rock through its physical states—say, by weathering, chemical erosion, whatever (I never intended for the rock to be static; I was always very clear about the physical system undergoing some evolution!)—that powers the computation. Without the rock, the computer is just a dead, inert machine.

That really doesn’t work. If I hit ‘submit comment’ now on that computer with a rock heart, then you would have to have the rock just coincidentally decaying at exactly the same time for the computer to read that change as when it should send the comment. That isn’t acting as my computer does, which sends when I want to send, not when a rock wants to send!

Or are you trying to rig the rock as the user of the computer, rather than me? That’d make more sense to me.

That may very well be the case, but like all ‘illusionist’ accounts of conscious experience merely shifts the problem around: from explaining why I have the conscious experience I seem to have, to explaining why I only think that I do.

I did (barring giving a full blueprint layout of your mind) – I described a system that computes (or we don’t even need ‘computes’, just matter that interacts) that doesn’t have enough information about itself to form much of an accurate model of itself, but for survival reasons needs to form some kind of model.

But if you’re a system that only gets X amount of information about yourself, you need more information to understand any illusion about yourself, like the coin. Surely you agree that without more information, then he did pull a coin from your ear? That you’re stuck with that conclusion – it’s ‘fixed’.

Can you see that when you are perpetually short of information, an illusion simply becomes ‘how things are’?

It depends on whether you will accept information from another angle – the one that can see the coin concealed in the magician’s hand.

But when I show you the linked automatic doors, you don’t relate to it – it lacks ‘consciousness’. You don’t trust it precisely because it lacks a coin pulled from an ear and instead has just a bunch of hidden coins.

You either trust a view from a perspective not your own (one that can see the concealed coin) or you don’t trust, but are perpetually lacking certain information.

Yes, you’re having *this* rather than *that* experience. And you’re taking it that all you see is all there is – that you have the full set of information. Which is why the magician’s trade works.

Until you trust *that* experience as your second position for triangulating further conclusions, your refutation is based essentially on the fact that you won’t trust. You could never prove the sleight of hand of the ‘coin pull’ to someone who won’t trust your own different, concealed-coin-holding perspective. Does the fact that they won’t trust you make them right that it was pulled from their ear?

Take the dictum at the top of this page: “If the conscious self is an illusion—who is it that’s being fooled?”

If you’re going to use the Z word, then maybe the question constrains the answer (much like ‘have you stopped beating your wife?’ constrains the answer)? Here you constrain the answer to a ‘who’. Why not ask ‘Who or what is being fooled?’ Not that I’m an advocate of the Z word.

Again, my old, still unmet, challenge: I give you a box, promising you that inside, there is a computer running a program. How do you find out which one?

Again you simply invoke a sense of privilege as your proof. If the computer has articulators and it survives in the wild, then it survives.

It doesn’t need my special meaning-blessing either to survive or to die. Why do you insist ‘it lacks meaning’ has meaning itself, when the computers live on and your own species goes extinct?

Tell me, what is the ‘challenge’ in acting as if ‘I am needed to bless things with meaning’? What is the hard thing about your ‘challenge’?

Simply because it shows that there is no implication from the state of my brain, or the sequence of states it traverses, to the computation it performs.

Quoting you again: “the mental states accompanying the neurophysiological ones are just one possibility of associating a computation with its underlying hardware. We could equally well invert some part of the phenomenological spectrum, leading to a kind of ‘inverted qualia’ phenomenology.”

I’m not seeing how it shows that – inverting anything would be messing with the sample. That doesn’t show anything in itself.

Knowing the computation does not entail knowing the hardware it runs on. Do you follow me this far?

No. Building a deliberate question mark space into the example isn’t convincing.

You’re trying to build a distinction on what is just colloquialism. There is no such thing as ‘software’. There is only hardware, and more hardware interacting with the former. The idea of ‘software’ is a simplification for us, which really isn’t valid. It’d be like a physicist saying there is matter and there is soft matter – ‘software’ is a simplification used in casual talk, not a principle to rest an argument on.

There’s no question mark over what ‘software’ is being run here, except for the question mark you’re forcing in simply because you’re forcing it in. If we can’t know in the example, it’s because you’re not letting us. All that amounts to is: ‘Thus, since I’m leaving a big question mark area in it, the idea that a brain runs a program takes us no step closer to an explanation of how it gives rise to conscious experience.’

Me: “Who would ‘fix’ the interpretation of computation then?”

You: “Well, nobody—and no one ever has.”

I feel a certain crash in enthusiasm at this point – you asked before, “But then, in what sense could our brains be said to ‘compute’ our minds? What fixes the interpretation?”

Were you genuinely asking what fixes the interpretation, i.e. had you committed to the idea that it could be done (or were at least humouring that it could be done)?

It feels like you had a commitment, then when a problem was shown with it, you’ve acted as if you were never committed to it.

Someone committed to the idea that an interpretation can be ‘fixed’ might, with the computer-survival/human-extinction example, see some reason to question their commitments.

59. Jochen says:

Callan:

The Z word. But…isn’t it a bit like being Luke when he finds out Vader is his father? Just a rejection of cold fact?

What is the supposed ‘cold fact’, here? That consciousness is implied by the state of the physical substrate? By its functional characteristics? Each of these is hugely controversial. So it’s just the opposite: the acknowledgement that, so far, we are not in possession of an explanation for conscious experience. Just dogmatically holding to the position that some variant of functionalism/physicalism/computationalism must hold true gives the game away.

I really don’t like the Z word. But, how is ‘that can’t be right because it’s a zombie’ any kind of valid refutation? If it’s a zombie, then it is.

Because a zombie is, ex hypothesi, different from a conscious being in that it lacks conscious experience. So if your theory is compatible with there being zombies, then you haven’t explained consciousness, quite simply: additional facts need to be fixed in order to introduce conscious experience, and your theory is mute on those facts.

Now, some people hold that ultimately, we’re all zombies. I think that’s a valid opinion. But still, what even those people need to explain is why we take ourselves to be conscious—why we believe there is meaningful subjective experience when, in fact, there isn’t. And the trouble is, this is just the original question with a slightly changed slant, and not a priori any easier to solve. Again, what’s the difference between having a headache and being merely deceived into having a headache? You can’t explain that by merely saying that the subjective feeling of pain in your head is an illusion, because this does no work at all towards explaining why there is a subjective feeling of pain in your head.

That really doesn’t work. If I hit ‘submit comment’ now on that computer with a rock heart, then you would have to have the rock just coincidentally decaying at exactly the same time for the computer to read that change as when it should send the comment.

The input hardware sets up a particular state of the rock, and the output hardware translates the ensuing evolution into something human-intelligible. This is exactly what happens on an ordinary computer.

Really, the argument by Putnam is, mathematically speaking, trivial: for any two sets of equal cardinality, there exists a bijection between them; thus, you can always interpret the evolution of a system with enough states as a computation with the same number of states. Putnam originally confined himself to inputless finite state automata, which are capable of implementing any computation that our PCs can implement, but this is mostly a cosmetic restriction.
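In toy form, the mathematical triviality is easy to display (the ‘rock states’ below are invented labels, standing in for any sufficiently fine-grained physical history):

```python
# Any system traversing n distinct states can be put in one-to-one
# correspondence with the n states of some chosen computation; under
# that mapping, the system 'implements' the computation.

rock_states = ["r17", "r4", "r99", "r23"]  # arbitrary physical history
counter_up = [0, 1, 2, 3]                  # one candidate computation

# The bijection exists simply because the sets have equal cardinality.
interpretation = dict(zip(rock_states, counter_up))
assert [interpretation[s] for s in rock_states] == [0, 1, 2, 3]

# A different bijection makes the same rock 'implement' a different
# computation -- counting down instead of up.
interpretation_2 = dict(zip(rock_states, reversed(counter_up)))
assert [interpretation_2[s] for s in rock_states] == [3, 2, 1, 0]
```

The mapping does no physical work at all; it is pure bookkeeping, which is exactly why it can always be constructed.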

Or consider John Searle’s formulation (The Rediscovery of the Mind, p.208):

For any program and for any sufficiently complex object, there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain.

As far as simple computation goes, this is not a controversial view; in fact, it lies at the bottom of the strength of computation: that it can be implemented on diverse fundamental substrates, and is thus substrate-independent.

I did (barring giving a full blueprint layout of your mind) – I described a system that computes (or we don’t even need ‘computes’, just matter that interacts) that doesn’t have enough information about itself to form much of an accurate model of itself, but for survival reasons needs to form some kind of model.

But merely forming a model, a self-representation, is of course not enough for consciousness. Such a theory is a straightforward example of the homunculus fallacy.

You don’t trust it precisely because it lacks a coin pulled from an ear and instead just a bunch of hidden coins.

No. I don’t like your example because it fails to account for the coin! Certainly, if I have no further knowledge about how the magician accomplished his trick, I can’t disprove the possibility that he used magic (though I would be sceptical about it). But whatever explanation is offered—magic or sleight of hand—certainly needs to account for the fact that at some point, there is a coin. This is what your explanation fails to provide.

When I ask, how come a coin popped up suddenly?, you answer, well, it’s just an illusion. So I ask, OK, how does the illusion work?, you come back again with, but I’ve told you, it’s only an illusion. And so on. This explains nothing: it merely endlessly defers explanation.

This is why your automated doors, or surviving robots, explain nothing: all of their features can be accounted for without ever making reference to conscious experience. So, in your purported explanation of the trick, you simply leave out what the trick achieves!

The question I want to know the answer to is, basically, how the coin comes about. Of course, I expect it to be a trick—for some definition of ‘trick’—but I want to know how it works. But you’re not talking about the coin. Rather, you give me an account of something that involves no coin at all, whether brought about by magic, or by legerdemain. No word about coins need be lost when talking about your system of doors, etc.

What I want to know is, is the system of doors conscious? Does it take itself to have certain experiences? And if so, how?

The most you’ve provided is that it is deceived about its own workings by not having full access to its underlying state. But why would a lack of information conjure up the positive illusion of possessing something, namely experience, that is, in fact, not present? What information about myself am I lacking when I hallucinate a headache? Indeed, how can I hallucinate anything, if I, i.e. my conscious self, the thing that does the hallucinating, am myself an illusion? Again, who (or what!) is being fooled?

Lots of things lack information about how they work. A rock, for instance, knows nothing, and hence, nothing about how it works. Neither do plants. An insectile neural net is not sophisticated enough to represent its full functioning to itself.

And of course, there is the larger question: who (or what!) lacks that information? If you’re saying that I lack information about myself, and thus, hallucinate some model of myself that is incomplete, you must first posit that there is some I that can do these things in the first place; but how that comes about is exactly the question. If you’re saying that it’s simply my brain that lacks this information—then again, how does that lead to (the illusion of) conscious experience? A computer can lack information about itself in a perfectly unconscious way.

Again you simply invoke a sense of privilege as your proof. If the computer has articulators and it survives in the wild, then it survives.

No doubt about that, but it’s not the question: the question is, does the computer have conscious experience alongside with it? Plants survive; are they conscious? What about insects, octopuses, and so on? The mere fact of survival, or of any kind of behaviour, does not imply any conscious experience. There’s a reason behaviourism is dead!

I’m not seeing how it shows that – inverting anything would be messing with the sample. That doesn’t show anything in itself.

I’m not changing anything about the physical substratum; I’m merely changing the (arbitrary) mapping between physical and computational states, which, if computationalism is right, leads to a phenomenological change. But then, the physical facts don’t entail the phenomenological facts.

There’s no question mark on what ‘software’ is being run here, except for the question mark you’re forcing in simply because you’re forcing it in.

Well, then I would urge you to consider the concept of computational universality more deeply. Think about it: how could you tell, for instance, if you are ‘running’ on a brain, or rather, on a dusty computer in some old university basement as part of some forgotten simulation? And the answer is, you can’t—because of the substrate independence of computation, you can never conclude from the fact that some computation is executed to the underlying physical realization of that computation; and likewise, you can’t conclude from the physical substrate which computation is being implemented.

It feels like you had a commitment, then, when a problem was shown with it, you acted as if you were never committed to it.

Someone committed to the idea that an interpretation can be ‘fixed’ might see in the computer-survival/human-extinction example some reason to question their commitments.

What problem do you think you have shown, and where exactly do you think I’ve reneged on my account? Seriously, it feels like you are either crassly misunderstanding my position, or deliberately misrepresenting it.

My account of computation (which is pretty much the standard account) is that a computation is implemented on a physical system via a mapping between states of the computation and states of the system. Roughly speaking, the computation is a sequence of symbols, and the physical system is chosen to be the concrete representation of these symbols—thus, each state of the system is mapped to some (set of) symbols. If you change this mapping, you change what computation is being implemented. If you see some pictures on your viewscreen, and crack open your computer, you won’t find those pictures in there, but just some incomprehensible pattern of voltage levels; the pictures are generated from this via the mapping.

But, from merely knowing the physical system, or the computation that is being implemented, you have not enough information to reconstruct that mapping; that is, the physical facts do not fix the computational facts, and vice versa (otherwise, you could, for instance, find out that you’re just a simulation). Thus, the mapping must be fixed additionally; only afterwards can you say ‘this system is executing this computation’. In particular, the mapping can’t be fixed by the system itself, because in order to do so, it would already have to carry out a computation that fixes this mapping (if computationalism is true and this computation could give rise to a mind capable of interpreting something).
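(As an aside, the mapping argument above can be made concrete with a toy sketch. The state names and mappings below are invented purely for illustration: the very same physical evolution, read through two different mappings, implements two different computations, so the physical trace alone does not fix which one is being run.)

```python
# Toy illustration (invented states and mappings): one physical trace,
# two interpretations, two different computations.

physical_trace = ["s0", "s1", "s2", "s3"]  # the system's successive states

# Interpretation A: read the states as a plain binary counter
map_a = {"s0": "00", "s1": "01", "s2": "10", "s3": "11"}

# Interpretation B: read the very same states as a Gray-code counter
map_b = {"s0": "00", "s1": "01", "s2": "11", "s3": "10"}

def interpret(trace, mapping):
    """Translate a physical trace into a computational trace via a mapping."""
    return [mapping[state] for state in trace]

comp_a = interpret(physical_trace, map_a)
comp_b = interpret(physical_trace, map_b)

print(comp_a)  # ['00', '01', '10', '11']
print(comp_b)  # ['00', '01', '11', '10']
```

Nothing about the trace itself privileges one mapping over the other; that choice is supplied from outside.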

So, your robots in the wild—which I agree can do anything you claim they can do—can be described in a way that wholly brackets out conscious experience. But this very fact implies that there is no explanation of this experience to be found in this hypothetical: because any explanation would entail some story about how this experience arises from (presumably) non-experiential constituents.

I have been thinking about the counter-argument against computationalism that you provided here, namely that we can easily (re)interpret the states and behaviour (transitions through states) of virtually any physical system as the computation of an arbitrary function. This argument states that to “change” the computation, all we need to do is change the interpretation (the meaning) of the states and the evolution of the system.
Now I see that it may have something to do with a “simple” function, some finite state automata, or some other computational devices. However, the most important thing is omitted entirely: namely, that computationalism states that the mind/brain is a computer, in the sense of a general computational device, Turing complete and (re)programmable.
You cannot reprogram a rock, or most physical systems. That is what distinguishes them from computers. Computers are not simply systems that compute, but systems that can compute anything that is computable (they are equivalent to Turing machines, the lambda calculus, etc.), provided the program.
We cannot provide a program to a rock. Therefore reinterpreting its states is not enough to say that a rock is a computer.

Still, I don’t buy the gospel that the conscious mind is computational (in the sense that computations explain all relevant and interesting aspects of the mind).

61. Jochen says:

Ihtio, a ‘program’ just sets up a specific initial state of the physical system you’re using to implement the computation. You can either alter the initial state of the system appropriately, which you can do with a rock, just as well; or, you can simply declare, in your interpretation, the state the system is to be the initial state, i.e. to be the state the computation starts with.

Basically, a single computation—one instance of a program—is (if it is finite, which our brains certainly are) equivalent to an FSA; everything our brains could conceivably do, computation-wise, can be captured by an FSA. Programmability simply means being able to set up a given, arbitrary FSA; you can do that by keeping the interpretation fixed and choosing an appropriate initial state, but you can just as well change the interpretation. Remember, the point is that a physical system can be considered to implement any FSA—but this is programmability in a nutshell.
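(A minimal sketch of the ‘program as initial state’ point, using an invented two-state machine: the transition and output rules, the machine’s ‘physics’, never change, and the choice of start state alone determines which function gets computed.)

```python
def run_fsa(start, inputs):
    """A fixed finite-state machine: its rules never change.
    State 'copy' echoes each input bit; state 'flip' negates it.
    Both states are absorbing, so the start state alone fixes
    which function the machine computes."""
    output_table = {"copy": lambda b: b, "flip": lambda b: 1 - b}
    state = start
    out = []
    for b in inputs:
        out.append(output_table[state](b))
        # trivial dynamics: the state never changes
    return out

# Same machine, two 'programs' selected purely by initial state:
print(run_fsa("copy", [0, 1, 1]))  # [0, 1, 1]  (computes the identity)
print(run_fsa("flip", [0, 1, 1]))  # [1, 0, 0]  (computes negation)
```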

Jochen, exactly: any physical system can be considered an implementation of an arbitrary FSA (or a computation), but only by changing the interpretation.
A computer can run any FSA (or computation) without changing the interpretation. That is what makes a computer a computer.
Programmability means that we can set up various computations (or FSAs) in a system without changing the interpretation, but by an arrangement of agreed-upon items (e.g. logic gates, instructions).
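(Ihtio’s contrast could be sketched like this, with an invented instruction set: the semantics of the opcodes, the ‘agreed-upon interpretation’, stay fixed throughout, and only the arrangement of instructions, i.e. the program, changes.)

```python
def run(program, x):
    """Execute a list of (opcode, argument) pairs on a single register.
    The opcode semantics below are the fixed, shared interpretation;
    programs are just different arrangements of these agreed-upon items."""
    semantics = {
        "ADD": lambda r, n: r + n,
        "MUL": lambda r, n: r * n,
    }
    for op, arg in program:
        x = semantics[op](x, arg)
    return x

# Two different programs, one unchanging interpretation:
double_then_inc = [("MUL", 2), ("ADD", 1)]
triple          = [("MUL", 3), ("ADD", 0)]

print(run(double_then_inc, 5))  # 11
print(run(triple, 5))           # 15
```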

That is why we can try to imagine that a rock computes something, but it is not possible to say that a rock is a computer.
It is just a matter of definition: a computer is a general-purpose calculation (arithmetic and logic) device that we can program. That is the lay as well as the professional understanding of the term “computer”.

Back to topic: I don’t know if we can consider our brains computers (they may be too noisy, unprogrammable, nondeterministic), but I think the relevance of this question is overestimated in the context of understanding consciousness.
Even if we assume that brains are some type of computers, what does it tell us about consciousness? That it is rule governed? Well, we already know that – consciousness is not random.

63. Jochen says:

Ihtio, so, would you say that something like measurement-based quantum computing, in which the ‘computer’—the quantum state—is destroyed, is thus not computing at all, just because it’s not reusable? I’d find that too restrictive, but tastes may differ. Ultimately, I don’t see any relevant difference between changing the (state of the) physical substrate, and changing the interpretation (and, as noted, you could even with a rock keep the interpretation, and change the state of the rock suitably, so it’s certainly not an in-principle difference).

Regardless, it’s of no importance to the argument I’m making: computationalists claim that consciousness corresponds to a computation, and as you acknowledge, we can interpret any rock as computing anything, whether or not you want to call it a ‘computer’. But then, if computationalism were true, any rock, any physical system at all, would be conscious in every way possible, including the way you are conscious right now; thus, you’d for example be much more likely to in fact be a kitchen table, or a white dwarf, or a weakly bound cloud of interstellar hydrogen, than the entity you take yourself to be, and any veridicality of your experience would at best be accidental.

I won’t pretend to understand “measurement-based quantum computing, in which the ‘computer’—the quantum state—is destroyed”, so please let me skip this part.
However the term “computer” used to mean a device that can run any algorithm/program, with the limitations being processing power, memory, the availability of data, and the like. You may find the term restrictive, but that is how computer scientists use it. Restrictive definitions are good definitions :). That’s why a calculator is not a computer: it cannot, for example, sort an array of numbers. A rock is also a limited tool, as it cannot do what Turing machines can do.

The difference between changing the program (reorganizing the structure of the program, changing instructions) and changing the interpretation is fundamental. Only by having agreed-upon conventions (what we will call 1s and 0s, etc.) can we reprogram computers and use them.
The same thing applies to mathematics, logic, biology, all of science, natural languages, etc. We first define axioms and rules for deriving formulas; then we can write proofs in mathematics and logic. First we define what we mean by “force”, “energy”, “angular momentum”, and only then can we meaningfully talk about a spinning gyroscope.

We can of course interpret a rock as doing anything we want, but it doesn’t mean the rock actually does it. We can interpret the rock as consciously thinking, but our interpretations don’t constrain the rock per se. I’m stating the obvious only to shed light on the following: if I understand the computationalists’ view correctly, then interpretation and meaning are irrelevant. There is no meaning in the brain, because there doesn’t need to be. There are only mindless, meaningless events happening (that we call, for example, spikes, action potentials). So, according to this line of thinking, we don’t need to know “what” the brain computes: there are only actions that happen according to rules of computation (e.g. two neurons fire, the next one stops firing; or three neurons fire, but if the fourth one is also active, then only the first two fire and the third one becomes silent).
Therefore, meaning and intentionality are mistakes as great as élan vital.

65. Jochen says:

Ihtio:
The thing about MBQC (also called ‘one-shot quantum computation’) is that it’s a computation being performed in just the way you seem to claim is, for some reason, not ‘computation’ at all: you set up a system—a quantum state entangled in a specific way—then you perform a sequence of measurements on it (which destroys the entanglement), which provides you with the outcome of the computation. It’s not reusable or reprogrammable, other than in the sense that you could set up a new quantum state, but it still provides you with the answer to computable problems (and it’s known to be universal).

You could do just the same thing with rocks, even though it would be enormously more complex: prepare a rock in a suitable initial state, then let it evolve, monitoring that evolution, which gives you the outcome of the computation.

However the term “computer” used to mean a device that can run any algorithm/program.

By that definition, then, the rock is a computer.

A rock is also a limited tool, as it cannot do what Turing machines can do.

No machine that’s ever been built—or that, as far as we know, can be built—can do what Turing machines can do.

We can of course interpret rock as doing anything we want, but it doesn’t mean the rock actually does it.

Well, the point is that from the physical substrate—rock, CPU, brain—we can’t infer the computation. But then, what’s to say that our brains run *this* computation?

And if you say you don’t need to know what the brain is computing, then you’re not, to the best of my understanding, advocating computationalism—because what the brain is computing is what gives rise to conscious experience. Think of it like a conscious being in a simulation: what program is being run, which computation is being performed, is exactly the question that needs to be answered in order to determine what is being simulated, what ‘goes on inside’, so to speak.

67. Sci says:

@Jochen – I would agree with you, as I don’t think computationalists – and arguably materialists in general – have provided a real explanation for what Sam Harris criticizes as a something-from-nothing miracle. (Of course all the other paradigms have flaws as well, for better or worse.)

There seems to be an appeal to what Clifton calls “cryptic complexity” in his Empirical Case Against Materialism. Just seems like computationalism/materialism of the gaps to me, though again I’d say all the other paradigms suffer similarly.

IMO what this suggests from a scientific, ontologically neutral POV is that we need to know more about the world and our own biology before we can tackle the Hard Problem. Does quantum tunneling have something to do with olfactory locks, as some suggest? If yes, then we know the class of smell-related qualia are generated/invoked/whatever at the QM level, which would (AFAICT) allow us to prune a large set of theories from the currently available ones.

MBQC may very well be called computation. Certainly it is not a computer. A system that would set up initial entangled state and then perform additional operations could be called a computer. I don’t quite see the relevance of this for the present discussion 🙂

So could you describe how you would program a rock to do some simple things, like a) implementation of a linked list, b) a quick sort algorithm, c) implementation of a priority queue? Of course on a single rock.

I am very much saddened that you quote only a single sentence, from which you draw incorrect inferences.
I have explicitly stated that a computer is a device that we can reprogram to do various things.
I have also pointed to the limits of computers (memory, processing power, etc.), so it is obvious that they are not perfect instantiations of the abstract mathematical structures known as Turing machines.

Why do you need to infer the specifics of the computation (in the context of a brain)? It doesn’t matter how we would describe or interpret the “computation” in a brain. Some neurons fire, some other neurons fire. Bang! Consciousness. Consciousness is being “produced” no matter what is being computed. Therefore, many systems are conscious. Even a government is conscious, if we subscribe to computationalism. In my opinion computationalism is even worse than panpsychism.

You write:
“Think of it like a conscious being in a simulation: what program is being run, which computation is being performed, is exactly the question that needs to be answered in order to determine what is being simulated”
If we wanted to know what is being simulated, then knowing the program and the data is of course a must :). However, under the assumption of computationalism, even simulated rocks have experiences (because they are part of a computation).

“I think computationalism is where you insist the machinery is equivalent to the meaning, while eliminativism of the Rosenberg variety is where you use AI as proof that meaning is illusory.”
Hmmm… From what I understood of Dennett it seems to me that meaning is a high level description that is useless when talking about the brain. If there was any meaning, then we would still have to have a homunculus that would need to understand the meaning of symbols, neurons firing, etc. and based on this understanding determine what to do. I think what Dennett is saying is that meaning plays no significant role in explaining the mind (how it works).

70. Jochen says:

Ihtio, OK, I think the discussion of the difference between computers and things which compute is largely beside the point. To me, anything that performs a computation—anything that implements a computable function—is a computer; what you call ‘computer’, I’d maybe consider a universal computer, or a programmable computer, but if you want a different definition, fine, I’ll acquiesce. It’s a difference in terminology, not one of substance.

I disagree, however, with your definition of computationalism:

Consciousness is being “produced” no matter what is being computed.

On any computationalist view I’m familiar with, this is wrong; in fact, what is being computed is the mind. Wikipedia seems to agree:

The computational theory of mind holds that the mind is a computation that arises from the brain acting as a computing machine.

So ultimately, what computation the brain implements is exactly the key question in a computational account. (It would also seem that this is Putnam’s view—after all, he is perhaps the original proponent of computationalism, and the originator of the argument against it I’ve been defending; it would be odd if he’d gotten his own erstwhile view wrong.)

And as for the fancy quotes etc., here’s a list of allowed html formatting in wordpress comments.

If the spatio-temporal perspectival nature of subjectivity is a necessary feature of consciousness, then one has to demonstrate how computation can yield subjectivity. If it is claimed that this perspectival nature of subjectivity is not a necessary feature of consciousness, then all bets are off.

OK, for you anything that implements a computable function is a computer, but a) that makes most of the things in the universe computers, and b) people who work with computers wouldn’t agree with this. Of course, before WW2 there were many computers – people who performed mathematical calculations. Today, however, the meaning is different, and a very specific one.
That is important only to state that we may try to imagine the brain being a computer, but it wouldn’t even make sense to do the same with a rock.

Hmmm… You may be right that I confused the gist of computationalism, adding functionalism to it.

From the Wikipedia quote (and the rest of the page) I cannot read that the mind is a specific computation, as opposed to “[any] computation = [some] mind”.

Also, the quote is about mind, not about consciousness. And there is a grand difference. We have a lot of knowledge about nonconscious processes of minds, so we could imagine a simple nonconscious mind that allows an organism to survive.

73. Jochen says:

OK, for you anything that implements a computable function is a computer, but a) that makes most of the things in the universe computers, and b) people who work with computers wouldn’t agree with this.

Well, I’m not too sure about this. Certainly, at least in theoretical computer science, people quite happily call any Turing machine a ‘computer’, and most of those aren’t programmable—take, for instance, the TMs that only output the same symbol, regardless of what’s on their input tape, or that always output the same symbol sequences, etc. Only the universal TMs would be computers on your definition. But anyway, if it suits your tastes better, I will from now on use the term ‘thing that computes’ instead of ‘computer’, although the idea that a thing that computes may not be a computer seems borderline nonsense to me, akin to the idea that a thing that lives may not be a living thing. I’d be more inclined to agree with this definition from Wikipedia:

Any device which processes information qualifies as a computer, especially if the processing is purposeful.

But ultimately—you like potayto, I like potahto.

And it might be the case that any computation gives rise to some form of mentality, in a sort of ‘computational panpsychism’, I suppose, but of course, this is wholly orthogonal to the question of which computation a given physical system instantiates—and it’s clear that the assumption is that some specific computation is being run, giving rise to this (and only this) mind, thanks to statements such as this one

the brain is a computer and the mind is the result of the program that the brain runs.

scattered throughout the wiki article. Also, while consciousness is not the same as mind, certainly, the mind in any intelligible sense includes the conscious mind; it may involve more (although the terms are often used interchangeably), but it’s clear that the aim of the computational theory is to explain how the brain gives rise to phenomenology, qualia, subjectivity, etc.

What is the supposed ‘cold fact’, here? That consciousness is implied by the state of the physical substrate? By its functional characteristics? Each of these is hugely controversial. So it’s just the opposite: the acknowledgement that, so far, we are not in possession of an explanation for conscious experience. Just dogmatically holding to the position that some variant of functionalism/physicalism/computationalism must hold true gives the game away.

I’ve used the word ‘hypothesis’ a fair bit – is calling me dogmatic really fair? I’ve referred to your use of the Z word here, explicitly. I find the use of the word quite a rejection. Is it not a rejection of fact, should the facts fall the ‘zombie’ way?

Because a zombie is, ex hypothesi, different from a conscious being in that it lacks conscious experience.

How do you know that? I mean, just defining a zombie by its absent traits (“it doesn’t have conscious experience”) surely doesn’t count as proof – I can define a car as absent an engine; it simply becomes a strange fiction, or a reference to the Flintstones, at that point, not proof.

additional facts need to be fixed in order to introduce conscious experience, and your theory is mute on those facts.

How do you know that? Back to the coin trick: it’s like saying I’m mute on how he pulled matter from thin air behind your ear. Yes, I’m mute on that, because it didn’t happen. At least in my coin trick example, it didn’t happen.

Asking ‘How did he pull a coin from thin air?’ is actually a question that goes nowhere (in this example, at least).

Could you consider that you are asking questions which just as much go nowhere? That’s why I keep referring to what I’m saying as a hypothesis: to try to treat myself as asking questions which maybe go nowhere as well.

So how do you know your additional facts must be fixed, rather than them simply being a question that goes nowhere?

You can’t explain that by merely saying that the subjective feeling of pain in your head is an illusion, because this does no work at all towards explaining why there is a subjective feeling of pain in your head.

A small child can step on a sharp stone and cry out – but they wouldn’t say ‘I am having a subjective feeling of pain!’. They don’t use your theory of mind, yet something is clearly happening – so why do I have to account for your particular theory of mind in this regard?

The input hardware sets up a particular state of the rock, and the output hardware translates the ensuing evolution into something human-intelligible. This is exactly what happens on an ordinary computer.

It really isn’t. Computers take input from the user (or a keyboard impact, at least) – either we are bypassing the rock in terms of input, or we are bypassing the user/keyboard impact in terms of input.

Why would you say a rock can do what a computer does, when a computer responds to your keystrokes but a rock can’t?

Describe a wall that changes its computations based on input of some kind and you’ll have me onboard!

But merely forming a model, a self-representation, is of course not enough for consciousness.

And how do you know that? Your position seems to rest on certainties like this – it is ‘of course’ not enough for consciousness.

Sure, you’re certain. But even some of the top greenhouse scientists humour a thin chance that the greenhouse effect isn’t real. Maybe only after a brandy or two will they admit that, but yeah, they will. Conscious entities don’t have enough brandy? I agree!

All I can say is that being certain does not automatically mean it is correct that there is any ‘of course’.

No. I don’t like your example because it fails to account for the coin! Certainly, if I have no further knowledge about how the magician accomplished his trick, I can’t disprove the possibility that he used magic (though I would be sceptical about it). But whatever explanation is offered—magic or sleight of hand—certainly needs to account for the fact that at some point, there is a coin. This is what your explanation fails to provide.

I think you could measure the exact dimensions of the coin with a ruler (after it’s produced) and run various experiments to determine the type of metal, etc.

Do you think you’re talking about the coin when you can’t do those measurements? Or are you actually talking about the producing of the coin (which I am explaining)?

If you can apply a ruler or some other empirical measure to consciousness, I’ll be more onboard that you are talking about the coin.

When I ask, how come a coin popped up suddenly?, you answer, well, it’s just an illusion. So I ask, OK, how does the illusion work?, you come back again with, but I’ve told you, it’s only an illusion.

I don’t think this is fair – but maybe I’m missing something. You tell me: how does an illusion work? I’m sure you acknowledge illusions exist. Would you say it’s something to do with a lack of information? A lack of input? A lack of stimuli? Or something else?

This is why your automated doors, or surviving robots, explain nothing: all of their features can be accounted for without ever making reference to conscious experience.

Yes, chilling, isn’t it?

Or I guess validating, if you find a lack of direct disconfirmation to be a confirmation of personal theories.

I mean, Occam’s razor, if you can explain something without X…

I just explained things as they are without (as it is commonly conceived) consciousness.

I mean, the explanation that is evolution, it works the same way – it doesn’t disprove the many gods that have been spoken of. It just explains how things are without gods making all the animals. But no one shoots down evolution for failing to explain gods – well, except for the religious.

Here, will I find the simpler explanation shot down again? I’m not saying that to not believe it is to shoot it down, but utterly dismissing it is shooting it down.

So, in your purported explanation of the trick, you simply leave out what the trick achieves!

I’ve referred to Darwinism several times. It’s the trick that survived better than other tricks.

But you’re not talking about the coin.

Nor you, I think. But as I said before, show how you can empirically measure consciousness and I’ll get more onboard that you’re talking about a coin rather than its production.

But why would a lack of information conjure up the positive illusion of possessing something, namely experience, that is, in fact, not present?

Why would not knowing about your wife’s cheating conjure up a positive illusion of possessing something?

Assumption, I guess.

I explained it previously – if we assume forming a model of ‘self’ for those things that are outside the range of physical experience is good for survival, then an organism would develop that model.

I gave the example of the fake hand holding the joystick, and of people, thinking it was their own hand when it went against their impulses, saying their hand was possessed.

And sometimes they say they are possessed of experience.

Hypothetically.

C’mon, I throw this ball to you but it’s like you keep your arms at your side and let it bump against your chest, fall to the ground and roll in the gutter. Surely play with the ‘they say their hand is possessed’ ball for a bit, try out how it could (not DOES, just could) apply in a broader sense and throw it back.

Imagine a world of possessions.

Sure, you don’t have to speculate. But that’s not because ‘you’ve failed to prove it to me, Callan!’, it’s just because you don’t have to speculate.

And of course, there is the larger question: who (or what!) lacks that information? If you’re saying that I lack information about myself, and thus, hallucinate some model of myself that is incomplete, you must first posit that there is some I that can do these things in the first place; but how that comes about is exactly the question. If you’re saying that it’s simply my brain that lacks this information—then again, how does that lead to (the illusion of) conscious experience? A computer can lack information about itself in a perfectly unconscious way.

I’m running short on time (shared computer!) and I’ve written way too much anyway (as I seem to have a habit of doing) so I’ll wrap up on this paragraph and try and get back soon.

I’m not trying to be pedantic, but could you write out ‘you must first posit that there is some I that can do these things in the first place’ again – I’m not quite getting what you mean to convey. 🙂

If you’re saying that it’s simply my brain that lacks this information—then again, how does that lead to (the illusion of) conscious experience?

Well, how do phantom limbs come about? Where people can ‘feel’ a limb that is simply not there? Does that seem like an example that could be stretched to a broader application?

A computer can lack information about itself in a perfectly unconscious way.

The thing is, if we had an AI and it started reporting its feelings and experiences, I think you’d be highly skeptical about its self reporting.

Yet because we were here first, no skepticism about our own self reporting.

Tell me how you’d be skeptical about the computer – perhaps how you might track down the simple causal chain in a series of structures that caused it.

75. Jochen says:

And how do you know that? Your position seems to rest on certainties like this – it is ‘of course’ not enough for consciousness.

I explained how I know this in the very next sentence: it’s a vicious homunculus regress.

I mean, Occam’s razor, if you can explain something without X…

I just explained things as they are without (as it is commonly conceived) consciousness.

But the thing to be explained is consciousness, our subjective experience, our phenomenology—or of course, if you will, the illusion thereof. It’s no secret that you can explain behaviour without mentioning consciousness, but this is not a feature, it’s a bug—because we do have (or take us to have) conscious experience. This is thus a fact to be explained entirely absent from your account.

Yes, of course, I could imagine anything to be accompanied by conscious experience, your doors, computers in the wild, and so on. I could speculate; I could hypothesize. But I’m not under the illusion that thereby, I’d be explaining anything! This merely amounts to a flat denial that there’s any problem at all.

And your examples of fake limbs, or missing limbs: those are experiences, and thus, in need of explanation; they can’t figure in the explanation of experience. Sure, they happen to be false experiences, or mistaken, at least, but you can’t have false experiences without having experiences. You assume the very thing you’re purporting to explain.

As for evolution, it’s an explanation for how the variety of life comes about, and thus, an alternative to the hypothesis of some creator being, or some other form of design; but here, we don’t want to find an alternative for consciousness, we want to explain it—that we have consciousness (or, again, take ourselves to have it) is our datum, our explanandum. An ‘explanation’ that simply accounts for behaviour in terms of cause and effect amounts to nothing but a flat denial of this datum. Saying ‘it’s an illusion’ likewise explains nothing, because the question is exactly how the illusion works.

Jochen, you are right. I have misrepresented “computation” and I was wrong about “panpsychic computationalism”. Thanks for helping me in getting a clearer picture 🙂

while consciousness is not the same as mind, certainly, the mind in any intelligible sense includes the conscious mind; it may involve more (although the terms are often used interchangeably), but it’s clear that the aim of the computational theory is to explain how the brain gives rise to phenomenology, qualia, subjectivity, etc.

I’m not so sure about that. Computationalism, cognitive psychology are definitely interested in explaining how the mind works by postulating processes, mechanisms, representations, etc. However the case of consciousness seems to still be on the periphery.
It is interesting that you say that any mind must have some conscious aspects. I doubt most researchers and philosophers would agree, even though it makes sense.

Peter, I had an adventure in blockquote and am mortified at stuffing it up! I’ve posted my post again as it might be hard to fix (and my responsibility anyway), so could you please delete my prior post? 🙂 Hopefully the HTML on this one is correct! [Deleted – if you need any adjustments let me know. I was just saying to Ihtio that I ought to try to upgrade the commenting facilities here to make this stuff easier – Peter]

Thanks for the info, ihtio!

Jochen,

I explained how I know this in the very next sentence: it’s a vicious homunculus regress.

I missed that you’d used a ‘self’ reference in there that I didn’t use. I didn’t refer to ‘self’ in your way (that leads to your regress), I’ve used it as a spatial reference (the matter in location X,Y). When you don’t slip a selfie in, it doesn’t become a regress.

But the thing to be explained is consciousness, our subjective experience, our phenomenology—or of course, if you will, the illusion thereof.

It’s not. More to the point, you won’t humour the possibility that it’s not – if someone in Plato’s cave insisted I explain the black mark on the wall and how it exists, I’d say it doesn’t exist; it’s a shadow of something else entirely. If they insisted I was not addressing their question, then it’s them not humouring the possibility that they have asked the wrong question.

Given an illusion is a B that seems like an A, an insistence that I talk about A exclusively to show how A is an illusion is just super frustrating.

Yes, of course, I could imagine anything to be accompanied by conscious experience

I didn’t ask that at all – I asked you to imagine something surviving without conscious experience (and yet showing complex behaviour, like we do). To show how it’s possible. To show an alternative.

And your examples of fake limbs, or missing limbs: those are experiences, and thus, in need of explanation; they can’t figure in the explanation of experience. Sure, they happen to be false experiences, or mistaken, at least, but you can’t have false experiences without having experiences. You assume the very thing you’re purporting to explain.

Jochen, again you’re adding words to what I’m saying. Are there people out there who report feelings from limbs they no longer have? Yes. THEY are the ones reporting this – I’m not saying it and by doing so supporting those claims. Only they claim.

I’m not assuming anything by repeating their reports. I’m not saying they are having an ‘experience’ (however you might define that word).

You’re the one saying ‘those are experiences’ then you’re crediting me as saying it too. No, the jury is out with me in regards to those claims.

So if you don’t make me say they are experiences and you don’t say it, do we start to get a sense of how self-reporting can be misrepresentative of the actual situation?

What if consciousness is just a phantom limb? One that the creature you are never had, but that you would and do report ‘feeling’ all the same?

but here, we don’t want to find an alternative for consciousness, we want to explain it—that we have consciousness (or, again, take ourselves to have it) is our datum, our explanandum. An ‘explanation’ that simply accounts for behaviour in terms of cause and effect amounts to nothing but a flat denial of this datum.

As I read ‘denial’ here, it means to deny fact.

It doesn’t seem very open minded to engage in ‘argument’ yet have already decided your position is correct – correct enough to call someone else a denier.

Go back in time not too far, and to tell someone geocentrism is wrong would be to be in ‘flat denial’ of their ‘datum’ as well.

I’m getting the feeling this whole discussion has been taken by you as one of ‘someone else shakes Jochen loose from his position (if at all)’. Do you have any interest in shaking yourself loose from your position, just in case you are wrong? To make an attempt yourself to disconfirm your own position? If so, where have you attempted this during our conversation? Or have you waited on me, by myself, to attempt it time and time again?

I’ve run into people who have literally said ‘It’s not up to me to disprove my position! That’s everyone else’s job! I can say whatever I want and it’s everyone else’s job to police it if it’s wrong, not mine.’ Well, actually they didn’t say the last sentence, but it pretty much sums up the attitude.

Saying ‘it’s an illusion’ likewise explains nothing, because the question is exactly how the illusion works.

Every time the illusion gets explained in physical terms you say I’m not talking about consciousness anymore. Every time I say consciousness is an illusion, you say the illusion isn’t explained.

That you can’t relate to consciousness explained in physical terms seems to be the major issue. That you call it a zombie – a major rejection word – seems to show how you won’t relate to consciousness explained in physical terms.

but here, we don’t want to find an alternative for consciousness, we want to explain it

Then perhaps that should be the tag line for conscious entities? That the option of finding an alternative has been (for some reason) dismissed?

79. Jochen says:

I missed that you’d used a ‘self’ reference in there that I didn’t use. I didn’t refer to ‘self’ in your way (that leads to your regress), I’ve used it as a spatial reference (the matter in location X,Y). When you don’t slip a selfie in, it doesn’t become a regress.

But then, it also isn’t an explanation of consciousness.

It’s not. More to the point, you won’t humour the possibility that it’s not – if someone in Plato’s cave insisted I explain the black mark on the wall and how it exists, I’d say it doesn’t exist; it’s a shadow of something else entirely. If they insisted I was not addressing their question, then it’s them not humouring the possibility that they have asked the wrong question.

If it’s a shadow of something, then it exists, no? And saying that it’s a shadow of something—provided you can explain how the shadow is formed, what is the thing casting the shadow, etc.—then provides an explanation of the black mark. This is what I’m looking for. But what you’re saying is ‘there is no shadow’, thereby flatly denying the thing to be explained.

I didn’t ask that at all – I asked you to imagine something surviving without conscious experience (and yet showing complex behaviour, like we do).

I have no trouble imagining that. But the problem is that we do have conscious experience, or are under the illusion that we do. Thus, if I imagine beings that don’t, I imagine beings that differ from us in the one crucial property you claim your theory ‘explains in physical terms’. What you need to do in order for that claim to have any content is to explain how consciousness, or the illusion thereof, arises in these beings—how it is that they take themselves to have a subjective point of view, to have an experience of there being something it is like to see red, to there being some ineffable painfulness to pain, and so on.

But you’re just saying: imagine they didn’t have that. Then we can explain everything about them! And that’s true, but only because we’re ignoring that which was to be explained. It’s like performing an experiment that throws up data your theory doesn’t explain, and throwing out the data—then, of course, your theory still explains everything. I’m not holding steadfastly to my position about consciousness—in fact, I haven’t talked about that yet. I’m holding to the data we have about consciousness, that any theory purporting to explain it must account for: that it seems to us as if we had a subjective point of view; that there is something it is like to have experiences. How these experiences arise from the physical substrate is the thing to be explained; but you’re asking me to imagine that they didn’t.

Let me ask you straightforwardly: do you believe you have a subjective point of view? Does it seem to you that there is something it is like to see red (something you could not, for instance, explain to a blind man)? That there is a painfulness to pain? If so, how does your theory account for these facts?

I’m very much allowing for the possibility that these things are, in whatever sense, ‘illusory’. And I’m not expecting for their explanation to be framed in experiential terms—that would be circular. But what I expect any successful theory of consciousness to do is to close the loop: to start with whatever fundamental entities it admits, and from them, explain how it is that it seems to each and every one of us that we have experience. Otherwise, if you just want to flat-out deny the existence of experience, you’re simply not explaining the phenomenon you claim to be explaining.

What if conciousness is just a phantom limb? One the creature you are never had, but you would and do report ‘feeling’ it all the same?

But don’t you see how blatantly circular this is? What is to be explained is precisely how it can be that one has any ‘feeling’ at all. Merely ‘feeling’ to be conscious is still having conscious experience: that of feeling to be conscious!

Every time the illusion gets explained in physical terms you say I’m not talking about conciousness anymore. Every time I say conciousness is an illusion, you say the illusion isn’t explained.

Because you don’t offer any explanation of the illusion! Let’s take your example of the magician’s coin trick. Merely saying ‘it’s an illusion’ does not amount to an explanation of how there comes to be a coin in front of me somewhere. That the magician palmed the coin, and then misdirected my attention in order to make it seem that his hand was empty when he reached behind my ear, in order to present it suddenly holding a coin—that’s an explanation. Then, I will say, ‘ah, so that’s how it works’. Then, for instance, with some practice, I might be able to perform that trick myself.

And that’s the kind of explanation that’s needed for conscious experience. The one after which you can say, ‘ah, so that’s how it works’. The one that makes me understand how it comes to be that I take myself to have subjective experience (even though maybe I don’t have it in some fundamental sense). The one that makes me grasp how the feeling of having qualia comes about (even though maybe there is no such thing as qualia). An explanation of mere behaviour does none of these things.

Then perhaps that should be the tag line for concious entities? That the option of finding an alternative has been (for some reason) dismissed?

It’s perfectly sensible to look for an alternative explanation. It’s an error of methodology to look for an alternative to the data. The data’s the data—and that data is that we take ourselves to have conscious experience. A successful explanation will not start with this; but it will end there, with how this experience comes about. If it’s an illusion, it will explain the illusion—it will explain what throws the shadow, how the magician palms the coin, etc. But it won’t throw away the data—because then, it simply doesn’t pertain to what was to be explained.

80. Jochen says:

Hi Ihtio, sorry, I missed your earlier comment. First of all, I’m very glad to hear that you seem to have found some use for my ramblings—makes me feel I’m not merely venting hot air down the internet pipeline. 😉

Second, you’re probably right that the main focus of computational theory is not on consciousness/qualia/other hard problem stuff per se; but still, I would include in the quest for ‘strong AI’ the question of whether such a machine would be conscious (Searle certainly seems to understand it this way).

As for the relation between mind and consciousness, I’m going to be lazy again and assume that the wiki definition at least has some correlation to the consensus:

A mind /maɪnd/ is the set of cognitive faculties that enables consciousness, perception, thinking, judgement, and memory…

So it would follow from there that if you have a mind, you also have what it takes to enable consciousness.

But anyway, since it seems we’re not disagreeing along any major faultlines, and both consider the computationalist account to not be the right sort of explanation for the workings of the mind, and how it hangs together with its physical substrate, I’m not going to press the point.

If it’s a shadow of something, then it exists, no? And saying that it’s a shadow of something—provided you can explain how the shadow is formed, what is the thing casting the shadow, etc.—then provides an explanation of the black mark. This is what I’m looking for. But what you’re saying is ‘there is no shadow’, thereby flatly denying the thing to be explained.

Shadows don’t exist! Ask any old scientist! There is no ‘black mark’ (unless we get some black paint out). What is happening is that our mind, rather than thinking ‘light is everywhere except for this area’, instead has the much easier, less calorie-using thought ‘there is a shadow/an actual thing’.

It’s a direct example of how we turn things that don’t exist into existent things in our minds, simply for ease of mental processing.

It’s a direct example of how you and I are regularly fed false information.

You say you want the coin explained, but when the thing to be explained is actually the absence of something (what ‘a shadow’ ‘is’), that’s where we talk past each other.

But you’re just saying: imagine they didn’t have that.

I’m absolutely not – specifically because that ‘that’ refers to the thing we are trying to define, that ‘experience’ thing. If I define something by a reference to what I’m defining, then I fall into the infinite regress you mentioned before.

I instead talked about a herd of behaviourally complex machines living on, and that just working out. If you want to say you can only imagine them acting really robotically or really zombie-ish, OK, hit me with that.

Please, it seems you’re frequently rewording me so as to drop into an infinite regress trap.

Let me ask you straightforwardly: do you believe you have a subjective point of view? Does it seem to you that there is something it is like to see red (something you could not, for instance, explain to a blind man)? That there is a painfulness to pain? If so, how does your theory account for these facts?

I think my definition of subjective differs. I think that as a life form with some commitment to survival (and ‘commitment’ isn’t some ineffable subjective thing – an amoeba could be said to have such a commitment as well), I see from ‘my’/this view because I’m the life form who dies if I don’t (though sometimes I try to see it from the other guy’s shoes – though arguably this is also a survival trait (mutually assured compassion, one might say)).

Let me ask you – if the blind from birth man is not having these ‘experiences’ that you attribute to yourself, does that mean you think he is less human (less human than you)? If not, how much does ‘experience’ count for, here?

In regards to pain, I’ll refer to vomiting – I know I don’t want to when the urge comes, yet the animal reflexes of it drag my thinking along with them. All pain is, is a case where your intellect agrees with your reflex instead. I’m sure a machine with various sensors could give quite a readout of damage done to it. And with a reflex to avoid further damage dragging its intellect/processing structure with it (with ‘reflex’ here referring to what importance is given to feedback (with importance simply being numbers: certain sensors are set higher, and so they reinforce a synaptic simulation’s reaction far more than ones with a lower number)). Sure, the machine, should it have to invent a language to give feedback, might refer to the painfulness of pain. For the low resolution of the language in question (or the low resolution of its sensors).

explain how it is that it seems to each and every one of us that we have experience.

You, as a whole, don’t. Seriously, why are there so many philosophical theories out there if the idea of ‘experience’ is singular and you have the one true grip on it?

Ask thousands of people what experience is and you won’t get the same definition. At best you get a Venn diagram where you can point at a generalised overlap.

Why, at all, did it seem like you could use ‘experience’ and refer to the very same thing thousands of other people would report?

Yet it seemed convincing to pose? Maybe because your mind is rigged to tell you you’ve got the right understanding of self. Because it saves calories to think that – either you’ll have a model that, regardless of accuracy, works. Or Darwinism steps in so it’s a moot point. Maybe that’s why it seemed a convincing question to pose.

What if consciousness is just a phantom limb? One that the creature you are never had, but that you would and do report ‘feeling’ all the same?

But don’t you see how blatantly circular this is? What is to be explained is precisely how it can be that one has any ‘feeling’ at all.

Because again your argument relies on reference to a sympathetic support that I simply did not give. I didn’t say you ‘feel’. I said you report ‘feeling’ it, all the same.

Please consider it again in light of this, this time as you reporting feelings (not as me saying you had a feeling).

I know the vomit example is gross, but I think it shows an example of separating animal reflex and intellect/thought process/processing.

I think here you are going to say ‘But I DO feel! And you don’t explain that!’

And I’ll say “Yeah, but a computer can report that it feels. You’d be skeptical of that, wouldn’t you? You’d hunt down its claim and carve it into brute causal explanation”

The vomit example shows animal reflex out of harmony with intellect.

The problem is, with something like the reported ‘pain’, intellect (apart from being conditioned by animal reflex to do so) agrees with animal reflex. Thus you don’t easily separate one from the other.

I’ll agree that if your examples are ones where you are inclined to agree with the reflex, it’ll be hard to separate reflex from attendant processing.

The data’s the data—and that data is that we take ourselves to have conscious experience.

People from around four hundred years ago, watching the sun revolve around the earth, would also agree the data is the data when they say it’s plainly true the sun revolves around the earth.

Turns out their data lacked the perspective necessary and was complete rubbish.

Also perhaps don’t have to go back four hundred years, if this report has validity.

No, the data isn’t always the data. Things don’t have to, necessarily, be explained from your perspective’s data. That your data can be as false as geocentrism is either the path you explore or the path you won’t explore.

My last comment seems to still be in moderation? I’ve written too many walls of text? I’d agree!

I was thinking the issue of the reported ‘experience’ is something like this:

A is the pin pressed into the nerves of your finger tip.

B is the brain registering this, rather than just some nerves in your finger registering their input.

The thing is, this seems to be the whole picture, yet it isn’t the whole picture. Like, how do you know you’re having an experience?

What if there is a C – this is the brain registering that B is occurring.

This would actually be a very useful survival trait – sometimes to get food you have to suffer a little pain/negative feedback. Instead of reflexively avoiding all pain, if the brain can monitor its pain and create strategies to override the withdrawal reflex (in the name of gaining, say, honey while bees sting you), it gains an advantage.

The big issue here is that there is no D. D would be the brain registering that C is occurring.

Without a D, it seems B is just happening.

But even with D, it might seem that C is just happening – it’s a recursive issue. Every time you put on a new monitoring level, it seems from the perspective of the subject that the monitoring level before it ‘just happens’. It’s inescapable.

But if we keep it simple and treat it that, without a D, B just seems to happen – then the thing to think about is: is ‘experience’ just happening, or are you failing to register your own perception/seeing of it? Your seeing that you are having an experience – is that a C? And you lack a D to see the C in action? Thus C becomes invisible and off the radar – there only seems to be A and, apparently most importantly, B?
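The A/B/C layering above can be sketched as a chain of monitors, each one logging what the level below it registers; whichever level sits on top has no observer of its own, so its activity never shows up anywhere in the system. This is only an illustration of the idea as described – the names and structure are made up for the sketch, not drawn from any actual cognitive model:

```python
# Illustrative sketch of the A/B/C scheme: each Level registers events
# from the level below it. The topmost level is registered nowhere,
# so from inside the system its activity 'just happens'.

class Level:
    def __init__(self, name, monitors=None):
        self.name = name          # label, e.g. "B"
        self.monitors = monitors  # the Level this one observes, if any
        self.log = []             # events this level has registered

    def register(self, event):
        self.log.append(event)
        return f"{self.name} registered: {event}"

# A: the pin pressed into the nerves of the fingertip (raw stimulus)
stimulus = "pin pressed into fingertip"

# B: the brain registering the stimulus
b = Level("B")
b_event = b.register(stimulus)

# C: the brain registering that B is occurring
c = Level("C", monitors=b)
c_event = c.register(b_event)

# There is no D: nothing registers that C is occurring.
levels = [b, c]
watched = [lvl.monitors for lvl in levels if lvl.monitors is not None]
unmonitored = [lvl for lvl in levels if lvl not in watched]

# Only C goes unregistered: its activity leaves no trace in the system,
# so B's registration appears to 'just happen'.
print([lvl.name for lvl in unmonitored])  # -> ['C']
```

Adding a D that monitors C just moves the blind spot up one level, which is the recursive point being made: some level is always the unobserved top.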

85. Jochen says:

Callan, I don’t think there’s really any moderation here; but if you’ve been using some html coding in your last post, it might be an issue with that—I had a comment fail to post because of some ill-formed url reference.

As for the rest of your post, you seem now to be defending something akin to a higher order thought approach: some experience is conscious if there is a higher order thought whose content is that very experience. Of course, the problem with that is that it yields no mechanism for how a higher order thought produces this consciousness. Merely its existence seems to be insufficient: a thermostat, or perhaps some higher-order control circuit, has a higher order thought in the sense that it represents its own state, or that of its sensor; but claiming that this leads to subjective experience is simply a bald-faced assertion without explaining how it does so.

Or, in other words, we have some explanandum E: there seems to be subjective experience of something (say, the pain induced by the pinprick). You’re now saying that the brain registers itself registering that stimulus (your C). This may well be the case; and it may even be the case that this is all that occurs. But still, you need to provide a demonstration how C is sufficient for E, or how C not being itself registered is sufficient.

For another analogy, say that I ask you how a lightbulb works, and you answer that there’s electrons travelling through the wire. This is of course completely correct, and it’s in fact all that happens in some sense, but it’s not an explanation for why the lightbulb does what it does—for this, you’d have to supply the connection between electrons travelling through the wire and the production of light, which is due to the wire’s resistivity, which heats it up, inducing thermal motion in the lattice of atoms the wire’s made of, causing the atoms to emit electromagnetic radiation, which enters the visible spectrum at a certain temperature.

So while it’s correct that all that happens is electrons travel through the wire, this does not yield an explanation for why there’s light; likewise, while it may be correct that all that happens is A, B, and C, this does not explain why this leads to E, the appearance of subjective experience. It seems to me that A, B, and C could very well occur without there being any E—say, in some thermostat monitoring its own state—, just as there could be electrons travelling through a wire without there being light. The thing to be explained is that the presence of A, B, and C implies the presence of E—that’s the hard part.

Well it has ‘(Comment awaits moderation)’ underneath. If there isn’t much moderation here, as you say, an auto system may have flagged it, and perhaps Peter doesn’t check the folder the post has gone to because it usually doesn’t come up. I’m thinking of posting my comment to my blog if nothing changes soon, if only for the sake of my keyboard’s wear and tear while typing it!

some experience is conscious if there is a higher order thought whose content is that very experience.

I’m not saying ‘X is conscious’. I’m just explaining how a system could fail to detect how it detects, at a certain point. This would leave a big question mark for that system. The system might invent something like the idea of ‘consciousness’ to fill in that question mark.

And that idea it makes up and uses to fill in the question mark could well be absolute rubbish.

I’m not sure why it doesn’t click with you merely in the sense that ‘well, that’s how this story goes’. The robot has gaps in what it can detect – it makes up ideas to fill in those gaps. The ideas it has, it calls consciousness and subjective experience.

Why doesn’t that click? I doubt you have trouble imagining a robot making up a false idea about itself?

“But we’re talking about me! MY subjective experience!”

Well, we’re not. We’re trading in theories here – all that’s involved is you imagining yourself being that robot for a second and considering parallels between the two ideas.

Surely you can’t be saying ‘Convince me of the theory before I’ll consider the theory hypothetically’?

It seems like you won’t consider yourself as the robot, just for a second. Even after, presumably, watching Blade Runner? Or is Rachael always the other person – never oneself? I mean, that’s what I got from the movie – not “Oh, I thought she was human… but now she’s not!” – it was simultaneously “Whoa – what if I thought I was the human, but then…”. It wasn’t just a ghoulish bit of ‘turns out it’s a machine! ewww!’ fun of feeling freaked out at something else’s betrayal of one’s senses.

From the looks of it the upcoming movie Ex Machina will touch on similar themes.

Sorry, Callan, the comment should be there now. It was held for moderation – this is triggered by some factors that might indicate spam but might also be legit. I cannot see what the problem was in this case, and unfortunately I’ve been a bit tied up for the last couple of days – my apologies.

88. Jochen says:

Callan, I’ll respond to your two posts in one, and hope it won’t exceed all reasonable length limits:

Shadows don’t exist! Ask any old scientist!

Of course shadows exist: they’re regions of comparatively lower illumination with respect to their surroundings, just as there may be regions of lower pressure, lower temperature, etc. (I can ask an older scientist later at work, if that would please you.)

Regardless, by exist, I merely meant: there is something that I can point to and say, ‘that’s a shadow’. And the fact that there is a shadow is what needs explanation.

You say you want the coin explained, but when the thing to be explained is actually the absence of something (what ‘a shadow’ ‘is’), that’s where we talk past each other.

No, I’d be just as happy with an absential explanation of consciousness, if it indeed does explain how the absence creates the illusion of there being conscious experience. I’m reading Terrence Deacon’s ‘Incomplete Nature’ at the moment, and he appears to try and create just such an account. However, your account seems to exhaust itself in there being some lack, and the system hence ‘making up’ consciousness to paper over it; but it’s exactly how this ‘making up’ works that I would want to see explained.

The crux is, in order to make up something, the system must be capable of having beliefs; however, having beliefs presupposes intentionality, and there being something it is like to have those beliefs presupposes phenomenality, so on any common reading, a system that can ‘make up’ things in the relevant sense already is conscious. Thus, making something up can’t feature in the explanation of why a system takes itself to be conscious.

I think that as a life form with some commitment to survival (and ‘commitment’ isn’t some ineffable subjective thing – an amoeba could be said to have such a commitment as well), I see from ‘my’/this view because I’m the life form who dies if I don’t.

But ‘seeing from this point of view’ is not the same as having subjectivity. A camera sees from its point of view (by virtue of being ‘that camera’), but this is fully accountable for in objective terms, as is your being the lifeform who dies. But the fact that certain things seem some way to me is exactly what’s not (straightforwardly) described in objective terms—that’s why, for instance, you can’t explain to a blind person what seeing is like. This is the subjectivity I mean; if it were to be graspable objectively, then I should be able to explain to a blind person what it’s like to see. Can you propose an account regarding how this could be done?

Let me ask you – if the blind from birth man is not having these ‘experiences’ that you attribute to yourself, does that mean you think he is less human (less human than you)?

Please, don’t be ridiculous. I have at no point said or even implied that experience has anything to do with some form of gradation of humanity.

I’m sure a machine with various sensors could give quite a readout of damage done to it. And with a reflex to avoid further damage dragging its intellect/processing structure with it (with ‘reflex’ here referring to what importance is given to feedback (with importance simply being numbers: certain sensors are set higher, and so they reinforce a synaptic simulation’s reaction far more than ones with a lower number)). Sure, the machine, should it have to invent a language to give feedback, might refer to the painfulness of pain.

But it’s not the verbal reports we are trying to explain, but the subjective experience. Just because something says it experiences pain, it does not follow that it, in fact, does. One can write a simple computer program that gives out all kinds of reports of agony; but this doesn’t entail that there is any actual pain. Or if it does, you need to make at least an argument regarding the connection—how these reports lead to the subjective feeling of painfulness.

If I shoot an alien in some first-person shooter, and it meets a screaming end writhing in apparent pain, then does it actually feel pain? It would seem that your account entails a commitment that yes, it does, and that hence, the gaming industry is possibly the most monstrous institution humanity has ever given rise to.

I mean, you basically seem to be saying that conscious experience can be deduced from behaviour. But then, you must have an answer to the question of how the two are connected, no?

You, as a whole, don’t. Seriously, why are there so many philosophical theories out there if the idea of ‘experience’ is singular and you have the one true grip on it?

Ask thousands of people what experience is and you won’t get the same definition.

Possibly. But they would all maintain that they have it; and that’s all that I asked an explanation for.

Because again your argument relies on a sympathetic reading that I simply did not give. I didn’t say you ‘feel’. I said you report ‘feeling’ it, all the same.

But again, reporting to feel something is not what we set out to explain, but appearing to actually feel something. Take again the shot alien: it clearly reports being in pain. So, do you believe it actually is?

And I’ll say “Yeah, but a computer can report that it feels. You’d be skeptical of that, wouldn’t you? You’d hunt down its claim and carve it into brute causal explanation”

And if the only thing we had to explain were systems claiming to feel or be conscious, then that would be a valid point. But that’s not the case: we have first-personal accounts of actually (seeming to) feel; those are what we need to explain.

People from around four hundred years ago, watching the sun revolve around the earth, would also agree the data is the data when they say it’s plainly true the sun revolves around the earth.

No. The data, then as now, was that the sun rises in the east, and sets in the west. That this is because the sun revolves around the earth was an explanatory hypothesis, later discarded in favour of a more all-encompassing one. I think you really need to get clear about the distinction between observational data and hypothesis invoked to explain it, if we are to get any further here.

I’m just explaining how a system could fail to detect how it detects, at a certain point. This would leave a big question mark for that system. The system might invent something like the idea of ‘consciousness’ to fill in that question mark.

Well, maybe—though I don’t see why it would, honestly. But the question is, how does it manage this? How does an organism convince itself to have a subjective, intentional, phenomenological experience if, in fact, it doesn’t? Saying ‘it might just invent it’ does not explain anything, it merely re-poses the question.

Why doesn’t that click? Surely you don’t have trouble imagining a robot making up a false idea about itself?

No, the problem is how merely having false ideas suffices to produce (what seems like) experience. That is, I can easily imagine a system that has the ‘false idea’ (to the extent that having an idea does not itself presume consciousness) of being conscious, but does not actually possess any of the properties that I associate with my subjective experience: trivial programs could output verbal reports according to which they do have that experience, but it doesn’t follow from this that they actually do, any more than it follows that one can fly merely by believing so.

Thus, it seems possible for a system to report that it is conscious, without actually having, say, a subjective point of view, intentional states, phenomenal experience—that is, reporting conscious experience while everything about this system can be completely and exhaustively explained in third-person terms, whereas (say) the visual experience of seeing something can’t be so explained, or else, one could describe what it’s like to see red to a blind man.

So this is the bridge you need to build—from reporting conscious experience to actually experiencing it, or appearing to experience it.

Surely you can’t be saying ‘Convince me of the theory before I’ll consider the theory hypothetically’?

No, I’m saying that your theory doesn’t seem to explain what you claim it explains, the same way that a mere account of electrons flowing through a wire does not explain the light generated by it, that claiming that something is an illusion/a magic trick does not explain how the trick is performed, that saying ‘it’s a shadow’ does not explain what casts the shadow, and how.

So I’m not saying ‘convince me of the theory’, but merely, ‘convince me that it actually is a theory’—because for this, it needs to provide an actual explanation, an actual mechanism by which the thing in need of explanation comes about. That ‘the system might invent something like the idea of ‘consciousness’’ simply fails to provide this, because on any straightforward reading, ‘making up’ something already presupposes a substantial fraction of conscious phenomenology; or if it doesn’t—if it counts as ‘making something up’ when a system merely produces a verbal report to that effect—then it is completely unclear how this is sufficient for the first-person phenomenology we all take ourselves to have. Certainly, an account is possible on which a system ‘makes up’ the idea of conscious experience in this sense, but actually fails to take itself to have a subjective viewpoint—that’d merely amount to (unknowingly) uttering a falsehood.

Of course shadows exist: they’re regions of comparatively lower illumination with respect to their surroundings, just as there may be regions of lower pressure, lower temperature, etc. (I can ask an older scientist later at work, if that would please you.)

Please do! Also ask him what on the periodic table a shadow is made of, if he says shadows exist. And whether he would be frustrated if someone said, say, that emotions physically exist, but couldn’t point to the periodic table to show what they are made of. Whether he holds claims of existence with no periodic-table backup as lacking credibility, even while he also says shadows exist?

Evolutionary scientists will speak of an animal’s feature as being ‘designed’ for something. But it’s simply a lazy form of speech. If your argument hinges on saying shadows exist, then it’s not long for this world.

The crux is, in order to make up something, the system must be capable of having beliefs.

If I may paraphrase the Joker, where do you get these wonderful rules?

You could easily imagine a system that randomly determines which of a series of cups it lifts, in case that one has the bean under it. You could also imagine a system that reviews a number of recorded models with features similar to what it’s receiving as input; the one with the highest feature-match score is what the system settles on the object being.

And still getting it wrong.

That’s making something up, from the outside.
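The matching scheme described a few lines up—score stored models against the input’s features, settle on the best match, right or wrong—can be sketched in a few lines. The models and feature names here are invented for illustration:

```python
# A toy version of the recognition scheme described above: compare input
# features against stored models and settle on the highest-scoring one,
# whether or not it is actually correct.

STORED_MODELS = {
    "cup": {"round", "hollow", "handle"},
    "ball": {"round", "solid"},
    "cube": {"flat_faces", "solid"},
}

def settle_on(input_features):
    """Pick the stored model sharing the most features with the input."""
    scores = {name: len(features & input_features)
              for name, features in STORED_MODELS.items()}
    return max(scores, key=scores.get)

# A round, solid object is settled on as a 'ball'...
print(settle_on({"round", "solid"}))
# ...but an unfamiliar round, hollow object with no handle still gets
# forced into the closest model: the system 'makes something up'.
print(settle_on({"round", "hollow"}))
```

From the outside this is just scoring and picking a maximum—which is exactly why the dispute above is over whether any such description could ever reach the first-person side of the matter.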

But ‘seeing from this point of view’ is not the same as having subjectivity.

Another rule?

You’ve seen electron microscope images (say, for instance, of the back of a human hand). You know you can’t explain the back of your hand in that detail. You know your perceptual resolution is too low for that. But now you try to describe a brain’s response to stimuli, and instead of acknowledging a low-resolution understanding, you assert ‘subjective experience’.

You think you know your mind like the back of your own hand, even as electron microscope images make you realise you don’t know the back of your own hand.

Yes, when the reporter is using low-resolution input and low-resolution reporting and the recorder is using low-resolution hearing, the thing reported is not communicated with great accuracy.

Please, don’t be ridiculous. I have at no point said or even implied that experience has anything to do with some form of gradation of humanity.

No, this is fair play. Unless we’re here to protect our public images rather than pursue thinking to its (perhaps ugly) conclusion, this is fair play. You’ve talked about zombies who have no experience. I think you have at least defined one extreme end of a spectrum.

I mean, you basically seem to be saying that conscious experience can be deduced from behaviour. But then, you must have an answer to the question of how the two are connected, no?

You keep rephrasing me and arguing with your rephrasing. Right now I’m dealing with your reporting of what you refer to as conscious experience. You keep trying to get me to repeat your ‘conscious experience’ words – then if I repeat your words, you, as the author of them, get to tell me what they mean (and what they mean conveniently cancels anything I raise, of course). You’re trying to engage in a sympathetic understanding that works by that process, which has served the species quite well in hunting and organising for millions of years, but makes an absolute hash of this topic.

I am not saying you are having a ‘conscious experience’, I am saying you are reporting something that involves those words. You’re just shooting down a strawman to ask how I explain your conscious experience…when what that is is whatever you want it to be. And then I have to roll with whatever you want it to be, because if I did use your word, I’d have treated it as the datum (instead of simply a report, with all the hearsay issues that can involve, of what the datum is).

Until I get through on this whole reporting-vs-me-using-your-word thing, we’re talking past each other.

Me: You, as a whole, don’t. Seriously, why are there so many philosophical theories out there if the idea of ‘experience’ is singular and you have the one true grip on it?

Ask thousands of people what experience is and you won’t get the same definition.

You: Possibly. But they would all maintain that they have it; and that’s all that I asked an explanation for.

No, no, no! I’m gunna go all grammar nazi here, essentially!

You do not get to say they might have lots of different definitions, but then have the gall to refer to a singular ‘it’!!

The whole point was to show there is no singular ‘it’ involved in all their reportage. But in the same dang sentence you go and refer to a singular ‘it’ anyway!

Unless you give up the singular ‘it’ here you ignore the point and we are again just talking past each other!

I’m pretty sure most modern methodical thinking would not refer to a thousand definitions of one word as a singular ‘it’. I’m pretty sure I’m in the right here! Ask an English teacher! But ask about something other than ‘consciousness’ – ask whether, when thousands of people are asked to define the physical characteristics of a dragon, and you get thousands of different definitions, do you refer to ‘it’ in this case? I think the answer will be no.

But again, reporting to feel something is not what we set out to explain, but appearing to actually feel something. Take again the shot alien: it clearly reports being in pain. So, do you believe it actually is?

Fine – let’s Turing this! You have ten figures; possibly some of them (maybe none) are humans whose image has been converted to polygon models. Some (maybe none) are just polygon models. Your alien. They are, say, holding their hand in ice water (which is reported as really hurting after a while).

Which ones are in real pain, when they all look the same (polygon constructions)?

And to make it personal, let’s say you can ‘relieve’ them of their pain if you press a button. If you didn’t, you have to meet the people after (if there were any) and talk about how you didn’t save them.

Perhaps your example actually questions your own case?

Me: People from around four hundred years ago, watching the sun revolve around the earth, would also agree the data is the data when they say it’s plainly true the sun revolves around the earth.

You: No. The data, then as now, was that the sun rises in the east, and sets in the west. That this is because the sun revolves around the earth was an explanatory hypothesis, later discarded in favour of a more all-encompassing one. I think you really need to get clear about the distinction between observational data and hypothesis invoked to explain it, if we are to get any further here.

No, I think you need to get clear on how people back then did not separate observational data and hypothesis on this. They really didn’t. It was ‘the truth’. The ‘data’ was the data! They were certain!

I blunder around every single damn day. I do not have trouble thinking I could get things wrong – that if born back then I would just as much think the sun revolved around the world! That I’d confuse observation for ‘the data’. And from my link above, it seems 1 in 4 Americans STILL believe the sun revolves around the freakin’ earth!

But Jochen, while it seems only one of us can acknowledge the capacity to have a massively flawed perception (like those people from 400 years ago), and you think talking about them and their fixed view means I need a lesson on observational data vs hypothesis, then yes, I’d agree we can’t get any further.

Are we both going to consider we might be wrong somehow as we talk, or are you only going to consider that it’s me who could be wrong and needs to ‘get clear’ about something? Perhaps there was something you needed to get clear on? If not, okay – somehow it’s that unequal state of affairs (where only one side can be wrong, for some reason) and we’re not going to get any further.

90. Jochen says:

Also ask him what on the periodic table a shadow is made of, if he says shadows exist.

So, everything that exists scientifically must be made from something in the periodic table. OK, then light doesn’t exist, time doesn’t exist, space doesn’t exist, forces don’t exist, electrons don’t exist, pressure doesn’t exist, energy doesn’t exist… I’m pretty sure any scientist would be very interested to learn that. Or, take a vacuum: it exists, in just the way a shadow does. As do, for instance, quasiparticles in certain substances that essentially are nothing but the absence of electrons. You accuse me of making up rules, but then go on to claim sole authority on the meaning of existence…

And besides, I have pointed out that my analogy does not hinge on what ‘exist’ means in this case; all I need is for the shadow to be something one can point to and ask: what causes that? This is the one question I want answered, and yet you continually ignore it.

That’s making something up, from the outside.

Sure. But making something up from the outside is not what is in need of explanation—that’s trivial. What needs explanation is, once more, how this leads to all the apparent subjective phenomenology we experience.

You think you know your mind like the back of your own hand, even as electron microscope images make you realise you don’t know the back of your own hand.

But there are no independent facts to consciousness apart from my experience of it. Consciousness is precisely what I experience; there is no ‘electron microscope’ that can show me additional aspects of it; consciousness is what I experience.

No, this is fair play. Unless we’re here to protect our public images rather than pursue thinking to its (perhaps ugly) conclusion, this is fair play. You’ve talked about zombies who have no experience. I think you have at least defined one extreme end of a spectrum.

But that I conjoin this with some kind of value judgment is purely your invention. Really, for getting so huffy when I ‘paraphrase’ you, you yourself seem to have no qualms simply making up points for you to rebut…

Furthermore, if you think that blindness is somehow one step towards zombiehood, then it seems you haven’t understood the zombie argument: its entire point is that a zombified creature would be behaviourally indistinguishable from a being with full experience; blindness’ entire ‘point’, in a manner of speaking, is that those suffering from it are behaviourally disadvantaged.

A better, but still somewhat incomplete, analogy would be blindsight: where a subject reports being consciously unaware of what happens in a certain part of their visual field, yet has some information available nevertheless. If this could be trained to fully equal the performance of a person with normal vision, you’d plausibly have something like a partial zombie (although even here, the subject would probably report that their visual experience differs from normal sight; but he could conceivably lie convincingly).

I am not saying you are having a ‘conscious experience’, I am saying you are reporting something that involves those words. You’re just shooting down a strawman to ask how I explain your conscious experience…when what that is is whatever you want it to be.

I’m not asking you to explain my conscious experience; I’m asking you to explain yours. For all you know, I might not have any—this is, in a way, exactly the point I’m trying to impress on you: that behaviour and verbal reports are not sufficient to gauge the presence of conscious experience, and that hence a theory that exclusively focuses on this can’t be sufficient, because it is consistent with both the presence and absence of conscious experience.

But you have access to your own conscious experience, your own subjective data, so you know that, for instance, pain feels like something to you, when it feels like nothing for the alien in the ego shooter.

No, no, no! I’m gunna go all grammar nazi here, essentially!

You do not get to say they might have lots of different definitions, but then have the gall to refer to a singular ‘it’!!

This is not a grammatical issue. You’re engaging in the Socratic fallacy: if making reference to a term always involved first defining it unambiguously, no discourse could ever get off the ground. And it’s in fact very common to refer to something in the singular even though there exists no single definition: take the case of knowledge. Ever since Gettier overturned about 2000 years of thinking on the matter, an all-encompassing definition has been lacking. Chalmers (maybe not originally) has even suggested that one could make do with a converging series of progressively sharper definitions, refined in any given case until the instance you’re interested in becomes clarified. Nevertheless, there is no problem referring to knowledge in the singular.

Or take the old fable of the blind men and the elephant: one feels the trunk, the other the ear, the next the tusk, one its side, the other a leg, and the last one the tail. The first one claims an elephant is something like a big tube, the second one asserts it’s a kind of membrane, the third one says it’s something hard, curved, and pointy, the fourth one claims it’s like a big, hairy, warm wall, number five says it’s something like a tree, and to the last one it’s kind of like a rope. So there’s six definitions that wildly disagree (more so than definitions of experience typically do), and nevertheless, the blind men will agree with the proposition ‘there’s an elephant in the room’, and quite happily and correctly refer to it as a single entity, even though they disagree about the entity’s particulars.

Which ones are in real pain, when they all look the same (polygon constructions)?

And to make it personal, let’s say you can ‘relieve’ them of their pain if you press a button. If you didn’t, you have to meet the people after (if there were any) and talk about how you didn’t save them.

Perhaps your example actually questions your own case?

No, it strengthens it, in exactly the way you have described: it is impossible for me to gauge which of the polygon constructs have conscious experience, since I only have access to third-personal data. Regarding conscious experience, however, it is the first-personal data I must account for; and leaving this out is exactly where your explanation fails.

Perhaps consider it this way: if some entity utters the claim ‘I am conscious’, then there are three possibilities:

1) It could be right: there is meaningful conscious experience associated to its internal states.
2) It could be wrong, but believes itself to be right: it seems to itself that there is conscious experience associated to its internal states.
3) It could be wrong, and not believe itself to be right: it is simply, say, by programming, constrained to make the utterance, without there being anything behind it.

From the outside, it is impossible to distinguish these cases, and hence, your theory, in solely explaining outside behaviours, never even touches on the actual explanandum in conscious experience.

Let’s say we have grounds to dismiss case 1), for instance, convincing arguments that such genuine consciousness is not compatible with naturalism, but needs some sort of mysticism or dualism to function, and we had strong reason to believe that neither of these makes sense.

Then, the problem is to distinguish between cases 2) and 3)—from introspection, we know that (at least) case 2) applies to us: we take ourselves to have conscious experience, to feel pain, to experience the redness of red, to have something it is like to be us. This is what we would like to have explained.

But your theory is incapable of distinguishing between cases 2) and 3), and hence, if it is right, then it still simply fails to account for the experiential fact that we take ourselves to have conscious experience.

No, I think you need to get clear on how people back then did not separate observational data and hypothesis on this. They really didn’t. It was ‘the truth’. The ‘data’ was the data! They were certain!

Well, I wasn’t around then, so I wouldn’t know what they thought was ‘the truth’. However, I do know that the idea that everyone in antiquity unquestioningly held to a geocentric worldview is factually incorrect—in fact, the very first heliocentric model (that we know of) is due to Aristarchos of Samos, from the third century BC. So I think your claim somewhat stretches the facts: there were alternative explanatory hypotheses of the data around.

Furthermore, it’s of no importance what a pre-scientific civilization might have thought about the (scientific) concept of ‘observational data’. Today, we are clear on its definition, and it pertains exactly to what we observe, and the aim of scientific theory is to account for these observations. Thus, I observe my own conscious states, and even if this is in some way illusory, a good scientific theory will explain these observations.

Are we both going to consider we are wrong somehow as we talk, or are you going only consider it’s me that could be wrong and needs to ‘get clear’ about something?

I readily acknowledge that there is an asymmetry in this discussion, but that is founded in what we bring into it: you make a positive claim regarding having a theory of consciousness, a hypothesis whose acceptance you lobby for; I am merely explaining why I think that hypothesis is inadequate. I make no positive claims about any theory of consciousness, but merely report my own experience, on which I indeed am the only authority; any successful theory of consciousness must account for this experience, or it quite simply doesn’t explain what needs explanation. You’re the seller, I’m a prospective buyer; you need to convince me that what you’re selling does what I want it to do, or else, I’m not buying.

The problem we face, from my point of view, is that I can’t quite seem to be able to get through to you what it is that I’d want to buy, and that thus, you keep—falsely, in my view—claiming that what you offer supplies it. So let me try and make this clear one last time.

Go back to the blindsight patient. Surely, you’d agree that his experience is different from that of a normally sighted person—after all, he says so, and one can use other means to convince oneself that yes, he does actually not have any conscious visual perception in some part of his visual field.

But nevertheless, further experiments show that he does have some information about what happens in that blind spot—he can correctly guess, upon being prompted, whether there was an object present, or whether there was movement, or something like that.

So now let’s just imagine that the blindsight patient were able to hone his skills to such a point that he can ‘auto-cue’ himself, and makes nearly infallible guesses about what was present in his visual field, allowing him a performance nearly indistinguishable from that of a normally sighted person. Yet still, he would insist that he does not consciously perceive anything in that blind spot, just as he did before—the only difference being that what the experimenter asked him earlier, he now asks himself.

This might be somewhat hard to imagine given the richness of our visual experience, so let’s think about a grossly impoverished visual experience—say, a world in which there are only three kinds of objects—a ball, a cube, and a cone, say—that each come in two variants—large or small. Now, you or I, or any normally sighted person, would, upon there being placed an object before us, immediately see that it’s, for example, a large cube.

But a blind person, obviously, wouldn’t. In fact, one often imagines that a blind person ‘sees’ only darkness, but not even that—what he experiences is an absence of representation, not a representation of absence, so it’d ‘look’ more like what your visual field looks like at the back of your head—like nothing.

But, a blindsight patient could now, upon being cued, answer questions like ‘is it a cube, or a cone?’, ‘is it large, or small?’, and so on, with greater than chance probability, despite having the same phenomenal experience as a completely blind person—i.e. none at all. A phenomenology which, I repeat, certainly differs from ours.

Now, the next step is that the blindsight person teaches himself to ask those questions to himself—and each time, comes up right with some probability greater than chance. Eventually, given enough training, his performance might even equal a normally sighted person—yet, there does not seem to be a reason that his experience should change, as well; when asked ‘what do you see?’, he’d still answer ‘I don’t see anything’. And when we follow up, ‘but then how do you know?’, he might just say, ‘well, that’s easy, I just ask the question to myself, and then I think, well, it’s probably a large cube!’.

And then, as a final step, suppose a blindsighted person decides, perhaps fed up with our questioning, to lie about his visual experience. He’s got all the third-person data available to him that a normally sighted person does, and could thus duplicate their verbal performance; but still, there does not seem to be any reason that he should have the same phenomenology.

So this, then, is what I want a theory to explain: the difference between the blindsight liar, and the normally sighted person; or, if there is no such difference, then, I want an explanation for how the illusion of experience comes about (after all, we are not knowingly lying about our conscious experience). But on your theory, we might all be blindsight liars, like the alien in the ego shooter, screaming only because the program requires it to, without any subjective pain.

In short, there either is, or is no difference between us and blindsighted persons trained to perfectly replicate a sighted person’s performance. If there is, that is, if there is some genuine phenomenology that is present in the sighted, absent in the blindsighted, then I would expect an explanation for this fact; an explanation for how phenomenology arises in the one, but not in the other case. If there isn’t, that is, the phenomenology of both is the same—say, for instance, none—then I want an explanation for how it comes about that we take ourselves to have a different phenomenology from none at all.

So in either case, there is something to explain: since it’s an exhaustive list of cases, there is something to explain, period. But your theory doesn’t address this: you say we ‘make up’ conscious experience, but how the ‘making up’ works is just what needs explanation. Hence, your theory, as best I can tell, does not provide what I’m looking for in an explanation for consciousness.