"Feser... has the rare and enviable gift of making philosophical argument compulsively readable" Sir Anthony Kenny, Times Literary Supplement

Selected for the First Things list of the 50 Best Blogs of 2010 (November 19, 2010)

Monday, November 16, 2015

Augustine on semantic indeterminacy

St. Augustine’s dialogue The Teacher is
concerned with the nature of language. There
are several passages in it which address what twentieth-century philosophers
call semantic indeterminacy -- the
way that utterances, behavior, and other phenomena associated with the use of
language are inherently indeterminate or ambiguous between different possible
interpretations. Let’s take a look. (I will be quoting from the Peter King
translation, in Arthur Hyman, James J. Walsh, and Thomas Williams, eds., Philosophy
in the Middle Ages, Third edition.)

The dialogue
is a discussion between Augustine and his son Adeodatus. Several pages into the dialogue (at pp. 12-13
of the text I’m quoting from) the question arises whether someone can teach
another person the meaning of a term without using words or other signs such as
pointing one’s finger at a thing, but instead just by way of one’s actions:

Augustine:
What if I should ask you what walking is, and you were then to
get up and do it? Wouldn't you be using the thing itself to teach me, rather
than using words or any other signs?

Adeodatus: I admit that this is the
case. I'm embarrassed not to have seen a
point so obvious. On this basis, too,
thousands of things now occur to me that can be exhibited through themselves
rather than through signs: for example, eating, drinking, sitting, standing,
shouting, and countless others.

Augustine: Now do this: tell me-- if I were completely ignorant of the meaning of the word
['walking'] and were to ask you what walking is while you were walking, how
would you teach me?

Adeodatus: I would do it a little bit
more quickly, so that after your question you would be prompted by something
novel [in my behavior], and yet nothing would take place other than what was to
be shown.

Augustine: Don’t you know that walking is one thing and hurrying
another? A person who is walking doesn't
necessarily hurry, and a person who is hurrying doesn't necessarily walk. We speak of 'hurrying' in writing and in
reading and in countless other matters. Hence
given that after my question you kept on doing what you were doing,
[only] faster, I might have thought walking was precisely hurrying -- for
you added that as something new -- and for that reason I would have been
misled.

End
quote. Augustine’s point is that the
behavior Adeodatus was proposing as a means by which one may teach the meaning
of the word “walking” is ambiguous or indeterminate between the meaning walking and the meaning hurrying. Nothing in the behavior considered by itself could determine one or the other
interpretation, nor could it rule out yet some other possible interpretation (such
as jogging or being chased). Hence
exhibiting that behavior could not by
itself teach the meaning of “walking.”

Later on in
the discussion (at p. 27), Adeodatus himself reinforces the point with a
related but slightly different example:

Adeodatus: …For
example, if anyone should ask me what it is to walk while I was resting or
doing something else, as was said, and I should attempt to teach him what he
asked about without a sign, by immediately walking, how shall I guard against
his thinking that it's just the amount of walking I have done? He'll be mistaken if he thinks this. He'll think that anyone who walks farther
than I have, or not as far, hasn't walked at all.

Here the
idea is that by walking six feet (say), you will have done something the
meaning of which is indeterminate between the meaning walking and the meaning walking
six feet. Hence if someone asked you
what “walking” means and you carried out that behavior in response, he could
come away thinking “Oh, ‘walking’ means moving in that manner” but he could
also come away thinking “Oh, ‘walking’ means moving six feet in that manner.” Again,
since nothing in the behavior considered by
itself could determine either of these meanings or some other meaning
altogether, the behavior by itself
could not suffice to explain the meaning.

Now, you
might think that further behavior that provides a larger context for the
walking, or gestures, or explanatory utterances, or other elements of the
overall communicative environment, will suffice to determine which meaning is
intended. Augustine himself doesn’t
pursue the issue much further, but in fact the indeterminacy would afflict all
of these other aspects of the situation as well. This is the lesson of examples like W. V.
Quine’s “gavagai” example in Word and
Object, and Saul Kripke’s “quus” example in Wittgenstein on Rules and Private Language. Any collection
of behaviors, gestures, and even utterances, will be ambiguous or indeterminate
between different possible interpretations.
Even if you add to the story mental pictures and other images, including
inner “utterances” -- as when you call before your mind the way that the
sentence “By this action I mean walking!”
sounds, or the way the sentence looks when written out -- that will not solve
the problem, because those images are
also susceptible of different possible interpretations.
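Kripke's "quus" example can be stated precisely, and a small sketch (in Python, purely as an illustration; the function names are my own) makes the indeterminacy vivid: every addition a speaker has actually performed is equally consistent with a deviant rule that diverges only on cases never yet computed.

```python
# Kripke's "quus" (from Wittgenstein on Rules and Private Language):
# a deviant function that agrees with ordinary addition on every input
# below 57, but returns 5 otherwise.

def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's definition: the ordinary sum for small arguments, 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# Suppose these are all the additions a speaker has ever actually performed
# (hypothetical sample; all past uses happened to involve small numbers).
observed = [(2, 3), (10, 7), (25, 25)]

# The finite record of behavior cannot discriminate between the two rules:
assert all(plus(x, y) == quus(x, y) for (x, y) in observed)

# Yet the rules differ on unobserved cases:
assert plus(68, 57) == 125
assert quus(68, 57) == 5
```

Nothing in the finite behavioral record settles whether the speaker "meant" plus or quus, which is just the point Augustine's walking example makes about ostension.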

So what does determine what is meant? Here different philosophers offer different
answers. Quine famously held that there
simply is no fact of the matter about
what one means by an utterance. Meaning
is not merely indeterminate from behavior
and the like, but indeterminate full stop. But Augustine would not agree with that. (Which is a good thing, since, as I have
argued many times, the idea that there is no
determinate meaning full stop is incoherent. See e.g. my article “Kripke, Ross, and the
Immaterial Aspects of Thought,” reprinted in Neo-Scholastic
Essays.)

But again,
what does determine what is
meant? Augustine doesn’t say much about
that -- the indeterminacy of semantic content is not his main topic, after all
-- other than to note (at p. 28) that someone who is intelligent will be able
to figure out the significance of behavior, a judgment with which his son
concurs:

Adeodatus: … If he is sufficiently
intelligent, he’ll know the whole of what it is to walk, once walking has been
illustrated by a few steps.

Of course
this is, in one sense, not terribly informative or helpful, even though it is
perfectly true that we typically are able to figure out what is meant by
different behaviors. For we want to know
exactly how an intelligent person
figures out the meaning, given that the behavior is inherently ambiguous or
indeterminate in its significance.

In another
way, though, Augustine’s point is a deep one, even if this is best seen by
reading it as an answer to a question that is not exactly the one he was
addressing. Materialist or naturalist
accounts of thought and its content typically suppose that they can be
explained in terms of causal relations
of some kind. The idea is that a thought
will have the content that (say) the cat
is on the mat if it bears the right sort of causal relation to the state of
affairs of the cat’s being on the mat.
Spelling out what the “right” sort of causal relation would be is where
things get very complicated. And the
main issue is that indeterminacy problems afflict every attempt to spell out
the analysis. For the state of affairs
we call the cat’s being on the mat
can also be described as a state of affairs involving a domesticated mammal’s being on the mat. So why does the fact that this state of
affairs causes the thought entail that the thought has the content the cat is on the mat as opposed to the
content a domesticated mammal is on the
mat? You can add details to the
description of the causal relation to get around this problem, but the revised
account of the causal relation will in turn face indeterminacy problems of its
own. (An example would be Fred Dretske’s
account of semantic content, which I discussed
in a post a few years ago.)

At the end
of the day, the indeterminacy can only be eliminated by simply conceptualizing the relevant causal
relata in this specific way rather
than that way. That is to say, it can be eliminated only
when there is an intellect present
which can do the needed conceptualizing.
Yet the whole point of the causal theory of content was to explain where
thoughts having a certain conceptual content come from. So any such theory must fail. It inevitably must presuppose the very thing
it was supposed to be explaining. (This
is a point which has been made in different ways by Karl Popper and Hilary Putnam,
and which I develop in “Hayek, Popper, and the Causal Theory of the Mind,” also
reprinted in Neo-Scholastic Essays.)

The deep
point implicit in what Augustine says, then -- though again, this isn’t really
the set of issues he was addressing -- is that the intellect’s grasp of
meanings is more fundamental than any behavior, gestures, utterances, aspects
of the communicative context, etc. that might be used to teach or express
meanings. Hence you are not going to be
able to explain the former in terms of the latter.
You are not going to be able to reduce intelligence to patterns of
behavior or dispositions to behavior (as the behaviorist holds), or explain it
in terms of causal relations between the human organism and aspects of its
environment (as causal theories of content hold), etc., because the behavior,
causal relations, etc. have whatever semantic associations they have only by
reference to an intellect which grasps those associations. The
intellect is itself the central and irreducible element of the semantic
situation. (It is irreducible to
inner “utterances” and other mental imagery too. When I entertain the thought that the cat is
on the mat, I might “hear” in my mind the English sentence “The cat is on the
mat,” but that auditory image is not itself
the thought. See “Kripke, Ross, and the
Immaterial Aspects of Thought” for more on this subject.)

In this
vein, Augustine also notes (at p. 29) that it is a mistake to think that
gestures like pointing are the key to understanding meaning. Pointing one’s finger is, after all, itself
just another piece of behavior susceptible of alternative interpretations, and
is not in the first place fundamentally about the thing pointed to at all:

Augustine: … I don’t much care about
aiming with the finger, because it seems to me to be a sign of the pointing-out
itself rather than of any things that are pointed out. It's like the exclamation ‘look!’ -- we
typically also aim the finger along with this exclamation, in case one sign of
the pointing-out isn’t enough.

In other
words, what pointing primarily does
is to call attention to the fact that the one pointing is trying to call our
attention to something, and only secondarily
does it indicate the thing that is being pointed to. This reflects the fact that the presence of
an intellect is fundamental to the semantic situation, and the significance of gestures,
utterances, actions, etc. is only derivative.
Imagine a garden hose lying on the ground in such a way that it seems to
“point” to a certain tree. We don’t
regard this as genuine “pointing” -- in the sense that deliberately aiming your
finger at someone is genuine pointing -- because we know that the hose does not
have an intellect and thus cannot be trying to call our attention to
something. We would regard it as genuine
pointing only if we supposed some person had come along and arranged the hose
that way in order to get us to notice the tree.

It would be
absurd, then, to try to explain how intellect gets into the picture by starting
with meaningless physical elements and their behaviors, then supposing that
some kind of “pointing” arises in sufficiently complex systems -- say, by means
of causal relations of some sort -- and then in turn supposing that intellects
arise in some subset of these systems which cross some yet higher threshold of
complexity. All of this would get things
precisely backwards. For “pointing” of
the relevant sort could arise only where there is already an intellect present, which intends by the “pointing” to
call attention to something.

Is the idea that the pointing out itself means, 'think of this' in intention? So say I cannot save my friend, but try to warn him of danger by waving my arms to get his attention, then 'point' (with my finger) to the danger -- the obvious idea is to get his mind to think of the danger; everything else was just a means or instrument? Obviously it is impossible that any of my actions or behavior 'meant' (or could be interpreted as) "the danger". My actions were not the danger. The thing itself was.

Perhaps the best example of this kind of thing is a Stop Sign in Canada/North America? The thing itself is meaningless nonsense. It means nothing to a complete foreigner, for example.

If pointing implies intellectual concepts that cannot be reduced to behaviourism, this opens the door to acknowledging that animals have intellect, or that their pointing is somehow not pointing in the same sense -- but I suspect the behaviourists would use the same reasoning against humans?

Could you explain more about why it's absurd to say intelligence arises from meaningless physical elements through complexity? I'm thinking of the process of emergence where you get self-organization. There's no way to know ahead of time what will emerge from simple beginnings, given enough iterations.

A physicalist explanation might say intentionality is simply a directional flow of energy, like a river flowing or something like that. We say living beings are intelligent simply because they have crossed a key threshold of complexity. No?

Thanks for a very interesting post. I hadn't heard of that particular dialogue by St. Augustine before. Thanks for drawing it to my attention.

You write: "Any collection of behaviors, gestures, and even utterances, will be ambiguous or indeterminate between different possible interpretations." I agree. But what would you say to a diehard physicalist who maintained that human beings are neurally hard-wired to find certain behaviors (e.g. pointing by other individuals) salient, but not others, and that people are also neurologically predisposed to interpret gestures in certain particular ways, rather than in other ways which are equally compatible with the behavior observed?

You also write: "...we want to know exactly how an intelligent person figures out the meaning, given that the behavior is inherently ambiguous or indeterminate in its significance." That's an excellent question. How would you answer it?

While I agree with a lot of what you are saying here, you don't distinguish between being completely indeterminate (which is the incoherent idea) and being somewhat indeterminate. It is obvious that our words are not completely indeterminate, but it is also pretty clear that they are somewhat indeterminate, e.g. if you saw every cat ancestor that ever lived, you would not be able to determine an exact point where there was a "first cat" such that all the ancestors before that were not cats. And this is not just because you don't understand "this is a cat" well enough. It is because it is not defined well enough, to you or to anyone, to do that in the first place.

This is far from obvious, at least apart from a clear account of what you mean by "word." A vocal sound (qua sound) or a mark on a piece of paper (qua mark) is completely indeterminate. I'm inclined to think it would remain so even if we knew someone intended it to mean something; unless we were already familiar with the language in question, we still wouldn't have even the beginning of a clue about what it was intended to mean.

But you appear rather to be thinking of a "word" as already having an established meaning in the mind of the hearer/reader. If so, then I think you're right that such a word is "somewhat indeterminate" in the sense that the user probably can't sharply differentiate between its referents and everything else, but that's a bit beside the present point.

Re the indeterminacy argument, the last post had me wondering about a possible paraphrase: a physical object can (re)present itself determinately, but anything else it can represent only indeterminately; ergo, any precise concept of something non-physical (a googolplex-o-gon, an instance of modus ponens, and so on) requires an immaterial intellect, etc. But as Augustine points (ha) out, an object cannot even present itself as itself without a mind to intend it.

Sandymount: this opens the door to acknowledging animals have intellect or that their pointing is somehow not pointing in the same sense but I suspect the behaviourists would use the same reason against humans?

Yes, an (irrational) animal that points is not doing the same thing as a man who points. We can correctly apply the same view to human behaviour, but of course I do not need to interpret my own behaviour to know that I really do possess determinate concepts and thus have an immaterial intellect. That we are not directly aware of concepts in someone else is simply the old question of Other Minds — a single instance of determinate thought, anywhere, anywhen, is sufficient to demonstrate the conclusion; once we have established that I have a mind (and not merely a brain), there is no reason to suppose the corresponding behaviour in other men does not have the same explanation.

John Moore: We say living beings are intelligent simply because they have crossed a key threshold of complexity. No?

Yes, no! This issue was raised in the previous thread (and Ed has helpfully provided links to many other relevant posts on this topic). The point is that a mental concept (vs. image) is not the kind of thing that can be built out of non-concepts. It's like saying that middle C is a colour — well, we can show that it's not just one colour, but what if we got a complex-enough arrangement of colours? How do we know that there is not some threshold, enough colours, spread out in enough dimensions, that we might not eventually get to middle C? But of course the answer is obvious: if you understand why middle C is not a colour, you will see that it is not the kind of thing that can be any arrangement of colours. Adding complexity just adds more of something that doesn't help.

Vincent Torley: [what about claiming] people are also neurologically predisposed to interpret gestures in certain particular ways

I'd say they're good at guessing. But the point (ha again) is that if minds were just brains instead of intellects, then we'd be guessing — lucky or not — even when we interpreted our own thoughts, and so being determinately right (such as when making a valid logical inference) would be strictly ruled out. (In fact, even saying that we "interpret" our own thoughts would lead to infinite regression, I think.)

"...we want to know exactly how an intelligent person figures out the meaning"

I think the answer is that ultimately we have forms in our intellects, so there is a point (not ha) at which we don't have to "figure out" anything — the actual concept is directly present to us.

Entirelyuseless: It is obvious that our words are not completely indeterminate, but it is also pretty clear that they are somewhat indeterminate

"Indeterminacy" in this context is not the same thing as "vagueness". (Something can be determinately vague!) If you understand a valid instance of modus ponens, that doesn't mean you have a picture that is adequate enough to distinguish it from other logical forms; it means you have the very concept, or else you couldn't say that it was in fact logically valid.

"Emergence" is modern-speak for having the relevant causal powers already baked in at lower levels. If atoms have no causal powers to participate in intellect, they don't acquire them through sheer force of numbers or arrangement.

And intellect (in the Scholastic/Thomistic understanding of the word) isn't the sort of thing that can "emerge," especially through mere complexity. The intellect is incorporeal/immaterial and simple, not corporeal/material and complex.

The physicalist’s reality of emergence via “*Self-Assembly*” sums to nonsense:

As we’ve seen elsewhere around here, there is, on Materialism, no (actual) “*Man*” at all – anywhere – but only his singular ontology of his “singular and seamless continuum of particle in motion” (or whatever) leaving him with only arbitrary cutting points. Hence there are no (actual) “stages” of “man” nor any possibility of “*emerging*” properties.

David Oderberg briefly touches on the physicalist’s problem of “Self-Assembly” finding coherence:

“Metaphysically, moreover, the very concept of self-organization is suspect. More precisely, the idea that an entity can organize itself into *existence*, which is what is at issue, is deeply suspicious. For if an entity – any entity – is to organize itself into existence, it has to exist before it can do *any* organizing, let alone organizing its own existence; so it has to exist before it exists, which is absurd. This means that self-organizing systems are really systems that are organized into existence from *without*, as a convection cell is organized into existence by its environment, albeit with apparent spontaneity and unpredictability. Once in existence, there is no conceptual problem with the entity’s continually organizing itself through self-regulating, homeostatic, or other mechanisms that involve, say, taking in energy from the environment, utilizing it and expelling waste products. But that it could organize its entry into the world in the *first* place looks like as good a case of metaphysical impossibility as one is likely to get.”

While the Christian has the metaphysical and scientific wherewithal to *distinguish* the Human Being – and Mind – and so on – as an actual substance, a real entity, it doesn’t seem that the Naturalist does. Not “really”, as that pesky and “singular and seamless continuum of particle in motion” (or whatever) leaves him with only arbitrary cutting points.

David Bentley Hart carries the state of affairs into locations the physicalist cannot reach as he describes consciousness:

“[consciousness] is that….absolutely singular and indivisible reality which no inventory of material constituents and physical events will ever be able to eliminate. Here again, and as nowhere else, we are dealing with an irreducibly primordial datum.”

David Hart further echoes Oderberg regarding emerging properties:

“A true physicalism makes no allowance for emergent properties in nature that are not already implicit in their causes. Unless, then, one is positing the existence of proto-conscious material elements, particles of intentionality and awareness that are in some inconceivable way already rational and subjective, and that can add up to the unified perspective of a single conscious subject (which seems a quite fantastic notion), one is really just talking about some marvelously inexplicable transition from the undirected, mindless causality of mechanistic matter to the intentional unity of consciousness. Talk of emergence in purely physical terms, then, really does not seem conspicuously better than talk of magic.”

Moreover (adding to Hart's point as quoted by scbrownlhrm), even if a true physicalist emergence could get us to consciousness, that alone wouldn't be enough to get us to intellect. A grasshopper is (presumably) a unified conscious subject.

I'm actually fairly sympathetic to panexperientialism, but it doesn't do away with the need for higher-level forms.

That last point Scott raises is important. A lot of these criticisms of Physicalism presuppose a Mechanistic understanding of Matter, and in consequence would be far less damaging to Physicalists such as Ellis, Martin, and maybe Nagel, who accept immanent teleology.

For what it's worth, I think the Scholastics were wrong to so neglect the special character of consciousness.

"Emergence" is indeed modern-speak for "let's pretend we know more than we actually do". "Y emerges from X" signifies a correlation but leaves its type entirely open.

If some kind of epistemic relation between X and Y (for example some specific type of causality) is already known, then adding "emergence" to it will add nothing. If no epistemic relation is known, adding "emergence" will not magically (see D. B. Hart quote above) produce one.

Thus the "emergence of Y" is nothing but an unnecessary (re-)affirmation that we acknowledge the existence of Y. But it sounds kinda sciency, which is why I suppose materialists are drawn to it.

DB Hart: "Unless, then, one is positing the existence of proto-conscious material elements, particles of intentionality and awareness that are in some inconceivable way already rational and subjective, and that can add up to the unified perspective of a single conscious subject (which seems a quite fantastic notion)"

Scott: "A grasshopper is (presumably) a unified conscious subject."

Or as Mary Midgley put it in "The Myths We Live By": Thoughts are not granular. And "[...] neither do cultures have particles." One might as well believe in memes.

entirely useless: "It is obvious that our words are not completely indeterminate"

The meaning of a word is not some kind of "internal pointing" or disposition of the mind towards an object. Rather, meaning is immanent in our use of words.

We may disagree about whether a certain structure built by humans should be called a "house" or not. But our disagreement is not due to some inherent vagueness (see Mr. Green@November 17, 2015 at 7:57 AM) of the term "house" (that is, with regard to whatever internal rules a language may have). What can be indeterminate is not a word but our application of it in the context of a certain practice. We differ, as Wittgenstein put it, not in opinion (what we say), but in form of life. We certainly do not differ with regard to some "approximation to a reality of houses" which exists independently of us.

Thanks for everyone commenting about emergence or complexity or self-organization. It's interesting to see how the materialist position and the theistic position seem to be worlds apart. Anyway, I just wanted to repeat a few key points that the materialist proposes and that you guys might not have addressed directly:

a) Intentionality is a physical flow of energy.
b) Intellect is equivalent to goal pursuit.

If the theist assumes that intentionality or intellect or consciousness must be non-physical things, then these materialist arguments obviously make no sense, but I'm hoping to read more direct arguments explaining why the materialist arguments are themselves illogical or self-contradictory. Thanks!

a) Intentionality is a physical flow of energy.
b) Intellect is equivalent to goal pursuit.

I don't understand what these are supposed to mean. Taxis behavior in earthworms and bacteria is goal pursuit; what would it mean to say that this is equivalent to intellect? Electric shocks from amber could be called a physical flow of energy; what would someone be saying if they claimed that this is intentionality?

Not all energy flows are intentionality, of course. But all intentionality is energy flow. If you want to understand, there are ways to do that.

Do earthworms and bacteria actually pursue goals? What does it mean to have a goal? Do rivers have a goal of getting to the sea, or is there some level of complexity required before we call an energy-flow goal pursuit?

Again, you can understand the materialist's argument if you read and think about it. Or you can dismiss it out of hand. I'm hoping more people will take this argument seriously and give me strong criticisms of it.

"a) Intentionality is a physical flow of energy. / b) Intellect is equivalent to goal pursuit. / If the theist assumes that intentionality or intellect or consciousness must be non-physical things, then these materialist arguments obviously make no sense, but I'm hoping to read more direct arguments explaining why the materialist arguments are themselves illogical or self-contradictory. Thanks!"

No you're not. If you were hoping that, you'd click on the links to the right of this blog, where it says "Blog Archive."

"Again, you can understand the materialist's argument if you read and think about it. Or you can dismiss it out of hand. I'm hoping more people will take this argument seriously and give me strong criticisms of it."

No you're not. If you were hoping that, you would have provided an argument.

To be clear about what has gone wrong here... You've responded to Brandon by saying that not all energy flows are intentionality but all intentionality is energy flow.

Well, that's fine, the materialist can go ahead and claim that. But now we don't know what he means. There's no claim here that's determinate enough to warrant a counterargument.

(Fred Dretske does, by the way, argue that all nomological causation is intentionality, even if an exceedingly unremarkable and not-necessarily-mental variety. That's probably the more interesting materialist route to take, here.)

The same problem arises by denying that earthworms and bacteria pursue goals. In one sense they do. In another sense, in which humans do pursue goals, they don't. We can go ahead and use that latter sense, but then defining intellect in those terms is not helpful to the materialist.

Please pardon the following tangent, but as the spring semester looms, I'm feeling desperate! I've taught this text numerous times in medieval philosophy courses with varying levels of success. I find medieval philosophy terribly challenging to teach in general, as compared with ancient or modern. I find it hard to select what texts to read, especially from voluminous writers like Augustine and Aquinas. By the same token, I don't know how to convey the breadth of systematic thinkers like these either, especially squeezed into one semester. I also don't know how to do justice to the Islamic and Jewish traditions. I always feel like I am being so specific, as when we focus on short passages (ontological argument, Five Ways, etc.), that the larger context of a thinker or movement is not being conveyed, or that I am painting with such broad brush strokes ("Aquinas is an Aristotelian") that the details and substance are lost. Does anyone have recommendations for how to approach putting together a syllabus for medieval philosophy? Each course will have about 27 lectures.

Nobody is saying the Materialist’s model is incoherent. Rather, it’s just that the model isn’t saying anything – as it houses no capacity to rise above the epistemic.

The model is simply equivocating and conflating on the "intention" and "pursuit".

The Naturalist’s / Materialist’s entire metaphysical landscape is constituted of but *one* start/stop geography, that being the "singular and seamless continuum of particle (or whatever) in motion".

The Materialist's argument sums, therefore, to this:

Particle Cascade = Intention

But that's unhelpful because:

Particle Cascade = EVERYTHING

Hence there is no argument being made, as what is actually being posited is this:

Particle Cascade = Particle Cascade

Now, PC = PC *is* “coherent” as it does not contradict “itself”. But nobody cares because nothing is being argued – or defined – in such a model. At bottom it just *is* the metaphysical equivalent of [A = A].

Intention, Ought, Good, Evil, Love, Feeling, Hunger, Heat, Cold, Aboutness, Life, Non-Life, Space, Dirt, Man....... the whole show just is one "singular and seamless continuum of particle (or whatever) in motion" leaving the Materialist (or PN) with only arbitrary cutting points. As each cutting point is factually arbitrary – the stuff of semantic/epistemic – then it is factually the case that there are no (actual) "stages" of "man" and thus no (ontological) possibility of "*emerging*" properties.

There are no metaphysical *categories* -- plural. Just one, singular, seamless continuum of particle (or whatever) in motion.

John Moore: Not all energy flows are intentionality, of course. But all intentionality is energy flow. If you want to understand, there are ways to do that. [...] Again, you can understand the materialist's argument if you read and think about it. Or you can dismiss it out of hand.

It is actually you who should "read and think about it" more, particularly with regard to the expression "all intentionality is energy flow".

What you can at best assert with any evidence-backed confidence is that all intentionality requires certain material phenomena (which I take to be what "energy flow" is supposed to refer to) in order to occur. Which (at least in our modern times) is pretty much a triviality that nobody will deny, materialist or not.

If you can explain how and why a certain instance of an "energy flow" is necessarily connected to some particular instance of intentionality -- as opposed to simply noting that it is -- you will have advanced the cause of materialism. No one has so far even got close to giving such an explanation. All that has ever been established are inductive correlations between certain physical conditions and first person experiences. (Even these are rather shaky so far, advances in neuroimaging and other techniques notwithstanding.)

The main reason why you need to "read and think about it" more is that you are not differentiating between human experiences and the material conditions which make them possible, which is indicated by your careless use of "is". My experience of shopping in a supermarket is a phenomenon quite distinct from the movements of my body which occur during my search for a pint of milk.

In addition, "energy flow" is a highly abstract concept, derived from certain laws appearing in theories in the natural sciences, these laws being themselves abstract generalizations of experimental results. Building your materialist case from such abstractions and generalizations, which are at least twice removed from the reality of the phenomena we actually experience, is by itself a concern which renders the materialist project dubious.

Last, the issue of materialism is not one that divides its advocates and opponents into theists and atheists.

I was only referring to this business of "Particle Cascade = Particle Cascade", which is what "Energy Flow = Intention" unpacks to, or breaks down to. John Moore asked us to show how "it" or "that" is self-contradicting. In principle it isn't, in that "A = A" does not self-negate; of course it's just an empty, vacuous statement, as described, summing merely to "A = A". If Energy Flow is "EF", then the claim amounts to nothing more than EF = EF.

Overall, on the umbrella point, laubadetriste is correct. At the end of the day, not only is Material*ism* incoherent, but Materialism's "speak" on anything "intentional" ends up incoherent. Feser has several items on that latter point, but one to give a bit of context is this:

"The main reason why you need to "read and think about it" more is that you are not differentiating between human experiences and the material conditions which make them possible."

Yes, that's the whole thing. The materialist's project is precisely to unify human experience with material conditions. To differentiate them is the theist's project.

By the way, I know I need to read and think more - that's why I'm here. I have been reading Prof. Feser's blog for years, so it's not as if I'm a total newbie.

Anyway, various commenters point out that I haven't made the materialist argument here, but I've merely hinted at it, and that's true. I don't think the argument fits in a comment box. But here I'll just add some detail:

The energy flow I'm talking about is just what physicists talk about. It's light, for example, as it strikes an object (such as an apple) and then shines into our eye. Our retina changes that light energy into neuro-electric energy that flows through our brain into the visual cortex and then through myriad neural network channels to many other parts of our brain. The same neuro-electric energy may eventually flow out of our brain through motor nerves to our muscles, where the energy changes to kinetic motion. Our mouth and throat muscles move so that we say "That's an apple." Now it's sound wave energy.

As you see, the energy changes into different forms, but it's all one flow of purely physical energy from the apple to the spoken word "apple."

And so the argument is that this energy flow is how the word "apple" means an apple. This is one aspect of intentionality. And the argument is that all aspects of intentionality can be explained similarly using energy flows through the brain, with input, processing and output.

Professor Feser, I believe that this is the same problem which I have when materialists claim that a simple substrate plus a simple rule can generate complexity ("emergent properties"): they never define "complexity", but they all seem to use the implicit definition of "requiring a large number of concepts to be understood by a rational mind". At least, I've never seen a materialist attempt to explain why 20 electrons, regardless of arrangement, are not 1024 times more complex than 10 electrons, without recourse to a mind capable of concepts.
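Purely as an aside on the arithmetic: the "1024 times" figure checks out under a state-counting measure of complexity, which is an assumption introduced here for illustration only, since the comment does not define "complexity". If each electron contributes one binary degree of freedom, 20 electrons admit 2^20 configurations against 2^10 for 10 electrons:

```python
# Illustrative sketch only: treat "complexity" as the number of
# binary configurations, one bit per electron. This measure is an
# assumption made for the sake of the arithmetic; nothing in the
# comment above commits to it.
def configurations(n_electrons: int) -> int:
    """Number of binary states of n independent two-state electrons."""
    return 2 ** n_electrons

ratio = configurations(20) // configurations(10)
print(ratio)  # 1024
```

Which is exactly the commenter's point: the ratio is well defined only once a mind has chosen what to count.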

scbrownlhrm said... In short, nothing is anything. Which, of course, is all the Materialist can say at the end of the day. [...] At the end of the day, not only is Material*ism* incoherent, but Materialism's "speak" on anything "intentional" ends up incoherent.

John Moore:

The materialist's project is precisely to unify human experience with material conditions. To differentiate them is the theist's project.

First, there are lots of atheists who deny materialism. You again make it sound as if the distinction between theism and atheism substantially hinges on whether one is a materialist. It doesn't.

Second, the unification of phenomena of experience and phenomena describable in material language is a hopeless endeavour, as hopeless as finding a common logical space which conceptually unifies "he is 6 feet tall" and "he is a good boy". These can obviously be true of the same person, but they operate on completely separate levels of description, neither of which can be derived or generated from the other.

Your "That's an apple" example has the same problem. What any "energy flow" account, no matter how detailed, misses entirely is the content of "that's an apple", what it means to the speaker and how he experiences it. In short, it misses all of the intentional bits. Which means it is not a complete description of reality (and could not be made into one either, at least not merely by increasing the number of material details described).

What you have described is part of what happens physically as someone is having an experience. At best these things indicate that an experience is taking place, but they are not the same as the experience, nor do they come anywhere near capturing or entailing it. If you reply that moving muscles, producing sounds and having a certain electro-chemical brain state just are what meaning and experience amount to, you will have merely restated the materialist claim, but you will not have strengthened it. (See scbrownlhrm's "A=A" comments.)

In short, your description is not by any means an explanation of intentionality. Hence, contrary to what you assert at the end, you have not made an argument, you have merely restated the materialist claim.

Non-intentional attempts to describe intentional phenomena always end up either dropping intentionality altogether (eliminativism) or smuggling it back in through a back door. (Which has been explained many times by Edward Feser and others, in several different threads on this blog.)

Here I have laid out (very briefly) another source of confusion: the puzzle that arises from the (misguided) idea that there must be some X which makes the equation "brain activity + X = mental activity" true (with X possibly being "nothing").

John Moore: The energy flow I'm talking about is just what physicists talk about. It's light, for example, as it strikes an object (such as an apple) and then shines into our eye. Our retina changes that light energy into neuro-electric energy that flows through our brain into the visual cortex and then through myriad neural network channels to many other parts of our brain. The same neuro-electric energy may eventually flow out of our brain through motor nerves to our muscles, where the energy changes to kinetic motion. Our mouth and throat muscles move so that we say "That's an apple." Now it's sound wave energy.

As you see, the energy changes into different forms, but it's all one flow of purely physical energy from the apple to the spoken word "apple."

And so the argument is that this energy flow is how the word "apple" means an apple. This is one aspect of intentionality. And the argument is that all aspects of intentionality can be explained similarly using energy flows through the brain, with input, processing and output.

I know I'm not the first to say so, but this isn't an "argument" or even a hint at one; it's merely a statement, and a false one at that. Even aside from possible nitpicks about your summary of the physics, the "energy flow" in question here is very obviously not "how the word 'apple' means an apple," and no one who was not already in thrall to a tendentious (non-)theory could possibly think otherwise.

The problem isn't only that, as pck has rightly noted, the "energy flow" simply fails to capture anything whatsoever of the intentionality underlying the use of the word "apple." It's also possible for two entirely different "energy flows" to "mean" an apple in the same (non-)way, as when I use the word at two different times (and if e.g. I've had my larynx removed in between, the two "energy flows" will be very different)—or for that matter when two different people use it.

That means that even if the "energy flow" were directly observable in the first place (which it isn't), since different flows are associated with different meanings, the only way you could correlate them would be to have independent knowledge of the meanings. On the non-theory to which you're referring, that is of course just what we can't have; if the "energy flow" is all there is, then it's all there is and that's that. Moreover, even if it were possible to correlate the "energy flow" and the meaning after all, they'd still have to be two different things, or there would be nothing to "correlate."

Eric MacDonald: Fascinating, though, to see Augustine arguing about indeterminacy in this way, since he does not allow for such indeterminacy in some of his biblical exegesis.

entirelyuseless: Eric: you're wrong about Augustine. He says that since God knows all things, every sentence of Scripture has every true meaning that anyone could ever think of.

I don't think this is right.

That is, however true it may be that, "[Augustine] says that since God knows all things, every sentence of Scripture has every true meaning that anyone could ever think of", I don't think it follows of necessity that Eric is wrong in what he says about Augustine.

If in any of his biblical exegesis Augustine asserts, claims, intimates or suggests that there is only one true meaning for a passage, verse, phrase or word, and even if he does so only rhetorically, then it follows that Augustine, in his biblical exegesis, sometimes does not allow that indeterminacy to which Eric has referred.

So, if just one instance can be found in which Augustine asserts, claims, intimates or suggests that there is only one true meaning for some passage, some verse, some phrase or some word, then I think one would be hard-pressed to rationally disagree that Eric is right in what he said about Augustine.

Augustine did write that many words (in Scripture) do not retain one uniform signification. But that many words do not, does not mean that all words do not; and Augustine does not at all come across as one who would use 'many' when he means 'all'.

So, I think it is quite safe, i.e., rational, to take Augustine as implying that some words do retain a single signification when he writes that many words do not. And this is to say that I don’t see that anything has been said which renders credible the claim that Eric is wrong in what he says about Augustine.

Great blog! Still working my way back through all the posts but am now on Mar. 11, 2013. Also just finished Scholastic Metaphysics and enjoyed it, but found some of it very tough for a non-philosopher like me. Anyway, your Aquinas is on its way from Amazon as I type.

@John Moore: "Yes, that's the whole thing. The materialist's project is precisely to unify human experience with material conditions. To differentiate them is the theist's project."

To say that unifying human experience with material conditions is the materialist's project is more revealing than you may realize, for of course if that is the project, then the truth of the matter, whether human experience can be unified with material conditions, is prejudged. What if it should be the case that the two are disparate? On that project, so much the worse for the truth...

Nor is there a "theist's project." For one thing, theisms and theists on their own accounts have different ends. And for another, a number of those accounts are rather more unifying than that "materialist's project." The beatific vision comes to mind.

"Anyway, various commenters point out that I haven't made the materialist argument here, but I've merely hinted at it, and that's true. I don't think the argument fits in a comment box. But here I'll just add some detail..."

@pck: What can be indeterminate is not a word but our application of it in the context of a certain practice.

It seems to me that the reason a word can be applied with different meanings in various (perhaps encrypted) contexts is that the word itself is indeterminate. The normal contextual prompts and memory associations we get from the "flow and complexity" of a modifying word or phrase, a sentence, and a story are what indicate the types, layers, and shades of meaning. Similarly, fixating on elementary particles and energy as determinate of meaning is like saying that individual sounds/letters, considered as the fundamental components of language, hold some meaning. They don't, but there are combinations and structures of sounds/letters that do.

OK, I'm starting to understand better. Commenters pck and scbrownlhrm point out that I have just stated what the materialist argument is, and I haven't actually provided evidence or reasons for it. But at least we've taken a step forward in this discussion.

I agree with scbrownlhrm that my idea is like A=A, but I don't see why he thinks this is an "empty, vacuous statement." I think the energy-flow model is a useful tool for understanding how words mean things and also why "the word itself is indeterminate."

pck writes: "At best these things indicate that an experience is taking place, but they are not the same as the experience." As for me, I'm hunting for evidence that human experience is something else besides physical energy flow through a neural network, and that's why I keep reading Prof. Feser's blog! But I'm looking for practical things that I can relate directly to neural network programming.

It's because I'm working on artificial intelligence, and I think the theistic position says it's impossible to build a machine that can be intellectually and morally equivalent to a human being. So I want to know if the grand AI project is really futile or not.

Again, you can understand the materialist's argument if you read and think about it. Or you can dismiss it out of hand. I'm hoping more people will take this argument seriously and give me strong criticisms of it.

I've read a lot of materialists, and I've never come across a materialist saying the sorts of things you attribute to materialists. It could be that you have a very specific sort of materialism in mind, of course.

Do earthworms and bacteria actually pursue goals? What does it mean to have a goal? Do rivers have a goal of getting to the sea, or is there some level of complexity required before we call an energy-flow goal pursuit?

Why would anyone, particularly a materialist, deny that earthworms have goals, however low-level they may be? Cartesians, holding that the mind is completely separate from the body, have reasons to hold that earthworms, being just simple bodies, have no goals, but a materialist is never going to have a problem with a body pursuing goals. And if complexity is supposed to be the magic ingredient, how does it do it? River flow, for instance, is an extraordinarily complex system, not a simple one; I'm not sure what measure of complexity would give us, as an unequivocal result, that the nervous system is more complex than an entire river. Likewise, we can identify what looks like moisture-seeking behavior in humans, chimpanzees, rats, earthworms, and bacteria. It takes rather different forms in each case, but we can also identify similarities. I can't see anything in materialism itself to require that they not be broadly the same.

Not all energy flows are intentionality, of course. But all intentionality is energy flow. If you want to understand, there are ways to do that.

No doubt there are such ways, but it would seem that they all involve starting the right energy flows. And there would have to be a difference between the energy flow that is understanding and the energy flows that are thinking about things while not understanding them. To identify the right energy flows we seem to need independent means of identifying the intentionality they are supposed to be, without first determining the kind of energy flow; in which case it doesn't seem that the energy flow account is telling us much.

It's because I'm working on artificial intelligence, and I think the theistic position says it's impossible to build a machine that can be intellectually and morally equivalent to a human being.

I don't see why you think theism in particular would require us to say it's impossible to build such a machine.

John Moore: As you see, the energy changes into different forms, but it's all one flow of purely physical energy from the apple to the spoken word "apple."

Except it's not "one flow", it's just an arbitrary collection of particles (depending on the sort of materialism) — the only way you can pick out "one" this or "one" that is if (a) there is something making this group of particles or that into some kind of unity, which means you have Aristotelian substantial forms or some variation thereupon, in which case you've gone beyond mere materialism, or (b) there is a unity imposed from without (actually or derivatively) from our minds, which again seems to lead us beyond mere materialism (on pain of circularity or infinite regress). (The previously cited articles flesh out such lines of thinking.)

I think the theistic position says it's impossible to build a machine that can be intellectually and morally equivalent to a human being. So I want to know if the grand AI project is really futile or not.

If the "AI project" is to build a machine with a soul, then sure, it's impossible. But we don't expect artificial flowers or hearts or lights to be real flowers or hearts or lights; we simply expect them to be similar to the real things in some useful way. Surely the AI project is to build computers that are useful in some way that human intellectual power is. As of 2015, it's impossible to build a computer that plays chess in just the way that a human grandmaster does; but we can and have built computers that beat grandmasters by operating in a very different way.

@Brandon: I don't know whether earthworms have goals or not. Maybe it depends on how you define a goal. For me, a goal is a particular kind of neural network construct that specifies an unfulfilled condition. A goal is a set of neurons in your brain connected up in such a way that an input results in one of two possible outputs: either "Yes, we're done" or else "No, keep trying."

So it's a question of how complex the earthworm's nervous system is. But it's clear that rivers (and garden hoses) do not have goals.
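John Moore's definition of a goal, a construct that checks an unfulfilled condition and answers either "Yes, we're done" or "No, keep trying", can be sketched very loosely in ordinary code rather than neurons. Everything here (the class, the names, the moisture example) is an illustrative assumption, not anything specified in the discussion:

```python
# Loose sketch of John Moore's "goal" as a condition-checking
# construct: given a state, it answers "done" or "keep trying".
# All names and the threshold example are illustrative assumptions.
from typing import Callable

class Goal:
    def __init__(self, satisfied: Callable[[float], bool]):
        # A goal is just a test for an (as yet) unfulfilled condition.
        self.satisfied = satisfied

    def check(self, state: float) -> str:
        return "Yes, we're done" if self.satisfied(state) else "No, keep trying"

# A "find moisture" goal: fulfilled once moisture crosses a threshold.
find_moisture = Goal(lambda moisture: moisture >= 0.5)
print(find_moisture.check(0.2))  # No, keep trying
print(find_moisture.check(0.8))  # Yes, we're done
```

Notice that nothing in such a sketch mentions neurons, and nothing distinguishes a "goal" from any other boolean test, which is roughly the worry raised elsewhere in the thread about whether the neural wiring in the definition is doing any real work.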

@Mr. Green: I think the energy flow through the brain is a single flow because it's like an electronic circuit. Our nerves maintain the energy flow's integrity while conducting it through our bodies. And if you cut off the input source, the output also stops. This is a lot like a computer IC chip, by the way.

You're right that most AI researchers are trying to build machines that are useful to humanity, but my focus is on making artificial people who will be alive and thinking and feeling, much like you and me. On the other hand, you make a good point that true AI machines will do things differently from us. We can be playing the same game but using different strategies.

I'm a bit new to this, and I didn't read through the whole comment string to see if my question has been addressed there, but Professor Feser makes the point that indeterminacy can only be eliminated "when there is an intellect present which can do the needed conceptualizing."

My question is, the Pythagorean Theorem was true when dinosaurs ruled the earth, T/F? I believe the answer is true, but I don't know why because there was no intellect present on the scene to do the conceptualizing. Unless, of course, the theorem existed in a divine intellect, but the atheist or the materialist would not permit that line of reasoning.

@pck: What can be indeterminate is not a word but our application of it in the context of a certain practice.

It seems to me like the reason a word can be applied with different meanings in various (perhaps encrypted) contexts is because the word itself is indeterminate. The normal contextual prompts and memory associations we get from the "flow and complexity" of a modifying word or phrase, a sentence, and story are what indicate the types, layers, and shades of meaning.

The trouble here is in the assertion that "we get something from a word", as if the word in and of itself had any power to create meaning (or any other kind of intellectual illumination). But language by itself has no such powers, just as mathematics, by its internal rules of symbolic manipulation, cannot manage to say anything (= give any facts) about the world. Meaning is created through the use of words, and "use" means to apply words in the context of our actions. These actions can be non-linguistic ("I chopped a block of wood") or linguistic themselves (setting up a framework for a scientific theory).

The meaning is never in the symbols (and/or in the internal rules which relate them to each other) used to express whatever we want to express. (This is one reason why the brain cannot possibly be a symbol-processor, as is occasionally claimed in AI.) Thus "modifying a word" or phrase amounts to changing or extending its use, for example in a metaphor ("the foot of a mountain") or in applying a paradigm usually applied to humans ("my brother looks pensive") to other animals ("the cat looked pensive").

The question "What is pensiveness really?", taken in the same way as "Is there really no oxygen on the moon?", is bound to create confusion, since the former calls for conceptual clarification while the latter needs empirical investigation. There can be no "truth about pensiveness" in the same way there can be truth about oxygen on the moon.

As for me, I'm hunting for evidence that human experience is something else besides physical energy flow through a neural network, and that's why I keep reading Prof. Feser's blog! But I'm looking for practical things that I can relate directly to neural network programming.

This is going to be difficult because of a central (usually unexamined) premise of AI, which is that intelligence is separable from its human context. The meaning (= use) of the term "intelligence" is so deeply paradigmatically bound up with human abilities that a machine being intelligent is a conceptual impossibility, no matter what its performance is. I have written more extensively about this here. (Please ignore the ensuing discussion with what turned out to be a troll.)

It's because I'm working on artificial intelligence, and I think the theistic position says it's impossible to build a machine that can be intellectually and morally equivalent to a human being. So I want to know if the grand AI project is really futile or not.

As I argue in the article linked to above, the only shot at creating intelligence that an atheist has (a theist would still object for other reasons) would be to create androids, that is, beings made by humans which humans could relate to in the same way that we relate to other humans. (This is one reason why the Turing test is meaningless.) Computing thought, meaning, intelligence and so on is a conceptual impossibility. One central fallacy is to think that performance is all that matters.

Some literature which deals more extensively with the above arguments (and a lot more) can be found here and here.

My question is, the Pythagorean Theorem was true when dinosaurs ruled the earth, T/F? I believe the answer is true, but I don't know why because there was no intellect present on the scene to do the conceptualizing. Unless, of course, the theorem existed in a divine intellect, but the atheist or the materialist would not permit that line of reasoning.

I think the question as presented is nonsensical and therefore has no intelligible answer. It's an "if a tree falls in the forest" kind of puzzle but with no tree having even fallen.

While the question of divine intellect is very interesting, I don't think we need to invoke that here.

Recall that mathematical truth is not adjudicated by how successfully a mathematical statement can be applied, but instead by reference to rules used independently of such applicability. Mathematical truth is ruled in the same way a win in chess is ruled. Hence to say that 1+1=2, the P.T., and so on, are (or are not) true at all times and in every corner of the universe is to miss the point, just as it would miss the point to say that "Certain positions on a chess board constitute a victory in chess, but what about on Jupiter, where the board would be crushed by gravity?". Or you could ask "Is a chess victory 'not absolute' because adjudicating a win requires the players to see and recognize a winning position, but what about blind men or people who don't know chess?".[*] Clearly, all of this does not raise any issue which might have an intelligible or useful answer.

Where and when a mathematical theorem is proved or a chess game is won does not matter since no spatio-temporal circumstances figure in the respective adjudications of success.

Thus while it is true that only under certain spatio-temporal conditions it is possible to conduct a mathematical proof or play chess, these conditions must not be confused or conflated with the success conditions for mathematical proof or victory in chess.

[*] I think I just caught a glimpse into how Sam Harris "thinks". Very troubling. It's like looking down a well-ordering of the real numbers, but without the rule-based guidance.

As for me, I'm hunting for evidence that human experience is something else besides physical energy flow through a neural network [...] I'm looking for practical things that I can relate directly to neural network programming.

The issues in question here are not of a practical nature, but of a conceptual one. The question whether AI is possible is not going to be solved by experiment, but only by conceptual clarification.

It's because I'm working on artificial intelligence, and I think the theistic position says it's impossible to build a machine that can be intellectually and morally equivalent to a human being. So I want to know if the grand AI project is really futile or not.

One does not need to invoke theism in order to criticize the concepts of AI. As I have argued here[*], one of AI's biggest problems is its tacit and unexamined premise that intelligence is a notion which is separable from its human context. I conclude that it is not (which is one reason why the Turing test is meaningless). I further argue that if you want to have a shot at human-created intelligence at all, you will need to go much further than mere computation and construct androids to which humans can relate in ways sufficiently similar to how we relate to other humans. Personally I doubt we will ever see this, and theists will still object to the android project for different reasons, but the above outline is what an atheist would minimally have to achieve in a project which could intelligibly say of itself that it has created intelligent behaviour.

Some literature that might be of help with your AI questions can be found here and here.

[*] Please ignore the ensuing discussion with what turned out to be a troll there.

First: laubadetriste’s point about your a priori itself likely (at some point) forcing you to jump ship: “To say that unifying human experience with material conditions is the materialist's project is more revealing than you may realize, for of course if that is the project, then the truth of the matter, whether human experience can be unified with material conditions, is prejudged. What if it should be the case that the two are disparate? On that project, so much the worse for the truth…..”

Second: Simply that you seem entirely unaware that it is nothing more than a composite of sloppy metaphysical lines that accounts for the straw man of A.I. being levied against the Theist (on the one hand) or offered as a coherent (non-eliminative) model for materialism (on the other hand). As if A.I. means anything at all to a system of meaning-makers housed atop a system of metaphysical seamlessness "through and through". Teleology's void cannot be so easily "pretended / make-believed" into existence (or out of existence) merely by pushing molecules around.

Of course, as adeptly spelled out elsewhere, "Human Nature" is a metaphysical non-entity within Naturalism, and hence arbitrary question-begging in a bizarre form of Functionalism is all that the Naturalist has left there in his means and ends comprised solely of nature's "…..singular and seamless continuum of particle (or whatever) in motion….." As "A = A" is the bottom of the whole show, there just are no such things as "stages" and hence no "emerged properties". Perhaps "Consciousness" and an odd and sloppy Non-Theistic "potentiality" can all work together to allow the Naturalist to feel like he's actually found coherence. Well….. of course not….. but the Naturalist can hope.

On all fronts it seems that all available data sets and philosophical truth claims lead us to the unavoidable conclusion that when the Materialist speaks of A.I. or of bodiless heads, or what have you, he is actually speaking about something akin to zombies. The semantics in all of that brings us to what is presently a semantic equivocation – a metaphysical conflation – on the part of the Materialist, as he must instill in Material all his claims of intentionality – or else he must eliminate "intention" altogether. The annihilation of the Self at e-v-e-r-y level just is inevitable given the Naturalist's limited toolbox.

With said annihilation comes (should we be surprised by logical consistency in the Naturalist's own premise/conclusion set) the annihilation of the very thing the Naturalist was hoping to maintain in the first place. Which is fine – as "intention" does not mean "intention" at all but merely water's energy flow downhill, itself constituted entirely of a singular and seamless continuum of volitionless reverberations of quanta (or whatever) "flux". Hard stop.

"Let’s face it, religion is another reason why we have not yet created true AI machines. Are there any religious people working on artificial general intelligence? Somehow I doubt it. Maybe it’s impossible to believe in both God and AI."

Your actual opinion probably is of small interest here, so there's not likely to be much interest in changing it. But the reasons underlying your opinion, or the reasons you advance for holding that opinion, may be of some interest. So, a few questions:

1. What is the meaning of the term 'artificial' in the expression 'Artificial Intelligence'?

2. What is the difference between something which is falsely artificial and something which is truly artificial?

3. What is it about non-believers in God which enables them to better appreciate when something is truly artificial?

4. What is it about believers in God which renders them less likely to believe that 'intelligence' can be 'artificial'?

For me, a goal is a particular kind of neural network construct that specifies an unfulfilled condition. A goal is a set of neurons in your brain connected up in such a way that an input results in one of two possible outputs: either "Yes, we're done" or else "No, keep trying."
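This "unfulfilled condition" model can be sketched in a few lines. The sketch below is purely illustrative (the function names and the temperature example are invented for the illustration, not taken from the thread): a goal, on this account, is nothing more than a predicate that maps a state to one of the two outputs described.

```python
# A minimal sketch of the "goal as unfulfilled-condition detector" model:
# a goal is just a predicate mapping a state to "done" or "keep trying".

def make_goal(condition):
    """Wrap a condition into a yes/no goal-like detector."""
    def goal(state):
        return "Yes, we're done" if condition(state) else "No, keep trying"
    return goal

# Hypothetical example: the "goal" of reaching a target temperature.
reach_target = make_goal(lambda temp: temp >= 70)

print(reach_target(65))  # -> No, keep trying
print(reach_target(72))  # -> Yes, we're done
```

Note that nothing in the sketch distinguishes a goal from any other yes/no classifier, which is the very objection pressed in the replies below.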

Earthworm and bacterial behavior pretty clearly can be explained in terms of unfulfilled conditions, and often is by the scientists who study them, so the neural aspect of this would have to be doing all the work. But we don't make computers with neurons but with circuits, so if you accept the possibility of AI, goals can't just be neurons in brains connected in a particular way. And surely you don't assess whether the people around you are pursuing goals by looking at their neurons, and refusing to say that they have goals as long as you haven't had a chance to look at how their neurons are connected up.

pck: Thus while it is true that only under certain spatio-temporal conditions it is possible to conduct a mathematical proof or play chess, these conditions must not be confused or conflated with the success conditions for mathematical proof or victory in chess.

I should have added that the existence of mathematical success conditions requires the presence of an intellect. Mathematical success conditions require the existence of certain human abilities. It doesn't make any sense to say that the success conditions of math already existed before humans had appeared on earth (and started doing math). Logical spaces like math are created, not discovered. They extend the domain of human abilities. Since human abilities clearly did not exist at a time when no humans existed, neither did the Pythagorean Theorem (or the truth of it).

Seeing that mathematical truths do not depend on spatio-temporal conditions, we are tempted to believe that it follows that they extend to all space and time, having something like a ghostly presence in it. But this is confused, since what is logically independent of space and time cannot be present in space and time at all. It is equally strange to say that "the PT exists today in Houston, Texas" (and perhaps other places) as it is to say it existed before humans walked on earth. That is simply not the way in which the PT exists (bound to some location and/or time).

To be sure, the PT is "with us", but in the sense that it is part of humanity's collective abilities, not in the sense the pyramids in Egypt or the Euro currency are with us.

To summarise: Mathematical truth is adjudicated by the use of certain success conditions. These success conditions are not part of the physical world, but play a role in the exercise of certain human abilities. Human abilities require the presence of humans. Thus the truth of mathematical theorems like the PT did not exist before humans existed.

For me, a goal is a particular kind of neural network construct that specifies an unfulfilled condition. A goal is a set of neurons in your brain connected up in such a way that an input results in one of two possible outputs: either "Yes, we're done" or else "No, keep trying."

This just shows how caught up you are in identifying your models with that which you want to model.

First, as Brandon has already noted, when somebody asks you about your goals (at work, in life, etc.), do you ever reply by talking about what certain neurons in your brain are currently up to? Of course not, since the answer would not even be intelligible. You'd be mixing incompatible and ununifiable logical domains.

Second, no neural output has ever been, or could ever be "yes, we're done" or "no, keep trying". You have just put intentionality back into your conceptual framework. Neurons are not humans with diminished abilities. (The Bit in the original Tron movie comes to mind. It can only answer "yes" or "no" but miraculously understands every question. Or is at least treated as if it does.)

Third, even if all the rest wasn't the huge conceptual muddle that it is, "yes we're done" and "no, keep trying" would still be far too unspecific to identify the concept of goals. In this model, there is no difference between a goal and, for instance, a commentary-like response given to someone's report of their activities. Another problem is that actual goals can be only partly achieved, for which there is no equivalent in a yes/no model.

Again, on Augustine, I know of nowhere in his writings where he asserts that any statement in Scripture has only one possible meaning. And it is not my responsibility to prove such a negative: if someone believes he does assert this, it is up to him to find an example.

My earlier comment was in response to what you yourself had earlier said, and not to what you are saying now.

You then had said not that Augustine said every sentence of Scripture has every possible meaning that anyone could ever think of, but that he said it has every true meaning that anyone could ever think of.

I did not contest what you said of Augustine, only what you said of what Eric had said.

This is what Eric said: "[Augustine] does not allow for such indeterminacy in some of his biblical exegesis."

If only one of all the possible meanings is the true meaning, then every other possible meaning is not the true meaning. In this case, however many of the possible meanings a person might think of, there is only one true meaning of which he can think.

And if for a passage, verse, phrase, or word there are multiple true meanings, and Augustine nonetheless treats that passage, verse, phrase, or word as if it had but one true meaning, then Eric was not wrong in saying, "[Augustine] does not allow for such indeterminacy in some of his biblical exegesis."

- "That thine alms may be in secret." What else is meant by "in secret," but just in a good conscience, which cannot be shown to human eyes, nor revealed by words? since, indeed, the mass of men tell many lies. (ibid)

- For what else is meant by the statement, "For all they that take the sword shall perish with the sword," but that the soul dies by that very sin, whatever it may be, which it has committed? (ibid)

@pck: I don't see how your response could make sense unless the sentences and symbols you are using are transmitting meaning. I also have no reason to treat semiotics as an incoherent field of study. The examples you provided didn't show that language is meaningless, so they don't support your claim.

I'm not speaking for pck here, but the problem I have is that "sentences and symbols," regarded purely as sounds or marks, don't carry meanings, whereas regarding them as signs and symbols (i.e. as language) already presumes that they're "carriers" of derived intentionality. Language qua language isn't "meaningless," but the noises and shapes we use to convey it aren't intrinsically meaningful and acquire their meaning only through being used as language.

Basically, what Scott said. The term "language" itself is always a bit problematic since it has several connotations and I haven't been as precise as I should have been. When I said

But language by itself has no such powers, just as mathematics, by its internal rules of symbolic manipulation, cannot manage to say anything

I was hoping that the analogy to math I gave made it clear that I was using "language" in the sense of "a syntactic game governed by certain rules". Of course I agree that language in the broader sense is more than mere syntax and can, as you say, "transmit meaning". But it does not transmit meaning merely by the power of its internal, syntactic rules. Your initial post seemed to suggest that you take an "encode/transmit/decode" approach to language. (Correct me if I'm wrong about this.) That approach was what I was objecting to. Scott highlights the problem when he says "regarding them as signs and symbols (i.e. as language) already presumes that they're 'carriers' of derived intentionality". (Emphases mine.) Use (practice) is what blesses otherwise dead symbols with meaning.

There is indeed nothing incoherent about semiotics. What is incoherent is to construe semiotics as the foundation of meaning. The idea that syntax comes first and is somehow able to generate meaning and eventually practice is what I was criticizing. The proper order is just the reverse, practice first, then semantics, then syntax.

Now syntax is by far the easiest of those three to analyse and to make computational models of. Which is one major reason why scientistically minded people, who love their computational models of mind, are stuck with this inverted conceptual hierarchy. The naturalisation of meaning is built on this error, which also has massive implications on how terms like "information" and "knowledge" are (mis)construed by naturalist thinkers.

@John Moore: "Commenters pck and scbrownlhrm point out that I have just stated what the materialist argument is, and I haven't actually provided evidence or reasons for it."

No they didn't. Pck and scbrownlhrm, and also Scott and another person whose name escapes me, pointed out that you did not give an argument. For an argument without "evidence or reasons," as you put it, fails to be an argument at all. It may be at best (as has been noted) a statement or an assertion. I note also that on November 17, 2015 at 4:41 PM you used "argument" as a synonym for "points that the materialist proposes"--which presumably would rather be "propositions," or some parts of an argument or some such. I suspect the meaning of the word "argument" seems like a minor quibble to you--this would explain some of your approach--but if so that would also explain some of your confusion. For of course an argument (but not a statement or an assertion) can be truth-preserving, and so an argument (but not a statement or an assertion) could get you from (e.g.) *light shines on apples*, through intermediate steps, to *materialism is true*. Mess that up and you're just telling funny stories.

"I agree with scbrownlhrm that my idea is like A=A, but I don't see why he thinks this is an 'empty, vacuous statement.'"

Then I would suggest that you ought to look up some more terms. In this case, I suggest looking up "tautology." For "A=A" is almost a proverbial example of that very thing. As I write, if you Google "'A=A' tautology" a number of useful results are returned. I would suggest what is currently the fourth result on the very first page, Tautology 4: What is a tautology? This will help you with understanding scbrownlhrm, and also, perhaps, yourself.

My suggestion to use the Blog Archive still stands.

"By the way, I know I need to read and think more - that's why I'm here. I have been reading Prof. Feser's blog for years, so it's not as if I'm a total newbie."

Uh huh.

It's not that you're a "newbie." Hell, "newbies" are welcome. And if you've been around a few years, well then welcome... old-timer. What is peculiar is demonstrating so little awareness of this blog, while stepping in medias res, so to speak, and asking for other people to explain things to you from scratch, while playing coy about your own "arguments" and expecting others to do your work for you. "I wrote about why theists tend to think AI is impossible, but if you can make me change my opinion, go ahead!" indeed.

I imagine you pulling those tricks elsewhere. Perhaps on December 18th you'll enter a darkened movie theater, an hour after the start of the new Star Wars. You'll sit for ten minutes, and then turn to the person next to you and ask loudly, "So why are they on that planet?" The person will shush you and then hurriedly give a whispered run down of the first hour of the movie, so as to get back to enjoying it. "Oh, no! I'm a big fan of Star Wars, and that could've been done so much better. Here's my fanfic,"--and here you'll pull out your phone, and go to your blog--"you can tell me why you think that is better than what I wrote..."

Nothing stops the theist in principle from endorsing the possibility of AI, just as nothing in principle rules out the theist's being a materialist (evinced by the fact that a number of theists, e.g. Voltaire, Jefferson, and van Inwagen, were/are materialists). Historically, and for religious reasons, most theists have also been dualists of some kind. More to the point, most theists would reject the kind of Functionalism typically associated with theories of AI for the same reasons that some Naturalists, e.g. Searle, Chalmers, and the early Jackson, do.

On Thomistic lines, a machine's being 'intellectually and morally equivalent to a human being' is metaphysically impossible due to the special nature of the intellect or 'rational soul' (this of course does not rule out there being non-human rational animals). It is debatable whether it would be possible to create a 'machine' along the lines of a being possessed of a Vegetative or Sensitive Soul, if one accepts the Aristotelean notion of non-conscious entities, for instance atomic and molecular natural kinds, possessing a kind of 'physical intentionality', a.k.a. dispositional properties. Ed himself admits that if this is accepted, then something like Functionalism (and thus the possibility of AI) might be applicable in the case of the lower animals. I would deny this for reasons relating to the nature of consciousness, but, as has been pointed out, I place far more emphasis on egological, first-person accounts than he does.

Talking about words and meanings, we seem to agree. For example, pck wrote, "Meaning is created through the use of words and 'use' means to apply words in the context of our actions." This sounds a lot like my energy-flow example where the word "apple" means an apple. The word by itself is nothing without the process of energy flowing from the apple through our brains to cause our actions.

pck says that "the brain cannot possibly be a symbol-processor," and I fully agree, except I would just add the word "only," because the brain can be a symbol-processor, but it can't possibly be only that.

pck wrote later: "The idea that syntax comes first and is somehow able to generate meaning and eventually practice is what I was criticizing." I fully agree with this. Intelligence is more than merely shuffling symbols around, and the brain can't be merely a digital computer. It must be an analog computer that has causal powers in the real world.

----

pck also criticized the premise of AI that intelligence is separable from its human context. I sort of agree, but I want to expand the idea a bit and say intelligence is not separable from the context of life. Thus, in order to make a true AI, that AI must be alive. It doesn't need to be human, but it must be alive.

And by the way, I define life simply as a thing that evolves by natural selection.

scbrownlhrm wrote that "Human Nature" is a metaphysical non-entity within Naturalism. But this depends on how you define human nature. I say the essence of human nature is that we strive to survive and perpetuate our species in the competitive context of natural selection.

@Brandon wrote about goals: "Earthworm and bacterial behavior pretty clearly can be explained in terms of unfulfilled conditions." But there's a difference between explaining something in terms of goals, versus actually having goals. You can use the concept of a goal in a metaphorical sense to refer to rivers trying to get to the sea, but let's not forget that goals are also actual physical things you can point to in a brain or an electronic circuit.

Brandon also wrote that "we don't make computers with neurons but with circuits." Are you claiming that biological neurons have some special essence that can't be modeled in an artificial neural network? I think of neurons in terms of their functions. It's like the heart is a pump, and we can make artificial hearts that pump blood. A neuron is a switch, and we can make artificial switches that work the same way.

@pck wrote about goals: "No neural output has ever been, or could ever be 'yes, we're done' or 'no, keep trying'." So again I'm wondering how seriously you're saying this. In computer programming, a high voltage is sometimes called "Yes" or, more commonly, a binary 1. A low voltage is a "No" or a binary 0. In electronic terms, "we're done" is simply to stop executing a loop, and you could interpret "keep trying" as a continuation of a loop.
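The loop reading described here can be made concrete with a small sketch (the counting example is an arbitrary illustration): a 0/low condition continues the loop and a 1/high condition ends it. Notice, though, that the labels "done" and "keep trying" are attached by the programmer, not by the voltage levels themselves, which is the point pck presses.

```python
# The "keep trying / we're done" loop reading: a false condition
# continues the loop, a true condition terminates it. The reading of
# the bit as success is supplied by us, not by the electronics.

def run_until_done(step, is_done, state):
    while not is_done(state):   # is_done(state) falsy -> "keep trying"
        state = step(state)
    return state                # is_done(state) truthy -> "we're done"

# Illustrative only: counting up to a target value.
final = run_until_done(step=lambda n: n + 1,
                       is_done=lambda n: n >= 5,
                       state=0)
print(final)  # -> 5
```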

Maybe you guys are simply asserting that these electronic things can never account for the full range of human feelings and aspirations, but that's the whole argument - the materialist says they can, and you say they can't.

@Glenn asked about the meaning of "artificial" in artificial intelligence, and this is a great question. He also asked about things that are "falsely artificial," which sounds silly but is actually another great question.

I say "fake AI" is what most developers are doing today, which is building algorithmic machines that merely follow instructions to serve human purposes. True AI, by contrast, would not be following coded instructions and would only pursue its own self interest. I think true AI must be a neural network that evolves by natural selection.

So the great thing about Glenn's question is: If a machine mind evolves by natural selection, why would we still call it "artificial"?

Glenn also asked why non-theists would be more likely to accept or recognize a true AI. I think it has to do with the concept of soul. As scbrownlhrm wrote earlier: When materialists speak of AI they are "actually speaking about something akin to zombies." Let's say a p-zombie is a being that looks and acts just like a human but has no soul.

I think there's no way you can see whether something has a soul or not. If you believe in souls, and if you think a machine can't have one, then you'll never accept true AI as morally and actually equivalent to a human being, regardless of how perfectly the AI mimics humanity.

"In computer programming, a high voltage is sometimes called "Yes" or more commonly a binary 1."

"sometimes called"

There's the problem. High voltage doesn't inherently mean anything; we just call it things for convenience, or because we need to name it in such a way that we can then go on to assign meaning to other physical occurrences, like the output.

Glenn also asked why non-theists would be more likely to accept or recognize a true AI.

That is not one of the four questions I asked.

If you believe in souls, and if you think a machine can't have one, then you'll never accept true AI as morally and actually equivalent to a human being, regardless of how perfectly the AI mimics humanity.

A stick insect is excellent at mimicking a stick, but it is still an insect, and not a stick.

Surely, no matter how excellent or perfectly a machine (with or without a soul) might mimic a human (with or without a soul), it would still be a machine, and not a human.

Hoooo boy. That takes talent! I counted fourteen howlers--almost one per paragraph. Now we just have to wait for everyone you misunderstood to respond. I see Glenn is quick on the draw. Hold on, let me get some popcorn...

But there's a difference between explaining something in terms of goals, versus actually having goals. You can use the concept of a goal in a metaphorical sense to refer to rivers trying to get to the sea, but let's not forget that goals are also actual physical things you can point to in a brain or an electronic circuit.

Yes, but if scientists regularly find it useful to make use of goal-ascription in explaining what earthworms do, that is at least a prima facie reason to think that earthworms do have goals; and you have provided no reason at all to think that the scientists are wrong in their assessment of the best account of earthworm behavior. Likewise, I have yet to come across a neuroscientist pointing to a physical thing in the brain and saying, "That's the goal of saving for retirement", or any such thing; so I am skeptical that your goal-as-physical-thing-you-can-point-to actually figures in any serious neuroscientific explanation of the brain. Perhaps you have particular articles in neuroscience in mind?

Are you claiming that biological neurons have some special essence that can't be modeled in an artificial neural network? I think of neurons in terms of their functions. It's like the heart is a pump, and we can make artificial hearts that pump blood. A neuron is a switch, and we can make artificial switches that work the same way.

No, I am claiming that it's pretty obvious that artificial neural networks don't literally have neurons -- all neurons are biological, if we are not using 'neuron' as a metaphor. It is absurd that you keep getting finicky about sticking to the literal meaning of 'goals' while simultaneously using 'neuron' in what is very obviously -- and very provably, if one looks at the history of neural nets -- a figure of speech. When 'neuron' is used of artificial networks, no one is claiming that the networks are actually made of neurons; they are claiming that switches in the network are functionally analogous to neurons in a brain. That is historically why people started calling them 'neurons': the term was used to name certain mathematical functions in attempts to model the brain, and then was later applied to non-neural physical implementations of these mathematical functions. It's all analogy: the physical switches are analogous to the mathematical posits which were developed to be analogous to biological neurons. If you want to use the figure of speech, there's no problem with it; but denying that it is a figure of speech is simply absurd, and contrary to the actual evidence of the history of the term. And if you use 'neuron' functionally rather than literally, why in the world are you so stubbornly insisting that 'goal' should be used literally rather than functionally?

"the intellect’s grasp of meanings is more fundamental than any behavior, gestures, utterances, aspects of the communicative context, etc. that might be used to teach or express meanings. Hence you are not going to be able explain the former in terms of the latter."

An old analog TV could pull VHF signals out of thin air and decode those signals into a pattern of moving pictures. There's nothing in the signal itself that "means" that what follows is a sequence of interlaced frames. But that's the way the TV circuitry interprets it. When a valid VHF signal is present, the TV's circuits are excited as designed. The electronics "finds meaning" in a signal that has no inherent meaning. Certain patterns of electromagnetic waves are "important" to the electronics itself. They're important because the circuits filter through those things designed to be "seen" as important. The signal does not explain the operation of the circuits. Nobody would expect that. If X explains Y, wouldn't it be kind of freakish if Y also explained X? To expect "communicative context" (inputs) to explain the brain's functions is bizarre.

Obviously our brains filter and decode inputs. Supposedly those inputs have no inherent meaning. But as with a VHF signal, that's irrelevant. No ghost in the machine is required to decode signals in a standard, "meaningful" way. There's a lot of talk on this blog about meaning. But rarely does anyone question what "meaning" really is. Maybe it's no more than collections of inputs that excite combinations of neural networks as those networks were designed by nature. Feedback, which we call experience, fine tunes those networks. We feel excitation of those networks as "meaning." This requires no supernatural secret sauce. We need no more than what materialism supplies.
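The decoder analogy is easy to put in code: the very same raw bytes come out as different "content" depending entirely on which convention the decoder applies. The signal itself fixes nothing, which is the indeterminacy at issue; whether applying a convention thereby amounts to meaning is, of course, exactly what the replies dispute. (The two-byte value below is an arbitrary illustration.)

```python
# The same raw bytes under two decoding conventions: what they "say"
# is fixed by the decoder's convention, not by the signal itself.

raw = bytes([72, 105])

as_text = raw.decode("ascii")                     # convention 1: ASCII text
as_number = int.from_bytes(raw, byteorder="big")  # convention 2: big-endian integer

print(as_text)    # -> Hi
print(as_number)  # -> 18537
```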

The electronics "finds meaning" in a signal that has no inherent meaning.

Funny how every time you talk about material structures "finding meaning", "interpreting" and seeing "importance" you put the relevant terms in quotes. It is almost as if you intuitively recognize that you are using them metaphorically instead of literally.

There's a lot of talk on this blog about meaning. But rarely does anyone question what "meaning" really is.

Your entire post talks about meaning without questioning what it is. And then you reveal this shining jewel of intellectual achievement:

Maybe it's no more than collections of inputs that excite combinations of neural networks as those networks were designed by nature.

Or maybe not. Actually, definitely and provably not.

We feel excitation of those networks as "meaning."

One would be hard pressed to find a less comprehensible and more question begging statement than this. Nobody has ever "felt meaning". Even a less misconceived statement such as "I feel the activity of the nerves in my hand as pain" is complete and utter nonsense.

This requires no supernatural secret sauce.

Which of course nobody ever claimed.

The fact that human beings are not just their bodies does not mean that human beings are their bodies plus some ghost-in-the-machine, elan vital, ectoplasm, or other crazy stuff. What it does mean is that reality is more interesting than what a limited set of physical concepts like "movement", "location", "energy", etc. can capture. The ghost in the machine nonsense comes from the misguided attempt to treat that which cannot be described exclusively in physical terms (for example abilities and achievements like reasoning, thinking, calculating, talking, winning & losing, etc.) just like that which can. This fallacy is called reification. The terms "mind", "psyche" and "soul" also fall under this. To have a mind is not to be in possession of an object, material or immaterial. It is to possess certain abilities.

Not even a mundane ability such as walking can possibly be physically located anywhere. My legs and my brain make walking possible, but they are not my ability to walk. Nor is there any need for a "supernatural explanation" to bridge some imagined explanatory gap between my ability to walk and the material facts that enable me to walk. There is no such gap. Likewise for the meaning of "walking". Just because meaning is immaterial, it does not follow that there is anything strange, unintelligible or otherworldly about the concept.

pck wrote, "Meaning is created through the use of words and 'use' means to apply words in the context of our actions." This sounds a lot like my energy-flow example where the word "apple" means an apple. The word by itself is nothing without the process of energy flowing from the apple through our brains to cause our actions.

The final statement is of course true, but you completely misunderstood me if you think I would agree that an "energy flow" can "mean an apple". No "energy flow" is or describes an action and therefore no energy flow is a case of a use of words. Words are an impossibility without certain material phenomena, but it does not follow that words (or their use) are material phenomena. You are not distinguishing properly between movement and action. If you limit yourself to the description of physical phenomena, you narrow down your means of expression so much you can no longer describe human agency. You lose the ability to talk about the 1st person perspective.

pck says that "the brain cannot possibly be a symbol-processor," and I fully agree, except I would just add the word "only," because the brain can be a symbol-processor, but it can't possibly be only that.

This is confused. A brain can process certain combinations of molecules into other combinations within the context of chemical reactions. But brains cannot process symbols. Symbols are abstract entities. Only humans can deal with those. This is not a comment on the details of our physical makeup. It's a remark about how the terms "human being" and "brain" are used. Human beings are not their bodies. You can feel pain but your body cannot. You can learn to play the clarinet, your body cannot. And so on. You are not your body because we don't talk like that. And it achieves nothing to start talking like that either. It is one of the perpetual fallacies of cognitive science to ascribe abilities to the brain that can intelligibly be ascribed only to the entire human being.

pck wrote later: "The idea that syntax comes first and is somehow able to generate meaning and eventually practice is what I was criticizing." I fully agree with this. Intelligence is more than merely shuffling symbols around, and the brain can't be merely a digital computer. It must be an analog computer that has causal powers in the real world.

No brain has ever computed anything. It's not a question of analog or digital. The ordinary use of "compute" does not apply to brains. You can extend the use of "compute" to brains but this does not illuminate our ordinary usage of the term, it merely creates a new meaning of the same word form which is parasitic on the old one. It's completely up to you to call brain processes "analog computing", but how is that ever going to be useful? Does this creation of a secondary meaning of "computing" help with any theories or practices in neuroscience? No. Does it help with any epistemic issues in the philosophy of mind? No again. It just increases the probability of falling into even more misunderstandings and confusions.

For a more detailed conceptual elucidation about consciousness, knowledge, brains and bodies see here.

@pck and Scott: Thanks for the clarifications; they were helpful, but I still have some objections. First I should backtrack and make some clarifications of my own. The post topic is indeterminacy, not meaning or language broadly construed, so my presumption or expectation of meaning is in line with the way the topic has been discussed. Which brings up the more general assessment that indeterminacy has been described as a situation where a particular meaning was communicated but that meaning is supposedly indecipherable to the recipient under a naturalist framework.

Second, at a descriptive level, naming a shape or noise or a pattern of shapes and noises as a letter or word places it in the category of language sign. So in that case my previous claim that letters hold no meaning was mistaken; letters are a minimal indication that a linguistic reference is possible.

Third, I don't think the distinctions of practice, semantics, and syntax are particularly well suited for a hierarchical ordering after a language is learned. They seem to be adequate for the learning order of a language, but interactive language seems much more integrated than that. For instance, practice functions at both the individual and social levels; languages evolve because of how they are practiced by a society. Syntax is important to determining meaning because it establishes a flow of information just by its structure, which helps provide important clues for meaning, like whether a word is a noun or verb or adjective.

Fourth, brains are rather obviously hardwired for filtering of extraneous signals, pattern recognition, and integrated perception, not just human brains but other animal brains as well. These functions are much more advanced in humans, and I would say this combination allows for a unique ability to objectify information as a tool.

The naturalisation of meaning is built on this error, which also has massive implications on how terms like "information" and "knowledge" are (mis)construed by naturalist thinkers.

@pck wrote about goals: "No neural output has ever been, or could ever be "yes, we're done" or "no, keep trying". So again I'm wondering how seriously you're saying this.

How could any competent user of the term "neural output" not seriously say this? You are conflating "neural output" with "ways to interpret neural output".

In computer programming, a high voltage is sometimes called "Yes" or more commonly a binary 1. A low voltage is a "No" or a binary 0. In electronic terms, "we're done" is simply to stop executing a loop, and you could interpret "keep trying" as a continuation of a loop.

A high voltage is not a "yes". Or a binary 1. If you don't believe me, try saying "1" every time you would ordinarily say "yes". Or if you want to play it tough use a taser on yourself for the high voltage experience. You have to do some calling and interpreting to make "1" stand for "yes". You have even said it yourself, so why is it so hard to understand?
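The conventionality pck is pointing at has a concrete analogue in digital design, which a short illustrative sketch can make vivid (the variable names here are mine, not from the thread): nothing in the hardware fixes whether a high level counts as "yes". Real circuits use both "active-high" and "active-low" conventions, so the very same physical level reads one way under one convention and the opposite way under the other.

```python
# A sketch of the point above: a voltage level carries no "yes" or "no"
# by itself; the mapping is a convention supplied by the interpreter.

HIGH = 1  # stand-in for the high-voltage state the hardware registers

# Two equally workable conventions for reading the very same signal.
active_high = {1: "yes", 0: "no"}
active_low = {1: "no", 0: "yes"}  # active-low logic is common in real circuits

print(active_high[HIGH])  # the same physical state read as "yes"...
print(active_low[HIGH])   # ...and as "no", depending on the convention
```

The point of the sketch is only that the "calling and interpreting" lives in the choice of dictionary, not in the signal.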

Maybe you guys are simply asserting that these electronic things can never account for the full range of human feelings and aspirations, but that's the whole argument - the materialist says they can, and you say they can't.

And we're back to square one. We're not just saying they can't, we have given reasons to think they can't. How could any amount of description of material phenomena ever account for even the simplest human experience? When you can pull off something like making a blind man experience sight by reading him descriptions of what goes on in a normal sighted person's eyes and brain, then and only then will you be able to legitimately claim that the materialist case has merit. I'm not holding my breath.

(And you should really stop saying "argument" when you mean "point of contention". It's not conducive to clarity.)

So in that case my previous claim that letters hold no meaning was mistaken; letters are a minimal indication that a linguistic reference is possible.

Letters don't hold meaning. They don't indicate the possibility of reference, they are (some of) the tools of reference.

I don't think the distinctions of practice, semantics, and syntax are particularly well suited for a hierarchical ordering after a language is learned. They seem to be adequate for the learning order of a language

The latter was all I was claiming and all I needed. But the hierarchy of practice, meaning and manipulating signs remains valid beyond that. You don't get to sever your roots after having reached linguistic competence.

Syntax is important to determining meaning because it establishes a flow of information just by its structure, which helps provide important clues for meaning like whether a word is a noun or verb or adjective.

No, you have that backwards. Syntax cannot possibly ever determine meaning. No arrangement of signs enables you to derive from them what they mean unless you are familiar with what to do with them already. Structure is completely arbitrary with respect to meaning. By changing your practices you can make any structure of symbols mean whatever you want.

Take as an example double negation. "I didn't do nothing" does not mean "I did something". Sometimes two negations make a yes, sometimes they don't. Our practices determine which. You change meaning by changing your use, not by changing your language's internal rules. Doing the latter while leaving everything else as it is will create irritation and confusion, not new meaning.

I think what you may be referring to is the case of already familiar structures and rules providing guidance for guessing the meaning of bits of language one has not previously encountered, helping you to decide what to do next. Whether a word is a noun or verb or adjective will of course provide some such guidance, but it will not give you meaning solely by virtue of belonging to some grammatical category. (This is one reason why computerized translation of natural languages is so far from perfect. The internal rules of language cannot convey any meaning. You need to live language in order to participate in meaning. Computer programs cannot do that, so they are limited to taking cues from the structure of a text. More often than not, the text's structure will massively underdetermine the author's meaning and the automatic translation will be ambiguous to the same degree.)

Fourth, brains are rather obviously hardwired for filtering of extraneous signals, pattern recognition and integrated perception

All alarm bells going off at once here. Filtering? Signals? Pattern recognition? All of these concepts need a conscious subject to handle and/or perform them. No brain is capable of any of these. This is computer programmer talk projected onto the brain, and there is nothing obvious about it. As for the hardwiring, it's hard to find any structures in nature with greater plasticity than the brain.

As for the hardwiring, it's hard to find any structures in nature with greater plasticity than the brain.

That's a good point, there is a loose architecture but it is highly plastic. But the same characteristic is true for other animal brains. Somehow they manage to filter through a wealth of sensory data and recognize significant patterns and then respond to them. It is pretty strange other animals can navigate the world with such accuracy and sometimes in coordinated groups without any concepts. Maybe they have an organ closely connected to their most vital sensory inputs which is dedicated to that sort of filtering/recognition/response functionality for the whole body.

That's a good point, there is a loose architecture but it is highly plastic. But the same characteristic is true for other animal brains.

Here is an interesting facet of hard-wiring regarding the brain and its capacity to generate meaning: for mammals generally, growls and harsh guttural sounds are interpreted as threatening, and soft, cooing, "kindly" sounds are interpreted as non-threatening. One of the ways we TRAIN young animals is by relying on that hard-wiring. It would be much harder to establish, from scratch on a completely blank slate, that yelling at a puppy means "don't do that" or "I don't like that".

You claim you'd be hard pressed to find a less comprehensible and more question begging statement than "We feel excitation of those networks as 'meaning,'" yet you offer no reason why I can't comprehend what I do comprehend. Then you follow up with the baseless assertion, "Nobody has ever 'felt meaning'." How do you know the essence of meaning is not simply a feeling?

"The fact that human beings are not just their bodies does not mean that human beings are their bodies plus some ghost-in-the-machine, elan vital, ectoplasm, or other crazy stuff."

Yet you cannot explain what this crazy stuff is. It's by nature mysterious. It will forever stay mysterious. The mysterious cannot explain anything. The mysterious is mysterious because it's incomprehensible. I believe my characterization is correct.

"It is almost as if you intuitively recognize that you are using them metaphorically instead of literally."

I admit I often use these terms metaphorically. I assume there's a difference between a TV decoding a signal it was designed to decode and us decoding signals we are designed to decode. My primary purpose in using this particular analogy is to dispute the claim that, in a materialist understanding, inputs alone have to explain outputs or "meanings." The machine, whether TV or organism, filters and amplifies the inputs that are relevant to it -- inputs that excite its internal physical design.

I think the point is that the output would have no inherent meaning either. After all, if a tv is playing in a universe where intelligent life no longer exists, the moving pictures won't have any more meaning than leaves moving in the wind or water moving through a river.

And yet There Are Those who seem to have ever so much trouble with it:

Yet you cannot explain what this crazy stuff is. It's by nature mysterious. It will forever stay mysterious.

I have a hard time thinking of anything less "mysterious" than consciousness and intentionality. I know that "crazy stuff" far better, and far more intimately, than I'll ever know the "matter" on which the materialist claims to base his entire cosmos.

It seems "mysterious," I think, only to those who insist that it should be explained in terms of (that is, reduced to) something else—forgetting, perhaps, that any such proposed "something else" would really be something else.

Then you follow up with the baseless assertion, "Nobody has ever 'felt meaning'." How do you know the essence of meaning is not simply a feeling?

The only way that claim could be viewed as baseless would be to ignore almost every instance of the actual literal use of the term "meaning". We do not answer questions like "What do you mean by X?" or "What is the meaning of X?" by giving references to feelings. And just to be perfectly clear: An explanation of "the meaning of X" can certainly include references to emotions, but it cannot be based on appeals to them.

If this is too abstract for you, here is some concrete homework: Answer the questions "What does 'immigration' mean?" and "What does the symbol 'x' in 'f(x)' mean?" using only appeals to feelings. As a mathematician I would be highly interested in the possibility of shifting my whole mode of operation into the domain of emotions. Instead of working my way through those tedious 200 pages of the proof of Fermat's last theorem, maybe I could just feel the truth of it.

Don: Yet you cannot explain what this crazy stuff is. It's by nature mysterious. It will forever stay mysterious. The mysterious cannot explain anything. The mysterious is mysterious because it's incomprehensible.

You appear to have misread me. The crazy stuff I was referring to is not the crazy stuff you are referring to. My crazy stuff is actually crazy. Your crazy stuff isn't. See Scott's reply.

Don: I believe my characterization is correct.

Keeper of the faith.

Scott: I have a hard time thinking of anything less "mysterious" than consciousness and intentionality [...]

Precisely. Without consciousness there are no views, stances or positions. Including the materialist one. It is only when physical models, computer paradigms or 3rd person perspectives in general are dogmatically declared to be the only proper means of explanation that the "mysteries" appear.

ozero91 said...

I think the point is that the output would have no inherent meaning either. After all, if a tv is playing in a universe where intelligent life no longer exists, the moving pictures won't have any more meaning than leaves moving in the wind or water moving through a river.

This is where the materialist looks left and right in embarrassment and reluctantly puts one hand on the knee of Platonism.

@Don Jindra: "'The fact that human beings are not just their bodies does not mean that human beings are their bodies plus some ghost-in-the-machine, elan vital, ectoplasm, or other crazy stuff.' / Yet you cannot explain what this crazy stuff is. It's by nature mysterious. It will forever stay mysterious. The mysterious cannot explain anything. The mysterious is mysterious because it's incomprehensible."

Hello Don. Why, it seems like just last week I accused you of perversity because you criticized someone for doing the very opposite of what in fact he did. Now I see that you criticize pck for positing the existence of something which he denies the existence of. Let me not omit notice of your rigorous consistency in misreading.

Perhaps pck's previous statement will be more clear now than when he first made it: "The materialist panics because he thinks that if mechanical language wasn't enough to explain thought, he would need to amend it with 'Spooky Stuff' to make the equation / Brain Mechanics + Spooky Stuff = Thought / hold. He doesn't believe in Spooky Stuff (and he shouldn't), so he concludes that / Brain Mechanics = Thought / must be true instead. But in fact both equations are fallacies, because a phenomenon like thought cannot be decomposed like that, just as the concept of a 'joyful dance' cannot be decomposed into 'dance movements + joy'."

"I believe my characterization is correct."

Did you ever see that episode of Seinfeld where George decides to do the exact opposite of what he normally would, and achieves surprising success?

@pck: "Take as an example double negation. 'I didn't do nothing' does not mean 'I did something'. Sometimes two negations make a yes, sometimes they don't."

You have perhaps heard the anecdote, that once when J. L. Austin was lecturing, he said that in some languages a double negative makes a positive, and in some a negative and a positive make a negative, and in some a negative and a positive make a positive, but in no known language did two positives make a negative. To which, from the back row and with a loud, pronounced New York accent, Sidney Morgenbesser replied, "Yeeeahh yeeeahh..."

@Scott: "I have a hard time thinking of anything less 'mysterious' than consciousness and intentionality. I know that 'crazy stuff' far better, and far more intimately, than I'll ever know the 'matter' on which the materialist claims to base his entire cosmos. / It seems 'mysterious,' I think, only to those who insist that it should be explained in terms of (that is, reduced to) something else—forgetting, perhaps, that any such proposed 'something else' would really *be* something else."

How very well put. :)

There is a good essay to be written about how some who take themselves to be very hard-headed and commonsensical completely overlook the *scope* of the views they misunderstand. This essay would dwell on Dr. Johnson kicking rocks, and it would include Keynes on practical men, and Drax the Destroyer's "Nothing goes over my head. My reflexes are too fast, I would catch it."

Great story, thank you for the reminder. I had indeed read about that incident, but it was a rather long time ago. Perfect story to lighten up a dreary morning.

Here's another one I like, particularly for the ending:

"Morgenbesser was leaving a subway station in New York City and put his pipe in his mouth as he was ascending the steps. A police officer told him that there was no smoking on the subway. Morgenbesser pointed out that he was leaving the subway, not entering it, and hadn’t lit up yet anyway. The cop again said that smoking was not allowed in the subway, and Morgenbesser repeated his comment. The cop said, 'If I let you do it, I’d have to let everyone do it.' Morgenbesser replied, 'Who do you think you are, Kant?' Due to his accent, the word 'Kant' was mistaken for a vulgar epithet and Morgenbesser was hauled off to the police station. He won his freedom only after a colleague showed up and explained the Categorical Imperative to the unamused cops."

Scott: It seems 'mysterious,' I think, only to those who insist that it should be explained in terms of (that is, reduced to) something else—forgetting, perhaps, that any such proposed 'something else' would really *be* something else.

laubadetriste: How very well put. :)

Very well put indeed.

This is exactly why "Brain Mechanics + X = Thought / Consciousness / Experience" cannot work for any X, including X="nothing". If X is epistemically compatible with the brain mechanics, it cannot produce the "domain change" ("scope" in laubadetriste's remark below) from BM to T/C/E. And if X isn't, it cannot add up with BM at all, just like, to borrow from Terry Eagleton, my right foot cannot add up with my envy.

Thus, in the words of Peter Hacker, "there is more [to the world than mechanics], but not additionally more".

laubadetriste:

There is a good essay to be written about how some who take themselves to be very hard-headed and commonsensical completely overlook the *scope* of the views they misunderstand. This essay would dwell on Dr. Johnson kicking rocks, and it would include Keynes on practical men, and Drax the Destroyer's "Nothing goes over my head. My reflexes are too fast, I would catch it."

"I think the point is that the output would have no inherent meaning either."

I agree if by inherent you're talking about a cosmic or objective meaning. I happen to think Grace Kelly was a beautiful woman. But does that beauty exist outside human perspective? I don't think it does. Why should the term "meaning" be different?

Scott,

"I have a hard time thinking of anything less 'mysterious' than consciousness and intentionality."

I agree with that too. But from a materialist perspective this is a temporary condition. There's hope humans can reveal the mystery. I see no possibility of rising above mystery if the search stops with the unknowable substance proposed by dualists.

"I know that 'crazy stuff' far better, and far more intimately, than I'll ever know the 'matter' on which the materialist claims to base his entire cosmos."

Assuming the 'crazy stuff' is not intimately felt matter to begin with. :)

"It seems 'mysterious,' I think, only to those who insist that it should be explained in terms of (that is, reduced to) something else—"

I don't deny that. Mystery disappears only when it can be explained in terms that are not mysterious. Ultimately this depends on what people accept as undeniable fact. Basically I see dualists as too broad-minded in their narrow-mindedness. :)

pck,

"We do not answer questions like 'What do you mean by X?' or 'What is the meaning of X?' by giving references to feelings."

My claim is that the term "meaning" is itself not well understood. So it wouldn't be surprising if we answer questions about it erroneously. Nevertheless, you're incorrect about what we do say. Let's think about this scenario:

Girl: "What do you mean when you say you love me?"

Boy: "I mean I love listening to you talk. I love seeing your face. I love holding your hand. I love smelling your hair. I love tasting your lips. You make me feel like I've never felt before. And I never want to lose that feeling. You give life meaning."

Conversations like that are not unknown in the real world. I've had a conversation like that. So I dispute your claim that we don't describe meaning in terms of feeling.

Child: "What do you mean by cold?"

Mother: "Touch that ice cube and you'll understand."

The fundamental meaning of cold is defined exclusively by the experience of how it has made us feel throughout the years. Every other description is related to those or it has no meaning.

"I have a hard time thinking of anything less 'mysterious' than consciousness and intentionality."

I agree with that too. But from a materialist perspective this is a temporary condition. There's hope humans can reveal the mystery. I see no possibility of rising above mystery if the search stops with the unknowable substance proposed by dualists.

Materialists think they can decrypt the whole, when all they can do is encrypt some of its parts. And the substance isn't unknowable. It just defies the materialist's attempts to encrypt it. This annoys the materialists, so they imperialistically insist, "Nothing but that which we can encrypt is real."

You fell into the exact trap I predicted you would fall into, giving examples that involve references to feelings but which aren't based on or identifiable with them. And not only did you fall into that trap, but you additionally succeeded in impaling yourself onto a number of spikes at the bottom which you managed to set up yourself while you were falling.

There is a reason why I said:

The only way that claim could be viewed as baseless would be to ignore almost every instance of the actual literal use of the term "meaning".

Your Boy/Girl example is painfully defective. It does not even involve a literal use of the term "meaning". In the phrase "you give my life meaning", "meaning" is used with the connotation of "importance". But the challenge was to explain (solely by reference to emotions)

(a) "what is the meaning of (the word) 'life'" and not (b) "what is the meaning of life".

You have managed to confuse the life of a person with the word "life" in the phrase "the life of a person". That, to quote laubadetriste, takes talent. It's as if I had asked you about the etymology of "pizza" and you had "answered" by telling me why you like pizza. (Yes, it's that bad.)

Your ice example is equally sloppy. It conflates an experience (touching ice) involved in learning what "cold" means with the practice of using the word "cold". Giving an overview of the latter is to explain the meaning of "cold". The former, by contrast, is a part of what needs to happen in the acquisition of that meaning (= the ability to use "cold" correctly). Two related, but entirely different kinds of affairs.

Finally, you did not answer my challenge of demonstrating how everyday terms like "immigration" or technical terms like "f(x)" can be shown to be feelings.

Thus, in summary, another complete failure on all counts on your part, owed to sloppy thinking, confusion and conflation of elementary concepts and the avoidance of dealing with the actual issues. Or as I call it, the Jindra experience.

"You fell into the exact trap I predicted you would fall into, giving examples that involve references to feelings but which aren't based on or identifiable with them."

You assert it's a trap. I assert it's the crux of the issue. You assert I don't use "meaning" in its literal sense. I assert I use it precisely in its literal sense -- the way it's used every day by average English speakers. I claim "meaning" is in fact used with the connotation of "importance." So how do you suggest we resolve this difference in semantics? We can't possibly discuss meaning in math unless we get past the simpler stuff.

There is no such thing as a word "not being well understood". For every word there are paradigmatic practices which establish what counts as its proper use.[*] A speaker who ventures too far outside of these practices will no longer be understood. But it is the speaker who won't be understood, not the word.

"I don't understand this word" is the same as to say "I don't know how to use it". The meaning of a word lies in its use within the practices of our lives. There is nothing "not well understood" about the concept of meaning. (The fact that you have a hard time explaining it is another, entirely personal matter.)

[*] If there aren't, then there simply is no word: "xnfrptyf" isn't an English word, but that is not because "xnfrptyf" is "not well understood".

So it wouldn't be surprising if we answer questions about it erroneously.

So you think that the concept of "meaning" may have been used for thousands of years in thousands of languages by millions of speakers, and that all or most of them have been misusing it the whole time? It's hard to imagine the amount of cognitive dissonance it would take to genuinely subscribe to that idea.

I claim "meaning" is in fact used with the connotation of "importance."

Of course it occasionally is. But obviously "What is the meaning of 'immigration'?" is not the same as to ask "What is the importance of immigration?" If you cannot see the difference, you are quite simply a lost cause for the English language.

Likewise for "What does the 'x' mean in 'f(x)'?" A correct answer such as "it is a variable" or "it stands for a number" cannot be reformulated exclusively in terms of "importance". That is the challenge you would have to meet, which of course you cannot and will not.

You can now pull a Jindra and say something like "Are you suggesting that it is not important that 'x' stands for a number?" or some such nonsense. And that would show once more why you are not a candidate for any serious discussion. The content simply escapes you and you would rather play silly games with word forms.

So how do you suggest we resolve this difference in semantics?

In theory it could be done by you realigning yourself with the proper use of the terms in question. Similar to a resocialization after a long prison sentence. But I think we both know that that is not going to happen.

"There is nothing 'not well understood' about the concept of meaning."

Suppose you tell me what it is. Suppose you tell me how well we understand virtue, justice, courage, knowledge, beauty, consciousness and love.

"'What is the meaning of 'immigration'?" is not the same as to ask 'What is the importance of immigration?'"

What is the importance of immigration? How many answers do you want? The importance of settling the West, as was important in late 19th century USA? The importance of cheap labor or skilled labor? The importance of diversity or uniformity? The importance of security? Immigration means many different things to us. Every one of those meanings is important, but important by convention and our current context. Superficially this example looks different. But thoroughly answering both questions would result in the same answers, hitting the same issues.

"A correct answer such as 'it is a variable' or 'it stands for a number' cannot be reformulated exclusively in terms of 'importance'."

I don't have to prove meaning is exclusively about importance, but at its root I think it probably is. Specific to your example, I claim f(x) has no meaning outside human interest (that is, our associating some importance to the function). So I'll also claim a^2+b^2=c^2 has no inherent meaning or importance. It's like Ross's indeterminacy issue taken from a different direction. No function or equation means anything worthwhile unless applied by us to describe some particular in the real world. Every function or equation could describe many real world situations. Math people don't like me to insist math and numbers have an empirical basis. But without that basis, math is no more than a game. It becomes mere symbol manipulation, an endless mind puzzle, like playing chess. No more meaningful than that. IOW, math without an empirical foundation would become solely about our emotional love of games.

pck: There is nothing 'not well understood' about the concept of meaning.

This should actually have been "about the use of the concept". But your reply would no doubt have been the same confused mess, since clearly you are unable to distinguish between the use of a word and an explanation of its use. You should try smoking the picture of a pipe sometime. Maybe you will be enlightened.

DJ: Math people don't like me to insist math and numbers have an empirical basis.

John Moore: I think the energy flow through the brain is a single flow because it's like an electronic circuit.

Electronic circuits aren't "one" either — not without forms to account for the unity, and we're back to a Platonic/Aristotelian view instead of a materialistic one. "Nerves", "bodies", "input", "output" are all beyond matter. Matter is just particles swirling in the void, and any grouping of these particles vs. those is simply ungrounded... on a materialist view.

What you should be considering here is where the classical view came from: Plato and Aristotle didn't decide they wanted to be theists so they tried to come up with some excuse to give everything souls, or something like that. The whole idea of forms arose because philosophers tried to figure out what it meant to say that there were things, this thing or that thing, instead of, say, a random scattering of particles. You can't take having the things for granted and throw out the metaphysical basis for them.

It seems to me that perhaps the most fundamental distinction between Aristotelian psychology, understood as an account of how we come to believe something, and modern psychology is that the former takes the intellect as its starting point, while the latter starts from a Humean-influenced notion of 'impressions'. The result is that the former approaches psychology in the hope of explaining the fact of our immaterial intelligence and our ability to conceive of immaterial universals, while the latter has nothing in mind to explain besides the philosopher's personal biases, which ultimately leads to countless different conclusions whose only common feature seems to be a tendency to undercut the very intelligence and conceptualization that make any psychological analysis possible to begin with.

That we must affirm the former, along with the fact of our intelligence and of immaterial universals, thus influences perhaps the entire gamut of the division between traditional and modern philosophy, including the field of linguistics. The potential problem of semantic indeterminacy thus becomes a thing that must have a yet-to-be-found solution rather than an insurmountable obstacle that compels us to abandon the possibility of any determinate meaning. And as Ed has shown, the obvious point to be had, which shall allow us to form a clearer and more precise account of the acquisition of meaning in a linguistic context, is that meaning is not reducible to behavior. It is thus the hellbent image-loving Humean and the stubborn, mechanistic behaviorist who must forfeit their place at the table, rather than the Aristotelian, insofar as we accept that we are rational creatures who can trust anything we are discussing at the table to begin with.
