Meta

I’ve been studying philosophy of mind for a while, but recently I’ve been delving into psychology, cognitive science and some of the so-called hard sciences: biology, neuroscience, chemistry. A thing you learn from a particular brand of philosophers is to always attempt to take into account things like intentionality, qualia and the unity of conscious experience. However, there is a sense that those first-rate problems take a back seat when you are constructing a theory of the mind (which is, to many, apparently different from a theory of consciousness). While we certainly need a theory of our psychological architecture, it seems odd to me that a theory of the mind could leave out these kinds of things. But it is easy to begin to feel them explained away, as those folk-psychological intuitions melt away in a sea of billions of neurons and synapses.

All this is to say, is this what happens to eliminativists and physicalists about the mind? Are they so wrapped up in the science of neural networks that when they do confront qualia they feel they are justified in ignoring the problem? If so, I don’t blame them. It’s hard not to feel the pull of these intuitions when all these powerful scientific fields converge on a single problem. The philosophy of it all begins to look like alchemy.

The extended mind is an interesting theory of consciousness and cognition that attempts to reshape the way we look at what it means to be human. Does the mind end at our meaty borders? Can palm-pilots become external modules of the mind? I’m not sure there is much about the theory that is explanatorily interesting, but it is fun to think about nonetheless.