Brain Hammer

Saturday, August 15, 2015

For those of you who have been holding off on getting your brains destructively uploaded, I have a couple of bits of good news. There's a new video and a paper draft available for my project, "Metaphysical Daring as a Posthuman Survival Strategy," forthcoming in a special issue of Midwest Studies in Philosophy on science fiction and philosophy edited by Eric Schwitzgebel. Take a gander at the draft at this link. Here's the abstract:

Believing that one can survive having one’s mind “uploaded” to a computer (while having one’s brain destroyed) may be better than the contrary belief in a sense of “better” determined independently of the belief’s truth. Different metaphysical views about a person’s persistence conditions can be ordered on a scale ranging from extremes of metaphysical daring to extremes of metaphysical timidity. Further, the adoption of more daring metaphysical views may confer survival advantages on posthuman adopters and their descendants. Regardless of whether their views are true, the metaphysically timid who refuse to upload may go extinct and be supplanted by their more daring posthuman descendants. This possibility can serve as a basis for contemporary humans to endorse posthumanist values and projects, including a willingness to subject themselves to mind uploading procedures.

The video has just been made available by the University of Texas at Arlington, where I presented this stuff in 2014. If you want to skip around in the video, here are the main landmarks: Kenneth Williford's very nice introduction ends around 03:20. Following the talk is a Q-and-A that starts around 31:39.

Dennett’s most famous argument for his first-person operationalism (hereafter, FPO) proceeds by pointing out the alleged empirical underdetermination of theory-choice between “Stalinesque” and “Orwellian” explanations of certain temporal anomalies of conscious experience (Dennett, op. cit., pp. 115-126). The explanations conflict over whether the anomalies are due to misrepresentations in memories of experiences (Orwellian) or misrepresentations in the experiences themselves (Stalinesque). David Rosenthal (1995, 2005a, 2005b) has offered that his Higher-order Thought theory of consciousness (hereafter, “HOT theory”) can serve as a basis for distinguishing between Orwellian and Stalinesque hypotheses and thus as a basis for resisting FPO. The gist of HOT theory is that one’s having a conscious mental state consists in one’s having a higher-order thought (a HOT) about that mental state. I’ll argue that HOT theory can defend against FPO only on a “relational reading” of HOT theory whereby consciousness consists in a relation between a HOT and an actually existing mental state. I’ll argue further that this relational reading leaves HOT theory vulnerable to objections such as the Unicorn Argument (Mandik, 2009). To defend against such objections, HOT theory must instead admit of a “nonrelational reading” whereby a HOT alone suffices for a conscious state. Indeed, HOT theorists have been increasingly explicit in emphasizing this nonrelational reading of HOT theory (Rosenthal, 2011; Weisberg, 2010 and 2011). However, I’ll argue, on this reading HOT theory collapses into a version of FPO.

This work collects contributions to the growing literature in what might be dubbed "singularity studies," where the singularity in question is the technological singularity, a hypothetical future moment when the rate of change of human technology reaches a speed that surpasses human capacities to predict and prepare for further changes. On the presumption that technological increase follows an exponential curve, the singularity is the knee of the curve, the point beyond which the graph abandons the nearly horizontal for the nearly vertical. Despite the danger that merely human cognition won't keep up with post-singularity events, perhaps cognition that is either artificially enhanced or wholly artificially constituted will be able to thrive in post-singularity times. But if such cognitive systems appear on the scene, we must wonder what implications this will have for humans of the sort currently predominant. Can wholly non-human super-intelligent minds be tamed or otherwise coaxed into friendliness toward their human creators? This is the core question of super artificial intelligence (super AI). Can the essence of what presently counts as a human be preserved in a wholly artificial substrate? This is the core question of mind uploading. These core questions, as well as equally important and intriguing related questions orbiting the cores, are engagingly tackled in a variety of styles by the tome's contributors.

Monday, June 15, 2015

Well, part of what I’m trying to say is that, like most metaphysical debates, this is going to be irresoluble by argumentation. There’s really nothing that pure reason is going to allow us to settle one way or another. All the evidence that we have we all tend to agree on. That evidence just underdetermines whether computers could have conscious experiences or whether they would be mere copies or actual survival of personal identity.

What I try to do as a way of resolving that metaphysical impasse is to look at it from a Darwinian or evolutionary point of view. The basic point of Darwinian evolution applies to any kind of system where you have things that are replicating and various degrees of fitness that would apply to the things that are reproducing. On this kind of abstract characterization, we could describe various hypothetical systems as having features that would be more fit.

Now one of the features that these computer simulations would have is something we could describe as being belief-like. In particular, these things are going to have the belief that they are going to survive the procedure. Now the metaphysical debate is about whether that belief is true, and what I’m trying to argue is that we can say, regardless of whether that belief is true, that belief would have survival value. Physical systems that have that belief are more likely to make more copies of themselves than physical systems that lack that belief.
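The replication argument above can be illustrated with a toy simulation (my own illustrative sketch, not part of the talk or paper; all names and growth rates are assumptions chosen purely for illustration). Two populations differ only in whether they hold the upload-survival belief; the "daring" believers undergo uploading and so copy themselves at a higher rate, and their share of the total population grows regardless of whether their belief is true.

```python
# Toy replicator model (illustrative assumptions throughout):
# "daring" agents believe they survive uploading and so make extra
# copies of themselves; "timid" agents reproduce at a slower rate.
# Nothing in the model depends on whether the daring belief is TRUE.

def simulate(generations=20, daring=100.0, timid=100.0,
             daring_rate=1.2, timid_rate=1.05):
    """Return the daring agents' population share per generation."""
    history = []
    for _ in range(generations):
        daring *= daring_rate  # uploaders spawn additional copies
        timid *= timid_rate    # non-uploaders reproduce more slowly
        history.append(daring / (daring + timid))
    return history

shares = simulate()
# The daring share rises every generation toward 1: the timid are
# gradually supplanted, whatever the metaphysical facts may be.
assert all(later > earlier for earlier, later in zip(shares, shares[1:]))
```

The model is deliberately crude; the point is only that differential replication rates, not the truth of the belief, drive the outcome.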

Sunday, October 5, 2014

Philosophy in an Inclusive Key Summer Institute (PIKSI) is dedicated to improving the pipeline of diverse and under-represented students into philosophy graduate programs and, ultimately, into the profession. PIKSI is conducting a fundraising campaign this month in the hope of getting support from far and wide.

This is something concrete we can do to increase diversity in philosophy.

Here is a link to the PIKSI fundraising page, where you can also find a video made by previous participants.

Hide your brains; the neurophilosophers are coming! Philosopher and neuroscientist Berit (Brit) Brogaard joins Richard Brown and Pete Mandik on the SpaceTimeMind podcast to discuss what makes some states of the mind or brain conscious and others unconscious. Is this sort of question answerable from a psychological or philosophical perspective that makes no essential reference to neuroscience? Or, instead, are neuroscientific data unavoidable in this domain? And: Can Brit go a full ten minutes without using the word “brain”?

On this alternative to Wu’s hypothesis, the best way to account for so-called spatial constancy is [...] instead of treating it as the appearance of an absence of motion, we treat it as an absence of an appearance of motion. So, back to my visual inspection of my yard, I see the chairs and the tree trunk, which involves, I presume, neural processes encoding relevant information concerning shape, location, and color, and during saccades I just don’t have any representations of those things as moving, which, I presume, is different from having representations of the things as not moving.