Thursday, October 27, 2016

The Monday before last, Ned Block and Eric Mandelbaum brought me into their philosophy grad seminar at New York University to talk about belief. Our views are pretty far apart, and I got pushback during class (and before class, and after class!) from a variety of directions. But the issue that stuck with me most was the big-picture issue of dispositionalism vs representationalism about belief.

I'm a dispositionalist. By this I mean that to believe some particular proposition, such as that your daughter is at school, is nothing more or less than to be disposed toward certain patterns of behavior, conscious experience, and cognition, under a range of hypothetical conditions -- for example, to be disposed to go to your daughter's school if you decide you want to meet her, to be disposed to feel surprise should you head home for lunch and find her waiting there, and to be disposed, if the question arises, to infer that her favorite backpack is also probably at the school (since she usually takes it with her). All of these dispositions hold only "ceteris paribus" or "all else being equal," and one needn't have all of them to count as believing. (For more details about my version of dispositionalism in particular, see here.) Crucial to the dispositionalist approach (but not unique to it) is the idea that the implementational details don't matter -- or rather, they matter only derivatively. It doesn't matter if you've got a connectionist net in your head, or representations in the language of thought, or a billion little homunculi whispering in thieves' cant, or an immaterial soul. As long as you have the right clusters of behavioral, experiential, and cognitive dispositions, robustly, across a suitably broad range of hypothetical circumstances, you believe.

On a representationalist view, implementation does matter. On a suitably modest view of what a "representation" is (I like Dretske's account), the human mind uses representations. For example, it's very plausible that neural activity in primary visual cortex is representational, if representations are states of a system that function to track or convey information about something else. (In primary visual cortex, patterns of excitation in groups of neurons function to indicate geometrical features in various parts of the visual field.) The representationalist about belief commits to a general picture of the mind as a manipulator of representations, and then characterizes believing as a matter of having the right sort of representation (e.g., one with the content "my daughter is at school") stored or activated in the right type of functional role in the mind (for example, stored in memory and poised (if all goes well) to be activated in cognitive processing when you are asked, "where is your daughter now?").

I interpreted some of the pushback from Block, Mandelbaum, and their students as follows: "Look, the best cognitive science employs a representational model of the mind. So representations are real. Even you don't deny that. So if you want a truly scientific model of the mind instead of some vague dispositionalism that looks only at the effects or manifestations of real cognitive states, you should be a representationalist."

How is a dispositionalist to reply to this concern? I have three broad responses.

The Implementational Response. The most concessive response (short of saying, "oops, you're right!") is to deny that there is any serious conflict between the two positions by allowing that the way one gets to have the dispositional profile constitutive of belief might be by manipulating representations in just the manner that the representationalist supposes. The views can be happily married! You don't get to have the dispositional profile of a believer unless you already have the right sort of representational architecture underneath; and once you have the right sort of representational architecture underneath, you thereby acquire the relevant dispositional profile. The views only diverge in marginal or hypothetical cases where representational architecture and dispositional profile come apart -- but maybe those cases don't matter too much.

However, I think that answer is too concessive, for a couple of reasons.

The Messiness Response. Here's a too-simple hypothetical representationalist architecture for belief. To believe that P (e.g., that my daughter is at school today) is just to have a representation with the content P ("my daughter is at school today") stored somewhere in the mind, ready to be activated when it becomes relevant whether P is the case (e.g., I'm asked "where is your daughter now?"). One problem with this view is the problem of specifying the exact content. I believe that my daughter is at school today. I also believe that my daughter is at JFK Elementary today. I also believe that my daughter is at JFK Elementary now. I also believe that Kate is at JFK Elementary now. I also believe that Kate is in Ms. Salinas' class today. This list could obviously be expanded considerably. Do I literally have all of these representations stored separately? Or is there only one representation stored, from which the others are swiftly derivable? If so, which one? How could we know? This puzzle invites us to reject the simplistic picture that believing P is a matter of having a stored representation with exactly the content P. But once we make this move, we open ourselves up to a certain kind of implementational messiness -- which is plausible anyway. As we have seen in the two best-developed areas of cognitive science -- the cognitive science of memory and the cognitive science of vision -- the underlying architectural stories tend to be highly complex and tend not to map neatly onto our folk psychological categories. Furthermore, viewed from an appropriately broad temporal perspective, scientific fashions come and go: We have this many memory systems, no we have this many; early visual processing is not much influenced by later processing, wait yes it is influenced, wait no it's not after all.
Dynamical systems, connectionist networks, patterns of looping activation can all be understood in terms of language-like representations, or no they can't, or maybe map-like representations or sensorimotor representations are better. Given the messiness and uncertainty of cognitive science, it is premature to commit to a thoroughly representationalist picture. Maybe someday we'll have all this figured out well enough so that we can say "this architectural structure, this one, is what you have if you believe that your daughter is at school, we found it!" That would be exciting! That day, I abandon dispositionalism. Until then, I prefer to think of belief dispositionally rather than relying upon any particular architectural story, even as general an architectural story as representationalism.

The What-We-Care-About Response. Why, as philosophers, do we want an account of belief? Presumably, it's because we care about predicting and explaining our behavior and our patterns of experience. So let's suppose as much divergence as it's reasonable to suppose between patterns of experience and behavior and patterns of internal architecture. Maybe we discover an alien species that has outward behavior and inner experiences virtually identical to our own but implemented very differently in the underlying architecture. Or maybe we can imagine a human being whose actions and experiences, not only in her actual circumstances but also in a wide range of hypothetical circumstances, are just like those of someone who believes that P, but who lacks the usual underlying architecture. On an architecture-driven account, it seems that we have to deny that these aliens or this person believe what they seem to believe; on a dispositional account, we get to say that they do believe what they seem to believe. The latter seems preferable: If what we care about in an account of belief is patterns of behavior and experience, then it makes sense to build an account of belief that prioritizes those patterns of behavior and experience as the primary thing, and treats purely architectural considerations as secondary.

Friday, October 21, 2016

One of my regular TAs, Chris McVey, uses a lot of storytelling in his teaching. About once a week, he'll spend ten minutes sharing a personal story from his life, relevant to the class material. He'll talk about a family crisis or about his time in the U.S. Navy, connecting it back to the readings from the class.

At last weekend's meeting of the Minorities And Philosophy group at Princeton, I was thinking about what teaching techniques philosophers might use to appeal to a broader diversity of students, and "storytime with Chris" came to mind. The more I think about it, the more I find to like about it.

Here are some thoughts.

* Students are hungry for stories, and rightly so. Philosophy class is usually abstract and impersonal, or, when not abstract, focused on toy examples or remote issues of public policy. A good story, especially one that is personally meaningful to the teacher, leaps out and captures attention. People in general love stories and are especially ready for them after long dry abstractions and policy discussions. So why not harness that? But furthermore, storytelling gives shape and flesh to the stick figures of philosophical abstraction. Most abstract principles only get their full meaning when we see how they play out in real cases. Kant might say "act on that maxim that you can will to be a universal law" or Mengzi might say "human nature is good" -- but what do such claims really amount to? Students rightly feel at sea unless they are pulled away from toy examples and into the complexity of real life. Although it's tempting to think that the real philosophical force is in the abstract principles and that storytelling is just needless frill and packaging, I think that the reverse might be closer to the truth: The heart of philosophy is in how we engage our minds when given real, messy examples, and the abstractions we derive from cases always partly miss the point.

* Personal stories vividly display the relevance of philosophy. Many -- maybe most -- students are understandably turned off by philosophy because it seems so remote from anything of practical value. What's the point, they wonder, in discussing Locke's view of primary and secondary qualities, or semi-comical far-fetched problems about runaway trolleys, or under what conditions you "know" something is a barn in Fake Barn Country? It takes a certain kind of beautiful, nerdy, impractical mind to love these questions for their own sake. Too much focus on such issues can mislead students into thinking that philosophy is irrelevant to their lives. However (I hope you'll agree), nothing is more relevant to our lives than philosophy. Every choice we make expresses our values. Every controversial opinion we form depends upon our general worldview and our implicit or explicit sense of what people or institutions or methods deserve our trust. Most students will understandably fail to see the connection between academic philosophy and the philosophy they personally live through their choices and opinions unless we vividly show how these are connected. Through storytelling, you model your struggle with Kant's hard line against lying, or with how far to trust purported scientific experts, or with your fading faith in an immaterial soul -- and students can see that philosophy is not just a Glass Bead Game.

* Personal stories shift the locus of academic capital. We might think of "academic capital" as the resources students bring to class that help them succeed. In philosophy class, important capital includes skill at reading and evaluating abstract arguments and, in class discussion, skill at working up passable pro and con arguments on the spot. Academic capital of this sort also includes knowledge of the philosophical tradition, comfort in a classroom environment, and confidence that one knows how this game is played. These are terrific skills to have, of course; and some students have more of them than others, or at least believe they do. Those students tend to dominate class discussion. If you tell a personally meaningful story, however, you can make a different set of skills and experiences suddenly important. Students who might have had similar stories from their own lives now have something unique to contribute. Students who are good at storytelling, students who have the social and emotional intelligence to evaluate what might have really happened in your family fight, students with cultural knowledge of the kind of situation you describe -- they now have some of the capital. And they might be a very different group from the ones who are so good at the argumentative pro-and-con. In my experience, good philosophical storytelling engages and draws out discussion from a larger and more diverse group of students than does abstract argument and toy example.

If philosophers were more serious about engaged, personal storytelling in class, we would, I think, have a different and broader range of students who loved our courses and appreciated the importance and interest of our discipline.

Tuesday, October 11, 2016

(inspired by a conversation with Cory Doctorow about how a kid's high-tech rented eyes might be turned toward favored products in the cereal aisle)

At two million dollars outright, of course I couldn't afford to buy eyes for my four-year-old daughter Eva. So, like everyone else whose kids had been blinded by the GuGuBoo Toy company's defective dolls (may its executives rot in bankruptcy Hell), I rented the eyes. What else could I possibly do?

Unlike some parents, I actually read the Eye & Ear Company's contract. So I knew part of what we were in for. If we didn't make the monthly payments, her eyes would shut off. We agreed to binding arbitration. We agreed to a debt-priority clause, to financial liability for eye extraction, to automatic updates. We agreed that from time to time the saccade patterns of her eyes would be subtly adjusted so that her gaze would linger over advertisements from companies that partnered with Eye & Ear Co. We agreed that in the supermarket, Eva's eyes would be gently maneuvered toward the Froot Loops and the M&Ms.

When the updates came in, we always had the legal right to refuse them. We could, hypothetically, turn off Eva's eyes, then have them surgically removed and returned to Eye & Ear Co. Each new rental contract was thus technically voluntary.

When Eva was seven, the new updater threatened shutoff unless we transferred $1000 into a debit account. Her updated eyes contained new software to detect any copyrighted text or images she might see. Instead of buying copyrighted works in the usual way, we agreed to have a small fee deducted from our account for each work Eva viewed. Instead of paying $4.99 for the digital copy of a Dr Seuss book, Eye & Ear would deduct $0.50 each time she read the book. Video games might be free with ads, or $0.05 per play, or $0.10, or even $1.00. Since our finances were tight, we set up parental controls: Eva's eyes required parental permission for any charge over $0.99 or any cumulative charges over $5.00 in a day -- and of course they also blocked any "adult" material. Until we granted approval, blocked or unpaid material was blurred and indecipherable, even if she was just peeking over someone's shoulder at a book or walking past a television in a dentist's lobby.

When Eva was ten, the updater overlaid advertisements in her visual field. It helped keep the rental costs down. (We could have bought out of the ads for an extra $6,000 a year.) The ads never interfered much with Eva's vision -- they just kind of scrolled across the top of her visual field sometimes, Eva told us, or printed themselves onto clouds and the sides of buildings.

By the time Eva was thirteen, I'd finally risen to a managerial position at work, and we could afford the new luxury eyes for her. By adjusting the settings, Eva could see infrared at night. She could zoom in on distant objects. She could bug out her eyes and point them in different directions like some sort of weird animal, to take in a broader field of view. She could also take snapshots and later retrieve them with a subvocalization -- which gave her a great advantage at school over her normal-eyed and cheaper-eyed peers. Installed software could text-search through stored snapshots, solve mathematical equations, and pull relevant information from the internet. When teachers tried to ban such enhancements from the classroom, Eye & Ear fought back, arguing that the technology had become so integral to the children's lives that it couldn't be removed without disabling them. Eye & Ear refused to develop the technology to turn off the enhancement features, and no teacher could realistically prevent a kid from blinking and subvocalizing.

By the time Eva was seventeen it looked like she and the two other kids at her high school with luxury eye rentals would more or less have their choice among elite universities. I refused to believe the rumors about parents intentionally blinding their children so that they too could rent eyes.

When Eva turned twenty, all the updates -- not just the cheap ones -- required that you accept the "acceleration" technology. Companies contracted with Eye & Ear to privilege their messages and materials for faster visual processing. Pepsi paid a hundred million dollars a year so that users' eyes would prioritize resolving Pepsi cans and Pepsi symbols in the visual scene. Coca Cola cans and symbols were "deprioritized" and stayed blurry unless you focused on them for a few seconds. Loading stored images worked similarly. A remembered scene with a Pepsi bottle in it would load almost instantly. One with a Coke bottle would take longer and might start out fuzzy or fragmented.

Eye & Ear started to make glasses for the rest of us, which imitated some of the functions of the implants. Of course they were incredibly useful. Who wouldn't want to take snapshots, see in the dark, zoom into the distance, get internet search and tagging? We all rented whatever versions we could afford, signed the annual terms and conditions, received the updates. We wore them pretty much all day, even in the shower. The glasses beeped alarmingly whenever you took them off, unless you went through a complex shutdown sequence.

When the "Johnson for President" campaign bought an acceleration, the issue went all the way to the Supreme Court. Johnson's campaign had paid Eye & Ear to prioritize the perception of his face and deprioritize the perception of his opponent's face, prioritize the visual resolution and recall of his ads, deprioritize the resolution and recall of his opponent's ads. Eva was now a high-powered lawyer in a New York firm, on the fast track toward partner. She worked for the Johnson campaign, though I wouldn't have thought it was like her. Johnson was so authoritarian, shrill, and right-wing -- or at least it seemed so to me, when I took my glasses off.

Johnson favored immigration restrictions, and his opponent claimed (but never proved) that Eye & Ear implemented an algorithm that highlighted people's differences in skin tone -- making the lights a little lighter, the darks a little darker, the East Asians a bit yellow. Johnson won narrowly, before his opponent's suit about the acceleration had made it through the appeals process. It didn't hit the high court until a month after Johnson's inauguration. Eva helped prepare Johnson's defense. Eight of the nine justices were over eighty years old. They lived stretched lives with enhanced longevity and of course all the best implants. They heard the case through the very best ears.

Tuesday, October 04, 2016

When I was a graduate student at Berkeley in the 1990s, philosophy PhD students were required to pass exams in two of the following four languages: French, German, Greek, or Latin. I already knew German. I argued that Spanish should count (I had read Unamuno in the original as an undergrad), but my petition was denied since I didn’t plan to do further work in Spanish. I argued that a psychological methods course would be more useful than a second foreign language, given that my dissertation was in philosophy of psychology, but that was not treated as a serious suggestion. I'd learned some classical Chinese, but I thought it would be pointless to attempt 600 characters in two hours as required (much more daunting than 600 words in a European language). So I crammed French for a few weeks and passed the exam.

I have recently become interested in mainstream Anglophone philosophers’ tendency to privilege certain languages and traditions in the history of philosophy. If we think globally, considering large, robust traditions of written work treating recognizably philosophical topics with argumentative sophistication and scholarly detail, it seems clear that at least Arabic, classical Chinese, and Sanskrit merit inclusion alongside French, German, Greek, and Latin as languages of major philosophical importance.

The exclusion of Arabic, Chinese, and Sanskrit from Berkeley’s standard language requirements could not, I think, have been mere ignorance. Rather, the focus on French, German, Greek, and Latin appeared to express a value judgment: that these four languages are more central to philosophy as it ought to be studied.

The language requirements of philosophy PhD programs have loosened over the years, but French, German, Latin, and Greek still form the core language requirements in departments that have language requirements. Students therefore continue to receive the message that these languages are the most important ones for philosophers to know.

I examined the language requirements of a sample of PhD programs in the United States. Because of their sociological importance in the discipline, I started with the top twelve ranked programs in the Philosophical Gourmet Report. I then expanded the sample by considering a group of strong PhD programs that are not as sociologically central to the discipline – the programs ranked 40-50 in the U.S.

Among the top twelve programs (corrections welcome):

* Four appeared to have no foreign language requirement (Michigan, NYU, Rutgers, Stanford).

* Seven (Berkeley, Columbia, Harvard, Pitt, UCLA, USC, Yale) had some version of a language requirement, requiring one of French, German, Greek or Latin -- always exactly that list. Some programs explicitly allowed another language and/or another relevant research skill by petition or consultation.

* Only Princeton had a language requirement that did not appear to privilege French, German, Greek, and Latin. Princeton only requires a language “relevant to the student’s proposed course of study” (or alternatively “a unit of advanced work in another department” or “completion of an additional unit of work in any area of philosophy”).

You might think that, practically speaking, Arabic or classical Chinese would be a fine language to choose. Students can always petition; maybe such petitions are almost always granted. This response, however, ignores the fact that something is communicated by other languages’ non-inclusion on the privileged list. For a tendentious comparison – maybe too tendentious! – consider an admissions form that said “we admit men, but also women by petition”. One thing is treated as a norm and the other as an exception.

Interestingly, the PhD programs ranked less highly by the Philosophical Gourmet had more relaxed language requirements overall. In the 40-50 group, only two of the eleven mentioned a language requirement or list of languages. Still, the privileged languages were from the same set: “French, German, or other” at Saint Louis University, and optional certification in French, German, Greek, or Latin at Rochester.

I do not believe that we should be sending students the message that French, German, Greek, and Latin are more important than other languages in which there is a body of interesting philosophical work. It is too Eurocentric a vision of the history of philosophy. Let’s change this.