Monday, October 16, 2017

I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different from what we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that things necessarily appear to us. It might seem unlikely that we are living in a virtual reality instantiated on a non-spatial computer. However, understanding this possibility can help us appreciate the merits of transcendental idealism in general, as well as transcendental idealism's underappreciated skeptical consequences.

As I see the distinction, Truth philosophers sincerely aim to present the philosophical truth as they see it. They tend to prefer modest, moderate, and commonsensical positions. They tend to recognize the substantial truth in multiple different perspectives (at least once they've been around long enough to see the flaws in their youthful enthusiasms), and thus tend to prefer multidimensionality and nuance. Truth philosophers would rather be boring and right than interesting and wrong.

Dare philosophers reach instead for the bold and unusual. They want to explore the boundaries of what can be defended. They're happy for the sake of argument to champion unusual positions that they might not fully believe, if those positions are elegant, novel, fun, contrarian, or if they think the positions have more going for them than is generally recognized. Dare philosophers sometimes treat philosophy like a game in which the ideal achievement is the breathtakingly clever defense of a position that others would have thought to be patently absurd.

There's a familiar dynamic that arises from their interaction. The Dare philosopher ventures a bold thesis, cleverly defended. ("Possible worlds really exist!", "All matter is conscious!", "We're morally obliged to let humanity go extinct!") If the defense is clever enough, so that a substantial number of readers are tempted to think "Wait, could that really be true? What exactly is wrong with the argument?" then the Truth philosopher steps in. The Truth philosopher finds the holes and presuppositions in the argument, or at least tries to, and defends a more seemingly sensible view.

This Dare-and-Truth dynamic is central to the field and good for its development. Sometimes there's more truth in the Dare positions than one would have thought, and without the Dare philosophers out there pushing the limits, seeing what can be said in defense of the seemingly absurd, we as a field wouldn't appreciate those positions as vividly as we might. Also, I think, there's something intrinsically valuable about exploring the boundaries of philosophical defensibility, even if the positions explored turn out to be flatly false. It's part of the magnificent glory of life on Earth that we have fiendishly clever panpsychists and modal realists in our midst.

Now consider Wonder.

Why study philosophy? I mean at a personal level. Personally, what do you find cool, interesting, or rewarding about philosophy? One answer is Truth: Through philosophy, you discover answers to some of the profoundest and most difficult questions that people can pose. Another answer is Dare: It's fun to match wits, push arguments, defend surprising theses, win the argumentative game (or at least play to a draw) despite starting from a seemingly indefensible position. Both of those motivations speak to me somewhat. But I think what really delights me more than anything else in philosophy is its capacity to upend what I think I know, its capacity to call into question what I previously took for granted, its capacity to cast me into doubt, confusion, and wonder.

Unlike the Dare philosopher, the Wonder philosopher is guided by a norm of sincerity and truth. It's not primarily about matching wits and finding clever arguments. Unlike the Truth philosopher, the Wonder philosopher has an affection for the strange and seemingly wrong -- and is willing to push wild theses to the extent they suspect that those theses, wonderfully, surprisingly, might be true.

But in the Dare-and-Truth dynamic of the field, the Wonder philosopher can struggle to find a place. Bold Dare articles and sensible Truth articles both have a natural home in the journals. But "whoa, I wonder if this weird thing might be true?" is a little harder to publish.

Probably no one is pure Truth, pure Dare, or pure Wonder. We're all a mix of the three, I suspect. Thus, one approach is to leave Wonder out of your research profile: Find the Truth, where you can, publish that, and leave Wonder for your classroom teaching and private reading. Defend the existence of moderate naturalistically-grounded moral truths in your published papers; read Zhuangzi on the side.

Still, there are a few publishing strategies for Wonder philosophers. Here are four:

(1.) Find a Dare-like position that you really do sincerely endorse on reflection, and defend that -- optionally with some explicit qualifications indicating that you are exploring it only as a possibility.

(2.) Explicitly argue that we should invest a small but non-trivial credence in some Dare-like position -- for example, because the Truth-type arguments against it aren't fully compelling.

(3.) Find a Truth-like view that generates Wonder if it's true. For example, defend some form of doubt about philosophical method or about the extent of our self-knowledge. Defend the position on sensible, widely acceptable grounds; and then sensibly argue that one possible consequence is that we don't know some of the things that we normally take for granted that we do know.

(4.) Write about historical philosophers with weird and wonderful views. This gives you a chance to explore the Wonderful without committing to it.

In retrospect, I think one unifying theme in my disparate work is that it fits under one of these four heads. Much of my recent metaphysics fits under (1) or (2) (e.g., here, here, here). My work on belief and introspection mostly fits under (3) (with some (1) in my bolder moments): We can't take for granted that we have the handsome beliefs (e.g., "the sexes are intellectually equal") that we think we do, or that we have the moral character or types of experience that we think we do. And my interest in Zhuangzi and some of the stranger corners of early introspective psychology fits under (4).

Maybe it's true that philosophers typically come from wealthy or educationally elite family backgrounds? Various studies suggest that lower-income students and first-generation college students in the U.S. and Britain are more likely to choose what are sometimes perceived as lower-risk, more "practical" majors like engineering, the physical sciences, and education than they are to choose arts and humanities majors.

To explore this question, I requested data from the National Science Foundation's Survey of Earned Doctorates. The SED collects demographic and other data from PhD recipients from virtually all accredited universities in the U.S., typically with response rates over 90%.

I requested data on two relevant SED questions:

What is the highest educational attainment of your mother and father?

and also, since starting at community college is generally regarded as a less elite educational path than going directly from high school to a four-year university,

Did you earn college credit from a community or two-year college?

Before you read on... any guesses about the results?

Community college attendance.

Philosophy PhD recipients [red line below] were less likely than PhD recipients overall [black line] to have attended community college, but philosophers might actually be slightly more likely than other arts and humanities majors to have attended community college [blue line]:


[The apparent jump from 2003 to 2004 is due to a format change in the question, from asking the respondent to list all colleges attended (2003 and earlier) to asking the yes or no question above (2004 and after).]

The NSF also sent me the breakdown by race, gender, and ethnicity. I found no substantial differences by gender. Non-Hispanic white philosophy PhD recipients may have been a bit less likely to have attended community college than the other groups (17% vs. 21%, z = -2.2, p = .03) -- actually a somewhat smaller effect size than I might have predicted. (Among PhD recipients as a whole, Asians were a bit less likely (14%) and Hispanics [any race] a bit more likely (25%) to have attended community college than whites (20%) and blacks (19%).)

In sum, as measured by rates of community college attendance, philosophers' educational background is only a little more elite than that of PhD recipients overall and might be slightly less elite, on average, than that of PhD recipients in the other arts and humanities.

Parental Education.

The SED divides parental education levels into four categories: high school or less, some college, bachelor's degree, or advanced degree.

Overall, recipients reported higher education levels for their fathers (35% higher degree, 25% high school or less [merging 2010-2015]) than for their mothers (25% and 31% respectively). Interestingly, women PhD recipients reported slightly higher levels of maternal education than did men, while women and men reported similar levels of paternal education, suggesting that a mother's education is a small specific predictor of her daughter's educational attainment. (Among women PhD recipients [in all fields, 2010-2015], 27% report their mothers having a higher degree and 29% report high school or less; for men the corresponding numbers are 24% and 33%.)

Philosophers report higher levels of parental education than do other PhD recipients. In 2010-2015, 45% of philosophy PhD recipients reported having fathers with higher degrees and 33% reported having mothers with higher degrees, compared to 43% and 31% in the arts and humanities generally and 35% and 25% among all PhD recipients (philosophers' fathers 1129/2509 vs. arts & humanities' fathers (excl. phil.) 11110/26064, z = 2.3, p = .02; philosophers' mothers 817/2512 vs. a&h mothers 8078/26176, z = 1.7, p = .09). Similar trends for earlier decades suggest that the small difference between philosophy and the remaining arts and humanities is unlikely to be due to chance.
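The z statistics above are ordinary two-proportion z-tests with a pooled standard error. For readers who want to check the arithmetic, here is a minimal sketch using the father and mother counts quoted in the paragraph above (the function name is mine, not the NSF's):

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test using the pooled estimate of the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Fathers with higher degrees: philosophy vs. other arts & humanities
print(round(two_proportion_z(1129, 2509, 11110, 26064), 1))  # 2.3
# Mothers with higher degrees
print(round(two_proportion_z(817, 2512, 8078, 26176), 1))    # 1.7
```

These reproduce the z = 2.3 and z = 1.7 values reported above.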


Although philosophy has a higher percentage of men among recent PhDs (about 72%) than do most other disciplines outside of the physical sciences and engineering, this fact does not appear to explain the pattern. Limiting the data either to only men or only women, the same trends remain evident.

Recent philosophy PhD recipients are also disproportionately non-Hispanic white (about 85%) compared to most other academic disciplines that do not focus on European culture. It is possible that this explains some of the tendency toward higher parental educational attainment among philosophy PhDs than among PhDs in other areas. For example, limiting the data to only non-Hispanic whites eliminates the difference in parental educational attainment between philosophy and the other arts and humanities: 46% both of recent philosophy PhDs and of arts and humanities PhDs report fathers with higher degrees and 34% of both groups report mothers with higher degrees. (Among all non-Hispanic white PhD recipients, it's 41% and 31% respectively.)

Unsurprisingly, parental education is much higher in general among PhD recipients than in the U.S. population overall: Approximately 12% of people over the age of 25 in the US have higher degrees (roughly similar for all age groups, including the age groups that would be expected of the parents of recent PhD recipients).

In sum, the parents of PhD recipients in philosophy tend to have somewhat higher educational attainment than those of PhD recipients overall and slightly higher educational attainment than those of PhD recipients in the other arts and humanities. However, much of this difference may be explainable by the overrepresentation of non-Hispanic whites within philosophy, rather than by a field-specific factor.

Conclusion.

Although PhD recipients in general tend to come from more educationally privileged backgrounds than do people who do not earn PhDs, philosophy PhD recipients do not appear to come from especially elite academic backgrounds, compared to their peers in other departments, despite our field's penchant for highbrow examples.

ETA2: On my public Facebook link to this post, Wesley Buckwalter has emphasized that not all philosophy PhDs become professors. Of course that is true, though it looks like a majority of philosophy PhDs do attain permanent academic posts within five years of completion (see here). If it were the case that people with community college credit or with lower levels of parental education were substantially less likely than others to become professors even after completing the PhD, then that would undermine the inference from these data about PhD recipients to conclusions about philosophy professors in general.

Monday, September 25, 2017

I'm working on a paper, "Kant Meets Cyberpunk", in which I'll argue that if we are living in a simulation -- that is, if we are conscious AIs living in an artificial computational environment -- then there's no particularly good reason to think that the computer that is running our simulation is a material computer. It might, for example, be an immaterial Cartesian soul. (I do think it has to be a concrete, temporally existing object, capable of state transitions, rather than a purely abstract entity.)

Since we normally think of computers as material objects, it might seem odd to suppose that a computer could be composed from immaterial soul-stuff. However, the well-known philosopher and theorist of computation Hilary Putnam has remarked that there's nothing in the theory of computation that requires that computers be made of material substances (1965/1975, pp. 435-436). To support this idea, I want to construct an example of an immaterial computer -- which might be fun or useful even independently of my project concerning Kant and the simulation argument.

--------------------------

Standard computational theory goes back to Alan Turing (1936). One of its most famous results is this: Any problem that can be solved purely algorithmically can in principle be solved by a very simple system. Turing imagined a strip of tape, of unlimited length in at least one direction, with a read-write head that can move back and forth along the tape, reading alphanumeric characters written on that tape and then erasing them and writing new characters according to simple if-then rules. In principle, one could construct a computer along these lines -- a "Turing machine" -- that, given enough time, has the same ability to solve computational problems as the most powerful supercomputer we can imagine.

Now, can we build a Turing machine, or a Turing machine equivalent, out of something immaterial?

For concreteness, let's consider a Cartesian soul [note 1]: It is capable of thought and conscious experience. It exists in time, and it has causal powers. However, it does not have spatial properties like extension or position. To give it full power, let's assume it has perfect memory. This need not be a human soul. Let's call it Angel.

A proper Turing machine requires the following:

a finite, non-empty set of possible states of the machine, including a specified starting state and one or more specified halting states;

a finite, non-empty set of symbols, including a specified blank symbol;

the capacity to move a read/write head "right" and "left" along a tape inscribed with those symbols, reading a symbol inscribed at whatever position the head occupies; and

a finite transition function that specifies, given the machine's current state and the symbol currently beneath its read/write head, a new state to be entered and a replacement symbol to be written in that position, plus an instruction to then move the head either right or left.

A Cartesian soul ought to be capable of having multiple states. We might suppose that Angel has moods, such as bliss. Perhaps he can be in any one of several discrete states along an interval from sad to happy. Angel’s initial state might be the most extreme sadness and Angel might halt only at the most extreme happiness.

Although we normally think of an alphabet of symbols as an alphabet of written symbols, symbols might also be imagined. Angel might imagine a number of discrete pitches from the A three octaves below middle C to the A three octaves above middle C. Middle C might be the blank symbol.

Instead of a physical tape, Angel thinks of the integers. Instead of having a read-write head that moves right and left in space, Angel thinks of adding or subtracting one from a running total. We can populate the "tape" with symbols using Angel's perfect memory: Angel associates 0 with one pitch, +1 with another pitch, +2 with another pitch, and so forth, for a finite number of specified associations. All unspecified associations are assumed to be middle C. Instead of a read-write head starting at a spatial location on a tape, Angel starts by thinking of 0 and recalling the pitch that 0 is associated with. Instead of the read-write head moving right to read the next spatially adjacent symbol on the tape, Angel adds one to his running total and thinks of the pitch associated with the updated total. Instead of moving left, he subtracts one. Thus, Angel's "tape" is a set of memory associations like that in the figure below, where at some point the specific associations run out and middle C is assumed on to infinity.

The transition function can be understood as a set of rules of this form: If Angel is in such and such a state (e.g., 23% happy) and is "reading" such and such a note (e.g., B2), then Angel should "write" such-and-such a note (e.g., G4), enter such-and-such a new state (e.g., 52% happy), and either add or subtract one from his running count. We rely on Angel's memory to implement the writing and reading: To "write" G4 when his running count is +2 is to commit to memory the idea that next time the running count is +2 he will "read" -- that is, actively recall -- the symbol G4 (instead of the B2 he previously associated with +2).

As far as I can tell, Angel is a perfectly fine Turing machine equivalent. If standard computational theory is correct, he could execute any computational task that any ordinary material computer could execute. And he has no properties incompatible with being an immaterial Cartesian soul as such souls are ordinarily conceived.
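For readers who like to see the construction spelled out, the whole scheme can be sketched in a few lines of code: the "tape" is a memory of integer-to-pitch associations (a dictionary), the read/write head is a running total (an integer), and machine states are moods. The particular mood names, pitches, and sample transition table below are illustrative assumptions of mine, not part of the argument:

```python
# A minimal sketch of Angel as a Turing-machine equivalent.
BLANK = "C4"  # middle C plays the role of the blank symbol

def run_angel(memory, transitions, state="sad", halting=("happy",), limit=1000):
    """Step the machine until it enters a halting state (or hits the limit)."""
    total = 0  # Angel's running total, standing in for head position
    for _ in range(limit):
        if state in halting:
            break
        symbol = memory.get(total, BLANK)   # "read": actively recall the pitch
        state, write, move = transitions[(state, symbol)]
        memory[total] = write               # "write": commit a new association
        total += move                       # +1 for "right", -1 for "left"
    return state, memory

# Sample program: replace every B2 with G4, then halt happy at the first blank.
transitions = {
    ("sad", "B2"): ("sad", "G4", +1),
    ("sad", BLANK): ("happy", BLANK, +1),
}
state, memory = run_angel({0: "B2", 1: "B2", 2: "B2"}, transitions)
# state == "happy"; 0, +1, and +2 are now associated with "G4"
```

Nothing in the sketch appeals to spatial properties: only discrete states, associations held in memory, and arithmetic on a running total, all of which Angel is stipulated to have.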

--------------------------

[Note 1] I attribute moods and imaginings to this soul, which Descartes believes arise from the interaction of soul and body. On my understanding of Descartes, such things are possible in souls without bodies, but if necessary we could change to more purely intellectual examples. I am also bracketing Descartes' view that the soul is not a "machine", which appears to depend on commitment to a view of machines as necessarily material entities (Discourse, part 5).
--------------------------

Tuesday, September 19, 2017

We present evidence that mainstream Anglophone philosophy is insular in the sense that participants in this academic tradition tend mostly to cite or interact with other participants in this academic tradition, while having little academic interaction with philosophers writing in other languages. Among our evidence: In a sample of articles from elite Anglophone philosophy journals, 97% of citations are citations of work originally written in English; 96% of members of editorial boards of elite Anglophone philosophy journals are housed in majority-Anglophone countries; and only one of the 100 most-cited recent authors in the Stanford Encyclopedia of Philosophy spent most of his career in non-Anglophone countries writing primarily in a language other than English. In contrast, philosophy articles published in elite Chinese-language and Spanish-language journals cite from a range of linguistic traditions, as do non-English-language articles in a convenience sample of established European-language journals. We also find evidence that work in English has more influence on work in other languages than vice versa and that when non-Anglophone philosophers cite recent work outside of their own linguistic tradition it tends to be work in English.

Comments and criticisms welcome, either by email to my academic address or as comments on this post.
By the way, I'm traveling (currently in Paris, heading to Atlanta tomorrow), so replies and comments approvals might be a bit slower than usual.

Thursday, September 14, 2017

The Doctor: You lot, you spend all your time thinking about dying, like you're gonna get killed by eggs, or beef, or global warming, or asteroids. But you never take time to imagine the impossible. Like maybe you survive. (Doctor Who, “The End of the World”)

For many hardnosed people, I imagine there’s an obvious answer to both questions: there is no special value in human survival, and in fact, the universe may be a better place for everyone (including perhaps us) if we were to all quietly go extinct. This is a position I’ve heard from ecologists and antinatalists, and while I won’t debate it here, I find it deeply unpersuasive. As far as we know, humanity is the only truly intelligent species in the universe – the only species that is capable of great works of art, philosophy, and technological development. And while we may not be the only conscious species on earth, we are likely the only species capable of the more rarefied forms of happiness and value. Further to that, even though there are surely other conscious species on earth worth caring about, our sun will finish them off in a few billion years, and they’re not getting off this planet without our help (in other words: no dogs on Mars unless we put them there).

However, even if you’re sympathetic to this line of response, it admittedly doesn’t show there’s any value in specifically human survival. Even if we grant that humans are an important source of utility worth protecting, surely there are intelligent aliens somewhere out there in the cosmos capable of enjoying pleasures just as fancy as those we experience. Insofar as we’re concerned with human survival at all, then, maybe it should just be in virtue of our more general high capacity for well-being?

Again, I’m not particularly convinced by this. Leaving aside the fact that we may be alone in the universe, I can’t shake the deep intuition that there’s some special value in the thriving of humanity, even if only for us. To illustrate the point, imagine that one day a group of tiny aliens show up in orbit and politely ask if they can terraform earth to be more amenable to them, specifically replacing our atmosphere with one composed of sulphur dioxide. The downside of this will be that humanity and all of the life on Earth will die out. On the upside, however, the aliens’ tiny size means that Earth could sustain trillions of them. “You’re rational ethical beings,” they say. “Surely, you can appreciate that it’s a better use of resources to give us your planet? Think of all the utility we’d generate! And if you’re really worried, we can keep a few organisms from every species alive in one of our alien zoos.”

Maybe I’m parochial and selfish, but the idea that we should go along with the aliens’ wishes seems absurd to me (well, maybe they can have Mars). One of my deepest moral intuitions is that there is some special good that we are rationally allowed – if not obliged – to pursue in ensuring the continuation and thriving of humanity.

Let’s just say you agree with me. We now face a further question: what would it take for humanity to survive in this ethically relevant sense? It’s a surprisingly hard question to answer. One simple option would be that we survive as long as the species Homo sapiens is still kicking around. Without getting too deeply into the semantics of “humanity”, it seems like this misses the morally interesting dimensions of survival. For example, imagine that in the medium term future, beneficial gene-modding becomes ubiquitous, to the point where all our descendants would be reproductively cut off from breeding with the likes of us. While that would mean the end of Homo sapiens (at least by standard definitions of species), it wouldn’t, to my mind, mean the end of humanity in the broader and more ethically meaningful sense.

A trickier scenario would involve the idea that one day we may cease to be biological organisms, having all uploaded ourselves to computers or robot bodies. Could humanity still exist in this scenario? My intuition is that we might well survive this. Imagine a civilization of robots who counted biological humans among their ancestors, and went around quoting Shakespeare to each other, discussing the causes of the Napoleonic Wars, and debating whether the great television epic Game of Thrones was a satisfactory adaptation of the books. In that scenario, I feel that humanity in the broader sense could well be thriving, even if we no longer have biological bodies.

This leads me to a final possibility: maybe what’s ethically relevant in our survival is really the survival of our culture and values: that what matters is really that beings relevantly like us are partaking in the artistic and cultural fruits of our civilization.

While I’m tempted by this view, I think it’s just a little bit too liberal. Imagine we wipe ourselves out next year in a war involving devastating bioweapons, and then a few centuries later, a group of aliens show up on Earth to find that nobody’s home. Though they’re disappointed that there are no living humans, they are delighted by the cultural treasure trove they’ve found. Soon, alien scholars are quoting Shakespeare and George R.R. Martin and figuring out how to cook pasta al dente. Earth becomes to the aliens what Pompeii is to us: a fantastic tourist destination, a cultural theme park.

In that scenario, my gut says we still lose. Even though there are beings that are (let’s assume) relevantly like us that are enjoying our culture, humanity did not survive in the ethically relevant sense.

So what’s missing? What is it that’s preserved in the robot descendant scenario that’s missing in the alien tourist one? My only answer is that some kind of appropriate causal continuity must be what makes the difference. Perhaps it’s that we choose, through a series of voluntary, purposive actions to bring about the robot scenario, whereas the alien theme park is a mere accident. Or perhaps it’s the fact that I’m assuming there’s a gradual transition from us to the robots, rather than the eschatological lacuna of the theme park case.

I have some more thought experiments that might help us decide between these alternatives, but that would be taking us beyond the scope of a blogpost. And perhaps my intuitions that got us this far are already radically at odds with yours. But in any case, as we take our steps into the next stage of human development, I think it’s important for us to figure out what it is about us (if anything) that makes humanity valuable.

Tuesday, September 12, 2017

I have a new science fiction story out this month in Clarkesworld. I'm delighted! Clarkesworld is one of my favorite magazines and a terrific location for thoughtful speculative fiction.

However, I doubt that you'll like my story. I don't say this out of modesty or because I think this story is especially unlikable. I say it partly to help defuse expectations: Please feel free not to like my story! I won't be offended. But I say it too, in this context, because I think it's important for writers to remind themselves regularly of one possibly somewhat disappointing fact: Most people don't like most fiction. So most people are probably not going to like your fiction -- no matter how wonderful it is.

In fiction, so much depends on taste. Even the very best, most famous fiction in the world is disliked by most people. I can't stand Ernest Hemingway or George Eliot. I don't dispute that they were great writers -- just not my taste, and there's nothing wrong with that. Similarly, most people don't like most poetry, no matter how famous or awesome it is. And most people don't like most music, when it's not in a style that suits them.

A few stories do appear to be enjoyed by almost everyone who reads them ("Flowers for Algernon"? "The Paper Menagerie"?), but those are peak stories of great writers' careers. To expect even a very good story by an excellent writer to achieve almost universal likability is like hearing that a philosopher has just put out a new book and then expecting it to be as beloved and influential as Naming and Necessity.

First lesson: Although you probably want your friends, family, and colleagues to enjoy your work, and some secret inner part of you might expect them to enjoy it (because it's so wonderful!), it's best to suppress that desire and expectation. You need to learn to expect indifference without feeling disappointed. It's like expecting your friends and family and colleagues to like your favorite band. Almost none of them will -- even if some part of you screams out "of course everyone should love this song it's so great!" Aesthetic taste doesn't work like that. It's perfectly fine if almost no one you know likes your writing. They shouldn't feel bad about that, and you shouldn't feel bad about that.

Second lesson: Write for the people who will like it. Sometimes one hears the advice that you should "just write for yourself" and forget the potential audience. I can see how this might be good advice if the alternative is to try to please everyone, which will never succeed and might along the way destroy what is most distinctive about your voice and style. However, I don't think that advice is quite right, for most writers. If you really are just writing for yourself -- well, isn't that what diaries are for? If you're only writing for yourself you needn't think about comprehensibility, since of course you understand everything. If you're only writing for yourself, you needn't think about suspense, since of course you know what's going to happen. And so forth. The better advice here is to write for the 10%. Maybe 10% of the people around you have tastes similar enough to your own that there's a chance that your story will please them. They are your target audience. Your story needn't be comprehensible to everyone, but it should be comprehensible to them. Your story needn't work intellectually and emotionally for everyone, but you should try to make it work intellectually and emotionally for them.

When sending your story out for feedback, ignore the feedback of the 90%, and treasure the feedback of the 10%. Don't try to implement every change that everyone recommends, or even the majority of changes. Most people will never like the story that you would write. You wouldn't want your favorite punk band taking aesthetic advice from your country-music loving uncle. But listen intently to the 10%, to the readers who are almost there, the ones who have the potential to love your story but don't quite love it yet. They are the ones to listen to. Make it great for them, and forget everyone else.

One of the most disturbing ethical questions I’ve encountered in relation to videogames, though, is Morgan Luck’s so-called “Gamer’s Dilemma”. The puzzle it poses is roughly as follows. On the one hand, we don’t tend to regard people committing virtual murders as particularly ethically problematic: whether I’m leading a Mongol horde and slaughtering European peasants or assassinating clients as a killer for hire, it seems that, since no-one really gets hurt, my actions are not particularly morally troubling (there are exceptions to this of course). On the other hand, however, there are still some actions that I could perform in a videogame that we’re much less sanguine about: if we found out that a friend enjoyed playing games involving virtual child abuse or torture of animals, for example, we would doubtless judge them harshly for it.

The gamer’s dilemma concerns how we can explain or rationalize this disparity in our responses. After all, the disparity doesn’t seem to track any actual harm – there’s no obvious harm done in either case – or even the quantity of simulated harm (nuclear war simulations in which players virtually incinerate billions don’t strike me as unusually repugnant, for example). And while it might be that some forms of simulated violence can lead to actual violence, this remains controversial, and again, it’s unlikely that any such causal connections between simulated harm and actual harm would appropriately track our different intuitions about the different kinds of potentially problematic actions we might take in video games.

However, while the Gamer’s Dilemma is an interesting puzzle in itself, I think we can broaden the focus to include other artforms besides videogames. Many of us have passions for genres like murder mystery stories, serial killer movies, or apocalyptic novels, all of which involve extreme violence but fall well within the bounds of ordinary taste. However, someone who had a particular penchant for stories about incest, necrophilia, or animal abuse might strike us as, well, more than a little disturbed. Note that this is true even when we focus just on obsessive cases: someone with an obsession for serial killer movies might strike us as eccentric, but we’d probably be far more disturbed by someone whose entire library consisted of books about animal abuse.

Call this the puzzle of disturbing aesthetic tastes. What makes it the case that some tastes are disturbing and others not, even when both involve fictional harm? Is our tendency to form negative moral judgments about those with disturbing tastes rationally justified? While I’m not entirely sure what to think about this case, I am inclined to think that disturbing aesthetic tastes might reasonably guide our moral judgment of a person insofar as they suggest that that person’s broader moral emotions may be, well, a little out of the ordinary. Most of us feel revulsion rather than fascination at even the fictional torture of animals, for example, and if someone doesn’t share this revulsion in fictional cases, that might be evidence that they are ethically deviant in other ways. Crucially, this doesn’t apply to depictions of things like fictional murder, since almost all of us have enjoyed a crime drama at some point in our lives, and such enjoyment is well within the boundaries of normal taste.

Note that there’s a parallel here with one possible response to Bernard Williams’s famous example of the truck driver who – through no fault of his own – kills a child who runs into the road, and subsequently feels no regret or remorse. As Williams points out, there’s no rational reason for the driver to feel regret – ex hypothesi, he did everything he could – yet we’d think poorly of him were he just to shrug the incident off (interestingly paralleled by the recent public outcry in the UK following a similar incident involving an unremorseful cyclist). I think what’s partly driving our intuition in such cases is the fact that a certain amount of irrational guilt and regret even for actions outside our control is to be expected as part of normal human moral psychology. When such regret is absent, it’s an indicator that a person is lacking at least some typical moral emotions. In much the same way, even if there is nothing intrinsically wrong about enjoying videogames or movies about animal torture, the fact that such enjoyment constitutes a deviation from normal human moral attitudes might make us reasonably suspicious of these people’s broader moral emotions.

I think this is a promising line to take with regard to both the gamer’s dilemma and the puzzle of disturbing tastes. One consequence of this, however, would be that as society’s norms and standards change, certain tastes may cease to be indicative of more general moral deviancy. For example, in a society with a long history of cannibal fiction, people in general might lack the intense disgust reactions that we ourselves display, despite being in all respects morally upstanding. In such a society, then, the fact that someone was fascinated with cannibalism might not be a useful indicator of their broader moral attitudes. I’m inclined to regard this as a reasonable rather than counterintuitive consequence of the view, reflecting the rich diversity in societal taboos and fascinations. Nonetheless, no matter what culture I was visiting, I doubt I’d trust anyone who enjoyed fictional animal torture with watching my dog for the weekend.

How about other European-language journals? To what extent do articles in languages like French, German, and Spanish cite works originally written in the same language vs. works originally written in other languages?

To examine this question, we looked at a convenience sample of established journals that publish primarily or exclusively in European languages other than English -- journals catalogued in the Philosophy section of JStor with available records running at least from 1999 through 2010. [note 1] We downloaded the most recently available JStor archived issue of each of these journals and examined the references of every research article in those issues (excluding reviews, discussion notes, editors' introductions, etc.). This gave us a total of 96 articles to examine, 41 in French, 23 in German, 14 in Italian, 8 in Portuguese, 6 in Spanish, and 4 in Polish.

Although this is not a systematic or proportionate sample of non-English European-language journal articles, we believe it is broad and representative enough to provide a preliminary test of our hypothesis. Are citation patterns in these journals broadly similar to the citation patterns of elite Anglophone journals (where 97% of citations are to same-language sources)? Or are they closer to the patterns of elite Chinese-language journals (51% of citations to same-language sources)?

In all, we had 2883 citations for analysis. For each citation, we noted the language of the citing article, whether the cited source had originally been published in the same language as the citing article or in a different language, and, if in a different language, whether that language was English. As in our previous studies, sources in translation were coded based on the original language of publication rather than the language into which they had been translated (e.g., a translation of Plato into German would be coded as ancient Greek rather than German). We also noted the original year of publication of the cited source, sorting it into one of four categories: ancient to 1849, 1850-1945, 1946-1999, or 2000-present. [note 2]

In our sample, 44% of citations (1270/2883) were to same-language sources, 30% were to sources originally published in English (some translated into the language of the citing article), and 26% (749/2883) were to all other languages combined. These results are much closer to the Chinese-language pattern of drawing broadly from a variety of language traditions than they are to the English-language pattern of citing almost exclusively from the same linguistic tradition.

French- and German-language articles showed more same-language citation than did articles in other languages (51% and 71% respectively, compared to an average of 20% for the other sampled languages), but we interpret this result cautiously due to the small and possibly unrepresentative samples of articles in each language.

Breaking the results down by year category, we found the following:

Thus, in this sample, cited sources originally published between 1946 and 1999 were just about as likely to have been originally written in English as to have been written in the language of the citing article. When the cited source was published before 1946 or after 1999, it was less likely to be in English.

Looking article by article, we found that only 5% of articles (5/96) cited exclusively same-language sources. This contrasts sharply with our study of articles in Anglophone journals, 73% of which cited exclusively English-language sources.

We conclude that non-English European-language philosophy articles cite work from a broad range of linguistic traditions, unlike articles in elite Anglophone philosophy journals, which cite almost exclusively from English-language sources.

One weakness of this research design is the unsystematic sampling of journals and languages. Therefore, we hope to follow up with at least one more study, focused on a more carefully chosen set of journals from a single European language. Stay tuned!

Note 2: Coding was done by two expert coders, each with a PhD in philosophy. One coder was fluent only in English but had some reading knowledge of German, French, and Spanish. The other coder was fluent in Spanish and English, had excellent reading knowledge of German and Portuguese, and had some reading knowledge of French and Italian. The coding task was somewhat difficult, especially for journals using footnote format. Expertise was required to recognize, for example, the original language and publication period of translated works, which was not always immediately evident from the citation information. We randomly selected 10 articles to code for inter-rater reliability, and in 91% of cases (235 of 258 citations) the coders agreed on both the original language and the year category of original publication. Errors involved missing or double-counting some footnoted citations, typographical errors, or mistakes in language or year category. Errors did not fall into any notable pattern and are, in our view, within an acceptable rate given the difficulty of the coding task and the nature of our hypothesis.
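The reliability figure here is simple percent agreement: the share of citations on which both coders assigned the same (language, year-category) pair. A minimal sketch of that calculation, with invented coder labels rather than the study's actual data:

```python
# Percent agreement between two coders. Each label is a
# (language, year-category) pair; the example labels are invented.
def percent_agreement(coder_a, coder_b):
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

coder_a = [("English", "1946-1999"), ("German", "1850-1945"), ("French", "2000-present")]
coder_b = [("English", "1946-1999"), ("German", "1946-1999"), ("French", "2000-present")]
print(round(percent_agreement(coder_a, coder_b), 2))  # agrees on 2 of 3 items -> 0.67
```

On the study's numbers, the same calculation gives 235/258 ≈ 0.91.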

Monday, August 28, 2017

In a sample of elite Anglophone philosophy journals, only 3% of citations are to works that were originally written in a language other than English. Are philosophy journals in other languages similar? Do they mostly cite sources from their own linguistic tradition? Or do they cite more broadly?

We will examine this question by looking at citation patterns from several non-English languages. Today we start by examining a sample of 208 articles published in fifteen elite Chinese-language journals from 1996 to 2016. [See Note 1 for methodological details.]

In our sample of 208 Chinese-language articles, 49% (1422/2929) of citations are to works originally written in languages other than the language of the citing article, in stark contrast with our results for Anglophone philosophy journals.

English is the most frequently cited foreign language, constituting 31% (915/2929) of all citations (compared to 17% for all other languages combined). Other cited languages are German, French, Russian, Japanese, Latin, Greek, Korean, Sanskrit, Spanish, Italian, Polish, Dutch, and Tibetan.

Our sample of elite Anglophone journals contained no journals focused on the history of philosophy. In contrast, our sample of elite Chinese-language journals contains three that focus on the history of Chinese philosophy. Excluding the Chinese-history journals from the analysis, we found that the plurality of citations (44%, 907/2047) are to works originally written in English (often in Chinese translation for the older works). Only 32% (647/2047) of citations are to works originally written in Chinese (leaving 24% for all other languages combined).

Looking just at the journals specializing in the history of Chinese philosophy, 98% (860/882) of citations are to works originally written in Chinese – a percentage comparable to the rate of same-language citation in the non-historical elite Anglophone journals in our earlier analysis. In other words, Chinese journals specifically discussing the history of Chinese philosophy cite Chinese-language sources at about the same rate as Anglophone journals cite English-language sources when discussing general philosophy.

We were not able to determine the original publication dates of all of the cited works. However, we thought it worth seeing whether the English-language citations are mostly of classic historical philosophers like Locke, Hume, and Mill, or whether instead they are mostly of contemporary writers. Thus, we randomly sampled 100 of the English-language citations. Of these, 68 (68%) were published in the period from 1946 to 1999 and 19 (19%) were published from 2000 to the present.

Finally, we broke the results down by year of publication of the citing article (excluding the three history journals). This graph shows the results.

Point-biserial correlation analysis shows a significant increase in rates of citation of English-language sources from 1996 to 2016 (34% to 49%, r = .11, p < .001). Citation of both Chinese and other-language sources may also be decreasing (r = -.05, p = .03; r = -.08, p = .001), but we would interpret these trends cautiously due to the apparent U-shape of the curves and the possibility of article-level effects that would compromise the statistical independence of the trials.
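For readers unfamiliar with the statistic: the point-biserial correlation is just the Pearson correlation between a 0/1 variable (here, whether a citation is to an English-language source) and a continuous one (here, the year of the citing article). The sketch below implements it from scratch and runs it on invented counts chosen to mimic the reported trend (roughly 34% English citations in 1996 rising to 49% in 2016); these are not the study's data.

```python
def point_biserial(binary, continuous):
    """Pearson correlation between a 0/1 variable and a continuous one;
    this is exactly the point-biserial correlation."""
    n = len(binary)
    mb = sum(binary) / n
    mc = sum(continuous) / n
    cov = sum((b - mb) * (c - mc) for b, c in zip(binary, continuous)) / n
    sb = (sum((b - mb) ** 2 for b in binary) / n) ** 0.5
    sc = (sum((c - mc) ** 2 for c in continuous) / n) ** 0.5
    return cov / (sb * sc)

# Invented data: 200 citations per sampled year, with the number of
# English-language citations rising from 68/200 (34%) to 98/200 (49%).
english_counts = {1996: 68, 2001: 76, 2006: 83, 2011: 90, 2016: 98}
years, is_english = [], []
for year, k in english_counts.items():
    for i in range(200):
        years.append(year)
        is_english.append(1 if i < k else 0)

print(round(point_biserial(is_english, years), 2))  # -> 0.11
```

A small r like .11 can still be highly significant with thousands of citations, which is why the trend registers as p < .001 despite its modest size.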

Citation patterns in elite Chinese-language philosophy journals thus appear to be very different from citation patterns in elite Anglophone philosophy journals. The Anglophone journals cite almost exclusively works that were originally written in English. The Chinese journals cite about half Chinese sources and about half foreign language sources (mostly European languages), with English being the dominant language in the foreign language group, and increasingly so in recent years.

We leave for later discussion the question of causes, as well as normative questions such as to what extent elite journals in various languages should be citing mostly from the same language tradition versus to what extent they should aim instead to cite more broadly from work written in a range of languages.

Stay tuned for some similar analyses of journals in other languages!

------------------------------------------

Note 1: The journals are: 臺灣大學哲學論評 (National Taiwan University Philosophical Review), 政治大學哲學學報 (NCCU Philosophical Journal), and 東吳哲學學報 (Soochow Journal of Philosophical Studies), which are ranked as the Tier I philosophy journals by the Research Institute for the Humanities and Social Sciences, Ministry of Science and Technology, Taiwan; and 哲学研究 (Philosophical Researches), 哲学动态 (Philosophical Trends), 自然辩证法研究 (Studies in Dialectics of Nature), 道德与文明 (Morality and Civilization), 世界哲学 (World Philosophy), 自然辩证法通讯 (Journal of Dialectics of Nature), 伦理学研究 (Studies in Ethics), 现代哲学 (Modern Philosophy), 周易研究 (Studies of Zhouyi), 孔子研究 (Confucius Studies), 中国哲学史 (History of Chinese Philosophy), and 科学技术哲学研究 (Studies in Philosophy of Science and Technology), which are ranked as the core philosophy journals in the Chinese Social Sciences Citation Index by the Institute for Chinese Social Sciences Research and Assessment, Nanjing University, China. We sampled the research articles of their first issues in 1996, 2001, 2006, 2011, and 2016, generating a list of 208 articles. A coder fluent in both Chinese and English and with a PhD in philosophy (Linus Huang) coded the references of these articles, generating a list of 2952 citations to examine. For each reference, we noted its original publication language. Translated works were coded based on the original language in which they were written rather than the language into which they had been translated. If that information was not available in the reference, Linus coded it by hand, searching online or drawing on his knowledge of the history of philosophy. The original language was determinable in 2929 of the 2952 citations.

Thursday, August 24, 2017

Eric has previously argued that almost any answer to the problem of consciousness involves “crazyism” – that is, a commitment to one or another hypothesis that might reasonably be considered bizarre. So it’s in this spirit of openness to wild ideas that I’d like to throw out one of my own longstanding “crazy” ideas concerning our identity as conscious subjects.

To set the scene, imagine that we have one hundred supercomputers, each separately running a conscious simulation of the same human life. We’re also going to assume that these simulations are all causally coupled together so that they’re in identical functional states at any one time – if a particular mental state type is being realized in one at a given time, it’s also being realized in all the others.

The question I want to ask now is: how many conscious subjects – subjective points of view – exist in this setup? A natural response is “one hundred, obviously!” After all, there are one hundred computers all running their own simulations. But the alternate crazy hypothesis I’d like to suggest is that there’s just one subject in this scenario. Specifically, I want to claim that insofar as two physical realizations of consciousness give rise to a qualitatively identical sequence of experiences, they give rise to a single numerically identical subject of experience.

Call this hypothesis the Identity of Phenomenal Duplicates, or IPD for short. Why would anyone think such a crazy thing? In short, I’m attracted by the idea that the only factors relevant to the identity and individuation of a conscious subject are subjective: crudely, what makes me me is just the way the world seems to me and my conscious reactions to it. As a subject of phenomenal experience, in other words, my numerical identity is fixed just by those factors that are part of my experience, and factors that lie outside my phenomenal awareness (for example, which of many possible computers are running the simulation that underpins my consciousness) are thus irrelevant to my identity.

Putting things another way, I’d suggest that maybe my identity qua conscious subject is more like a type than a token, meaning that a single conscious subject could be multiply instantiated. As a helpful analogy, think about the ontology of something like a song, a novel, or a movie. The Empire Strikes Back has been screened billions of times over the years, but all of these were instantiations of one individual thing, namely the movie itself. If the IPD thesis is correct, then the same might be true for a conscious subject – that I myself (not merely duplicates of me!) could be multiply instantiated across a host of different biological or artificial bodies, even at a single moment. What *I* am, then, on this view, is a kind of subjective pattern or concatenation of such patterns, rather than a single spatiotemporally located object.

Here’s an example that might make the view seem (marginally!) more plausible. Thinking back to the one hundred simulations scenario above, imagine that we pick one simulation at random to be hooked up to a robot body, so that it can send motor outputs to the body and receive its sensory inputs. (Note that because we’re keeping all the simulations coupled together, they’ll remain in ‘phenomenal sync’ with whichever Sim we choose to embody as a robot.) The robot wakes up, looks around, and is fascinated to learn it’s suddenly in the real world, having previously spent its life in a simulation. But now it asks us: which of the Sims am I? Am I the Sim running on the mainframe in Tokyo, or the one in London, or the one in São Paulo?

One natural response would be that it was identical to whichever Sim we uploaded the relevant data from. But I think this neglects the fact that all one hundred Sims are causally coupled with one another, so in a sense, we uploaded the data from all of them – we just used one specific access point to get to it. To illustrate this, note that in transferring the relevant information from our Sims to the robot, we might wish (perhaps for reasons of efficiency) to grab the data from all over the place – there’s no reason we’d have to confine ourselves to copying the data over from just one Sim. So here’s an alternate hypothesis: the robot was identical to all of them, because they were all identical to one another – there was just one conscious subject all along! (Readers familiar with Dennett’s Where Am I? may see clear parallels here.)

I find something very intuitive about the response IPD provides in this case. I realize, though, that what I’ve provided here isn’t much of an argument, and invites a slew of questions and objections. For example, even if you’re sympathetic to the reading of the example above, I haven’t established the stronger claim of IPD, which makes no reference to causal coupling. This leaves it open to say, for example, that had the simulations been qualitatively identical by coincidence (for example, via being a cluster of Boltzmann brains) rather than being causally coupled, their subjects wouldn’t have been numerically identical. We might also wonder about beings whose lives are subjectively identical up to a particular point in time, and afterwards diverge. Are they the same conscious subject up until the point of divergence, or were they distinct all along? Finally, there are also some tricky issues concerning what it means for me to survive in this framework – if I’m a phenomenal type rather than a particular token instantiation of that type, it might seem like I could still exist in some sense even if all my token instances were destroyed (although would Star Wars still exist in some relevant sense if every copy of it was lost?).

Setting aside these worries for now, I’d like to quickly explore how the truth or falsity of IPD might actually matter – in fact, might matter a great deal! Consider a scenario in which some future utilitarian society decides that the best way to maximize happiness in the universe is by running a bunch of simulations of perfectly happy lives. Further, let’s imagine that their strategy for doing this involves simulating the same single exquisitely happy life a billion times over.

If IPD is right, then they’ve just made a terrible mistake: rather than creating a billion happy conscious subjects, they’ve just made one exquisitely happy subject with a billion (hedonically redundant) instantiations! To rectify this situation, however, all they’d need to do would be to introduce an element of variation into their Sims – some small phenomenal or psychological difference that meant that each of the billion simulations was subjectively unique. If IPD is right, this simple change would increase the happiness in the universe a billion-fold.

There are other potential interesting applications of IPD. For example, coupled with a multiverse theory, it might have the consequence that you currently inhabit multiple distinct worlds, namely all those in which there exist entities that realize subjectively and psychologically identical mental states. Similarly, it might mean that you straddle multiple non-continuous areas of space and time: if the same identical simulation is run at time t1 and time t2 a billion years apart, then IPD would suggest that a single subject cohabits both instantiations.

Anyway, while I doubt I’ve convinced anyone (yet!) of this particular crazyism of mine, I hope at least it might provide the basis for some interesting metaphysical arguments and speculations.

Wednesday, August 16, 2017

'Dial 888,' Rick said as the set warmed. 'The desire to watch TV, no matter what's on it.'

'I don't feel like dialling anything at all now,' Iran said.

'Then dial 3,' he said.

'I can't dial a setting that stimulates my cerebral cortex into wanting to dial! If I don't want to dial, I don't want to dial that most of all, because then I will want to dial, and wanting to dial is right now the most alien drive I can imagine.'

(PHILIP K. DICK, Do Androids Dream of Electric Sheep?)

--------------------------

We don’t have direct control over most of our beliefs and attitudes, let alone most of our drives and desires. No matter how much money was offered as an incentive, for example, I couldn’t will myself to believe in fairies by this evening. Similarly, figuring out how to rid ourselves of our involuntary prejudices and biases is tricky (see here for an attempt), and changing our basic drives (such as our sexual orientation) is almost certainly impossible.

That’s not to say that we have zero control over any of these things. If I wanted to increase the likelihood of having religious beliefs, for example, I might decide to start hanging out with religious people, or attending services. But it’s a messy and indirect path to acquiring new beliefs and values.

Imagine, then, how useful it would be if we had some kind of more direct ability to control our minds. In thinking about this possibility, a useful analogy comes from the idea of Administrator access on a computer. What if – perhaps for just a few hours a month – you could delve into your beliefs, your values, and your drives, and reconfigure them to your heart’s content, before ‘logging back in’ as your (now modified) self?

Some immediately tempting applications of this possibility are fairly clear. For one, we’d perhaps want to eliminate or tone down our most egregious cognitive biases: confirmation bias, post-purchase rationalization, the sunk cost fallacy, and so on. Similarly, we might want to rid ourselves of implicit prejudices that we may have against groups or individuals. Prejudiced against elderly people? Just go into the Settings menu and adjust the relevant slider to correct it. Irrationally resentful of a colleague who accidentally slighted you? A quick fix to remove the relevant emotion and you’re sorted.

Another attractive application might be to bring our immediate desires into line with our higher-order desires. Crave cigarettes and wish you didn’t? Tamp down the relevant first-order desire and the craving is gone. Wish you had the motivation to run in the mornings? Then ramp up the slider for “desire to go jogging”. We might even want to give ourselves some helpful false beliefs or ‘constructive myths’. Disheartened by the fact that you as an individual can do little to prevent climate change? Maybe a false belief that you can be a powerful agent for change will help you do good.

Finally, we come to the most controversial stuff, like values, drives, and memories. Take values first. Imagine that you find yourself trapped in a small town where you’re ostracised for your deviant political beliefs. One easy option might be to simply tweak your values to come into line with your community. Or imagine if you could adjust things like your sexual drives and orientation. Certainly, some people might feel relief at ridding themselves of certain kinks or fetishes that they found oppressive, while others might enjoy experimenting with recalibrating their sexuality. But we could also find that people were pressured or tempted to adjust their sexuality to bring it into line with the bigoted social expectations of their community, and it’s hard not to find that a morally troubling idea. Finally, imagine if we could wipe away unpleasant memories at will – the bad relationships, social gaffes, and painful insults could be gone in a moment. What could possibly go wrong with that?

As much as I like the idea of tweaking my mind, I feel uncomfortable about a lot of these possibilities. First, at the risk of sounding clichéd, it seems like the gains of personal growth are often as much in the journey as in the destination. So, take someone who learns to become more patient with others’ failings. Along the way, she’s probably going to pick up a bunch of other important realizations – of her own fallibility, perhaps, or of the distress she’s caused in the past by dismissing people. Skipping straight to the outcome threatens to cheat her, and us, of something valuable. Similarly, sometimes along the road of personal change, we realize that we’ve been aiming for the wrong thing. Someone who desperately wants to fit in with their peer group, for example, might slowly and painfully realize that they don’t like their peers as much as they thought. Skipping the journey, then, not only robs us of potential goods we might find along the way, but also of the capacity to change our mind about where we’re going.

There might also be some kinds of extrinsic goods that would be lost if we could all tweak our minds so effortlessly. Take the example of someone who wishes he could fit in with his more conservative community. Even though he might relish not having values that are different from those around him, by holding onto them, he could be providing encouragement and cover for other political deviants in his town. In much the same way, diversity of opinion, outlook, and motivation may be valuable for the community at large, despite not always being pleasant for those in the minority. This can be true even if the majority perspective in the community is in the right: dissenters can helpfully force the dominant voices to articulate and justify their views.

Finally, we could run into serious unexpected consequences – maybe getting rid of the availability heuristic would turn out to drastically slow down my reasoning, for example, or perhaps making myself more prosocial could backfire on me if I live in an antisocial community. Still more catastrophic consequences might involve deviant paths to fulfilment of desires. If (in Administrator mode) I give myself an overriding desire to be “fitter than the average person in my town”, for example, I might (as a normal user) go on to decide that the fastest way to achieve that goal is to kill all the healthy people in my community! More prosaically, it’s also easy to imagine people being tempted to reconfigure their difficult-to-achieve desires (like becoming rich and famous) and instead replacing them with stuff that’s easy to achieve (collecting paperclips, say, or counting blades of grass). Perhaps they would be well advised to do so, but this is philosophically controversial to say the least!

While Administrator Access to our own minds is of course just science fiction for now, I think it’s a useful tool for probing our intuitions about well-being, rationality, and personal change. It could also potentially guide us in situations where we do have more powerful ways of influencing the development of minds. This may be a big deal in the development of future forms of artificial intelligence, for example, but something similar arguably applies even when we’re deciding how to raise our children (should we encourage them to believe in Santa Claus?).

For my part, I doubt I could resist making a few tweaks to myself (maybe I’d finally get to make good use of that gym membership). But I’d do so carefully... and likely with a sense of trepidation and unease.

Tuesday, August 08, 2017

As any parent can readily testify, little kids get upset. A lot. Sometimes it’s for broadly comprehensible stuff - because they have to go to bed or to daycare, for example. Sometimes it’s for more bizarre and idiosyncratic reasons – because their banana has broken, perhaps, or because the Velcro on their shoes makes a funny noise.

For most parents, these episodes are regrettable, exasperating, and occasionally, a little funny. We rarely if ever consider them tragic or of serious moral consequence. We certainly feel some empathy for our children’s intense anger, sadness, or frustration, but we generally don’t make a huge deal about these episodes. That’s not because we don’t care about toddlers, of course – if they were sick or in pain we’d be really concerned. But we usually treat these intense emotional outbursts as just a part of growing up.

Nonetheless, I think if we saw an adult undergoing extremes of negative emotion of the kind that toddlers go through on a daily or weekly basis, we’d be pretty affected by it, and regard it as something to be taken seriously. Imagine you’d visited a friend for dinner, and upon announcing you were leaving, he broke down in floods of tears, beating on the ground and begging you not to go. Most of us wouldn’t think twice about sticking around until he felt better. Yet when a toddler pulls the same move (say, when we’re dropping them off with a grandparent), most parents remain, if not unmoved, then at least resolute.

What’s the difference between our reactions in these cases? In large part, I think it’s because we assume that when adults get upset, they have good reasons for it – if an adult friend starts sobbing uncontrollably, then our first thought is going to be that they’re facing real problems. For a toddler, by contrast – well, they can get upset about almost anything.

This makes a fair amount of sense as far as it goes. But it also seems to require that our moral reactions to apparent distress should be sensitive not just to the degree of unhappiness involved, but also to the reasons for it. In other words, we’re not discounting toddler tantrums because we think little kids aren’t genuinely upset, or are faking, but because the tantrums aren’t reflective of any concerns worth taking too seriously.

Interestingly, this idea seems at least prima facie in tension with some major philosophical accounts of happiness and well-being, notably hedonism and desire satisfaction theory. By the lights of these approaches, it’s hard to see why toddler emotions and desires shouldn’t be taken just as seriously as adult ones. These episodes do seem like bona fide intensely negative experiences, so for utilitarians, every toddler could turn out to be a kind of negative utility monster! Similarly, if we adopt a form of consequentialism that aims at maximizing the number of satisfied desires, toddlers might be an outsize presence – as indicated by their tantrums, they have a lot of seemingly big, powerful, intense desires all the time (for, e.g., a Kinder Egg, another episode of Ben and Holly, or that one toy their older sibling is playing with).

One possibility I haven’t so far discussed is the idea that toddlers’ emotional behavior might be deceptive: perhaps the wailing toddler, contrary to appearances, is only mildly peeved that a sticker peeled off his toy. There may be something to this idea: certainly, toddlers have very poor inhibitory control, so we might naturally expect them to be more demonstrative about negative emotions than adults. That said, I find it hard to believe that toddlers really aren’t all that bothered by whatever it is that’s caused their latest tantrum. As much as I may be annoyed at having to leave a party early, for example, it’s almost inconceivable to me that it could ever trigger floods of tears and wailing, no matter how badly my inhibitory control had been impaired by the host’s martinis. (Nonetheless, I’d grant this is an area where psychology or neuroscience could be potentially informative, so that we might gain evidence that toddlers’ apparent distress behavior was misleading).

But if we do grant that toddlers really get very upset all the time, is it a serious moral problem? Or just an argument against theories that take things like emotions and desires to be morally significant in their own right, without being justified by good reasons? As someone sympathetic to both hedonism about well-being and utilitarianism as a normative ethical theory, I’m not sure what to think. Certainly, it’s made me consider whether, as a parent, I should take my son’s tantrums more seriously. For example, if we’re at the park, and I know he’ll have a tantrum if we leave early, should I prioritize his emotions above, e.g., my desire to get home and grade student papers? Perhaps you’ll think that in reacting like this, I’m just being overly sentimental or sappy – come on, what could be more normal than toddler tantrums! – but it’s worth being conscious of the fact that previous societies normalized ways of treating children that we nowadays would regard as brutal.

There’s also, of course, the developmental question: toddlers aren’t stupid, and if they realize that we’ll do anything to avoid them having tantrums, then they’ll exploit that to their own (dis)advantage. Learning that you can’t always get what you want is certainly part of growing up. But thinking about this issue has certainly made me take another look at how I think about and respond to my son’s outbursts, even if I can’t fix his broken bananas.

Note: this blogpost is an extended exploration of ideas I earlier discussed here.

Thursday, August 03, 2017

In 2014, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on awards and "best of" placements in the previous ten years. Since people seemed to find it useful or interesting, I've been updating it annually. Below is my list for 2017.

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies or standalones.

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) Lists of this sort do tend to reinforce the prestige hierarchy. I have mixed feelings about that. But since the prestige hierarchy is socially real, I think it's in people's best interest -- especially the best interest of outsiders and newcomers -- if it is common knowledge.

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, Harper's, and Beloit Fiction Journal are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos, Slate, and Buzzfeed are popular magazines that have published a little bit of science fiction on the side. e-flux is a wide-ranging arts journal. The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

(3.) One important thing left out of these numbers is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasts are an increasingly important venue. Some text-based magazines, like Clarkesworld, Lightspeed, and Strange Horizons also regularly podcast their stories.

(5.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

(6.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

(7.) The "Sad Puppy" kerfuffle threatens to damage the once-sterling reputation of the Hugos, but the Hugos are a small part of my calculation and the results are pretty much the same either way.

Tuesday, August 01, 2017

Time travel is a more or less ubiquitous feature of modern sci-fi. Almost every long running SF show – Star Trek, Futurama, The X-Files – will have a time travel episode sooner or later, and some, like Doctor Who, use time travel as the main narrative device. The same applies to novels and, of course, to Hollywood – blockbuster SF franchises like the Terminator and Back to the Future employ it, as do quirkier pictures like Midnight in Paris. And of course, there’s no shortage of time travel novels, including old favorites like A Christmas Carol, and perhaps most influentially, HG Wells’s wonderful social sci-fi novella The Time Machine.

I don’t find it particularly surprising that we’re so interested in time travel. We all engage in so-called ‘mental time travel’ (or chronesthesia) all the time, reviewing past experiences and imagining possible futures, and the psychological capacities required in our doing so are the subject of intense scientific and philosophical interest.

Admittedly, the label “mental time travel” may be a bit misleading here; most of what gets labelled mental time travel is quite different from the SF variant, consisting in episodic recall of the past or projection into the future rather than imagining our present selves thrown back in time. But I think we also do this latter thing quite a lot. To give a commonplace example, we’re all prone to engage in “coulda woulda shoulda” thinking: if only I hadn’t parked the car under that tree branch in a storm, if only I hadn’t forgotten my wedding anniversary, if only I hadn’t fumbled that one interview question. Frequently when we do this, we even elaborate how the present might have been different if we’d just done something a bit differently in the past. This looks a lot like the plots of some famous science fiction stories! Similarly, I’m sure we’ve all pondered what it would be like to experience different historical periods like the American Revolution, the Roman Empire, or the age of dinosaurs (you can even buy a handy t-shirt). More prosaically, I imagine many of us have also reflected on how satisfying it would be to relive some of our earlier life experiences and do things differently the second time round – standing up to bullies, or studying harder in high school (again, a staple of light entertainment).

Given the above, I had always assumed that time travel was part of fiction because it was simply part of us. Time travel narratives, in other words, were borrowed from the kind of imaginings we all do all the time. It was with huge surprise, then, that I discovered (while teaching a course on philosophy and science fiction) that time travel doesn’t appear in fiction until the 18th century, in the short novel “Memoirs of the Twentieth Century”. Specifically, this story imagines letters from the future being brought back to 1728. The first story of any kind (as far as I’ve been able to find) that features humans being physically transported back into the past doesn’t come until 1881, in Edward Page Mitchell’s short story “The Clock That Went Backward”.

Maybe this doesn’t seem so surprising – isn’t science fiction in the business of coming up with bizarre, never before seen plot devices? But in fact, it’s pretty rare for genuinely new ideas to show up in science fiction. Long before we had stories about artificial intelligence, we had the tales of Pinocchio and the Golem of Prague. Creatures on other planets? Lucian's True History had beings living on the moon and sun back in the 2nd century AD. For huge spaceships, witness the mind-controlled Vimanas of the Sanskrit epics. And so on. And yet, for all the inventiveness of folklore and mythology, there’s very little in the way of time travel to be found. The best I’ve come up with so far is some time dilation in the story of Kakudmi in the Ramayana, and visions of the past in the Book of Enoch. But as far as I can tell, there’s nothing that fits the conventional time travel narratives we’re used to, namely physically travelling to ages past or future, let alone any idea that we might alter history.

What’s going on here? One possibility is that something changed in science or society in the 18th century that paved the way for stories about time travel. But what would that be, and how would it lead to time travel seeming more plausible? For example, if the first time travel literature had accompanied the emergence of general relativity (with all its assorted time related weirdness), then that would offer a satisfying answer. However, Newtonian physics was already in place by the late 17th century, and it’s not clear which of Newton’s principles might pave the way for time travel narratives.

I’m very open to suggestions, but let me throw out one final idea: time travel narratives don’t show up in earlier fiction because they’re weird, unnatural, and counterintuitive. Even weirder than the staples of folklore and mythology, like people being turned into animals. Time travel is just not the kind of thing that naturally occurs to humans to think about at all, and it’s only via a few fateful books in the 18th century and its subsequent canonisation in The Time Machine that it’s become established as a central plot device in science fiction.

But doesn’t that contradict what I said earlier about how we all often naturally think about time travel related scenarios, like changing the past, or witnessing historical events firsthand? Not necessarily. Maybe these kinds of thought patterns are actually inspired by time-travel science fiction. In other words, prior to the emergence of time travel as a trope, maybe people really didn’t daydream about meeting Julius Caesar or going back and changing history. Perhaps the past was seen simply as a closed book, rather than (in the memorable words of L. P. Hartley) just “a foreign country”. That’s not to suggest, of course, that people didn’t experience memories and regrets, but maybe they experienced them a little differently, with the past seeming simply an immutable background to the present.

I’m excited by the idea that a science fiction trope might have birthed a new and widespread form of thinking. Partly that’s because it suggests that science fiction may be more influential than we realize, and partly it’s because, as a philosopher, I’m interested in where patterns of thought come from. However, I’m very happy to be proven wrong in this conjecture – perhaps there are letters from the Middle Ages in which writers engage in precisely this kind of speculation. Or perhaps the emergence of time travel fiction in the 18th century can be explained in terms of some historical event I’ve missed. Or who knows: maybe there’s an untranslated gnostic manuscript out there where Jesus has a time machine....