Monday, June 30, 2014

Hannah Sullivan’s outstanding book The Work of Revision came out last year and got less attention than it deserved — though here’s a nice article from the Boston Globe. My review of the book has just appeared in Books and Culture, but it’s behind a paywall — and why, you may ask? Because B&C needs to make ends meet, that’s why, and if you haven’t subscribed you ought to, posthaste.

Anyway, here’s the link and I’m going to quote my opening paragraphs here, because they relate to themes often explored on this blog. But do find a way to read Sullivan’s book.

Once upon a time, so the village elders tell us, there reigned a gentle though rather dull king called Literary Criticism, who always wore tweed and spoke in a low voice. But then, on either a very dark or very brilliant day, depending on who's telling the story, this unassuming monarch was toppled by a brash outsider named Theory, who dressed all in black, wore stylish spectacles, and spoke with a French accent. For a time it seemed that Theory would rule forever. But no king rules forever.

One can be neither definitive nor uncontroversial about such matters, given the chaotic condition of the palace records, but if I were in the mood to be sweeping, I would suggest that the Reign of Theory in Anglo-American literary study extended from approximately 1960 (Michel Foucault's Madness and Civilization) to approximately 1997 (Judith Butler's Excitable Speech: A Politics of the Performative). Its period of absolute dominance was rather shorter, from 1976 (Gayatri Spivak's English translation of Jacques Derrida's Of Grammatology) to 1989 (Stephen Greenblatt's Shakespearean Negotiations: The Circulation of Social Energy in Renaissance England). Those were heady days.

The ascendance of Theory brought about the occlusion of a set of humanistic disciplines that had for a long time been central to literary study, especially the various forms of textual scholarship, from textual editing proper to analytical bibliography. To take but one institutional example: at one time the English department of the University of Virginia, under the leadership of the great textual scholar Fredson Bowers, had been dominant in these fields, but Bowers retired in 1975, and by the time I arrived at UVA as a new graduate student in 1980, almost no one on the faculty was doing textual scholarship, and I knew no students who were interested in it. This situation would begin to be rectified in 1986 with the hiring of Jerome McGann, who renewed departmental interest in these fields and played a role in bringing Terry Belanger's Rare Book School from Columbia to Virginia (in 1992). Now Virginia is once more seen as a major player in textual scholarship, bibliography, the history of the book, and what was once called "humanities computing" — a field in which McGann was a pioneer — but is now more likely to be called "digital humanities."

Theory is still around; but its skeptical, endlessly ramifying speculations can now seem little more than airy fabrications in comparison to the scrupulous study of material texts and the very different kind of scrupulosity required to write computer programs that data-mine texts. The European theorist in black has had to give way to new icons of (scholarly) cool. Literary textual scholarship is back: more epistemologically careful, aware of the lessons of theory, but intimately connected to traditions of humanistic learning that go back at least to Erasmus of Rotterdam in the 16th century — and maybe even Eusebius of Caesarea in the 4th.

Sunday, June 29, 2014

The largest effect size reported had a Cohen’s d of 0.02 — meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.
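To make the arithmetic in that quoted passage concrete: a standardized effect size (Cohen's d) multiplied by the population standard deviation gives the shift in raw units, which is where the one-twentieth-of-an-inch figure comes from. A minimal illustrative sketch (the function name is my own, not from either post):

```python
def effect_in_raw_units(cohens_d, sd):
    """Convert a standardized effect size back into raw units:
    Cohen's d is the mean shift divided by the standard deviation,
    so d * sd recovers the mean shift in the original units."""
    return cohens_d * sd

# The height analogy from the quoted passage: d = 0.02 applied to
# US male height with a standard deviation of ~2.8 inches.
shift = effect_in_raw_units(0.02, 2.8)
print(f"{shift:.3f} inches")  # 0.056, about one twentieth of an inch
```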

This seems to be missing the point of the complaints about Facebook’s behavior. The complaints are not “Facebook successfully manipulated users’ emotions” but rather “Facebook attempted to manipulate users’ emotions without informing them that they were being experimented on.” That’s where the ethical question lies, not with the degree of the manipulation’s success. “Who cares if that guy was shooting at you? He missed, didn’t he?” — that seems to be Yarkoni’s attitude.

Here’s another key point, according to Yarkoni:

Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions.

It may be true that “many people” assume that Facebook added content, but I have not seen even one say that. Does anyone really believe that Facebook is generating false content and attributing it to users? The concern I have heard people express is that they may not be seeing what their friends or family are rejoicing about or lamenting, and that such hidden information could be costly to them in multiple ways. (Imagine a close friend who is hurt with you because you didn’t commiserate with her when she was having a hard time. After all, the two of you are friends on Facebook, and she posted her lament there — you should have responded.)

But here’s the real key point that Yarkoni makes — key because it reveals just how arrogant our technological overlords are, and how deep their sense of entitlement:

It’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about Facebook “manipulating” user experience in support of research realize that Facebook is constantly manipulating its users’ experience. In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook.... So I don’t really understand what people mean when they sarcastically suggest — as Katy Waldman does in her Slate piece — that “Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed”. Where does Waldman think all that positive and beautiful stuff comes from in the first place? Does she think it spontaneously grows wild in her news feed, free from the meddling and unnatural influence of Facebook engineers?

Well, I’m pretty sure that Katy Waldman thinks “all that positive and beautiful stuff comes from” the people who posted the thoughts and pictures and videos — because it does. But no, says Yarkoni: All those stories you told about your cancer treatment? All those videos from the beach you posted? You didn't make that. That doesn't “come from” you. Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn't be so widely used — for content that is provided wholly by its users.

Of course “every single change Facebook makes to the site alters the user experience” — but all changes are not ethically or substantively the same. Some manipulations are more extensive than others; changes in user experience can be made for many different reasons, some of which are better than others. That people accept without question some changes while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do. Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal, just as they would if they took their car to the body shop and got it back painted a different color. And in that latter case they would justifiably be upset even if the body shop pointed out that there was small print in the estimate form they signed permitting it to change the color of their car.

One last point from Yarkoni, and this one is the real doozy: “The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life.” Get that? In Yarkoni’s ethical cosmos, Facebook, Google, and Amazon — and presumably every other company you do business with, and for all I know the government (why not?) — can manipulate you all they want as long as they “improve your quality of life” according to their understanding, not yours, of what makes for improved quality of life.

Why do I say their understanding and not yours? Because you are not consulted in the matter. You are not asked beforehand whether you wish to participate in a life-quality-improving experiment, and you are not informed afterwards that you did participate. You do not get a vote about whether your quality of life actually has been improved. (Our algorithms will determine that.) The Great Gods of the Cloud understand what is best for you; that is all ye know on earth, and all ye need know.

In addition to all this, Yarkoni makes some good points, though they're generally along the other-companies-do-the-same line. I may say more about those in another post, if I get a chance. But let me wrap this up with one more note.

Tal Yarkoni directs the Psychoinformatics Lab in the Psychology department at the University of Texas at Austin. What do they do in the Psychoinformatics Lab? Here you go: “Our goal is to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data.” The key term here is “large-scale,” and no one can provide vast amounts of this kind of data as well as the big tech companies that Yarkoni mentions. Once again, the interests of academia and Big Business converge. Same as it ever was.

Saturday, June 28, 2014

Over on Twitter, Robin Sloan pointed me to this post about the Fermi paradox, which got me thinking about that idea again for the first time in a long time. And I find that I still have the same question I’ve had in the past: Where’s the paradox?

That Wikipedia article (which is a pretty good one) puts the problem that Fermi perceived this way: “The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.” But we have no telescopes powerful enough to see what might be happening on any of the small number of exoplanets that have been directly observed. So there’s no “observational evidence” one way or the other.

Unless, of course, we mean alien civilizations that might be observed right here on earth.

That’s where this way of formulating the problem comes in (again from the Wikipedia article): “Given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems likely that at least some civilizations would be technologically advanced, seek out new resources in space and then colonize first their own star system and subsequently the surrounding star systems.” Does “intelligent life” really have a “tendency to colonize new habitats”? Wouldn't it be more accurate to say simply that some human societies have this tendency?

The assumptions here are, it seems to me, pretty obvious and pretty crude: that the more intelligent “intelligent life” becomes, the more likely it will be to have an expansionary, colonizing impulse. In other words, superior alien civilizations will be to us as Victorian explorers were to the tribes of Darkest Africa. Higher intelligence is then identified with (if we’re inclined to be critical) the British Empire at its self-confident apogee or (if we’re inclined to be really critical) the Soviet Union or Nazi Germany in their pomp. (It’s all about the galactic Lebensraum, baby!)

But I see no reason whatsoever to grant this assumption. Why would the drive to become a “hegemonising swarm” — as Iain M. Banks refers to this kind of society in his Culture novels — be a mark of high intelligence? Though the Culture itself has strong hegemonizing tendencies, which it tries with partial success to keep under control, the most sophisticated societies in those books are the ones who have chosen to “sublime”, that is, opt out of ordinary space/time altogether.

Perhaps the impulse to colonize is, or could be, merely a stage in the development of intelligence — a stage to be gotten over. Maybe truly great intelligence manifests itself in a tendency towards contemplation and a calm acceptance of limits. Maybe there are countless societies in the universe far superior to our own who are invisible to us because they have learned the great blessings to be had when you just mind your own damned business.

Tuesday, June 24, 2014

The one great impression I have from this much-lauded film — which I just got around to watching — is how imperceptive, and even incurious, it is about what makes Calvin and Hobbes the best of its genre. There are a good many vague mumbles about its being well-drawn and well-told, and imaginative, and “intimate” (whatever that means), and so on and so forth.

The film doesn’t seem to know what it’s about: the history of cartooning? The death of newspapers? Chagrin Falls, Ohio? The promise and peril of marketing?

Sunday, June 22, 2014

Laptops are not a “new, trendy thing” as suggested in the final sentence of the article – they are a standard piece of equipment that, according to the Pew Internet and American Life Project, are owned by 88% of all undergraduate students in the US (and that’s data from four years ago). The technology is not going away, and professors trying to make it go away are simply never going to win that battle. If we want to have more student attention, banning technology is a dead end. Let’s think about better pedagogy instead.

Sigh. It should not take a genius to comprehend the simple fact that the ongoing presence and usefulness of laptops does not in itself entail that they should be present in every situation. "Banning laptops from the shower is not the answer. Laptops are not going away, and if we want to have cleaner students, we need to learn to make use of this invaluable resource."

And then there's the idea that if you're not more interesting than the internet you're a bad teacher. Cue Gabriel Rossman:

Honestly.

Robert Talbert, the author of that post, assumes that a teacher would only ban laptops from the classroom because he or she is lecturing, and we all know — don't we? — that lecturing is always and everywhere bad pedagogy. (Don't we??) But here's why I ban laptops from my classrooms: because we're reading and discussing books. We look at page after page, and I and my students use both hands to do that, and then I encourage them to mark the important passages, and take brief notes on them, with pen or pencil. Which means that there are no hands left over for laptops. And if they were typing on their laptops, they'd have no hands left over for turning to the pages I asked them to turn to. See the problem?

I've said it before, often, but let me try it one more time: Computers are great, and I not only encourage their use by my students, I try to teach students how to use computers better. But for about three hours a week, we set the computers aside and look at books. It's not so great a sacrifice.

About

Commentary on technologies of reading, writing, research, and, generally, knowledge. As these technologies change and develop, what do we lose, what do we gain, what is (fundamentally or trivially) altered? And, not least, what's fun?