I take KK's core assertion to be this: Technology is a (the?) chief means by which God now intervenes in history to help people to realize their full potential. My problem with that assertion starts long before we get to the question of what technology does (or doesn't do) to make our lives better (or worse). KK's planted axiom, as the logicians used to say, is that common beliefs about what counts as "potential" and what counts as "fulfilling" that potential are perfectly adequate, and that God's job in the universe is ancillary, i.e., to help us along a path that we already see pretty clearly.

I don't believe any of that. I don't think that, left to our own devices, people have a very good idea of what human flourishing, eudaimonia, really is; and I don't think of God as a celestial helpmeet, an omnipotent enabler of our desires. My theology starts, more or less, with the message Dietrich Bonhoeffer articulated most succinctly: "When Christ calls a man, he bids him come and die." And that means dying to our pre-existing understanding of what our potential is and what realizing it would mean.

Now, I believe that whatever dies in Christ will be reborn in him — but, as T. S. Eliot put it, will "become renewed, transfigured, in another pattern." And from that vantage point everything will look different. As far as I can tell, in KK's theology the life of Francis of Assisi was deficient in potential, in choices, was impoverished in a deep sense — and yet Francis believed that by embracing Lady Poverty, by casting aside his wealth and intentionally limiting his choices, he found riches he could not have found in any other way. This is, I hope, not to romanticize material poverty, or to say that we would all be better off if we lived in the Middle Ages. I disagree strongly with such nostalgia. But I think the example of Francis suggests that we cannot simply equate choices and riches in the material realm with human flourishing. The divine economy is far more complicated than that, and any serious theology of technology has to begin, I think, by acknowledging that point.

Friday, July 22, 2011

I don't get this article by Edd Dumbill. He wants to argue that "The launch of Google+ is the beginning of a fundamental change on the web. A change that will tear down silos, empower users and create opportunities to take software and collaboration to new levels." He tries to support that bold claim by arguing that Google+ is a big step towards "interoperability":

Currently, we have all [our social] groups siloed. Because we have many different contexts and levels of intimacy with people in these groups, we're inclined to use different systems to interact with them. Facebook for gaming, friends and family. LinkedIn for customers, recruiters, sales prospects. Twitter for friends and celebrities. And so on into specialist communities: Instagram and Flickr, Yammer or Salesforce Chatter for co-workers.

The situation is reminiscent of electronic mail before it became standardized. Differing semi-interoperable systems, many as walled gardens. Business plans predicated on somehow "owning" the social graph. The social software scene is filled with systems that assume a closed world, making them more easily managed as businesses, but ultimately providing for an uncomfortable interface with the reality of user need.

An interoperable email system created widespread benefit, and permitted many ecosystems to emerge on top of it, both formal and ad-hoc. Email reduced distance and time between people, enabling rapid iteration of ideas, collaboration and community formation. For example, it's hard to imagine the open source revolution without email.

Dumbill seems not to have noticed that the various services he mentions, from Facebook to Twitter to Instagram, are already built around an "interoperable system": it's called the World Wide Web. Those aren't incompatible platforms, they are merely services you have to sign up for — just like Google.

Ah, but, "Though Google+ is the work of one company, there are good reasons to herald it as the start of a commodity social layer for the Internet. Google decided to make Google+ be part of the web and not a walled garden." Well, yes and no. You can see Google+ posts online, if the poster chooses to make them public, but you can't participate in the conversation without signing up for the service. In other words: just like Facebook, Twitter, Flickr, and so on.

In the end, it seems to me that Dumbill is merely saying that if all of us decide to share all our information with just one service, we'll have a fantastic "social backbone" for our online lives. And that may be true. Now, can we stop to ask whether there may be any costs to that decision?

Mr. Gleick is right to say that the digitization of precious materials gives them another life on the Web, and that research libraries can and should make these materials available to the broadest possible audience. But if we are interested in what an early document like Magna Carta or a Shakespeare First Folio really means, it is vital to place it among other like objects to know how it was created, used and valued.

If the Folger Shakespeare Library were to digitize all 82 copies of the First Folio that we possess — each of them unique — we would not have made the book fully accessible. Access is a matter of understanding, and that means, in this case, knowing how such a treasured volume was physically distinguished from its peers.

It is one thing to look at a digital photograph taken at the top of Mount Everest and feel the thrill of “being there.” It is quite another to pore over the broad pages of Shakespeare’s First Folio (1623) and ask what such a luxurious book meant to those who bought and read it.

While I want to be on Witmore's side in this dispute, I'm not sure that this response offers much of substance. For instance:

• But if we are interested in what an early document like Magna Carta or a Shakespeare First Folio really means, it is vital to place it among other like objects to know how it was created, used and valued. Right — but can't that be done digitally? If we look at, and carefully compare, high-resolution images of "other like objects," aren't we getting the same information? (Especially if those images are accompanied by information about dimensions, or if two similar books are photographed together.) I need Witmore to tell me in more detail what, precisely, makes the encounter with the physical text superior.

• Access is a matter of understanding, and that means, in this case, knowing how such a treasured volume was physically distinguished from its peers. Again, this can be done digitally, can it not?

• It is quite another to pore over the broad pages of Shakespeare’s First Folio (1623) and ask what such a luxurious book meant to those who bought and read it. Why can't I look at the digitized pages of the Folio and ask the same question? In fact, I know I can — so once more, where is the difference?

These are genuine, not rhetorical, questions. If the digital images are poor, we all know what the problems are; I've done a good deal of archival research that would have been impossible had I been working from images significantly less precise than my own eyesight. (Pray that you never have to do archival work on a writer whose handwriting is as bad as W. H. Auden's.)

But as digital images increase in quality, I can see all sorts of ways in which being able to spend as much time as I want "poring over" pages on my computer — zooming in on troublesome areas, say, or juxtaposing two pages on one large monitor for purposes of careful comparison — could be not just equal but superior to seeing the "real thing." Help me out here, proponents of on-site research!

Monday, July 18, 2011

One interesting thing I’ve learned during this visit to England is that my pleasure in using Twitter is directly proportional to the number of people who are on it when I am. My unscientific read of my Twitter feed is that more tweets arrive in the morning (U.S. Eastern and Central time) than at any other time of day, followed by evening and then late afternoon. But since I’m in England, I’m asleep when those evening tweets come in; and when the morning ones arrive I’m teaching or studying or leading a tour somewhere.

It’s true that I’m now in the same time-frame as my European tweeps; but there aren't as many of them, and some of them are late-to-bed and late-to-rise and therefore keep schedules that aren't much different from those of East Coast Americans.

One more factor: this whole summer I’ve been on the computer less often than usual and more irregularly.

The result of all this temporal dislocation is that when I’m online, not much is happening in my little corner of the Twitterverse — and, it turns out, browsing through tweets that are ten or twelve hours old isn't all that interesting. I look with envy at conversations that sprang up while I was away: I could join in belatedly, but that usually feels pointless. (Imagine remembering a funny joke the day after a dinner party with friends and emailing it to them.)

So it turns out that, for me anyway, much of the value of Twitter comes from actually being in the flow of it. This is perhaps why I like separating my Twitter feed from my RSS feeds: a few months ago I experimented with trying to get everything into Twitter and setting RSS aside, but I didn't like it. Whatever turns up in my RSS feed I can read later, can read whenever; but with Twitter, well, you just had to be there.

Thursday, July 14, 2011

Yes, I know that I’ve had my say on this topic, but I still have some questions. I start with the ones that drove me out of Google+, and then move gradually into the realm of metaphysical contemplation. . . .

What circle should I put this person in?

Oh wait, I can put people in more than one circle — so how many circles should I put this person in?

Do I even want this person to be in any of my circles?

How many circles should I have, anyway? This subdividing thing can go too far, can’t it? And what should be the core principles I use to design my circles? Degrees of intimacy? Spheres of interest? An elementary division between Work and Play?

I can't even use this service unless I create a public profile, so what do I want to reveal on my public profile? How detailed should it be?

I’m ready to post something . . . but should this be a public post? Who would be interested in it? Maybe it should just go to this one circle? Though there are people in other circles who might be interested also . . . but others in that circle who wouldn’t be interested . . . so maybe before I post it I need to rearrange my circles a bit.

Wait . . . if I move that guy out of one circle will he still see the posts and photos he saw when he was in that circle? If not, then do I want to do that to him? What will he think when he figures out that I’ve removed him from a circle (especially if he doesn't know what my circles are)? Will he be able to see that?

When I signed up I discovered that my two choices were “Link Google+ with Picasa Web” or “Don’t Join Google.” Why can't I join without linking my Picasa photos to the service?

Google asks me if I want to be notified when someone “shares a post with me directly” — but what if I don't want people to share posts with me directly at all? Can I keep anyone from doing that? Or by using the service do I make myself vulnerable to anyone and everyone who wants to “share” with me? Is there no refuge from oversharers?

Google also asks me if I want to be notified when someone comments on one of my posts — but what if I don't want anyone to comment on my posts at all? There appears to be no option for turning off comments — why not?

I believe that even if I turn off every single one of these (email or text) notifications, I will still see a badge tallying everything people have tried to do with me or to me on Google+ at the top of every single Google page when I am logged in. What if I don't want to see that badge?

Can I prevent someone from starting a Huddle conversation with me? I can, I suppose, just decline to reply, but what if I just don't want to Huddle at all? What if Huddling kinda grosses me out?

In short, what if I want to start by having minimal social interactions on Google+, interactions over which I have a great deal of control, and I want to have very few and very simple decisions to make about whom I interact with? In that case, I can't see that Google+ is the service for me.

A while back Jonathan Zittrain tweeted a suggestion about academic grading that I like, so I’m adapting it for my classes in England this summer. Formal papers are difficult to do in these circumstances, so I’m having my students write journal-like responses to what we read, responses in which they need to quote the texts and quote critics but are not obliged to formulate a thesis. Their writing must remain text-centered but they are free to be more speculative and personally responsive than is usual in my classes. But how do you grade such writing? Here’s the explanatory email I recently sent out:

So, friends, here's how you can interpret the grading of your journals — which is not easy, I grant you, since I'm encouraging you to write conversationally and I'm tending to respond conversationally:

1) If I use words like "excellent," "outstanding," "first-rate," and the like to describe your entry, your grade is W00T.

2) If I say the entry is "solid," or "good," or if I don't make a qualitative comment but just respond to the content in some way — by adding information, or offering a correction, or the like — your grade is WIN.

3) If my comment is of the "yes, but" variety — which happens primarily if you either don't offer enough of your own responses or if you stray too far from the text you're supposed to be writing about — your grade is MEH.

4) If I tell you that you're just off-track — which happens primarily if you offer no responses of your own (instead summarizing either one of our writers or a critic) or if you don't really talk about the literary text at all — your grade is FAIL.

Friday, July 8, 2011

When Google asked me why I chose to delete my Google+ service, here’s what I wrote:

First of all, I am not especially attracted to social media. I deactivated my Facebook account years ago, and find that Twitter is all the social I need.

Second, Google+ gives me too many decisions to make. With Twitter, I say "Let me know if someone replies to me or DMs me, but otherwise leave me alone." (I don't even know how many followers I have or who those followers are.) Google+ defaults to sending me an email about everything, but even if I uncheck all those options, I still find new people showing up in my Stream that I didn't ask to see and that I have to make decisions about. That's exactly what I hated about Facebook: the constant need to make decisions about how I am going to manage my online relations, especially with people I don't know well.

Third, I don't fully trust Google to treat my information responsibly, so I would prefer not to implicate myself further in the company. If Gmail weren't so far superior to every other implementation of email, I would have already deleted my Google account.

I really do appreciate how easy Google makes it to escape Google+ — they wouldn't have done it so well a year ago, which shows that they’re learning, as Facebook is not. I completely understand what people like about Google+, but it didn't take me long to realize that it's just not my cup of tea at all.

One last word: trying out Google+ has reminded me once again of how much I like, and admire, the radical simplicity of Twitter. So if my Twitter friends start abandoning Twitter for Google+ I'm going to be really sad.

Tuesday, July 5, 2011

Returning to an earlier theme, and having read the comments on this follow-up post by Joe Carter, I’d like to note a couple of points many people fail to understand about online anonymity:

1) There’s an enormous difference between anonymity and pseudonymity: the person who posts or comments under a consistent pseudonym is assuming a level of responsibility for his or her words that the anonymous poster does not. Consider Yoni Appelbaum, who commented widely, but especially at The Atlantic, for a long time under the moniker “Cynic” — and thereby got himself a job blogging for The Atlantic. Everybody who read that site knew who Cynic was, could respond to him directly either in agreement or disagreement, could point out that what he said in one comment contradicted what he said in another, and so on. Any conversation with a completely anonymous poster is comparatively impoverished. Indeed, if you have ten anonymous comments you can't know whether you’re dealing with one person or ten different people. Thus sock puppetry and the like are born.

2) People like to say that what matters is the quality of the ideas, not the person who utters them. But suppose the topic is something you don't know much about — a subject requiring certain technical expertise — and you’re not sure how to assess the varying positions. In such a case it helps to know who is making the arguments. Consider the debate on James Fallows’s blog a few months back about the likely effects on human health of the radiation emitted by the TSA’s backscatter-radiation machines. Fallows can affirm that the writer he's quoting is “a physics professor from a college in the East,” and while I’d like to know who he is and what college he’s employed by, even that much gives me reason to take the argument seriously. If the same argument had been presented anonymously in a comment thread, why should anyone take it seriously? why should anyone even read it? If I made the argument, why should anyone pay attention?

Now, the fact that an unnamed physics professor made a set of claims did not settle the question — but knowing even a little bit about the physicist, and the qualifications of the person disagreeing with him, helps us to think more clearly about the issues raised. One of the really interesting questions raised is: What sort of scientist would be a reliable source about backscatter-radiation machines? Unless you think that on every conceivable subject one person’s opinion is as good as any other’s, or that eloquence alone counts, you need to think not just about what’s being said but also who’s saying it.

It’s not the only factor — in many cases it won't be the most important factor — but it helps, because knowledge is good, and the more of it we have the better off we are. Even when we’re arguing online. Remember that it doesn't count for much when Woody Allen himself (or his stand-in Alvy Singer) tells the obnoxious blowhard in the movie line that he doesn't understand Marshall McLuhan; but when McLuhan himself shows up and weighs in. . . .

About

Commentary on technologies of reading, writing, research, and, generally, knowledge. As these technologies change and develop, what do we lose, what do we gain, what is (fundamentally or trivially) altered? And, not least, what's fun?