It’s no secret that I’m not a fan of Twitter. I have no account, and despite the number of friends who “tweet” with vigour, no desire to acquire one. If I can conveniently ride out this latest bandwagon to the next, Google Wave, I’ll consider myself very lucky.

From this vantage point, it’s very easy to seize upon any awful news about Twitter and twist it to further my stance. Which is what I was quick to do when I learned that Ashton Kutcher and wife Demi Moore (with three million Twitter followers between them) tweeted last week that they would have to leave the site in protest if Twitter pursued plans to make a reality TV show out of it.

Yes, you read that right: Twitter has in many ways usurped the role of the paparazzi, allowing celebrities more direct control over their interaction with fans (so we can all follow the tedious minutiae of their day-to-day lives) — and even leading celebrities to do the unthinkable: post pictures of themselves in a less than flattering light. They’ve become, in other words, almost human.

But, hey, there’s no money in that sort of social convergence, right? So why not turn that nigh-on-egalitarian collective into citizen paparazzi, pitting twitterers against one another in an epic competition to stalk celebrities through the website? Wouldn’t that be fun?!

Do I have a deep and abiding concern for Ashton Kutcher and Demi Moore? No. Do I find it typical of the application to progress actively in directions that yield financial gain at the expense of the community itself (and the welfare of members therein)? Yes.

Heaven knows, Twitter wouldn’t be the first website to invade people’s privacy. One need look no further than the origins of Facebook — the initial website a vicious Harvard version of Hot or Not entitled “Facemash,” which drew from the official photos of students at the university and tasked site visitors with deciding which student in a pair was hotter — to realize that, even in our purported age of enlightenment, technological advancements don’t always emerge from altruistic roots.

Users of LiveJournal, for instance — a blogging site that has remained conspicuously off the grid despite the readiness of most sites to link up through Facebook, YouTube, Digg, del.icio.us, VodPod, and other aggregation modules — know this fight all too well. Though founded on a pro-user model wherein developers promised to listen to the needs of actual users, and to protect them from the pressures of outside interests, LiveJournal eventually found itself compromising these promises time and again — and not just for financial gain.

Many of these changes arose from a simple transition of ownership: for instance, when Six Apart first bought Danga Interactive, LiveJournal’s operator, it introduced a sponsored ad system — despite the site’s earlier promise of remaining advertisement free — and eliminated basic accounts for half a year so only paid users could be assured of ad-free space, before eventually reversing the decision. (The above link has a far more nuanced list of such compromises.)

And yet, oddly enough, the case of LiveJournal allowed me some measure of perspective in response to Twitter’s misfiring play at a reality TV show — because when LiveJournal was sold to SUP, it wasn’t added costs users feared: it was the possibility of censoring and curtailing the expansive voices of Russian dissent that had gathered on the website. Because SUP’s owner is closely tied to the Russian government, many feared that the sale would break down the freedom of speech and, well, the kind of freedom of assembly that had emerged within LiveJournal’s walls.

Similarly, Twitter has done incontestable good in providing a public forum for countries that otherwise lack the same extensive rights to freedom of speech and assembly. In countries like Moldova, for instance, Twitter provided a means for outsmarting government censors, allowing protesters to co-ordinate a rally against “disputed legislative elections.”

And you needn’t ask Jean Ramses Anleu Fernandez if he thinks governments are starting to realize Twitter’s democratic power: For a single tweet urging citizens to withdraw all their money from the state-run bank in response to charges of government involvement in a series of related murders, the Guatemalan faces a ten-year sentence for “inciting financial panic.”

Of course, no new technology is completely safe from censorship — especially from pros. So, yes, China censors Twitter content — big surprise there! Nonetheless, Twitter’s use and reach in many other regions is quite striking, and deserves to be taken into account.

At the end of the day, though, I still chafe at the direction in which Twitter leads journalistic narrative. It especially dismays me that while we as a society claim awareness of the complexity of contemporary socio-political and cultural issues, members of the media have nonetheless latched on to a medium that allows no more than 140 characters to summarize the gist of any one story.

As a big proponent of the philosophy that writers teach readers what to expect of the media (an excess of short pieces, for instance, acclimatizing readers to shorter attention spans), I find this an agonizing exercise in the death of sustained interest. Studies like this one, amply represented in graph form, serve only to confirm the frenzy with which Twitter allows people to latch on to, and then drop off from, topics of note.

So, no, you won’t find me on Twitter. Like I said at the start, I’m hoping to ride out this service to the next big thing. But in the meantime, is Twitter really all that bad?

Like so much of Web 2.0 technology, it depends what its users make of it.

May 27, 2009

Yes, this is about Wolfram|Alpha. For those of you who’ve heard nothing of this search engine yet, let me answer your first question upfront: Wolfram|Alpha shouldn’t be compared to Google; they’re apples and oranges in the world of internet data-trawling.

What, then, is Wolfram|Alpha, and why on earth would it be useful when we already have Google? I’d usually tell people to go look for themselves: from the main page, for instance, it’s clearly identified as a “computational knowledge engine.” But what does that mean? Doesn’t Google already use algorithms for its searches? And though the About page provides a little more insight, it still stymied a few people I’ve already introduced to the website. Such confusion isn’t surprising, either, when you take a good look at how expansive the language is:

Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.

I have to smile at this kind of language: it reminds me very much of my own writing, which, though intended to convey a lot of information, might be considered so complexly worded as to limit, rather than enhance, general knowledge of the topic at hand. (I’m working on it!)

And, alack, there is no Simple English Wikipedia entry to explain this new site in layman’s terms. Even Wolfram|Alpha itself, though designed in part for comparative queries, lists only a few rudimentary details when tasked to explain what makes it different from Google.

So. If you’ll permit the blind to lead the blind, here’s Wolfram|Alpha in a nutshell:

It does not search web pages. You will not get top hits. You will not get related searches. At present, while the system learns, even misspelling something will give you limited returns.

It is, in other words, just the facts. No blog commentary. No video response. No forums or wikis in sight. Pulling from hard data sources, Wolfram|Alpha provides the basics about anything that can be quantified and computed, in whatever ways are available for said thing to be computed. Truly, anything: Here’s the entry for god.

So what on earth is this good for? In an age of sprawling participatory encyclopedias, interactive learning through participation on internet forums, and a whole slew of multimedia ventures — to say nothing of Google itself, which commingles basic search functionality with meta-searching, specialized searches (books, shopping, blogs, news), interactive maps and more — do we really need a website that provides us with “just the facts”?

Heck. Yes.

It may come as a surprise, but there are still a great many websites engaged in the whole search engine struggle for survival. There’s Google, of course, and Yahoo, but also Cuil.com — an underdog created by a former Googler who grew dissatisfied with how big the company had grown. (I, personally, have trouble believing anyone would give up access to their incredible catering services.)

Cuil.com is said to accumulate and store information more efficiently, grouping hits on similar subjects on the same computer so that future search bots can find the bulk of their results in one location. It also has a different layout, prioritizing the presentation of more content from each search result on the results page. The comments on this Slashdot entry, however, match my own feelings of being underwhelmed by the quality of its search results.

And then there are niche market tools like Regator.com, which searches the internet for blog posts relating to any topic in question, and Google minus Google, a filter for web searchers tired of seeing Google subsidiary sites (YouTube, Blogger, Knol, etc) prioritized in their searches. Further amendments, like filtering out Wikipedia from search results, are also present — which in my opinion is a nice touch.

In short, a great many websites are geared towards making the vast stores of information on the internet as accessible as possible by ranking other websites on the basis of information quality and relevance. In doing so, however, the very definitions of quality and relevance have changed dramatically from the notions they embodied years earlier, in the heyday of AltaVista and Ask Jeeves. How could they not, when, far from being just a tool for education, the internet exploded into the complex social realm it now is?

So now, perhaps, the most accurate information about a subject is not foremost on a search list about it: now, perhaps, it has been supplanted by the most popular website other people visited in relation to that topic. And the most relevant information about another topic might easily be displaced by the most popular piece of entertainment riffing off its theme. And, of course, there still lies the question of Wikipedia: Should it come first in Google searches? Is it always the most accurate response to whatever query you may have typed in? Are more accurate responses buried farther down the list?

While there is no discounting the incredible developments we’ve made in expanding internet functionality — day in and day out adding to the human element of online operations — it also cannot be denied that there will always be a need for straight answers, too. Think of Wolfram|Alpha as a reminder, then, that for all our dallying in the online realm there still exists a real world — a concrete place with numerous quantifiable attributes just waiting to be described.

May 26, 2009

When I wrote last Friday that all investigative reporting carries with it a measure of risk, but no kind more so than war journalism, I hope I stressed enough that reporting on nearly all other subjects can still destroy reputations, companies, job prospects, jobs themselves, property values, and even whole livelihoods. Even in these realms, lives, too, are sometimes lost.

Thus it was with little surprise, but great sadness, that I read this past weekend of South Korea’s former president Roh Moo-hyun, who threw himself off a mountainside after enduring what has been typified as “relentless” pursuit by the media following allegations of bribery over the past year.

Sadness, because I do generally believe in the redeemable life, and someone so sensitive as to realize and react to the weight of his indiscretions must especially be seen as having had in him the capacity, also, to apply that awareness of the past to more positive future actions. More disconcerting by far, for me, are those who will not even make allowances for the possibility of error, and who thus prove the truth of what George Santayana once wrote: “Those who cannot remember the past are condemned to repeat it.”

Yes, this matter of reflection brings me right back to my original thesis, on the need to treat war journalism and reporting on the military as distinct tasks, and in this way overcome the limitations on truth-telling during a time of armed conflict.

But so too does a quotation from the aforementioned article on Roh:

NYT — “It has become a bad political habit for presidents in South Korea to try to gain support by punishing the former president,” said Kang Won-taek, a politics professor at Seoul’s Soongsil University. “What happened to Roh Moo-hyun shows that it is time to break this habit.”

The tendency to define a presidency by the failings of the one that came before took root as the country struggled to redefine itself in the early 1990s as a young democracy after years of dictatorships. Many Koreans were exhilarated as the first democratically elected governments punished the men who had resisted democracy for so long.

No good, in other words, comes either from denial of the past or the outright demonization of all that came before: The former leaves us no room to learn from our actions; the latter, no room to accept that the same seeds of indiscretion and abuse lie in us just as much as they did in those who came before.

What remains, then, is the need for nuance; and any journalist will tell you nuance only emerges when there is consistency and longevity to the issue being addressed. Here, then, lies the primary distinction between war journalism and reporting on the military: war journalism exists so long as the conflict does, while reporting on the military would extend across conflicts, and through the long stretches of peace besides.

An analogy might lie in the chronicling of small mining towns: During production booms there would be plenty to report upon, in terms of speculation, quality, corporate practices and corruption, union issues, housing markets, immigration, and emergent family issues pertaining to social services, community development, opportunity costs, and secondary job fields. But at times of little to moderate production output and community growth there would seem to be fewer dramatic matters to comment upon. And yet there are still issues — there are always issues: from the impact of employment and poverty levels on drug and domestic abuse rates, to the disintegration of a social net, to the rise of hunting to offset low wages, to reduced educational opportunities, health matters, religious communities, and impossibly high relocation costs.

So it also is with the military, and I would even go so far as to say that what’s omitted from our reports on the military during peacetime, or what systemic comparisons we neglect to construct between different armed conflicts, considerably weakens our overall understanding of the role and culture of defence in contemporary society.

Take, for example, our treatment of military rape — a topic much on my mind since the New York Times’s Bob Herbert wrote an opinion piece entitled “The Great Shame” back in March. Herbert notes many of the most difficult aspects of military rape that go under the radar in current reporting on the wars in Iraq and Afghanistan — specifically, that soldiers rape both civilians and their own. Last year alone saw, according to Herbert, a 25 percent increase in reported rapes of female soldiers. Considering that rape is one of the most under-reported crimes in our society, it chills me to the bone to wonder how much deeper these offenses go.

The column furthermore put me in mind of a piece I read in 2007, on Salon.com, entitled “The private war of women soldiers.” Though a strong, culture-building piece, its position in an online, sociology-leaning magazine sadly made sense at the time: I had difficulty imagining the same story emblazoned on the front page of most mainstream print newspapers — even though, were we to treat rape with the same seriousness as business coverage, it would be.

Which is why Herbert’s piece was so striking. How was Herbert able to tackle an issue this demoralizing and potentially demonizing to troops presently stationed in Iraq and Afghanistan, when so many suicides, friendly fire incidents, and criminal behaviour in the same context and region were barely addressed outside of hard news reports?

The answer, I’m convinced, lies in his approach: Herbert started with an incident with no clear date stamp, and few to no concrete details. He wrote broadly, wedging a couple of pertinent facts about current rises in rape statistics amid a vaguer, more expansive discourse about how rape in the military manifests, why, and what can be done about it. By couching the subject in so many generalizations, he was able to draw this stinging conclusion:

NYT — The military is one of the most highly controlled environments imaginable. When there are rules that the Pentagon absolutely wants followed, they are rigidly enforced by the chain of command. Violations are not tolerated. The military could bring about a radical reduction in the number of rapes and other forms of sexual assault if it wanted to, and it could radically improve the overall treatment of women in the armed forces.

There is no real desire in the military to modify this aspect of its culture. It is an ultra-macho environment in which the overwhelming tendency has been to see all women — civilian and military, young and old, American and foreign — solely as sexual objects.

Real change, drastic change, will have to be imposed from outside the military. It will not come from within.

And you know what? I’m okay with this approach, so long as it produces serious discussion and follow-up. After all, do we really need to drag every rape victim in the military out into the open in order to bare the truth of its existence? I should think not — especially as that in and of itself can impose undue further harm on the victims. Similarly, do we need to parade every suicide case in order to prove it happens? Must every soldier who accidentally shot one of his own in a high-stress combat position be splayed across the papers of the nation?

No. The rules of war journalism are understandable: In reporting on any immediate conflict, writers and photographers need to minimize their negative impact on the sources at hand — the soldiers, primarily, but also any alternative sources they might seek out from the region, civilian or otherwise — while simultaneously conveying the essential facts of any one news story.

But we journalists still have aggregate data, spanning this conflict and many others besides, at our disposal. And to report once a month on suicide rates, reported rapes, friendly fire incidents, mental health walk-in clinic figures, tour extension numbers, and other such statistics — both at home and abroad, and kept in close relation to a study of historical statistics as well — would in and of itself go a long way to entrenching a dialogue on military culture that no one could perceive as a direct threat to our soldiers overseas. No extensive parade of bodies and names needed!

Because, really, all this reporting on the military isn’t meant to be a threat: rather, it’s meant to help eliminate those threats most often propagated by ignorance; and ultimately, to help the rest of us truly understand. Not, perhaps, so that one day the entire sub-culture will no longer be needed — that’s far too much a pipe dream for even a young’un like me to humour. But at least so that, one day, we can apply this distinct sub-culture to the relief of inevitable global conflicts with the full knowledge of just what it is we’re giving up in the pursuit — we hope — of a greater common good.

—

Post Script:

*Apologies for the lateness of this entry: In all honesty, my cat deleted the original yesterday — ironically leaving in its place only the letters “un.” She evidently doesn’t quite agree with my position on this issue, a disagreement I hope she understands I can end swiftly by denying her supper. … Then again, who knows what she’d delete in retaliation. Best not to chance it!

War journalism has to be the toughest media gig around. You go out, you get the facts, you tell a very complex story as best you can. And then you have to sit on it. Or the censors get to it. Or your editor just tells you to take it down a notch. Why? Because if you’re too detailed — about intentions, about army locations — you put more lives at risk. Finding that daily balance between two difficult end goals (telling the whole story, and doing as little harm as possible in the process) carries much greater risks than just about any other kind of news work.

It’s not as though plain old local investigative reporting doesn’t come with its own risks: damaging an individual or a community’s reputation can have very dire consequences in and of itself. But in a war, on the ground, those consequences are much more immediate, and lie almost invariably in further casualties.

Whether these stories are about suicide, rape, vandalism, brutality and torture, corpse mutilation, unnecessary civilian casualties, or “friendly fire” incidents, anything that casts our own soldiers, or their allies, in a poor light during wartime is immediately deemed a danger to their safety, whether through internal morale issues or the provocation of heightened aggression from enemy combatants. And this status often leads to more delicacy, more omission, and more neglect in the realm of story updates.

This is a problem.

It’s a problem when incidents keep happening that, with or without the help of the media sphere, make it to the public consciousness — creating in their wake a mythology that, in its vagueness, ends up implicating the good right along with the bad. And after all the horrific military abuses that emerged during and after Bush’s presidency, I highly doubt further censorship, in the aim of keeping a damper on such rumours, would be either effective or free of backlash. So what options are we left with?

The story of Sgt Russell had a news cycle of a scant two days; I’ve given it over a week, and no follow-up exists. To be fair, though, the media’s had its hands full in the last couple of days especially, with the case of Steven D. Green, the “ex-soldier” who instigated the gang rape and murder of a 14-year-old Iraqi girl, alongside the murders of her father, her mother, and her younger sister, while a private in Bravo Company, First Battalion, 502nd Infantry, Second Brigade Combat Team of the 101st Airborne Division. The case is too sick to recount in all its vile details, but the news broke just yesterday that Green is getting life in prison for his role in this heinous attack; he, along with four other soldiers implicated in the incident, will be up for parole in ten years:

New York Times — The March 2006 murders in Mahmudiya, 20 miles south of Baghdad, were so bloody that American and Iraqi authorities first thought they were the work of insurgents. The American soldiers were implicated after at least one acknowledged to fellow soldiers a role in the crimes.

At the time, the Iraq insurgency was near its violent apex, and American forces were suffering heavy casualties. Private Green’s unit, Bravo Company, First Battalion, 502nd Infantry, Second Brigade Combat Team of the 101st Airborne Division, was sent to a particularly violent area that soldiers called the Triangle of Death soon after arriving in Iraq in the fall of 2005.

The battalion quickly suffered casualties, including a sergeant close to Private Green. In December, Private Green, along with other members of his platoon, told an Army stress counselor that he wanted to take revenge on Iraqis, including civilians. The counselor labeled the unit “mission incapable” because of poor morale, high combat stress and anger over the deaths, and said it needed both stronger supervision and rest. It got neither, testimony at Mr. Green’s trial showed.

On March 11, 2006, after drinking Iraqi whiskey, Private Green and other soldiers manning a checkpoint decided to rape an Iraqi girl who lived nearby, according to testimony. Wearing civilian clothing, the soldiers broke into a house and raped Abeer Qassim Hamza al-Janabi. Soldiers in the group testified that Private Green killed the girl’s parents and a younger sister before raping and then shooting the girl in the head with the family’s own AK-47, which it had kept for self defense.

Two things came to mind when I read this story: First, and most prominently, was the blatant labelling of Green as an “ex-soldier” in the headline: “Ex-Soldier Gets Life Sentence for Iraq Murders.” Well, yes, clearly the army would dishonourably discharge him after such an incident. I could see that getting a sentence or two inside the actual article. But as the primary fact in a headline about the heinous crime, its consequences, and the systemic mental health issues it brings yet again to the surface? Not on your life: Green was a soldier when he committed those acts — a soldier whose entire unit was deemed unfit for duty, and yet was left by its superiors without adequate resources for stress and grief management. The moment we veer from these facts, even for a second, we start shifting our attention from the continual immediacy of mental health issues on the ground in Iraq, and permit the build-up to more — more killings, more rapes, more suicides.

… Which leads me to the second thought this article prompted — a throwback to something I’d read last week in relation to Sgt Russell. “At a Senate hearing Tuesday,” ABC News reported, “Army Secretary Pete Geren and chief of staff Gen. George Casey diverged from a discussion of the Army’s budget to weigh in on what is being done for soldiers like Russell. … Casey said it isn’t true most soldiers suffer from post traumatic stress disorder following combat, instead making the point that ‘the vast majority of people that go to combat have a growth experience because they are exposed to something very, very difficult and they succeed.'”

Honestly, I don’t know quite how to take this argument: I’m sure there are plenty of people who cope perfectly with the taking of enemy lives, the knowledge of civilian casualties, children or otherwise, an awareness of the brutality wrought by others in their ranks, and exposure to the deaths or crippling injuries of their comrades. I’m just not entirely sure I’d be comfortable around them.

The fact is, war is not meant to be pretty, and it cannot be managed with the board-room efficiency of a business. Nor should it be: No amount of spin and rhetoric should ever take away from the importance of protecting human life, and the gravity of its loss in a time of war. Sadly, it looks very much as though each generation needs to live through a time of conflict before that lesson truly hits home.

And yet, surely we can do better. Surely there is a way, with all of the channels available to us today, to be better in our reporting. Better by our fellow civilians, who are represented to the world by the actions of our troops, and our public condemnation (or lack thereof) of any wrongdoing on the field. Better to the civilians whose lives we claim we’re trying to protect from insurgency and tyranny in the war zones we’re fighting in, by holding military abuses on their soil to higher account. And better still to the soldiers themselves, who for better or worse place themselves in the line of fire — external and internal, in the course of duty — in search of a better peace than the one we already know.

I think the road to this goal lies with a stronger division between war journalism and reporting on the military. But I also think this argument is one for another day — Monday, to be specific.

Today I just want to end off reflecting on the five lives ended by Sgt Russell, and the four equally innocent lives cut short by ex-Private Green. How much future bloodshed could we ward off, I wonder, if we truly gave ourselves over to the solemn remembrance of all that’s come before?

In an undergrad political science course a few years back, I recall being challenged to present explanations for public apathy in Canadian politics. Out of a class of some thirty students, I was the only one to argue that there wasn’t apathy — that low voter turnout among youth was readily offset, for instance, by far higher youth turnout at rallies, discussion forums, and the like. Youth were absolutely talking politics: they just weren’t applying this talk in the strictest of official senses.

My professor always did love such counterarguments, but my classmates never seemed to buy them. Rather, many argued that the “fact” of disengagement was not only accurate, but also healthier, because it meant that only those who “actually cared” about policy would set it. (We were working, at the time, with figures like only 2 percent of the Canadian population being card-carrying party members.) Many of these same students likewise believed that economics was not only the ultimate driving force in our culture, but also the only driving force that could lead; and also that true democracy was unwise because only a select few (I could only assume they counted themselves among this number) were able to govern wisely.

At the time, Facebook was two years old. YouTube was one. And the online landscape, though unfurling at a mile a minute, was still light years from its present levels of group interaction. My sources for the presentation in 2006 were therefore an uncertain medley of old and new media: news articles and statistics; online party forums and Green Party doctrine.

I didn’t have at my disposal, for instance, incredible videos like Us Now, a documentary encapsulating the many ways in which average citizens — seeing truly accessible means of interacting on a collective level with their environment — are achieving great success breaking down the representative government model into something much more one-on-one.

Nor did I have The Point, which provides anyone with an account and an idea the means to start a campaign, co-ordinate fundraising, organize group activities, and otherwise influence public change. (Really, check it out — it’s fantastic.)

As a huge policy geek, and a member of the new media generation to boot, I saw this as a goldmine of opportunity — and there is plenty else on the website for other kinds of policy development, too: discussion forums and wiki projects alike. So of course, in my excitement, I sent the link to a few members of the old generation — only to receive a curious collection of responses, dismissing the above as an exercise in anarchy while simultaneously criticizing old-school committees as never accomplishing anything properly.

Well, old guard, which is it? Is our present model of representative government failing us in certain regards, and should we thus try to engage different policy-building models? Or is the same model that, despite early challenges to its legitimacy, created an online encyclopedia as powerful as the Encyclopedia Britannica, by its very nature as an open-source community project, unfit for political consideration?

Us Now makes the point that the internet’s promise of a more dynamic and accessible global community has had many false starts (spam, scams, and the proliferation of child pornography rings come personally to mind). But long before we became cynical of the internet’s capacity to improve our social impact, we as a society were already well used to doubting the potential of our fellow citizens to act intelligently and in pursuit of the communal good. You can thank Machiavelli’s The Prince, Elias Canetti’s Crowds and Power, and bastardized readings of Adam Smith’s The Wealth of Nations in part for this.

A little while ago, however, I got around to reading John Ralston Saul’s The Unconscious Civilization, a CBC Massey Lecture Series essay collection about the rise of the management class and the utter inversion of the democracy/free market equation, to the extent that the notion of democracy itself has suffered massive political distortion. Written just before the first real explosion of online communal projects — be they open source software, open-access socio-political groups, or information-dissemination tools — the book couldn’t account for the balancing force of technology itself. Rather, when Saul wrote these essays, technology was still very much a cornerstone of continued economic distortions in lieu of real democracy. Now, though, it’s clear that technology created through the corporate model has itself emerged as a platform for participatory government — and thus as the undoing of those same hierarchical economic forces. Coming full circle is fun!

So, to get back to this matter of “trusting in the intelligence of individuals, and their capacity to act in the common good,” yes, there is a lot of circumstantial evidence to the contrary on the internet. Heaven knows, for instance, that the low-brow interactions which inspired CollegeHumor.com’s We Didn’t Start The Flame War are in fact a daily, persistent reality online, and make up a substantial percentage of commentary therein.

Yet any parent will tell you that the way to raise a responsible child is to give her responsibilities to live up to; a child entrusted with none will invariably continue to act like one. So rather than using, as a test of our group potential online, those sites that in no way engender a sense of responsibility for our actions, why not look at those sites that do — like ThePoint.com, and the Globe and Mail Policy Wiki?

Furthermore, if our current model of representative government no longer yields the level of public engagement we crave (read: in the ways the government wants to see), maybe it’s because citizens at large haven’t been given the opportunity to feel like real participants at all levels of the democratic process. And maybe, just maybe, the internet not only can change that perception, but already is.

After all, those same students who, in the comfort of a political science classroom just three years back, so boldly proclaimed that collective decision making was a waste of time? You’ll find every last one on Facebook and LinkedIn today.

Within two months of Last.fm, a music streaming service, signing partnerships with four major record labels, Amazon.com saw a 119 percent increase in online music sales. Through an ad-based revenue model, Last.fm was able to offer free access to a database of songs numbering in the millions, and to group them into “stations” wherein your tastes would yield similar artists or songs in that vein. The catch was that after three plays of one song, Last.fm would display an advertisement directing listeners to affiliate partners selling the tune. All in all, it was a sweet deal: We got free music, the big labels got paid, the small labels got exposure, and contrary to popular wisdom about downloaders detracting from music profits, online sales were through the roof.

So, of course, Last.fm switched to a subscription model on April 22, 2009: Now international users have to pay “three” every month — three euros, three dollars: whatever is regionally appropriate. And honestly? This makes tremendous business sense: Last.fm has to pay for every track you listen to from a major label, and when it can’t negotiate adequate terms for payment with a label, sometimes that label simply pulls out.

Nonetheless, as part of the Napster generation I can’t help but note how, the more things change online, the more they’ve ultimately stayed the same. From Napster to Pandora to Muxtape to Seeqpod and, of course, a slew of others, the introduction of free big-label music under any number of guises has invariably ended in a curtailing of services (at best), or else a complete redirection of the site’s aims and/or bankruptcy.

Notice anything funny there? Take a look at how this cycle begins: With the desire to give something away for free. Not to make a profit on it; just to scrape by — and only when profit margins drop deep into the red, to impose fees on the consumers. Yeah, you might say, it’s easy not to try to make money on something you didn’t create (the music). But… if history is any guide, it’s not. People just don’t pass up the opportunity to exploit the work of others for their own profit. So how is it that models like the ones listed above ever existed in the first place?

The answer perhaps lies in our generation’s unique conditioning: if as individuals we still demanded that our own creative output be viewable solely through a pay system (as Amazon is proposing with paid blog subscriptions for the Kindle), we’d be hypocrites to demand free content from others. But growth on the internet has proven instead too nuanced for such hypocrisy: while some services have always tried to charge for content, the blogosphere, YouTube, Google Video, MySpace, DeviantArt, Flickr, news aggregators, and other such websites have always run on a free viewing model. In short, by now we’re more than used to posting a piece of writing, a photo, a video, or a song online and expecting nothing monetary from it. Art and entertainment have entered into a free-for-all creation domain, and while this doesn’t mean we don’t still hold in high regard those artists and entertainers who dedicate the whole of their lives to such work, it certainly means we have different expectations for our engagement with them.

As such, the story of those aforementioned music services means just what it seems to mean: That our first push out into the world of the internet is just as likely to be in the pursuit of free access as it is to be about exploitation — and thus, that we as consumers can forever expect to find ourselves latching on to free content, taking it for granted, and having subsequent power plays or business models then wrest that freedom away. A cry of foul will emerge, we’ll flood a comments page with angry protests… and then most of us will clear off, find a new free music service, and repeat.

Rest assured, this isn’t as hard to stomach as it sounds: we’re already quite used to learning to pay for goods we’d always taken for granted — how else can you explain bottled tap water? But the story of free music is a fast-paced tale that also speaks volumes about deeper, more complex payment issues at work on the internet.

Because while the struggle for survival of music streaming services caters to our more immediate fears about The Man, there is a longer, more drawn-out battle being waged in turn for the whole of the internet. Yes, I’m talking about the attempts of Internet Service Providers to make heavy internet users pay more, or to divest the whole medium of its equal playing field by allowing some companies to pay for prioritized access, effectively shutting small companies and websites out of the mass market. Or what about Bell Canada, which last year found an ally in the CRTC when the Canadian Association of Internet Providers complained that Bell was “throttling” access for peer-to-peer applications — a direct challenge to net neutrality? When the CRTC sided with Bell in the case, it likewise permitted, and set precedent for, the legality of an ISP interfering with an individual’s use of the service he’s paid for, through “traffic-shaping.”

And then, of course, there is the anti-piracy bill passed by the French National Assembly on May 12, 2009: anyone caught downloading or sharing copyrighted files three times can now be suspended from the internet for two months to a year on that third notice. Chillingly, the law would not require a trial or court order: All the ISPs need do is send you your warnings, making this a huge win for corporate control of the medium.

This, then, is the real conflict of the internet — an on-going negotiation being fought in a much more protracted, expansive way than any music streaming service need fear: but a negotiation, nonetheless, that will shape the future of the internet for us and those to come.

For now we take our freedoms and equality online for granted — just as we do our free music, moment by moment. The question is, if the lesson of music streaming services is any indication, how free or equal will the internet as a whole really be just ten years down the line?

May 6, 2009

There is reason to think positively about the strength of citizens en masse. There is reason, too, to think positively about the benefits of our new networking technologies. And one need look no farther for proof of this than the confrontation between panic and perspective in relation to the swine flu epidemic.

Swine flu had, and still has, all the earmarks for a perfect shock story: The strain, H1N1, afflicts the healthy, the strong, by over-stimulating the immune system’s response. It’s an inter-species mutant, so you can imagine the inference that it must surely be three times as strong as its avian, human, and swine strain predecessors. And the outbreak has been tied to Mexico — just one more illegal immigrant to worry about, right? (It’s even being called the “killer Mexican flu” in some circles.)

As I write this, according to the Canadian Public Health Agency, there are 165 reported cases of this H1N1 strain in humans in Canada. The U.S. claims 403 cases, and between the two of us we have exactly two confirmed deaths. According to WHO statistics (current to May 5) Mexico has 822 cases, with 29 deaths; in the whole world, 21 countries share a collective case count of 1,490, with no other confirmed deaths.

If scientists declare that the strain has established itself outside of North America, the flu will reach pandemic status. In theory, that sounds terrifying, but really, the meaning extends no further than the fact that the illness can be found across the globe. The term pandemic says nothing, for instance, about how lethal or non-lethal said condition is; and though some sources are fond of speculating about worst-case scenarios, the fact remains that the death rate is still very low. How low? Let’s take the U.S. numbers to illustrate: Annually, there are some 200,000 hospitalizations due to typical flu types in the U.S. — and 36,000 deaths. By this measure, swine flu has a long way to go before being anywhere near as serious a threat as its local, home-grown competitors.

And yet all this, for me, isn’t where it gets interesting. Not even close. Rather, what continues to surprise and impress me is our capacity for self-regulated response to the initial panic invoked around this illness. Yes, the media was talking up a storm about Influenza A H1N1. Yes, doomsday speculation was abounding. And yes, many industries — sanitation and pharmaceutical groups especially — have profited greatly in terms of market shares and business from all this panic.

In other words, for all the panic we’ve had thrown at us about this illness, many have responded with a measure of fearlessness at least a hundred times as infectious. Does this mean everyone is rid of that panic? No, of course not: these reactive trends are often regional and compartmentalized due to varying interests and complex investments. The mass killing of all pig herds in Egypt, for instance — a perfectly rational response to a disease that, at that time, had no cases of pig-to-human infection manifested in the world, and absolutely no cases of human infection in the country itself — has had huge consequences for the pig farmers, who, with 300,000 animals killed, have lashed back at the government in the form of protests: doubtless this panic attack on the part of officials will leave a long list of social consequences in its wake.

But think back, for comparison’s sake, to our global reaction to SARS — the extreme panic, the devaluation of tourism in heavily affected cities and regions, the dramatic quarantining procedures. Globally, the disease racked up 8,273 cases, with 775 direct deaths (a death rate of 9.4 percent, weighted heavily toward seniors). Though clearly a more serious disease than Influenza A H1N1, SARS still claimed far fewer American lives than seasonal influenza does in a single year; and yet our panic was long-standing and far-reaching, in large part because we were given no room for questions of doubt: only more panic.

Similarly, I’m not convinced the relative calm in this case emerged from the ground up: rather, I suspect news articles first had to present seeds of doubt about this issue, as forwarded by scientists reacting to the extent of media spin. I think room for doubt had to emerge from these sources first; and then the average reader, artist, and blogger could follow after — in turn serving to create more room to manoeuvre, rhetoric-wise, in future works by the mainstream media. But regardless of speculation about just how, and in what order, these groups fed off each other — the scientists, the media, and the participatory citizenry as a whole — what’s more striking is that they fed off each other at all to produce this ultimately calming effect.

We have, in the last 8 years, kicked ourselves over and over again for allowing flimsy excuses for war-mongering to stand; for allowing freedoms to be stripped from us in the name of security; for permitting, in general, the hard polemics of with-us-or-against-us to divide the population. And rightly so: When we go along with fear-mongering, we can be, en masse, pathetic excuses for an advanced and critically thinking civilization.

But cases like our reaction to swine flu should likewise give us cause for hope — and should be treated as such, with praise for measured response wherever it emerges. For as much as we can act like sheep if treated like sheep, it nonetheless takes precious little in the way of tempered social rhetoric for us to realize our own, independent engagements — fearless, inquisitive, and inspired alike — with the world instead.

Major Michelle Mendes, a Canadian soldier stationed in Afghanistan, was on her second tour in the region when she was found dead in her sleeping quarters at Kandahar Airfield. Hers marks the third death of a Canadian woman, and the 118th fallen Canadian, in Afghanistan since our involvement in the conflict began. The media has done an exemplary job of presenting Mendes in the respectful light afforded all Canadian soldiers lost in this conflict — and perhaps with extra care, too, because hers marks the second female fatality in as many weeks — but one word is pointedly absent from all talk of her “non-combat death”:

Suicide.

According to the Canadian military, an investigation into the circumstances of her death is still ongoing: evidently the possibility of her firearm accidentally discharging has not been entirely ruled out, though The Globe and Mail reports that “a Canadian government source said ‘all evidence points toward a self-inflicted gunshot wound.’”

The prominence of this story, and the blatancy of the aforementioned omission, have piqued my interest. The debate about whether or not to talk about suicide in newspapers, and in what ways, with which emphases, has been waged for decades. The argument ultimately centers on two points: the quest for greater public understanding, and the fear of inducing a copycat effect among readers. To this end, there are fierce defenders of different approaches — each backed by their own body of research and professional opinion. Last year The Newspaper Tree wrote an editorial responding to reader concerns over the term’s use in relation to one case: therein they noted that certain organizations of mental health professionals agreed it was better to tell readers the cause of death, but that the stories needed to be presented with the “valuable input of well-informed suicide-prevention specialists” in order to be effective. In that same year, Media Standards Trust published a firm condemnation of suicide stories, citing the high statistical correlation between published stories and copycat suicides.

My problem with the omission approach, however, is its selectivity: Suicides are deemed taboo, but the publishing of violent domestic deaths, murder-suicides, and school shootings isn’t — and all of these stories arguably pertain to people in even more disturbed mindsets (one, because I do not hold that everyone who commits suicide is “disturbed” in the sense of having lost their ability to reason; and two, because their acts take the lives of others, too). A recent Times article asked if the copycat effect was being felt here, too, pointing to the lone study that has been completed to date on the theme. The article also developed a short history of the copycat effect in media, which reads as follows:

The copycat theory was first conceived by a criminologist in 1912, after the London newspapers’ wall-to-wall coverage of the brutal crimes of Jack the Ripper in the late 1800s led to a wave of copycat rapes and murders throughout England. Since then, there has been much research into copycat events — mostly copycat suicides, which appear to be most common — but, taken together, the findings are inconclusive.

In a 2005 review of 105 previously published studies, Stack found that about 40% of the studies suggested an association between media coverage of suicide, particularly celebrity suicide, and suicide rates in the general public. He also found a dose-response effect: The more coverage of a suicide, the greater the number of copycat deaths.

But 60% of past research found no such link, according to Stack’s study. He explains that the studies that were able to find associations were those that tended to involve celebrity death or heavy media coverage — factors that, unsurprisingly, tend to co-occur. “The stories that are most likely to have an impact are ones that concern entertainment and political celebrities. Coverage of these suicides is 5.2 times more likely to produce a copycat effect than coverage of ordinary people’s suicides,” Stack says. In the month after Marilyn Monroe’s death, for example, the suicide rate in the U.S. rose by 12%.

Journalists have a responsibility to the living. We have a responsibility to give readers the best means necessary to make informed decisions about the world around them. This also means doing the least amount of harm. In the case of suicide, this measure of harm is difficult to assess at the outset, as even the very language of the event is against us. To “commit suicide” bears with it the gravitas of an age when suicide was deemed a crime, not a tragedy — and not, in some cases, a release from untreatable pain. To “take one’s own life” is a step up — dramatic, but delicately put — though it is unclear if one term is preferable to the other in keeping the copycat effect to a minimum.

That effect itself also plagues me, because I have to wonder if it occurs in part because there isn’t enough reporting: if all suicides were listed as such (3,613 in Canada in 2004; 32,439 in the U.S. — roughly 10/100,000 for each population), and those suicides were contextualized by similar tallying of all deaths (drownings, the flu, and other causes of death with much higher population tolls) would that copycat effect drastically diminish over time?

I can only speculate. Meanwhile, another telling question has a more interesting answer: Can the news provide the requisite depth and breadth of coverage on mental health issues without the direct mention of suicide? In answer, I refer you to this piece from The Globe and Mail, which delicately tackles mental health in the Canadian military as a hot topic arising from Mendes’ “non-combat death,” while the Canadian Press approaches the issue from the vantage point of the female chaplain who presided over Mendes’ ramp ceremony.

There are, then, ways to nod to the issues surrounding suicide without using that word directly. But are they enough? Or does the omission of the word, in conjunction with so much open commentary about related issues, create a different reality — one in which suicide, lacking its public face, becomes at best a vague and theoretical matter?

These are difficult questions, and they grow more difficult when addressing systemic suicides — as exist among many Aboriginal communities in Canada, as well as among military personnel — and when suicide strikes the very young. To whom does the journalist owe her ultimate allegiance: the grief-stricken families, the immediately affected communities, or the public at large? How can we use the fact of suicide to better our understanding of this world we live in? Are we forever doomed to make things worse by the mere mention of suicide’s existence?

Two days ago I watched Rachel Getting Married, a film about a woman who comes home from rehab to take part in her sister’s wedding. A great many difficulties unfold as this woman struggles with guilt and self-hatred, coupled with depression and suicidal tendencies. Watching this film, I registered numerous “triggers” in myself, and cycled for a day and a half back into certain, terribly familiar mental routines. It was then, as I reminded myself that most people likely wouldn’t have had the same reaction to this stimulus, that it struck me: I will never be completely rid of these thoughts, these propensities to cycle between contentment and depression. Anything — a movie, a newspaper article, an off word from a close friend — might trigger them, and then it will be my responsibility to take control of these impulses: acknowledge them, experience them, and move past them.

I know, too, that eight percent of Canadians live with depression, and that at least 16 percent will experience a period of depression at some point in their life. I know I’m on the lucky side of this spectrum: I’ve learned how to counter the anxiety that often pushes depression to the brink, and after years of very extreme engagements with my mental health issues, they are manageable for me. I know this isn’t the case for everyone. I think to myself, what if someone in a much more agitated or suggestible state of mind watched this film instead — or others, with far more tragic endings? What if that was all it took, and the film pushed them to the brink?

Yes, a film or song or book could move someone to suicide. Most likely, it has already happened a lot. In short, anything could be a trigger; anything might be the last straw. But art, like the media, has as its higher purpose the construction of conversations about the world we live in, and how we live within it. So if there is a way to address suicide directly in the news — with the aid of suicide prevention experts; with a fully conveyed understanding of the context in which suicide operates; and with absolute respect for the families and friends each instance affects — I think we need to take it. To do otherwise, for me, is to leave each victim as alone in death as they surely felt in the lives they chose to end.