May 26, 2009

When I wrote last Friday that all investigative reporting carries with it a measure of risk, war journalism most of all, I hope I stressed enough that almost every other beat retains considerable power to destroy reputations, companies, job prospects, jobs themselves, property values, and even whole livelihoods. In these realms, too, lives are sometimes lost.

Thus it was with little surprise, but great sadness, that I read this past weekend of South Korea’s former president Roh Moo-hyun, who threw himself off a mountainside after enduring what has been characterized as “relentless” pursuit by the media following allegations of bribery in the past year.

Sadness, because I do generally believe in the redeemable life, and someone sensitive enough to realize and react to the weight of his indiscretions must especially be seen as having had in him the capacity, also, to apply awareness of the past to more positive future actions. More disconcerting by far, for me, are those who will not even make allowance for the possibility of error, and so bear out the warning George Santayana once wrote: “Those who cannot remember the past are condemned to repeat it.”

Yes, this matter of reflection brings me right back to my original thesis, on the need to treat war journalism and reporting on the military as distinct tasks, and in this way overcome the limitations on truth-telling during a time of armed conflict.

But so too does a quotation from the aforementioned article on Roh:

NYT — “It has become a bad political habit for presidents in South Korea to try to gain support by punishing the former president,” said Kang Won-taek, a politics professor at Seoul’s Soongsil University. “What happened to Roh Moo-hyun shows that it is time to break this habit.”

The tendency to define a presidency by the failings of the one that came before took root as the country struggled to redefine itself in the early 1990s as a young democracy after years of dictatorships. Many Koreans were exhilarated as the first democratically elected governments punished the men who had resisted democracy for so long.

No good, in other words, comes either from denial of the past or the outright demonization of all that came before: The former leaves us no room to learn from our actions; the latter, no room to accept that the same seeds of indiscretion and abuse lie in us just as much as they did in those who came before.

What remains, then, is the need for nuance; and any journalist will tell you nuance only emerges when there is consistency and longevity to the issue being addressed. Here, then, lies the primary distinction between war journalism and reporting on the military: war journalism exists so long as the conflict does, while reporting on the military would extend across conflicts, and through the long stretches of peace besides.

An analogy might lie in the chronicling of small mining towns: During production booms there would be plenty to report upon, in terms of speculation, quality, corporate practices and corruption, union issues, housing markets, immigration, and emergent family issues pertaining to social services, community development, opportunity costs, and secondary job fields. But at times of little to moderate production output and community growth there would seem to be fewer dramatic matters to comment upon. And yet there are still issues — there are always issues: from the impact of employment and poverty levels on drug and domestic abuse rates, to the disintegration of a social net, to the rise of hunting to offset low wages, to reduced educational opportunities, health matters, religious communities, and impossibly high relocation costs.

So it also is with the military, and I would even go so far as to say that what’s omitted from our reports on the military during peace time, or what systemic comparisons we neglect to construct between different armed conflicts, considerably weakens our overall understanding of the role and culture of defense in contemporary society.

Take, for example, our treatment of military rape — a topic much on my mind since the New York Times’s Bob Herbert wrote an opinion piece entitled “The Great Shame” back in March. Herbert notes many of the most difficult aspects of military rape that slip under the radar in current reporting on the wars in Iraq and Afghanistan — specifically, that soldiers rape both civilians and their own. According to Herbert, last year alone saw a 25 percent increase in reported rapes of female soldiers. Considering that rape is one of the most under-reported crimes in our society, it chills me to the bone to wonder how much deeper these offenses go.

The column furthermore put me in mind of a piece I read in 2007, on Salon.com, entitled “The private war of women soldiers.” Though a strong, culture-building piece, its placement in an online, sociology-leaning magazine sadly made sense at the time: I had difficulty imagining the same story emblazoned as a feature on the front page of most mainstream print newspapers — even though, were we to treat rape with the same seriousness as business coverage, it would be.

Which is why Herbert’s piece was so striking. How was Herbert able to tackle an issue this demoralizing, and potentially demonizing to troops presently stationed in Iraq and Afghanistan, when so many suicides, friendly-fire incidents, and cases of criminal behaviour in the same context and region were barely addressed outside of hard news reports?

The answer, I’m convinced, lies in his approach: Herbert started with an incident with no clear date stamp, and few to no concrete details. He wrote broadly, wedging a couple pertinent facts about current rises in rape statistics amid a vaguer, more expansive discourse about how rape in the military manifests, why, and what can be done about it. By couching the subject in so many generalizations, he was therefore able to draw this stinging conclusion:

NYT — The military is one of the most highly controlled environments imaginable. When there are rules that the Pentagon absolutely wants followed, they are rigidly enforced by the chain of command. Violations are not tolerated. The military could bring about a radical reduction in the number of rapes and other forms of sexual assault if it wanted to, and it could radically improve the overall treatment of women in the armed forces.

There is no real desire in the military to modify this aspect of its culture. It is an ultra-macho environment in which the overwhelming tendency has been to see all women — civilian and military, young and old, American and foreign — solely as sexual objects.

Real change, drastic change, will have to be imposed from outside the military. It will not come from within.

And you know what? I’m okay with this approach, so long as it produces serious discussion and follow-up. After all, do we really need to drag every rape victim in the military out into the open in order to bare the truth of its existence? I should think not — especially as that in and of itself can impose undue added harm on the victims. Similarly, do we need to parade every suicide case in order to prove it happens? Must every soldier who accidentally shot one of his own in a high-stress combat position be splayed across the papers of the nation?

No. The rules of war journalism are understandable: In reporting on any immediate conflict, writers and photographers need to minimize their negative impact on the sources at hand — the soldiers, primarily, but also any alternative sources they might seek out from the region, civilian or otherwise — while simultaneously conveying the essential facts of any one news story.

But we journalists still have meta-data, spanning this conflict and many others besides, at our disposal. And to report once a month on suicide rates, reported rapes, friendly fire incidents, mental health walk-in clinic figures, tour extension numbers, and other such statistics — both at home and abroad, and kept in close relation to a study of historical statistics as well — would in and of itself go a long way to entrenching a dialogue on military culture that no one can perceive as a direct threat to our soldiers overseas. No extensive parade of bodies and names needed!

Because, really, all this reporting on the military isn’t meant to be a threat: rather, it’s meant to help eliminate those threats most often propagated by ignorance; and ultimately, to help the rest of us truly understand. Not, perhaps, so that one day the entire sub-culture will no longer be needed — that’s far too much a pipe dream for even a young’un like me to humour. But at least so that, one day, we can apply this distinct sub-culture to the relief of inevitable global conflicts with the full knowledge of just what it is we’re giving up in the pursuit — we hope — of a greater common good.

—

Post Script:

*Apologies for the lateness of this entry: In all honesty, my cat deleted the original yesterday — ironically leaving in its place only the letters “un.” She evidently doesn’t quite agree with my position on this issue, a disagreement I hope she understands I can end swiftly by denying her supper. … Then again, who knows what she’d delete in retaliation. Best not to chance it!

Major Michelle Mendes, a Canadian soldier stationed in Afghanistan, was on her second tour in the region when she was found dead in her sleeping quarters at Kandahar Airfield. Hers marks the third death of a Canadian woman, and the 118th fallen Canadian, in Afghanistan since our involvement in the conflict began. The media have done an exemplary job of presenting Mendes in the respectful light afforded all Canadian soldiers lost in this conflict — and perhaps with extra care, too, because hers marks the second female fatality in as many weeks — but one word is pointedly absent from all talk of her “non-combat death”:

Suicide.

According to the Canadian military, an investigation into the circumstances of her death is still ongoing: evidently the possibility of her firearm accidentally discharging has not been entirely ruled out, though The Globe and Mail reports that “a Canadian government source said ‘all evidence points toward a self-inflicted gunshot wound.’”

The prominence of this story, and the blatancy of the aforementioned omission, have piqued my interest. The debate about whether or not to talk about suicide in newspapers, and in what ways, with which emphases, has been waged for decades. The argument ultimately centers on two points: the quest for greater public understanding, and the fear of inducing a copycat effect among readers. To this end, there are fierce defenders of different approaches — each backed by their own body of research and professional opinion. Last year The Newspaper Tree wrote an editorial responding to reader concerns over the term’s use in relation to one case: therein they noted that certain organizations of mental health professionals agreed it was better to tell readers the cause of death, but that the stories needed to be presented with the “valuable input of well-informed suicide-prevention specialists” in order to be effective. In that same year, Media Standards Trust published a firm condemnation of suicide stories, citing the high statistical correlation between published stories and copycat suicides.

My problem with the omission approach, however, is its selectivity: suicides are deemed taboo, but the publishing of violent domestic deaths, murder-suicides, and school shootings isn’t — and all of these stories arguably pertain to people in even more disturbed mindsets (one, because I do not hold that everyone who commits suicide is “disturbed” in the sense of having lost the ability to reason; and two, because those acts take the lives of others, too). A recent Time article asked whether the copycat effect was being felt here, too, pointing to the lone study completed to date on the theme. The article also offered a short history of the copycat effect in media, which reads as follows:

The copycat theory was first conceived by a criminologist in 1912, after the London newspapers’ wall-to-wall coverage of the brutal crimes of Jack the Ripper in the late 1800s led to a wave of copycat rapes and murders throughout England. Since then, there has been much research into copycat events — mostly copycat suicides, which appear to be most common — but, taken together, the findings are inconclusive.

In a 2005 review of 105 previously published studies, Stack found that about 40% of the studies suggested an association between media coverage of suicide, particularly celebrity suicide, and suicide rates in the general public. He also found a dose-response effect: The more coverage of a suicide, the greater the number of copycat deaths.

But 60% of past research found no such link, according to Stack’s study. He explains that the studies that were able to find associations were those that tended to involve celebrity death or heavy media coverage — factors that, unsurprisingly, tend to co-occur. “The stories that are most likely to have an impact are ones that concern entertainment and political celebrities. Coverage of these suicides is 5.2 times more likely to produce a copycat effect than coverage of ordinary people’s suicides,” Stack says. In the month after Marilyn Monroe’s death, for example, the suicide rate in the U.S. rose by 12%.

Journalists have a responsibility to the living. We have a responsibility to give readers the best means necessary to make informed decisions about the world around them. This also means doing the least amount of harm. In the case of suicide, this measure of harm is difficult to assess at the outset, as even the very language of the event is against us. To “commit suicide” bears with it the gravitas of an age when suicide was deemed a crime, not a tragedy — and not, in some cases, a release from untreatable pain. To “take one’s own life” is a step up — dramatic, but delicately put — though it is unclear if one term is preferable to the other in keeping the copycat effect to a minimum.

That effect itself also plagues me, because I have to wonder if it occurs in part because there isn’t enough reporting: if all suicides were listed as such (3,613 in Canada in 2004; 32,439 in the U.S. — roughly 10/100,000 for each population), and those suicides were contextualized by similar tallying of all deaths (drownings, the flu, and other causes of death with much higher population tolls) would that copycat effect drastically diminish over time?

I can only speculate. Meanwhile, another telling question has a more interesting answer: Can the news provide the requisite depth and breadth of coverage on mental health issues without direct mention of suicide? In answer, I refer you to this piece from The Globe and Mail, which delicately tackles mental health in the Canadian military as a hot topic arising from Mendes’ “non-combat death,” while the Canadian Press approaches the issue from the vantage point of the female chaplain who presided over Mendes’ ramp ceremony.

There are, then, ways to nod to the issues surrounding suicide without using that word directly. But are they enough? Or does the omission of the word, in conjunction with so much open commentary about related issues, create a different reality — one in which suicide, lacking its public face, becomes at best a vague and theoretical matter?

These are difficult questions, and they grow more difficult when addressing systemic suicides — as exist among many Aboriginal communities in Canada, as well as among military personnel — and when suicide strikes the very young. To whom does the journalist owe her ultimate allegiance: the grief-stricken families, the immediately affected communities, or the public at large? How can we use the fact of suicide to better our understanding of this world we live in? Are we forever doomed to make things worse by the mere mention of suicide’s existence?

Two days ago I watched Rachel Getting Married, a film about a woman who comes home from rehab to take part in her sister’s wedding. A great many difficulties unfold as this woman struggles with guilt and self-hatred, coupled with depression and suicidal tendencies. Watching this film, I registered numerous “triggers” in myself, and cycled for a day and a half back into certain, terribly familiar mental routines. It was then, as I reminded myself that most people likely wouldn’t have had the same reaction to this stimulus, that it struck me: I will never be completely rid of these thoughts, these propensities to cycle between contentment and depression. Anything — a movie, a newspaper article, an off word from a close friend — might trigger them, and then it will be my responsibility to take control of these impulses: acknowledge them, experience them, and move past them.

I know, too, that 8 percent of Canadians live with depression, and that at least 16 percent will experience a period of depression at some point in their lives. I know I’m on the lucky side of this spectrum: I’ve learned how to counter the anxiety that often pushes depression to the brink, and after years of very extreme engagements with my mental health issues, they are manageable for me. I know this isn’t the case for everyone. I think to myself: What if someone in a much more agitated or suggestible state of mind watched this film instead — or others, with far more tragic endings? What if that was all it took, and the film pushed them to the brink?

Yes, a film or song or book could move someone to suicide. Most likely, it has already happened a lot. In short, anything could be a trigger; anything might be the last straw. But art, like the media, has as its higher purpose the construction of conversations about the world we live in, and how we live within it. So if there is a way to address suicide directly in the news — with the aid of suicide prevention experts; with a fully conveyed understanding of the context in which suicide operates; and with absolute respect for the families and friends each instance affects — I think we need to take it. To do otherwise, for me, is to leave each victim as alone in death as they surely felt in the lives they chose to end.

Though friends send me links about the most striking losses in the print and broadcast journalism fields, the truth is that I’ve been following job losses, mergers, bankruptcies, and budget cuts for three years now. Anyone with any sustained interest in journalism has done the same for at least as long. The first post on this blog, written a year ago, especially notes the lengthy articles written in various publications over the past few years about the future of journalism — though many will say the discussion began in the ’90s, or earlier, and they’re likely right: I just haven’t been in the field that long yet myself.

In any case, it’s not a new debate, and that’s precisely what I feel many people don’t realize when they weigh in. Especially striking is how conversation on this topic is framed around these most prominent losses, such that my friends’ questions become “What are you going to do in journalism [in light of this]?” or “What do you think the future of journalism will be [in light of this]?” These are good questions, but troubling ones, because they guide the answers as reactions to these events — when really, the answers at this point should stand independent of any single incident.

But before I get to these specific answers, I should note that those outside the realm of journalism are not the only ones framing their questions and concerns in direct response to these monolithic collapses. Quite a few disgruntled journalism majors and members of the industry are weighing in, too — to tell everyone how they’re “getting out.” In the case of journalism majors, boy, let me tell you: the reaction of many in this regard does nothing to temper my dislike for their programs; rather, these petulant grads make it clear just how many acquire their degrees with an unhealthy dose of entitlement — to jobs, to stability, to automatic repute in the journalism world. The journalism greats of old did not learn journalism in classrooms; they came from other degree programs, or from no university or college education at all, and in either case plucked up enough courage to engage their city publications and start as low-level reporters, working their way up to prominence.

This lack of drive can be felt in the newsrooms just as much as in the classes: members of the traditional media corps are also jumping ship to other careers. While the financial imperative is understandable, especially among those with families to support, less so is their blind endorsement of others doing the same. I would, for instance, kill to hear someone say “I don’t have the means to pursue the profession in this changing environment — but I wish all the best to those who stick with it anyway.” But of course, to say something like this, one would have to grasp a basic, underlying tenet of this whole transformation: specifically, that its existence is the one fact we can and should rely upon.

Yes, journalism is changing. Is the human desire for information about the world we live in changing, too? No. Not at all. So there will always be a need for news — and with it, people to acquire, distribute, and analyze this news. As such, our questions as journalists are the same as they’ve ever been:

1) What is the quality of current news reporting?
2) What can we do to improve or maintain this quality of reporting?
3) What areas of our world are under-reported?
4) How can we address these lapses?

Some commentators are so mired in questions one and three — their fears about what is being lost in the midst of this dramatic upheaval — that they can’t progress to questions two and four. Frankly, this is more troubling than the answers to one and three themselves. Yes, the quality of current news reporting is greatly diminished by newsroom and foreign bureau cuts, to say nothing of explicit job losses. Yes, huge lapses in the quality of investigative journalism, both at home and abroad, are already being noted by journalism organizations. All right, we get it.

But as two notable bloggers, Clay Shirky and Steven Berlin Johnson, both note in very lengthy but exceptionally potent essays, the fixation on these problems as signs of The End of Things is narrow-minded and fear-based.

What we exist in is a time of transition, a time in which new vehicles for reporting will rise up to supplant the old. This means that, as the old forms of media are diminished (I hesitate to suggest they will ever fully disappear: I find that doubtful, myself), gaps in coverage, and in sponsorship for this coverage, necessarily emerge. Johnson likens this void to the expectant state of computer magazine readers in the mid-to-late eighties: the potential for a huge new data stream was there, but as yet unformed; and though many a reader would eagerly await the next issue of, say, MacWorld, surely none could fathom the modern-day equivalent: an excess of Apple news in print and online, daily. Shirky emphasizes this point in his own essay: We cannot see the shape of things to come, but in the meantime, the only thing to fear is fear itself.

Like the realm of computer information in the eighties, then, the world of journalism is now increasingly an expectant void — with many current lapses noted in response to the first and third questions, and few concrete, tried-and-true answers to them in the second and fourth. And yet the need for answers to these questions persists — as does humanity’s overarching, all-consuming need for more information about the world we live in.

The moment that unrelenting inquisitiveness disappears, then we can talk about the sky falling in Journalism Land. Until then, it’s only our own pride and complacency that need to be checked: For journalists today, the task is not to moor ourselves to any one vehicle for the acquisition, distribution, or analysis of news that matters; rather, it’s to stay adaptable, keep learning, maintain humility, and engage the changing media landscape with an open mind and a loyal heart.

And any who can’t manage this (for reasons other than their need to attend to lives in their care) probably weren’t pursuing journalism for the right reasons in the first place — so to them I say, thank you. Thank you for getting the hell out.

“You know what the worst part of that is? It’s not that the speeches have gotten better; it’s [that] media criticism isn’t as good as it used to be.”

— Robert Schlesinger, guest author, The Daily Show

Within the last two years, the Columbia Journalism Review, the Ryerson Review of Journalism, Adbusters, the Tyee, the New Yorker, and the Walrus have written extensively about the challenges facing contemporary media in its on-going bid at maintaining relevance as both political watch-dog and central arbiter of social discourse. Newsroom cutbacks, expansive media monopolies, weak protectionist policies from the government, social pressures from extremist interest groups, and the advent of New Media (and with it, a rapidly transforming revenue structure) are all aspects of a journalism culture that is presently tasked with re-branding itself without ready access to all the resources such an effort requires.

And indeed, many feel this lack of resources is ultimately to blame for the deficit of effective media criticism at crucial North American turning points in the last fifteen years, but one could just as easily argue — and I would — that a lack of effective media criticism in and of itself marked the industry as “ripe for the picking” by corporations increasingly unfamiliar with journalism’s non-entertainment responsibilities. To elaborate on that reversal, though, I should first deliberate a little on what constitutes “good journalism.”

To that end, consider a recent Globe & Mail article, which notified readers of the paper’s dominance at the 2008 National Newspaper Awards. One online respondent commented: “take it easy globe, you’re faaaaaar from perfect.” But is perfection even a reasonable aim for journalism? When by its very nature news media is tested every single day, with every single news report it issues, it can’t be: stories necessarily develop over time, new facts regularly emerge to supplant the old, and self-correcting mechanisms are an intrinsic part of the process, thereby confirming the necessary incompleteness of any one day’s product, no matter how thoroughly researched or reasonably presented. No, there is no resting on one’s laurels in an organization constantly tasked with proving itself anew, and so the measure of good media has to be based more on its commitment to that process itself. How tireless is it? How well does it resist complacency, revisit entrenched internal biases, question assumptions, and respond to outside criticism? Good journalism is fallible; but good journalism also knows how fallible it is, and strives very hard to account for subsequent lapses. And when good journalists internalize this state of constant questioning, this aversion to complacency, they can fight even the most aggressive of pressures to the contrary.

In 2001, for instance, CanWest Global Communications tried to impose a national editorial in its constituent papers — the same editorial, written at CanWest headquarters, for papers all across Canada. Its inclusion would be mandatory, and while local op-ed pieces would still be accepted, they were not allowed to contradict the opinions expressed in the corporate editorial. In the name of maintaining an open forum for public debate, reporters and editors resisted: they went on a byline strike and raised public awareness — especially when a spate of CanWest firings were tied to similar attempts at curtailing different opinions and approaches to the news (with criticism of the Liberal Party and pro-Palestinian comments proving especially dangerous for CanWest staff).

The CanWest corporation embodies a series of on-going problems for Canadian journalists, but at least where corporate editorials are concerned, journalists can — for the moment — claim victory: CanWest dropped that intended policy the moment public pressure became too much. But here, too, there is no such thing as a “perfect” victory: the freedom of the press, as the fourth pillar of democracy, must be tested and affirmed on a regular and rigorous basis. This is where media criticism comes in — journalism’s answer to the ancient question, “Who will watch the watchers?”

I can’t say for certain that media organizations would have suffered fewer newsroom cutbacks, or that corporate owners wouldn’t have interfered as much with their editorial decisions, if there had been a more entrenched culture of media criticism in the early 1990s. But to have someone keeping tabs on other organizations, and teaching readers to keep tabs too — this, to me, is a crucial part of journalism’s internal, self-correcting mechanisms, and one I hope very much to participate in throughout my life.

It is also one that has flourished, oddly enough, in its own absence. When mainstream publications proved unable to provide this public service, the public — settling very easily, and very prominently, into the age of New Media — began supplying this service on their own. Now, in 2008, we see military blogs about the wars in Afghanistan and Iraq rivaling information released through standard channels; the Huffington Post and the Drudge Report heading a broad spectrum of “second-gen” blog-aggregator sites (ones which, unlike Digg or Reddit, have an editorial team setting the front-page content); and Talking Points Memo especially empowering citizens by showing how public pressure can, in fact, improve political accountability.

Whether or not journalists within mainstream publications are ready, the realm of discourse has broadened, and readers today are far from their passive cousins of yesteryear. To this end, the role of traditional journalism is still changing — still being “re-branded” — but not in any way that really lies outside of its original precepts. Journalism has always been something taken day-by-day — something that requires regular adaptation, and constant self-correction. And so long as Canadian journalists are willing to open themselves to the new demands and needs of our population — and especially to acknowledge and make up for the lack of entrenched media criticism within their own walls — we’ll never be perfect, but at least we’ll be far more likely never to forget that fact.