The Peer Review

15 August 2013

In my next few posts, I'm going to be commenting on the interactions between public health, social stigma and activism. Here, I address some basic questions. How do public health interventions cause stigma? When does moralizing become stigmatizing? And what are the implications for anti-stigma activism?

Public health and stigma
Many public health interventions aim to promote awareness of disease and reduce its occurrence. However, there is often tension between what Petersen & Lupton have called a 'modernist, science-based approach to dealing with health issues' (i.e., public health as an applied science) and the side effect of stigmatizing the people living with such diseases. Link & Phelan argue that 'when people are labelled, set apart, and linked to undesirable characteristics, a rationale is constructed for devaluing, rejecting, and excluding them'. In this sense, public health research undeniably provides fodder for prejudicial attitudes -- what seems at first glance to be collateral damage wrought by efforts to contain disease.

Indeed, some scholars of bioethics and public health argue that stigma is an inevitable consequence of a public health communications tradition that is 'moralizing' and that has 'escaped the scrutiny of ethical discussions'. A defining challenge of health communication is therefore to give the public memorable, simple knowledge about disease without flattening out nuance that encourages compassion for the afflicted and honest discussion, particularly among at-risk populations.

HIV/AIDS management is likely the most widely studied example of this tension. Almost everyone, including all mainstream advocacy organizations, agrees that HIV infection is undesirable and that public health interventions to prevent the spread of the virus are warranted. However, as awareness of HIV spiked in the 1980s, so too did the stigmatization of the HIV-positive. As public health practitioners realized that stigma was undermining efforts at containment (by making at-risk populations less likely to learn their status and discuss it with sexual partners), HIV stigma research and anti-stigma initiatives flourished. In the case of HIV, there is broad consensus among interventionists that stigmatization jeopardizes containment and that stigma itself is a public health liability -- not just collateral damage.
Beyond the public health implications of disease stigma, there is the human rights dimension: discrimination on the basis of HIV status, for example, is unfair and unhelpful and only compounds the quality-of-life burdens of those living with HIV/AIDS. This argument is, however, less persuasive when people are perceived to have a choice in the stigmatizing characteristic, that is, when it is seen as behavioral. For example, tobacco control initiatives are not subject to the same level of debate about their stigmatizing potential, notwithstanding evidence that 'social policies exacerbate smoker-related stigma'.

The preconditions for stigma
Link & Phelan argue that 'stigmatization is entirely contingent on access to social, economic, and political power'. That is -- stigma without social/economic/political agency isn't truly stigma, in the sense that it does not further identify, isolate and devalue already vulnerable groups.

Tobacco-control practitioners are deploying stigma as a public health tool in a way that would be unimaginable in the setting of HIV/AIDS, or indeed, any non-tobacco addiction. However, this phenomenon has not historically faced much scrutiny. It therefore makes sense that criticism of smoker-shaming coincides with strong demographic shifts in smoking patterns. In the last 20 years, smoking has become strongly associated with lower socioeconomic strata that already face obstacles in arenas such as employment and healthcare. Smoking, when seen as a truly stigmatizing habit, risks exacerbating these obstacles, for example, by making smokers less likely to seek healthcare and access resources to quit the habit.

Smoking is therefore an example of how a public health argument against stigma seems to emerge when moralizing turns into stigmatizing by virtue of these power dynamics. The cases of HIV/AIDS and smoking are well-developed examples that stigma is more than just moralizing: it is a self-reinforcing tragedy that necessarily exacerbates existing cleavages and inequalities.

Stigma is not arbitrary: implications for modern social justice

Social justice advocates are inspired by this discourse but often seem to be working outside the thesis that stigma is necessarily tied to attributes that are inherently undesirable. Whether from the perspective of public health or human rights, stigma is problematic. Nobody in the mainstream scholarly conversation, however, denies that stigmatized attributes like HIV infection, smoking or obesity are themselves public health liabilities.

Insight into the dynamics of stigma described above is often packaged with the criticism that public health interventions are judgmental and moralizing. Fat acceptance advocates in particular have zoomed in on this as the defining human rights challenge of public health interventions. But because modern fat acceptance rejects mainstream thinking on the health implications of obesity, it declines engagement in the deeper conversation on the dynamics of stigma that might give opposition to 'fat shaming' real traction in the mainstream. Whereas there is a real argument to be made against fat stigma from the perspective that obesity disproportionately affects people who are already the least likely to seek help, this argument is in effect short-circuited by a denial that obesity is a negative characteristic.

Conversely, other streams of the social justice movement fail to describe the stigma they oppose as in any way reflective of deeper social tensions. I believe this to be a factor that limits the success of the anti-"R-word" advocates. They are right that it is rude to make jokes at the expense of the disadvantaged, but being a jerk isn't a complex sociological phenomenon in need of an awareness campaign. With respect to the fight against the word "retarded" as a clinical descriptor, I would simply observe that fighting to change vocabulary is futile and arbitrary. 'Intellectually disabled' replacing 'retarded' in clinical usage merely adds a link to the euphemism treadmill that has already seen 'moron', 'idiot', 'imbecile' and 'cretin' pass from clinical into everyday use.

Stigma is a complex phenomenon -- to invoke it in the context of social justice should mean to appreciate this complexity, not ignore it.

15 June 2013

The incidence of liver cancer in Canada has tripled since the 1970s. If this headline-style statistic causes anxiety, the media may have achieved their goal in a recent spate of news stories.

The CBC and other sources cite a recent report published by The Canadian Cancer Society, Statistics Canada and the Public Health Agency of Canada. The report points out that the incidence of non-metastatic liver cancer has increased substantially since the 1970s as a result of the changing occurrence of known risk factors, notably infection by hepatitis B virus (HBV) or hepatitis C virus (HCV), but also alcoholism, aflatoxin exposure and others.

This is interesting from the perspective of public health managers: if liver cancer rates continue to rise, certain population-level interventions may become worthwhile, for example, screening for HBV and HCV.

However, it is not interesting from the perspective of individual decision-making, because increased rates of liver cancer are explained by known risk factors. Regardless of population-level changes that have caused the incidence to rise, an individual's risk of liver cancer remains the same for a given exposure to environmental and genetic risk factors. In fact, the same report shows that the five-year survival ratio (fraction of cases still alive after five years) nearly doubled between 1992 and 2003, which justifies optimism about the prospects for individuals affected.
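To make the arithmetic concrete, here is a quick sketch (the numbers are invented for illustration and are not taken from the report) of how overall incidence can triple while the risk faced by any given individual, conditional on his or her exposures, stays fixed:

```python
# Hypothetical numbers for illustration only -- not taken from the report.
# The individual (conditional) risks never change; only the prevalence of
# the risk factor (e.g., HCV infection) does.

def population_incidence(p_exposed, risk_exposed, risk_unexposed):
    """Overall incidence as a prevalence-weighted average of conditional risks."""
    return p_exposed * risk_exposed + (1 - p_exposed) * risk_unexposed

RISK_EXPOSED = 0.025    # individual risk given the risk factor (fixed)
RISK_UNEXPOSED = 0.001  # individual risk without it (fixed)

incidence_then = population_incidence(0.005, RISK_EXPOSED, RISK_UNEXPOSED)
incidence_now = population_incidence(0.100, RISK_EXPOSED, RISK_UNEXPOSED)

print(round(incidence_now / incidence_then, 1))  # -> 3.0: incidence triples
```

Nothing about any individual's conditional risk changed between the two scenarios; only the prevalence weights did, which is exactly why the headline statistic carries no new information for personal decision-making.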

News articles mention the risk factors for liver cancer but do not make explicit that the rise in cancer follows from changes in the prevalence of these risk factors, which are not in themselves news. The media portray the increase in cancer rates with a tone that would better fit a surge in property crimes or terrorism. Indeed, the very frame of a news article anticipates some response on an individual level; in this case, however, the reader seems left to guess what an appropriate reaction might be.

Wilkins and Patterson have written about this precise phenomenon, noting that the news media treat risk situations "as novelties, failing to analyze the entire system, and using insufficiently analytical language". Some scholars (e.g., TF Saarinen) argue, as I do here, that the media have a responsibility to put news on risk statistics into some larger context. Others (e.g., S Dunwoody) contend that media efforts in this direction have not been successful, and that the media should refrain from adding commentary or providing instruction. However, when news reports on risk statistics do not include any context, commentary or instruction, it is far easier to indulge in sensationalism. Indeed, Wilkins and Patterson note that 'a journalist's definition of a good news story means a catastrophe for someone else.'

Reader comments on the liver cancer story show how members of the public, faced with such a lack of context, reach for explanations obviously bound up in their own worldviews and preconceptions (see samples below, copied from the CBC story linked to above). In this sense, the news seems to self-sensationalize; it is not necessary for the media to exaggerate the story, but merely to present it without context.

Reader comments range from muddled remarks on increased exposure to 'toxins' to specific attributions of increased risk to the use of nuclear power. Public suspicion of nuclear power has been central to the stagnation of this fuel source in the US, which, it suffices to say, has been bad from a public health perspective, and probably from an environmental perspective, too (depending on semi-philosophical valuations pertaining to long-term management of nuclear waste). Disease outbreaks attributable to the anti-vaccination movement are an even more tangible illustration of the effect that misplaced distrust in technological interventions can have on the public.

The vaccination and nuclear power sagas show that vocal minorities of skeptics can have large externalities. Research on the cultural associations of cancer has shown the high potential for an emotional, techno-skeptic response to a perceived increase in threat, and research on risk perception has shown that cancer is particularly likely to draw attention. Meanwhile, the media seem to exploit the lack of consensus on their responsibilities in risk communication to generate interest in their content -- interest that would vanish if the public knew the underlying facts.

03 June 2012

Growing up, I was taught formally and informally to consider myself part of the first generation born on the lee side of a mountain of social progress made by virtue of technological advance. The impact of technology and science on society and the extent of the overlap between technological advance and social progress have always been questions of perception. However, these particular perceptions are remarkable for the frequency with which they are invoked by makers of political and historical narratives and projected, contemporaneously or retrospectively, onto their respective subjects. Social scientists of all stripes depend on a uniform zeitgeist to resolve disarticulated and inconsistent realities into a legible description of our environment; but sometimes, the contexts used to frame more particular arguments become clichéd and inhibit alternative narratives that could give us greater insight into the dynamics of our world. While anti-vaccination advocates and homeopaths are locked in a narrative that puts modern science at odds with human welfare, they are facilitated by clichéd histories of whole eras entranced by the bounty of technological advance.

Complementary to claiming a new divergence between social and technological progress is the process of ascribing to previous generations a mindset that holds social and technological progress to be coterminous. The public’s great confidence in technology at the end of the 19th century is anecdotally attested in specific histories tangential to the perception of science, for example, with respect to the ‘unsinkable Titanic’ and the exaggerated confidence of the actors in the lead-up to World War I. Although this is the conventional, received understanding, it is not a well-referenced account. On the contrary, a search of the literature on the history of the perception of science attests to the timelessness of such confidence, especially on the part of scientists (e.g. Badash 1971), with dissenters throughout the century portraying themselves as being on the cusp of a confidence crash (e.g. Merton 1938; Handlin 1965; Mazur 1977; Marx 1987). Midcentury nostalgia is a well-drawn cultural cliché, but popular representations and conceptions of the postwar period tend to be rather rosier than the reality may have justified; the looming threat of nuclear war, rampant, institutionalized discrimination and worse health outcomes do not usually counterbalance attractive, invented images of drive-in movies and hearty cooking. This inspection leads me to suspect that the great, past public confidence in science may be at least partially a contemporary, retrospective invention born of a false sense that mistrust in science is new. Indeed, a review of studies of the perception of science shows that skepticism is likely to be much more continuous than periodic; society does not go through cycles of trust and mistrust, but rather always accommodates a varyingly vocal skeptical cohort. My grandmother was not the first to lament the loss of the good old days; to an extent, that’s just what grandmothers do. However, examples of grandmotherly dissenters tend to work their way into modern consciousness as historical oddities instead of as examples of a broader trend. It is easier to consider the Luddites, for example, as irrational reactionaries than as the by-product of more generalized anxiety about the implications of the industrial revolution.

One anxiety that does seem to be particular to our time is an explicit worry of unintended consequences. Previously, mistrust of science and technology reflected uneasiness with the direct implications of advancement. But now, we have a certain awareness of how technologies can backfire in unexpected ways: global warming, thalidomide babies and dead birds have demonstrated over the latter half of the 20th century that the impacts of technology on society and the environment are not only unpredictable, but inconceivable. We are increasingly aware that negative consequences are possible via unknown causal pathways: common anxieties extend past car accidents and labour obsolescence to grey goo and feature creep. Merton, cited in another context above, formalized the notion of unintended consequences in the 1930s, but the generalization of such mistrust appears to be much more recent. Cautiousness along the lines of the universal adage ‘better safe than sorry’ (Fr: ‘mieux vaut prévenir que guérir’; De: ‘Vorsicht ist besser als Nachsicht’, etc.) is a well-established philosophy with respect to known risks. However, widespread appreciation of the inconceivability of certain negative outcomes attached to new technologies has promoted a cautiousness that now extends to unknown risks.

An appreciation of such unintended consequences effectively removes the presumption of benignity on the part of new technologies. The Precautionary Principle in environmental and public health management grew out of the 1992 Rio Declaration and puts the burden of proof on the proponents of a technology or proposal. Put another way, the Precautionary Principle holds that the default assumption is that a technology or proposal is unsafe, pending scientific consensus to the contrary. The codification of this logic as a policy to deal with technology’s role in society has empowered a calculus on the part of the public to the effect that avoiding the potential but unknown risks of a given technology may well be worth forgoing its benefits. The Precautionary Principle, as formalized and embraced by regulatory bodies, calls for inaction only when there is legitimate unsettled science as to the safety of a proposal or technology; however, scientific ‘controversy’ is routinely manufactured by vocal laymen, and the Precautionary Principle is appropriated to encourage inaction based on as-yet unknown causal pathways. While specific anxiety about unintended consequences is justifiable, it is almost impossible to channel it into a rational response: it is based in principle on our incomplete knowledge of cause and effect. And yet, there is a long list of technologies subject to very vocal opposition based on possible impacts via pre-hypothetical modes of action: GMOs, wi-fi, vaccines (partially)… Even though there is little debate as to the safety of these things within mainstream science, the possibility of as-yet inconceivable risks still dominates decision-making. Forgoing the benefits of these technologies due to potential unintended consequences is much closer to ‘better the devil you know’ than ‘better safe than sorry’, and correspondingly closer to paranoia.

Scientific advance has always coincided with an increase in the interplay between technology, society, economy and environment. Varying perceptions of these relationships explain the fragmented conception of the implications for social progress that has been a constant fixture. The reasons behind the false sense of newness attached to this conception are elusive, but the phenomenon does explain the projection of an artificial sense of confidence onto times gone by. While these phenomena have been constant, an explicit and legitimized anxiety over inconceivable consequences arising via pre-hypothetical causal pathways is new. To see this is to disentangle the convenient, clichéd narratives used to frame specific histories and social commentary.

03 April 2012

My friend Mike told me a story about how his colleague described someone as ‘Oriental’. We laughed about the colleague’s barbarism until we realized it was hard to explain why ‘Asian’ would have been any better a choice of words.

There’s nothing innately offensive about either. As a term, ‘Oriental’ seems as bad as ‘Middle Eastern’: both are logically useless because they depend on a western perspective to make sense, but whereas ‘Oriental’ is now the thing that racist dinosaurs say, ‘Middle Eastern’ doesn’t seem to have undergone widespread stigmatization.

Still, ‘Oriental’ does seem rather more sweeping. It lumps a vast array of cultures together into the same vague concept we apply to decisions on rug purchases, and unlike ‘Middle Eastern’, there is no real consensus on the geography the term describes. But does geographical descriptiveness matter to the discussion?

‘Oriental’ is offensive because it lumps everyone together, not because it lacks geographical clarity. There is a long list of regional names with no single definition. Eastern Europe, the Deep South... So why is ‘Asian’ any better than ‘Oriental’? As Mike said, Korea is in Asia, but so is Kazakhstan.
The logic seems largely irrelevant, though. My reading of this phenomenon is that self-described mindful people keep up with the politically correct terms and haughtily shun out-of-date words without giving any real thought to the issues they’re passing judgement on.

This reminds me a lot of what Steven Pinker calls the ‘euphemism treadmill’: ‘moron’, ‘imbecile’ and ‘cripple’, for example, used to be neutral descriptors of medical conditions until prevailing negative connotations triggered the promulgation of new terms. This kind of semantic escalation is widely discussed in the literature and was also a theme of George Orwell’s first book, Down and Out in Paris and London (thanks, Wikipedia!).

Now consider that even technically correct and well circumscribed terms are liable to offend. Some Quebecers object to being called ‘Canadian’ despite being Canadian citizens, just as some in the UK object to the label ‘British’. These are some of the most objective terms that exist, and their ability to offend shows that vocabulary development is not a good strategy to demonstrate mindfulness.

Most people are fine with staying on the treadmill, keeping current on the ‘safe’ words to describe other people ‘politely’ and without sounding ignorant. But one wonders how meaningful that process is. Generally speaking, people in a majority interact most with other people in the majority. So using the in-style treadmill words is a kind of signal that says, ‘I’m with it’ to people who aren’t likely to have a personal stake in the usage of words that describe social subsets. In that sense, I think we get used to clinging to words we tell each other are safe, without considering the fact that it’s not up to anyone but the people or person described to decide how he, she or they want to identify. By the time Mary Majority interacts with Amy Asian, it may not occur to Mary that Amy could realistically object to that label. ‘Actually, I’m from Toronto...’

Keeping current on the intensely emotional and nuanced usage of words that describe personal or cultural identity is a daunting task. General semantic escalation only generates politeness; it does not produce concepts that are inherently less marginalizing or presumptuous than their forerunners. Still, mindful and sensitive people buy into that process to stay ‘safe’ without generally giving a lot of thought to the underlying issues. This process should be de-emphasized and supplemented, on an individual level, with a recognition that it is not anyone’s right to categorize anyone else, no matter how politely. I think there's something more to sensitivity than updating our labels...