Recent Articles -- The American Prospect, articles by author
http://prospect.org/authors/126464/rss.xml

The Good in Good Politics
http://prospect.org/article/good-good-politics
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><blockquote><p><strong>The Moral Center</strong> by David Callahan <em>(Harcourt, 260 pages, $24.00)</em></p></blockquote>
<p>Ever since the 2004 exit polls, progressives have been puzzling over how to reclaim so-called values voters. Or, to put the problem another way, how can Democrats satisfy Americans' interests (the economy, stupid, and bring those troops home alive) while also appealing to their desires for moral direction? In <em>The Moral Center</em>, David Callahan tackles this conundrum with some fresh and provocative insights in the hope of advancing, as he says in the preface, "a different way of thinking about values." </p>
<p>Callahan accepts that most people worry about morality when they think about politics. Conservatives have distorted moral discussion, however, and reduced moral concerns to a narrow set of fleshy hot-button issues while ignoring justice and equality and giving little but lip service to compassion. But liberals, Callahan argues, have failed to recognize, understand, and speak to the "moral anxiety" citizens feel. Ordinary Americans, he says, "can't shake the feeling that American life is getting meaner and more degraded, and that everyone is out for themselves." Add to this his personal "sense of constantly being tugged away from my real values." After moving to New York City, Callahan found himself coveting fabulous townhouses and tuning out the beggars on the subway, all the while working for Demos, a liberal think tank he helped start. (Disclosure: Although I am not personally acquainted with him, Callahan, among other accomplishments, was the <em>Prospect</em>'s first managing editor in 1990.)</p>
<p>Callahan translates this personal dissonance into a remarkable sensor for the moral anxieties of middle America. His first political insight is elegantly simple: People are troubled not only about what's going on outside in the public sphere, but also about what's happening to the internal moral compass in each of us. They're worried about whether they are good people, whether their children will grow up to be good, and whether our social environment permits and encourages people to be good.</p>
<p>Throughout the book, he shows that religious conservatives speak to moral anxieties. Can parents, schools, and churches shape children's values any longer in the face of television, the Internet, movies, video games, and consumer culture? Are the rich and powerful held to the same standards of behavior as everyone else? Is hard work rewarded and really a path to independence and security? Does anyone care about the poor? The solutions conservatives offer may be extreme -- sexual abstinence for the unmarried, back to the kitchen for mom, crude censorship for the media, grotesquely harsh punishment for minor infringements and wrist slaps for corporate disasters, to name a few -- but at least they say something's wrong and offer the possibility of changing our culture. Their appeal, Callahan says, resides not in their policy prescriptions, but in their moral concern, their hope, and their call to goodness. "Whatever you may think about Christian conservatives at least they offer a plan to get America on a different moral path," he writes.</p>
<p>Callahan's second political insight is perhaps obvious, but clever nonetheless. There's a "vacuum in the center" of moral debate. On every issue, the right and left have staked out extreme positions. Surely there's a moral position on sexuality between abstinence and it's-up-to-you-what-you-do-with-your-body. If Callahan is right that more and more people find the conservative message attractive merely because it addresses morality, liberals can win some of those people back by staking out centrist moral positions.<br />
Positioning one's platform as the middle between two extremes is one of the rudiments of political strategy, so the idea of locating and occupying the moral center is strategically appealing. But is it intellectually and politically workable? Popular notions of morality, as well as the right's disdain for "moral relativism," rest on Immanuel Kant's categorical imperative. Something is either right or wrong, and everyone should either do it or not do it, always. In practice, though, there are very few questions on which Americans can agree on moral absolutes. Take almost any policy debate and thinking people are apt to become utilitarians, balancing costs and benefits and weighing trade-offs. They're apt to perceive and tolerate ambiguity, too. They might, for example, see abortion as a tragic decision, but also as a liberating and benevolent one that helps ensure that children will be well-loved and cared for. So much for bright lines.</p>
<p>At times, Callahan seems to recognize this dilemma, but he doesn't face it head on. In a reproachful tone, he blames liberals for not having moral absolutes. After stating that comprehensive sex education is superior to abstinence-only programs at delaying teenage sex and reducing pregnancy and disease, he says disdainfully, "Abstinence crusaders are rolling over Planned Parenthood types who have social science on their side but offer no moral bottom line." Yet most of Callahan's own proposals entail moral jawboning about personal responsibility combined with lots of individual freedom, more social spending, and punishment only for transgression of the law. There are (thank heavens) few moral bottom lines in this book.</p>
<p>Instead, Callahan articulates what he thinks are mainstream moral beliefs, such as marriage is a good thing, "abortion is bad," people should honor family ties, crimes should be punished no matter who commits them, the poor and vulnerable should be helped, people should be able to earn personal freedom through hard work, and everyone should make some sacrifice or service to a "common higher purpose." Then, under the rubric of supporting these moral values, he pulls some standard remedies from the liberal medicine cabinet. For example, he's big on social insurance but packages it as something that "will make it easier for people to pursue personal freedom through entrepreneurial risk-taking." To address sexuality and family preservation, he proposes better sex education and health insurance for contraceptives; economic supports for working parents; marriage education and responsible fatherhood initiatives; and better regulation of corporate media. The question in my mind is whether packaging moderate remedies in moral language can satisfy the primitive hunger for moral bottom lines. </p>
<p>Callahan's third insight is the most hard-hitting. The essence of morality is considering others' interests as well as one's own, but moralists on the right "refuse to confront the force that increasingly fans an extreme ethos of self-interest, namely our free-market economy." Early in the book Callahan warns, "We ride the tiger of selfishness. Untamed, it will eat everything we care about." In chapters on sex, family, media, crime and punishment, work, poverty, and patriotism, he deftly outlines how the unbridled pursuit of self-interest in a market economy overwhelms concern for others and the subsidiary moral values that people say they care so much about. Take the media. Many people are revolted by the sex and violence of popular culture and frightened for their children, but sex and violence draw viewers (ironically, especially young adults of parenting age), and large audiences draw ad revenue, which feeds the commercial bottom line.</p>
<p>Take corporate crime. The Food and Drug Administration fails to stop pharmaceutical companies from peddling harmful drugs. The Occupational Safety and Health Administration looks the other way while employers allow their workers to be killed in unsafe workplaces. Callahan sees the extreme self-interest at work here but thinks it can be controlled by "better enforcement of existing laws that protect the safety, health, and finances of Americans." Not bloody likely. That tiger now owns the FDA and OSHA and virtually writes their regulatory standards. </p>
<p>If Callahan has failed to offer convincing methods of taming the capitalist tiger of selfishness, he has faced it, and that in itself is an act of courage for anyone who hopes to be heard in American political debate. He may not have found a moral center, either, but he has pointed us toward something far more important, a moral core composed of concern for others. This is how liberals can call upon people to be good.</p>
<p><em>Deborah Stone's next book on altruism and public life will be published by Nation Books in late 2007.</em></p>
</div></div></div>
Mon, 20 Nov 2006 02:57:24 +0000 -- Deborah Stone, http://prospect.org

Die-Hards
http://prospect.org/article/die-hards
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><blockquote><p><b>Taking Care: Ethical Caregiving in Our Aging Society</b> by President's Council on Bioethics (<i>309 pages, free at <a href="http://www.bioethics.gov">www.bioethics.gov</a></i>)</p></blockquote>
<p>When George W. Bush appointed the President's Council on Bioethics in 2001, he stacked it with conservatives who had already taken stands against abortion, embryonic stem-cell research, euthanasia, and assisted suicide. Nonetheless, I approached the council's sixth report, <i>Taking Care</i>, with an open mind after reading a column in <i>The New York Times</i> by David Brooks, who described the report as “a rebuke to the economic individualism of the right and to the moral individualism of the left,” and an assertion of “mutual obligation.” For once, I thought, conservatives and liberals might find common ground. </p>
<p>The report opens with a rich account of the “caregiving crisis” in America, and it was in these passages that Brooks found what he called a “declaration of dependence.” A healthy old age filled with tennis and travel, the council warns, is more likely to occur in advertisements for assisted living than in real life. In fact, “the most common trajectory toward death is the path of ‘dwindling,' of progressive debility, enfeeblement, and dementia, often lasting up to a decade.” As a result, most of us will eventually depend on others for care. And if we think we're going to be able to count on paid caregivers to make up for our smaller, scattered extended families, think again. Home health care is an unattractive field -- strenuous work, very low pay, and rarely with health benefits. </p>
<p>But after portraying the rise of dependence and the lack of care providers, the council declares all the related policy issues “beyond the scope of this report” and recommends that the president appoint another commission to study them. Instead, <i>Taking Care</i> focuses on choices at the end of life and not surprisingly opposes euthanasia, assisted suicide, and anything that smacks of hastening death for terminally ill patients. What is surprising, however, is just how reactionary the report is. It opposes living wills and aims to reverse the trend toward giving people greater control over their end-of-life care.</p>
<p>That trend goes back to the 1960s, when clinicians and laypeople began to question medicine's drive to keep patients alive as long as possible, even at the cost of painful and ultimately futile treatments. The movement for hospice care as an alternative to dying in hospitals grew from this impetus. In the 1970s came the movement for living wills and other advance directives that allow people to specify their wishes about end-of-life care while they are competent to express them. In 1991, Derek Humphry's suicide manual, <i>Final Exit</i>, became a bestseller because it offered the possibility of dying on one's own terms and of choosing an early death over a prolonged period of decline, incapacity, and suffering. The prospect of a lingering, painful, immobile, demented, or brain-injured phase of life spurred the movement for assisted suicide. Why shouldn't people be able to get help doing what they want to do but can't perform themselves? Or why shouldn't they have help dying more peacefully and less horribly for their families than, say, blowing their brains out with a handgun or jumping off a roof? Together, these four movements -- hospice, Hemlock Society, living wills, and assisted suicide -- represent a powerful democratic expression of the wish for individual autonomy in old age, chronic illness, and disability. </p>
<p><i>Taking Care</i> is a sly package. The preface states, “Our purpose is to provide a humanely rich account of the caregiving dilemmas -- social, familial, and personal -- and to offer some important ethical guidelines for the care of persons who can no longer care for themselves.” But actually, the council's main goal seems to be to prevent people from choosing death earlier than medical technology might allow or nature might ordain. Of all the ethical dilemmas in caregiving, the report zeroes in on one -- whether it is ever moral to hasten death, one's own or somebody else's. The council's answer is “no.” And of all the possible policy questions surrounding care in an aging society, the report addresses only one -- advance directives. The council wants to curb their use and have people cede their decision-making power to their caregivers if they become incapacitated. </p>
<p>In its first “recommendation” -- really a declaration -- the council says, “Euthanasia and assisted suicide are antithetical to ethical caregiving for people with disability … [and] should always be opposed.” The report talks incessantly of the need for “moral boundaries” and for putting certain actions -- helping people to die on their own terms -- “beyond the pale.” </p>
<p><i>Taking Care</i> casts helping people to die as “betrayal” and “abandonment.” (These two words appear seven times in the language under the first recommendation; I lost count in the report as a whole.) A caregiver who helps hasten the death of the person she cares for, according to the council, invariably betrays a trust, and probably does so for her own convenience, to end the burdens of caregiving. In the report, merely considering helping another person to die is always cast as a “temptation.” Quasi-religious language pervades the report, giving it the feel of a sermon. The council introduces its concluding “moral guidelines” by saying, “We highlight three crucial teachings.” Ummm ... of whom?</p>
<p>Nowhere in the report do we glimpse the kind of tragic situations that gave rise to the social movements I mentioned earlier. In those situations, a caregiver who helps carry out a patient's strongly and clearly expressed wishes should be seen as fulfilling a trust, not betraying it. Many people feel that carrying out such wishes requires extraordinary moral courage. When my mother was dying of cancer, even though she had signed Do-Not-Resuscitate/Comfort-Care-Only papers and we'd had explicit discussions about her living will, I feared I wouldn't have the strength simply to sit by her side if she were to have a heart attack. For me, the temptation would have been to call an ambulance. </p>
<p>The most insidious feature of <i>Taking Care</i> is its attempt to stem the patient self-determination movement and to undermine the respect for individual autonomy that has been the bedrock of medical ethics -- and liberalism. What ties together hospice care, the Hemlock Society, assisted suicide, and living wills is the quest for personal autonomy. But, according to the council, “Self-determination has intrinsic limits in a civilized and decent society… Even a competent person's wishes should be limited by such moral boundaries… Our caregivers are not obligated to execute our wishes if those wishes seem morally misguided.” The council sees itself as the guardian of the moral boundary.</p>
<p>The council does make some concessions toward public uneasiness about heroic, last-ditch medical intervention. Its second recommendation asserts that the goal of ethical caregiving is not to prolong the patient's life but “to benefit the life the patient still has,” and its third recommendation states that caregivers have an “obligation to avoid inflicting treatments that are unduly burdensome” or not “efficacious.” Despite these concessions to humane standards of caregiving, however, the council's overriding message is to reject personal autonomy in end-of-life care by limiting the role of advance directives. </p>
<p>Here's what the council recommends. First, people who write living wills should be asked to insert a provision allowing their proxy to override their wishes. Second, we should prefer “proxy directives” to living wills: “Instead of attempting to specify <i>what</i> should be done, advance proxy directives should specify <i>who</i> should make crucial decisions on our behalf.” (emphasis added) Third, ethics committees and judges should give less consideration to “what the incapacitated patient would want done, were he now to be consulted in his own case,” and more to “discerning what the incapacitated patient now needs in order to serve best the ongoing, if dwindling, life he now has.” These decision-makers shouldn't “overvalue ‘precedent' autonomy, or past wishes.” Fourth, state legislators should “be cautious about putting more state authority and resources behind advance instruction directives.” Last, Congress should amend the Patient Self-Determination Act of 1990, which requires hospitals to give patients an opportunity to complete advance directives. The council wants this law to recognize the authority of surrogate decision-makers to act in the best interests of the patient instead of honoring the patients' expressed wishes. </p>
<p>Granted, there are thorny issues in making judgments about what one might want in the future under circumstances one can't possibly imagine. But these recommendations would gut the purpose of advance directives -- to strengthen individual autonomy. I can surmise only that the council wants to limit advance directives because so many people use them to ask for help hastening their deaths. </p>
<p>For me and most of the people I know who've carried care responsibilities for dwindling spouses or parents, the most salient ethical dilemma we face is balancing autonomy and care long before incapacitation. How and when should you override someone's wishes not to have care if you believe her health and safety require it?</p>
<p>But this problem never shows up in <i>Taking Care</i>, which doesn't engage the real-life dilemmas of mutual dependence and obligation with the same moral passion it lavishes on preventing people from exercising control over their deaths. Conservatives are caught in a strange contradiction. They favor policies that would extend costly medical care at the end of life, while opposing the social policies that would enable many people to afford that care. As a policy analyst, were I to be on a presidential council on caregiving, I'd insist on one overriding moral duty -- society's obligation to fund decent care for those who need it and decent pay for those who give it. </p>
<p><i>Deborah Stone, research professor of government at Dartmouth College, is the author of</i> The Disabled State <i>and</i> Policy Paradox<i>. Her next book is</i> The Good Samaritan's Secret.</p>
</div></div></div>
Mon, 20 Mar 2006 05:08:17 +0000 -- Deborah Stone, http://prospect.org

Importing Government
http://prospect.org/article/importing-government
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>As Congress diddles with a Medicare prescription-drug plan, citizens are busing and clicking their way to Canadian pharmacies, where drugs are affordable. U.S. politicians, refusing to control drug prices, are also flocking to Canada for help by endorsing what's euphemistically called "reimportation." But make no mistake: What we are really importing from Canada is effective government regulation.</p>
<p>
Spurred on by tax breaks, patent protections and deregulation, the pharmaceutical industry keeps inventing a bounteous pharmacopoeia that ever fewer Americans can afford. Caught between popular pressure to lower drug prices and industry pressure to preserve profits and free markets, the Bush administration came up with voluntary drug discount cards for senior citizens. But the cards have bewildering eligibility rules and offer only modest savings. As a recent General Accounting Office report found, the cards reflect and perpetuate the massive price discrimination in the drug business. Manufacturers give huge discounts to large purchasers -- Medicare, Medicaid, the Department of Veterans Affairs, HMOs and now discount-card companies -- while gouging the tens of millions of people without effective drug coverage. </p>
<p>
Faced with the default of the U.S. government, Congress wants to import drug policy along with drugs. In July, the House of Representatives, with broad Republican support, passed a bill to allow citizens and retail pharmacies to import drugs from Canada. Unlike the counterpart Senate-passed bill, the House bill doesn't require the Food and Drug Administration to bless imported drugs as safe. What a great free ride for the United States: Let other governments do the hard work of regulating the American drug industry while our politicians keep their blinders on. </p>
<p>
How does Canada keep its drug prices so much lower than ours? First, as Canadian economist Steven Morgan points out, Canadian prices <i>aren't</i> lower than the prices American bulk purchasers typically pay. They are significantly lower than the inflated prices Americans without drug coverage pay. Second, Canada uses government muscle instead of market muscle to keep drug prices tolerable -- for <i>everyone</i>. Canada's federal Patented Medicine Prices Review Board limits prices for new breakthrough drugs to the median rate charged in seven industrial countries, and it pegs other patented drugs to these prices. Third, provincial governments add their own controls. Ontario, for example, permits new drugs into its formulary only if they meet price criteria, while Quebec refuses to pay more for its drugs than the best available price in the rest of Canada. </p>
<p>
Meanwhile, the reimportation fix is spreading like crack cocaine. In July, the mayor of Springfield, Mass., launched a program to encourage city employees to purchase their drugs from Canada, saving the city money. Springfield skirted the federal ban on buying and reselling pharmaceuticals from abroad by having individual employees do their own purchasing. Buoyed by Springfield's apparent success (the Department of Justice went after it in September), other city and state governments began exploring how they, too, might help themselves and their citizens in the face of federal government abdication. </p>
<p>
The reimportation issue creates a curious alliance among liberals, fiscal conservatives, and libertarians. For liberals, opening the Canadian drugstore to American patients is a way to import at least one positive spin-off of Canada's national health insurance: affordable drugs. City and state officials see budget savings in reimportation, and, perhaps, even enhanced local government capacity. Springfield Mayor Michael Albano, fed up with suggestions that he was compromising his citizens' safety with imported drugs, countered, "[A]s mayor, I cannot guarantee my citizens safety with fewer police and firefighters." And while conservative business allies have sided with Big Pharma against reimportation, the pro-import folks invoke free-trade rhetoric, access to world markets and consumers' rights to seek best prices.</p>
<p>
For decades conservatives have been promising that if only government gets off corporate backs, free enterprise will make life better for everyone. Republicans, abetted by New Democrats, have so gutted government that Americans are now dependent on other governments to keep us secure. With prescription drugs, most foreign governments do a better job securing the welfare of their citizens by structuring a healthy relationship between business and the rest of society. In strapping itself to industry and ideology, the U.S. government has literally sacrificed its independence, not only from big business but from foreign rule as well.</p>
</div></div></div>
Thu, 16 Oct 2003 14:36:39 +0000 -- Deborah Stone, http://prospect.org

State of the Debate: Work and the Moral Woman
http://prospect.org/article/state-debate-work-and-moral-woman
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><table align="LEFT" border="0" cellpadding="5" cellspacing="5" valign="TOP" width="150"><tbody><tr><td><font class="nonprinting">WORKS DISCUSSED IN THIS ESSAY</font>
<p><font class="nonprinting"><font size="-1">Kathryn Edin and Laura Lein,<i> <a href="http://www.amazon.com/exec/obidos/ISBN=087154234X/theamericanprospA" target="_amazon">Making Ends Meet: How Single Mothers Survive Welfare and Low-Wage Work</a></i> (Russell Sage Foundation, 1997).</font></font></p>
<p><font class="nonprinting"><font size="-1">Diane E. Eyer, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0812924169/theamericanprospA" target="_amazon">Motherguilt: How Our Culture Blames Mothers for What's Wrong with Society</a></i> (Times Books, 1996).</font></font></p>
<p><font class="nonprinting"><font size="-1">Sharon Hays, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0300066821/theamericanprospA" target="_amazon">The Cultural Contradictions of Motherhood</a></i> (Yale University Press, 1996).</font></font></p>
<p><font class="nonprinting"><font size="-1">Arlie Russell Hochschild, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0805044701/theamericanprospA" target="_amazon">The Time Bind: When Work Becomes Home and Home Becomes Work</a></i> (Henry Holt and Company, 1997).</font></font></p>
<p><font class="nonprinting"><font size="-1">Tera W. Hunter, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0674893093/theamericanprospA" target="_amazon">To 'Joy My Freedom: Southern Black Women's Lives and Labors After the Civil War</a></i> (Harvard University Press, 1997).</font></font></p>
<p><font class="nonprinting"><font size="-1">Elizabeth Perle McKenna, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0385317956/theamericanprospA" target="_amazon">When Work Doesn't Work Anymore: Women, Work and Identity</a></i> (Delacorte Press, 1997).</font></font></p>
<p><font class="nonprinting"><font size="-1">Jennifer Scanlon, <i><a href="http://www.amazon.com/exec/obidos/ISBN=0415911575/theamericanprospA" target="_amazon">Inarticulate Longings: </a></i><a href="http://www.amazon.com/exec/obidos/ISBN=0415911575/theamericanprospA" target="_amazon">The Ladies' Home Journal<i>, Gender, and the Promises of Consumer Culture</i></a> (Routledge, 1995). </font> </font></p>
</td>
</tr></tbody></table><p><font class="nonprinting"><font size="+2">A</font> good woman is hard to find. Either she's threatening a lawsuit because she was denied a promotion. Or she's expecting the taxpayers to subsidize her illegitimate children. Or she's neglecting, even forgoing, children in favor of a career. Or perhaps she wants it all -- work, children, love, leisure, and a flexible schedule -- making her entirely unreliable. </font></p>
<p><font class="nonprinting">If women seem confused about what they want, society is even more confused about what it wants from women. Social philosophers have long pondered the meaning of work and its place in our moral lives: whether it is ennobling or degrading, whether virtue requires hard work or hard work inculcates virtue, or whether, in an ideal world, there should be more or less work. But when the worker is imagined as a woman, philosophy goes into a new key and both the questions and the answers are of a different tenor. About women, the questions concern whether they ought to work outside the home, whether they are capable of many kinds of work, and whether tending home, family, and community count as work at all. </font></p>
<p><font class="nonprinting">Such philosophical questions have always been at the heart of both labor politics and gender politics. Protective labor legislation in the U.S. finally got off the ground with assistance from women's reform groups and widely accepted ideas about women.</font></p>
<p><font class="nonprinting">In 1908, after years of striking down protective labor laws, the Supreme Court finally upheld a state maximum-hours law because the law applied only to women; and women, everyone knew, needed special protections to fulfill the "benign and noble office" of motherhood to which history had "destined" them. Driven by passionate philosophical views, politics in turn piled shifting layers of contradictory norms and imperatives on women, and created a legacy of inconsistent cultural ideas, practices, and policies. </font></p>
<p><font class="nonprinting">Let's start with an old chestnut: Is work stultifying, or is it fulfilling and uplifting? [See Alan Wolfe, "<a href="../34/wolfe-a.html" target="_top">The Moral Meanings of Work</a>," <i>TAP</i>, September-October 1997.] Ask that question assuming the worker is a man and the implied comparison is other kinds of jobs or other ways of organizing work. When the worker is a woman, the implied comparison is with unpaid housework and child care, so the measure of work's effect on the female worker has a different yardstick: Compared to dusting and diapering? Compared to rearing the next generation of citizens, soldiers, leaders, parents? And the question has a different moral valence in a world where many people, both men and women, have thought that women should not do certain kinds of jobs or engage in paid work at all except in dire necessity.</font></p>
<p><font class="nonprinting">Since marriage and motherhood have been treated as moral obligations of women, if not sacred callings, the question of whether virtue consists in performing disciplined, paid work is shaped by the alternative moral vision of women as guardians of the family and hearth. Probe yet one level deeper and the debate for women is over something even more fundamental: How to reconcile work and womanhood? </font></p>
<p><font class="nonprinting"><font size="+2">Y</font>ou might have thought that issue was settled by the feminist movement of the 1960s and 1970s, or by the sheer overwhelming scale of women's participation in the workforce. But Sharon Hays and Diane Eyer, authors of two recent books on motherhood and work, each make a chilling case that in the dominant scientific culture, biology is still destiny. According to the three gurus of child rearing, Benjamin Spock, T. Berry Brazelton, and Penelope Leach, women are supposed to be selfless, nurturing, and caring, not selfish, competitive, and climbing. "In an ideal world," Leach wrote in 1989, "no woman would ever have a baby unless she really knew that she wanted to spend two or three years being somebody else's other half." Moreover, mothering is instinctive and what women really want to do, so all the social pressure on mothers to work is unfortunate. From Brazelton in 1983: "We may be ignoring . . . a deep-seated drive in women—a strong feeling that their primary responsibility is to nurture their children and their spouse. It may be unfair to expect a woman to be the fulcrum of her family; but it has always been so, and women feel it instinctively." </font></p>
<p><font class="nonprinting">Spock, Brazelton, and Leach have not been oblivious to the revolution in women's roles, and they each make stabs at accommodating women who need to work, but less so women who just want to work. The latest edition of Dr. Spock's <i>Baby and Child Care</i> (now co-authored with Michael Rothenberg) uses the gender-neutral language of parents instead of mothers, but gives away the game in suggesting that a parent in the doldrums "buy a new dress" or "go to the beauty parlor" to get some relief. In the revised edition of <i>Infants and Mothers</i>, Brazelton acknowledges that he may have inadvertently "added to mothers' feelings of guilt when they were not able to stay at home throughout the first year" (note that he doesn't say "unwilling to stay"), but then he whipsaws them back into guilt: "an understanding of the importance of their role as mothers . . . should help them see mothering as a goal that is as important as anything they can achieve in their professional lives." Leach, too, pronounces raising a child as "more worthwhile than any other job." It may seem like the child experts are asking very little of women, just a couple of years at home with each child. But the routine interruption of a woman's career required by the experts' advice is so taken for granted that it barely rates a mention. In their assertions of the priority of motherhood over job, they simply assign women different life possibilities than men. Moreover, though the advice may seem slightly archaic in 1997, the books by Spock, Brazelton, and Leach are the three top-selling child-rearing manuals, and they create a reservoir of guilt even among today's women. Just glance through <i>Parents</i>, <i>Working Mother</i>, or <i>Redbook </i>to see how much effort they devote to helping women cope with that guilt. </font></p>
<p><font class="nonprinting">Sharon Hays calls these child-rearing manuals "hesitant moral treatises," though it's hard to see from either her or Eyer's reading of them what's so hesitant about them. They are, as Hays argues, moral prescriptions for appropriate child rearing and womanly behavior. They condemn impersonal, competitive market relations—paid child care, for example—and glorify women's unselfish love, relinquishment of personal goals, and, of course, financial sacrifice. The soul of a woman is a giving soul. If she gives at the office, she'll have less to give at home. In the eyes of child-rearing experts, a woman who works while her children are young, who cannot turn off her career interests and aspirations, or presumably, her desire for the luxuries her earnings afford, is morally stunted. </font></p>
<hr noshade="noshade" size="2" /><h3><font class="nonprinting">WOMAN'S PROPER WORK</font></h3>
<p><font class="nonprinting">One of the ironies of women's history is that the leading white women social reformers of the Progressive Era sought to keep women in the home (despite their own enjoyment of careers as social workers, nurses, teachers, and political activists). They advocated higher wages for men so families could survive on one income, and they submitted every proposed social reform to the litmus test of whether it would encourage women to work or stay home. They opposed child nurseries and employer-provided maternity insurance for working women—policies dearly sought by current advocates of family-friendly employment—because they didn't want to encourage women to work. They established state Mothers' Pensions and later federal Aid to Dependent Children to allow widowed, abandoned, and divorced mothers to be full-time mothers. Black reformers, by contrast, understood the necessity for women to work, and focused their energies on creating the preconditions for women to be wage earners and community leaders. </font></p>
<p><font class="nonprinting">Women's magazines and a new profession of home economists in the early twentieth century provided another source of cultural authority on the questions of women and work. <i>Ladies' Home Journal</i> and its advertisers promoted consumerism as an identity and way of life for women, but, as Jennifer Scanlon argues in <i>Inarticulate Longings</i>, the enterprise was full of contradictions. The editorial line had to be that women were better off—and better women—if they stayed home and consumed all the products that would improve their womanly skills. Yet much of the magazine was written by women, and the magazine frequently recruited female staff, so its editors made an exception for the kind of clerical, sales, and office work it offered. </font></p>
<p><font class="nonprinting">The larger contradiction, Scanlon notes, was that by promising women "better living through purchasing," the magazine made paid work more necessary and more desirable. Scanlon has unearthed how the magazine, against mounting pressures from younger, single, working women, fought a rearguard action to portray the nobility of the homemaker. Writing under the moniker of "A Plain Country Woman," between 1905 and 1918, Juliet Strauss steadily exhorted women to avoid paid employment. "God loves you if He lets you know that the plain is the great. The dish washed, the skillet or the dinner-pot scoured, the hearth swept, the bed made up, these are the great accomplishments." (No one is telling mothers on welfare anything of the sort today.) The magazine's editorial staff and many of its older readers castigated younger women for shunning the "humdrum." Indeed, the Journal's motivating story of societal decline (all social and political magazines have one) was a tale about younger women who hadn't the moral backbone to do society's vital drudge work. </font></p>
<p><font class="nonprinting">The old-fashioned virtues of Kinder, Küche, Kirche might not have held much appeal for the <i>Journal</i>'s more urban and educated readers. For them, the magazine offered the more modern challenge of household engineering: Apply the principles of industrial efficiency to housekeeping, turn it into a science, and thereby professionalize the job of the homemaker. Christine Frederick, a contributing editor much influenced by Frederick Winslow Taylor and his scientific management ideas, promoted a philosophy of housekeeping as both a science and "the greatest business in the world." She advised women to take up home canning "not because you love your family but because it's good business to do so." As a complex science, she believed housekeeping could give women new respect in the eyes of their husbands and a chance to use their brains and their college training. Housekeeping would no longer be drudgery. With just a mite of defensiveness, she vouched: "It is just as stimulating to bake a sponge cake on a six-minute schedule as it is to monotonously address envelopes for three hours in a downtown office." </font></p>
<hr noshade="noshade" size="2" /><h3><font class="nonprinting">THE SACRED CHILD</font></h3>
<p><font class="nonprinting">Consumerism and the home efficiency movement neatly incorporated middle- and upper-class women into the market world without letting them marketize their labor, without, in other words, granting them full economic citizenship. But alongside these movements, according to Sharon Hays, another resolution to the problem of women's place in a market economy was taking shape. What she calls the "ideology of intensive motherhood" offers society an alternative moral mainspring to the market's values of self-interest, competition, and acquisitiveness. </font></p>
<p><font class="nonprinting">The ideology of intensive motherhood, as Hays describes it, has three elements. First, a child should have one central caregiver, and it should be the mother, not the father. Second, children and their needs should be at the center of child rearing; mothers should lavish time, energy, and material resources on their children. Last, the child is sacred, so that comparisons of worth between child rearing and any other activity are impossible and morally forbidden. </font></p>
<p><font class="nonprinting">The principles of intensive motherhood would seem to render mothering especially difficult for women who also hold down paying jobs. Yet, Hays found, working women try to live up to the ideal of intensive mothering just as much as stay-at-home mothers; and they are more likely than stay-at-home mothers to view home and children as more rewarding than work. Why, Hays asks, does intensive motherhood persist as an ideal in a society where most women work? </font></p>
<p><font class="nonprinting">Historian Karl Polanyi, in <i>The Great Transformation</i>, showed how capitalism's transformation of humans into mobile and fungible elements of labor was a violent wrenching, and how people in eighteenth-century English society created communal mutual aid systems to protect themselves against the brutalities of markets in human labor. Hays argues—quite brilliantly—that the ideology and practice of motherhood are likewise a counterforce to the marketization of all of social life and "the last best defense against . . . the impoverishment of social ties, communal obligations, and unremunerated commitments." Sure, the ideology serves the interests of capitalists (women are more likely to accept lower wages in the market if they believe they have more important work to do at home); men (they get maid service on a grand scale so they can dedicate themselves to paid work, and fewer women compete with them for jobs); and native-born middle- and upper-class women (norms of appropriate child rearing have been their claim to superiority over the unwashed masses and immigrants). But, Hays insists, men and women—even working women—accept and operate according to the values of intensive motherhood because doing so is a way of actively rejecting the supremacy of market logic inside the family. </font></p>
<hr noshade="noshade" size="2" /><h3><font class="nonprinting">TO WORK OR NOT?</font></h3>
<p><font class="nonprinting">It's conventional wisdom nowadays that most women have to work out of economic necessity, if not as the sole or primary breadwinner, at least to be able to keep up their standard of living in an economy of declining real wages. Yet, there are still a lot of pressures on women (even mothers without husbands) not to work. For one thing, the organization of work is hostile to family life and responsibilities. Work takes time from family life. Inflexible work schedules make it difficult for parents to respond to unscheduled needs of kids like sickness or emotional crises. Overtime, travel, irregular hours, sudden shift changes, and just too much work—all wreak havoc on child care arrangements, not to mention parent-child relationships. The magazines are full of stories of women committed to combining work and motherhood who get shipwrecked on the shoals of work.</font></p>
<p><font class="nonprinting">The more interesting—and sobering—explanations of why women work or don't are cultural rather than structural. Elizabeth McKenna, who inhabits a world of professional women in relatively high-powered jobs, sees a trend for some of these women to quit because they find their work unsatisfying. In her book, <i>When Work Doesn't Work Anymore</i>, she says that a clash of male and female value systems is what sends these women packing. Women brought up under the influence of postwar feminism still carry the traditional expectations of their mothers and grandmothers; they want to have a personal life, a family, and a community role. They find themselves in a male work system, where work comes before personal life and personal success is equated with work success, and before long they're judging themselves by their boss's standards—attendance, long hours, productivity, and ability to suppress their feelings. By the time they hit their professional stride in their thirties or forties, they're fed up or empty or both, so they reclaim their values by cutting back, quitting altogether, or making a career change.</font></p>
<p><font class="nonprinting">While McKenna calls for the usual panoply of family-friendly policies and reformed fathers, her book is mainly an inspirational how-to guide for people who want to step off the treadmill ("Stop trying to be so successful" and "Live by what you treasure" are two of her "new rules for success"), not a blueprint for reforming work. McKenna's rules might offer some directions for a cultural change, however, and that would be all to the good. Unfortunately, many of her rules are practical mainly for women who have high earning power, big savings, or a husband to fall back on.</font></p>
<p><font class="nonprinting">For Diane Eyer, author of <i>Motherguilt</i>, it's not the attractions of a female value system that draw women out of the workforce, but rather a culture of blaming mothers that drives them out. Blame starts with academic psychologists, some of whom transformed the idea of maternal-infant bonding into a requirement for healthy child development that could be met only by exclusive, maternal, stay-at-home care. This dubious scientific claim undergirds the child-rearing manuals, and they in turn perpetuate the notion of the "maternal sculptress." </font></p>
<p><font class="nonprinting">From the psychologists and pediatricians, the torch of "mother-blame" is picked up by social scientists, politicians, pundits, and lawyers, who blame working mothers for just about everything that can go wrong with children and society. Riots, delinquency, crime, drug addiction, homelessness, even terrorism have been pinned on single mothers, divorced mothers, welfare mothers, teen mothers. Note that women in any of these categories almost always have to work to support themselves and their kids, so they are guilty from the get-go. They can't possibly be the dedicated, full-time, by-the-book, meet-your-child's-every-need mothers the culture reveres. Even as politicians are telling them that they are irresponsible and not entitled to public aid just for taking care of their kids, the scientific establishment tells them they are bad mothers if they go to work. The courts are even more threatening, what with judges occasionally removing kids from a mother on grounds that her work prevents her from taking adequate care of them. </font></p>
<p><font class="nonprinting">The tensions between being a good mother and a good worker affect different social classes differently. A minority of affluent women can participate more intensively in their children's lives or balance work and family with the aid of domestic help. But the vast majority of working women can't purchase their way to balancing work, family, and personal fulfillment. Perhaps this explains why business and political elites have so little sympathy for the cross pressures on working parents. (John Clendenin, chairman of BellSouth, recently told a reporter for <i>Fortune</i> magazine, "People have always had to make choices about balancing work and family. It has always been a personal issue, and individuals have to solve it.") </font></p>
<hr noshade="noshade" size="2" /><h3><font class="nonprinting">THE POOR PAY MORE</font></h3>
<p><font class="nonprinting">The vise of cultural contradictions squeezes low-income women especially hard. Kathryn Edin and Laura Lein set out to learn how low-income single mothers cope with this dilemma and how they choose between welfare and work. Edin and Lein interviewed close to 400 women in 4 cities, half of them "wage reliant" and half of them "welfare reliant." In <i>Making Ends Meet</i>, they came up with a vivid and detailed picture of how women survive in both situations (they all do lots of things they aren't supposed to do by society's laws, rules, and morals) and how these women think about their own moral choices (they all want to be good mothers and good providers, and they struggle constantly to reconcile the two). </font></p>
<p><font class="nonprinting">Wage work actually diminishes poor women's ability to be good mothers. It prevents them from keeping their kids safe. Keeping children out of danger, at least the kinds of dangers that confront poor kids, is something middle-class moms can take for granted while they worry about language acquisition, learning disabilities, and emotional development. Poor mothers worry all the time about their kids' exposure to gangs and drugs, the seductiveness of selling drugs as a way to afford the things they covet, the temptation to skip school, the possibilities of getting pregnant (none of the mothers worried about their sons impregnating anyone, as far as I could tell), the risks of their apartment catching fire. Many mothers go to elaborate lengths to assure their children's safety while they are at work. One held fire drills every night after supper. The best way, and sometimes only way, to keep their kids out of these kinds of trouble is to stay home with them. </font></p>
<p><font class="nonprinting">There are two other big reasons why the responsible choice for a low-income single mother might be welfare rather than work. Welfare provides health insurance for her children, and most low-wage jobs don't. And welfare, however miserly, provides security that most jobs don't—at least it did before 1997. In the jobs available to many low-skilled or unskilled women, such as fast food or home health care, workers can never be sure of getting enough hours to make enough money while they have a job, and they are always subject to firing or layoffs. When insecurity doesn't just mean a little less of something but the possibility of starvation or homelessness, the rational risk-benefit calculation counsels taking the secure but less rewarding option. </font></p>
<p><font class="nonprinting">Why, then, do any poor, single mothers work, especially when, as Edin and Lein demonstrate, wage-reliant mothers usually come out financially way behind welfare-reliant mothers? They work because they accept the dominant political culture. They work, in spite of being materially worse off than on welfare, because when you go to welfare, "they treat you like an animal just because you need a little help getting back on your feet." They work because "it makes you feel good to know you have a job."</font></p>
<p><font class="nonprinting">The low-income single mothers who do work, Lein and Edin found, worked also because special circumstances lowered their costs of working. They had fewer or older children, or free child care from relatives, or day care subsidies, or free or low-cost rent. Several working women had quit better paying jobs for ones with lower pay in order to get better benefits or an employer more sympathetic to child care responsibilities. </font></p>
<p><font class="nonprinting">Very often, a low-income mother has to violate one moral code in order to comply with another. She may have to engage in off-the-books work and lie to the welfare office to be a good provider, or she may have to be a less reliable or punctual worker to be a good nurse to her kids, or she may have to neglect her kids' health and welfare in order to be a good worker. Working and motherhood rope low-income women in a tangle of moral double binds. </font></p>
<hr noshade="noshade" size="2" /><h3><font class="nonprinting">GOOD WORK AND BAD</font></h3>
<p><font class="nonprinting">Laundry may be the epitome of drudge work. In India until quite recently, the lowest social caste of all, even lower than the Untouchables, was the group who did the Untouchables' laundry. <i>To 'Joy My Freedom</i>, Tera Hunter's study of African-American laundresses and domestics in Atlanta after the Civil War, puts dirty work in a wholly different light. In most of the nation, domestic work was all there was for a black woman. In the Atlanta of 1880, 98 percent of wage-earning black women were domestics. But to a people inspired to wrest genuine freedom from a still-paper victory, the kind of domestic work everyone else considered drudge offered important moral possibilities. Work was the means to a precious self-sufficiency and a concrete bridge to the abstract ideal of freedom. Hunter demonstrates just what was at stake in a bundle of dirty clothes: "Black women's success or frustrations in influencing the character of domestic labor would define how meaningful freedom would be." </font></p>
<p><font class="nonprinting">Fought at every turn by their white employers, black domestics made their own rules about work—their schedules, their total hours, their duties, their pay. Through what was quaintly called "pan-toting," they adjusted their compensation to accord with their own sense of justice. Cooks took leftovers, perhaps even a few staples, from the kitchen. Laundresses held onto a client's dressy outfit for an extra week so that they or their children might wear it for a special occasion. Most importantly, domestics claimed their expertise and authority by maintaining control over their methods. One cook, accustomed to the detached kitchens of southern homes, disagreed with her Yankee employer's order to wash the dishes in the sitting room. "[I am] gwine to be cook ob dis ere house, and I'se want no white woman to trouble me. . . . We done claned dishes all our days, long before ye Yankees hearn tell of us, and now does ye suppose I gwine to give up all my rights to ye, just cause youse a Yankee white woman?" </font></p>
<p><font class="nonprinting">In America after the Civil War, the job of laundress was relatively autonomous, and since it was typically done in the woman's own home or at communal washtubs, it was rather more accommodating to women's child care responsibilities and community activities. Perhaps for that reason, laundresses were able to form a trade union, stage a strike, and credibly threaten a general strike of domestics on the eve of Atlanta's 1881 International Cotton Exposition. Laundering was still drudge work, still backbreaking, demeaning, and unremunerative, but these laundresses found their dignity and mustered deep personal and collective strength in washing on their own terms.</font></p>
<p><font class="nonprinting">Much of social theorizing about work holds that different kinds of jobs impart different moral possibilities to workers, and that dignity and control are mostly inherent qualities of occupations. Hunter's depiction of black domestics reminds us that to a large extent, people make their own moral meanings in their work, and that autonomy is something workers can exert, even in the most inauspicious conditions. </font></p>
<p><font class="nonprinting"><font size="+2">A</font>s Hunter shows, even the worst kinds of jobs can be "good work" if people make them part of a struggle to achieve larger ideals. In <i>The Time Bind</i>, Arlie Hochschild explores another way work can be good: It might come to take the place of home and family in people's emotional and moral lives. </font></p>
<p><font class="nonprinting">Hochschild studied employees of a large company that offered about as many family-friendly work options as you're likely to find anywhere. She wanted to solve a puzzle: Why do employees eligible for such options gravitate to those that help them spend more time at work (such as on-site child care, child and elder care referral services, and emergency backup child care) and make scant use of family leaves, part-time work, job sharing, and other programs that enable them to spend more time at home? And why do so many people who otherwise complain about being stressed out by the demands of work and family request so much overtime?</font></p>
<p><font class="nonprinting">For all of the company's lip-service to its "Work-Life-Balance" program, top and middle managers continued to judge employees by their willingness to put in long hours. Even those people most staunchly committed to more balanced lives knew that reducing their work time would be a certain career killer. But why didn't people put up more resistance to the company's pull on their time? For many people, Hochschild found, work has become the place where they get their sense of self-worth, their sense of belonging to a community and of being needed, of giving and getting help, and their sense of growth, accomplishment, and recognition. Many workers—men and women, executives, middle managers, and production line workers—feel they are better people at work than at home, better friends to their co-workers and better fathers or mothers to their subordinates than they are to their children. People often feel more competent, powerful, and appreciated at work than at home. </font></p>
<p><font class="nonprinting">Hochschild doesn't pretend that this reversal of the place of work and home in people's emotional lives describes the majority of the people she studied—at most, perhaps a quarter of them fit this new model. Nor does she explore class differences in how work might operate as a sanctuary and home as a site of oppression. But she does illuminate an important piece of moral terrain, the terrain where people make choices about how much time to spend with their families and how much to spend at work. And while she's out exploring that terrain, she has some disturbing insights about how people experience time at home. </font></p>
<p><font class="nonprinting">For many people the family has been transformed beyond Christine Frederick's wildest dreams. People have Taylorized their relationships, not just their tasks. Time has become the hard currency of the family economy and there's a whole moral code about how family members earn it, save it, invest it, bargain with it, and borrow it on credit. A man goes fishing on Saturday morning knowing he's leaving his working wife with kid duty, but he forgives himself by telling himself he "earned that time" away from family. A woman requests overtime as her way of "sneaking free time" without having to negotiate for it with her husband or kids. A parent avoids spending time with a child by promising an IOU of future time. Much of family time is regulated as scheduled appointments, and time between appointments is felt as squandered. Sadly, many parents know only those aspects of their children's lives that fit into prearranged time slots—the soccer games, dance recitals, school plays, and medical appointments. </font></p>
<p><font class="nonprinting">The "work/family dilemma," as it's called in policy circles, is usually portrayed as something that can be ameliorated by adjusting work schedules and providing services to cover for adults, especially women, while they're on duty at work. Hochschild sees a more profound dilemma. It is not so much about having enough time, using it more efficiently, better balancing demands on one's time, or finding ways to get family tasks accomplished. It's about how we live in time, whether we are present in our acts and our relationships, and who or what we're present for.</font></p>
<p><font class="nonprinting">Hochschild is all in favor of family-friendly policies and she thinks more employers should have more of them, notwithstanding that they may "serve as little more than fig leaves concealing long work-hour cultures." But she's smart enough to see that without a cultural transformation in our values about time, changes in scheduling and assistance with family care cannot change the ways we are deeply diminished by the cult of efficiency. And she's historically grounded enough to know that cultural transformations don't come out of nowhere. They are always the result of political agency, however inchoate and unforeseen the causal chain might be. So, borrowing good ideas from every corner—academics like Juliet Schor and Lotte Bailyn, the "voluntary simplicity" movement, the old "eight-hour day movement"—she drafts a blueprint for a national "time movement." Her ideas won't solve the dilemma anytime soon, but they are the stuff of which important change is made. </font></p>
<p><font class="nonprinting"><font size="+2">I</font>t's hard to find a good woman because we live with a whole series of incompatible moral constructions of women and work. Deeply embedded in our history is a traditionalist view that assigns women a life project of cultivating domesticity. Add an overlay of scientific child rearing that declares women are selfish and will destroy their children if they work while their children are young. Add that pregnant women have been deemed unsuitable workers and women generally have been considered unfit for a whole variety of occupations. Add the claim that women's most important work is motherhood, and then reverse it to insist that motherhood doesn't count as real work. Add a feminist revision that says women are the equals of men and real women ought to prove their mettle in challenging careers and community activities. Reconcile the view that it would be better if poor mothers didn't have to work with a theory that the poor, women included, would benefit from the discipline of hard work. Add a recent welfare reform that tells poor mothers they had better be good providers—but not neglect the kids. Juxtapose the reverence for dull and dirty jobs with the stigmatization of people who do them. </font></p>
<p><font class="nonprinting">The books discussed here all illuminate some aspect of these moral constructions, but they're mostly at a loss about what to do with women. Meanwhile, beyond the bookshelf, we have calls for women to balance their time more effectively (balance is the predominant metaphor in the whole work/family discussion) and calls for employers to balance their needs for competitiveness with respect for employees' family lives. We have calls for poor women to work and work harder, and to be more responsible mothers, and to become mothers less often or not at all, and to put off becoming mothers until they can be good providers. </font></p>
<p><font class="nonprinting">The over-arching lesson of these books is that women are still subject to conflicting moral imperatives. In addition to advocating changes in government and corporate policies, maybe we—experts, philosophers, and activists, men and women—ought to work on making a big cultural tent in which it's possible to be a good woman and a good worker. Maybe we can accept that different women have different needs and different goals, and aim all prescriptions and policies at helping them figure out how to live well by their own moral lights. </font></p>
</div></div></div>
Wed, 19 Dec 2001 19:22:05 +0000
Deborah Stone
How Do I Vote For Thee? Let Me Count The Ways.
http://prospect.org/article/how-do-i-vote-thee-let-me-count-ways
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A close election, goes the old cliché, proves that every vote counts. Election 2000 proved just the opposite: When the election is close and every vote counts, or is supposed to, that's when the voter is the least powerful.</p>
<p>Come to find out, even the most fundamental particle of democracy is loaded with opportunities for partisan manipulation. Never mind outright fraud, like dead people voting. Leave aside campaign spending, advertising, and the way that money buys elections. Forget about the Electoral College and the way it undercounts voters in heavily populated states. And ignore the ordinary human error entailed in counting anything. Just stick to the normal machinery of democracy -- printing ballots, distributing them to voters, marking them, reading them, tallying them, and declaring a result. Every stage of the process calls for human decisions by someone other than the voter. Just because you vote doesn't mean your vote counts.</p>
<p>How does your vote become a vote? It's not as easy as you think.</p>
<p>Registrars decide whether your name's on their list. If they say no, you don't vote, and your vote doesn't count.</p>
<p>If the registrars say you must show two forms of ID when others show only one, you don't vote, and your vote doesn't count.</p>
<p>If they say you haven't filled in your voter ID number on your absentee ballot request, you don't vote and your vote doesn't count . . .</p>
<p>unless some party stalwarts were filling out applications en masse and prevailed on the local registrar to let them fill in your number. Then maybe you vote absentee.</p>
<p>If you live abroad and you don't get your absentee ballot postmarked by Election Day, your vote doesn't count . . .</p>
<p>unless a high state official declares that absentee ballots count if they arrive by the date she sets, regardless of whether the postmark is after Election Day.</p>
<p>If the polls are crowded and you don't get to the head of the line before they close, you don't vote and your vote doesn't count . . .</p>
<p>unless a politician or a judge where you live decides to keep the polls open until everyone in line votes. Then maybe your vote counts.</p>
<p>If you don't punch the little hole quite hard enough, you make a dimpled chad instead of a vote. A machine won't read your vote, so your vote doesn't count . . .</p>
<p>unless someone asks for a recount. Then a team of rival counters will decide whether you voted, depending, at their whim, on how many corners of your chad are still attached . . .</p>
<p>unless the election is really hotly contested, in which case higher authorities might set uniform standards for how to count chads. Your vote might or might not count.</p>
<p>If the recount is done by machine, and if your chad gets knocked off the ballot in the extra handling, your vote counts . . .</p>
<p>But if you only made a dimple on the ballot, your vote won't be read by the machine, and your vote doesn't count . . .</p>
<p>unless there's a hand recount and the counters decide the dimple indicates your intent to vote. Then your vote counts.</p>
<p>If you get a butterfly ballot, you are likely to vote for a different candidate than the one you intended to vote for. Your vote counts, but not the way you meant it to count.</p>
<p>If you vote where there are old machines and all the levers don't work, you don't get to vote for some candidates or some offices. The precinct watchers know about the busted machines, but can't or won't do anything about them. Your vote doesn't count.</p>
<p>If the election is very close in your precinct or county, two or three appointed officials will decide whether to recount . . .</p>
<p>or maybe a judge will decide whether to allow the recount or stop it. Your vote may or may not be counted correctly.</p>
<p>Any of these people might decide whether a recount will be by hand or by machine. If by hand, see the hand-recount scenarios above. If by machine, see the machine-recount scenario.</p>
<p>A state official has power to certify local tallies as correct and, if there's more than one tally, to decide which one will be the official count. Depending on which count she certifies, your vote may or may not count. Oh, by the way, the official with all this power might be a campaign activist, even the state campaign chair for one of the candidates in the election . . .</p>
<p>unless the whole election gets pulled out of state jurisdiction and goes to the federal courts or the House of Representatives. Then who knows if your vote counts, or how?</p>
<p>When an election is tight, the people making all these decisions are likely to be most partisan. Indeed, a close count as the election wears on is bound to stimulate partisan juices. Votes, like physical matter, are subject to the Heisenberg Uncertainty Principle: We can never know the precise measurements of a particle because the sheer act of observation changes the particle's behavior.</p>
<p>Of course we've always known that elections are won with money and power. But votes are supposed to be the great democratic equalizer -- one person, one vote, and all that. Until Election 2000, at least we had a theoretical hope that every vote counted. Now the ballot is just a thin, dimpled veil over the big democratic secret: Some people have lots more power to decide whether and how your vote counts than you do. Big Whigs, Money Bags, and Lawyers control your vote, just like they control everything else.</p>
<p>What should we learn from this election? We could decide that one vote doesn't matter after all and shun the polls even more than we do already. Or we could get mad as hell and clean up the system.</p>
<p>We can't eliminate human observation and decision-making from counting ballots. Even the best machines still make mistakes, but suppose we were to install the best machines in every precinct in the country. People will still have to make decisions about eligibility to vote, absentee ballots, ballot design, whether and how to do recounts, and ultimately, which results to accept as official when there's more than one count.</p>
<p>But we can eliminate some of the opportunities for partisan manipulation. For starters, we can set federal standards for elections, so that issues like voter registration, ballot design, and absentee voting are removed from local discretion. We should also set procedural standards for handling close elections. Yes, everybody's partisan in a close election, but surely we can apply some basic conflict-of-interest rules to the system. The person with the authority to make the most crucial calls -- deciding whether to recount and which count to certify -- ought not be a person who is a campaign activist for one of the candidates or parties. If, as in the case of Florida, these two roles coincide, the person ought to be required to step down. We ought to have something like compulsory arbitration in place for these situations. Bring in people whom both sides can trust and who have a professional stake in being fair.</p>
<p>Unless we establish procedures that persuade voters that elections are fair, democracy is finished.</p>
</div></div></div>
Wed, 19 Dec 2001 19:08:06 +0000 | 139291 at http://prospect.org | Deborah Stone
Ad Missions
http://prospect.org/article/ad-missions
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><font class="nonprinting"><font size="+2">I</font>n a series of television ads sponsored by the Health Insurance Association of America (HIAA), Harry and Louise sit at the kitchen table discussing health reform. The ads are unremarkable, yet they have drawn the wrath of President and Mrs. Clinton. "It's time we stood up and said we're tired of insurance companies running our health care system," Hillary Clinton declared. Ira Magaziner termed the ads "unconscionable." Even some of the association's own insurance-company members took offense. New York Life called for a moratorium on the ads, and in the scuffle, Prudential became the fifth major insurer to pull out of HIAA. </font></p>
<p><font class="nonprinting">While everyone debates whether the ads really do distort the Clinton plan or whether they are good or bad for insurers, the real issue has escaped notice. Insurers' advertising campaigns are aimed at something far deeper than a presidential proposal. They are meant to shape Americans' understanding of the very idea of insurance and to influence our cultural commitments to private enterprise versus public responsibility. Here is where insurance advertising will have its real impact on the kind of health insurance the country will have. </font></p>
<p><font class="nonprinting">For years, commercial insurers have been promoting insurance in a way that makes people think of it as personal savings rather than mutual aid. In the 1970s and 1980s, this campaign of persuasion intensified, as women, gays, and the disabled challenged insurer practices of defining them as "uninsurable" or high risk. To counter growing public sympathy with these groups, HIAA, together with its life insurance counterpart, the American Council of Life Insurance (ACLI), ran a series of ads whose lesson was that fairness means each person should pay for himself. </font></p>
<p><font class="nonprinting">One such ad (reproduced in "AIDS and the Moral Economy of Insurance," <i>TAP,</i> Spring 1990) shows a construction worker on a scaffolding and asks, "If you don't take risks, why should you pay for someone else's?" Another shows a man and woman playing basketball, and asks, "Why should men and women pay different rates for their health insurance?" The choral refrain at the bottom of these ads is always the same: "The lower your risks, the lower your premium." The series explains that people with healthy jobs shouldn't have to subsidize those with risky jobs and that men, who make fewer doctor visits in their middle years, shouldn't help pay for the medical care of women. A Northwestern National Life ad, pushing its managed-care plans, appeals to employers' fears that "even if your employees are healthy, insuring their dependents can be scary." </font></p>
<p><font class="nonprinting">One can never be too careful about those other people who might be living dangerously, just plain sick, or in some way walking insurance claims. All of these ads teach the public that we are not responsible for each other. They subtly draw distinctions between "us" and "others," between people who deserve our sympathy and people who don't. People in different jobs, people of the opposite sex, dependents as opposed to employees--all are painted as different and dangerous. "They" are costly to "us." Why should "we" have to pay more to take care of "them"? Isn't it fair if each person just pays for his or her own risk? </font></p>
<p><font class="nonprinting">For many things, it is fair that we each pay our own way. People contribute to insurance plans, though, precisely because they fear they won't be able to meet the expenses of a big loss, even through personal savings. The very purpose of insurance is to create economic security by pooling large numbers of people together, spreading the risks of major losses to few people among the many. Insurance always entails some subsidy, in effect, from the people who don't incur losses to those who do. When insurers stratify their policyholders into very finely calibrated small groups with nearly homogeneous risks, they eliminate most of the risk-spreading and undercut the capacity of insurance to provide security. </font></p>
<p><font class="nonprinting">Some people argue that pricing health insurance according to individual risk factors encourages people to act responsibly about their health, but there are other ways to provide such incentives without destroying insurance. Insurance should be about community solidarity and mutual obligation. Instead, insurers promote an ethic of self-sufficiency and deafness to the plight of others, even others who build our buildings, raise our children, and otherwise contribute to the commonweal. This politics of selfishness, more than anything else, undermines the possibility of health insurance reform. </font></p>
<p><font class="nonprinting">The alternative, a politics of mutual responsibility, is suggested in a 1991 ad for Social Security. "Can you guess who's eligible for Social Security?" reads the caption under a photo of seven people, not a gray hair among them. The seven include an infant, a child, a teenager, and several adults; people of obviously different ethnicities and races; and males and females. The answer: "everyone." Alas, the Social Security ad is black-and-white and only half-page, not nearly so slick and eye-catching as the sometimes multi-colored, double-page spreads of the big insurers or HIAA. Can you guess who's winning the battle for the public mind? </font></p>
<hr /><h3><font class="nonprinting"><i>Competing to the Death</i></font></h3>
<p><font class="nonprinting">The world of health insurance isn't what it used to be. Until about 1960, health insurance was largely dominated by Blue Cross-Blue Shield, the nonprofit plans that began health insurance in the 1930s. On one flank, they lost to large employers, to whom a 1974 law (ERISA) gave tremendous financial incentive to self-insure. On another flank, they lost to commercial insurers, who made most of their inroads by offering cheaper rates to relatively healthy employee groups and pulling them out of the large community-rated pools insured by Blues plans. Competition between the nonprofit Blues and the commercial companies killed community rating and destabilized the health insurance market. Millions of people are left totally uninsured, temporarily uninsured as they (or their breadwinner) move between jobs, or partially uninsured because their policies don't cover serious illnesses they already have. </font></p>
<p><font class="nonprinting">For much of the postwar period, commercial insurers stuck together and sought their political fortunes via a unified trade and lobbying association, HIAA, while the Blues plans were represented by the National Blue Cross-Blue Shield Association. Beginning around 1990, fissures within the commercial sector became apparent, driven by increasing market segmentation. Ironically, it was probably the specter of health insurance reform that drove wedges into the industry ranks. Small and large insurers would be affected quite differently by various reform proposals, and larger companies were in a much better position to cede some of their traditional insurance functions and become managed-care providers instead. </font></p>
<p><font class="nonprinting">Some companies began to see that the industry's notion of fairness was not widely shared. What risk-rating meant to the general public was that sick people and people with sick kids couldn't get insurance, or couldn't change jobs, or might "prevent" their fellow workers from getting affordable insurance. Political pressure to constrain risk selection and cream-skimming was intense--even President Bush's otherwise very incremental proposal called for limits on pre-existing condition exclusions and on risk-based pricing. In November 1991, Prudential distanced itself from the industry's bear-your-own risk campaign with full-page ads in the <i>New York Times</i>, the <i>Wall Street Journal</i>, and other places. The new ad showed a chest X-ray, captioned, "Because he works for a small company, the prognosis for his fellow workers isn't good either." The ad went on to decry the industry practice of not insuring small companies with one or a few very sick workers. </font></p>
<p><font class="nonprinting">Within HIAA, there was vigorous discussion of insurance reform, as commercial insurers were increasingly held to blame for lack of access to insurance. Since many of the approximately 300 member companies survived by cream-skimming and using low first-year bids to woo companies away from their current insurers, the organization could not muster a consensus on ending these practices. The bigger companies, meanwhile, were changing their market strategy. Instead of fine-tuned risk evaluation and pricing for small businesses, they sought to attract larger employers by offering managed-care plans that promised to contain health insurance costs. They were happy to give up risk-rating in order to gain public goodwill, but the smaller companies felt they were being thrown to the wolves. </font></p>
<p><font class="nonprinting">In mid-1992, one of the largest health insurers, CIGNA, withdrew from the HIAA. On the day after Clinton's election, Aetna and Metropolitan Life announced they would pull out by the end of the year, and on December 31, Travelers left too. Prudential's recent announcement of its withdrawal completes the exodus of the so-called Big Five insurers from HIAA, an exodus that has cost the organization millions in dues and fees. The Big Five--CIGNA, Aetna, Prudential, Travelers, and Metropolitan Life--started their own lobbying organization, perhaps not coincidentally called the Alliance for Managed Competition. </font></p>
<hr /><h3><font class="nonprinting"><i>Don't Call Me 'Insurer'</i></font></h3>
<p><font class="nonprinting">Anticipating an escalating shift to one form of managed competition or another, the Big Five have transformed themselves into managed-care providers in a big way. Aetna alone covers some 5 million people in managed-care plans, almost as many as the 6.5 million insured by Kaiser Permanente, the nation's largest and oldest health maintenance organization. Met Life covers 2.9 million, CIGNA 2.6 million, and Travelers 1.9 million in managed-care plans. Having positioned themselves to be managed-care providers under any managed-competition reform, they are now strong advocates of managed competition. </font></p>
<p><font class="nonprinting">While Harry and Louise mull over the president's plan, and defenders of the plan castigate the ads, the large insurance companies are advertising to win over Harry and Louise's hearts. Within the industry, there is much talk of the need to sell to the "retail customer"--that's you and me--instead of to the employer, because patients are the ones who are going to be choosing among health plans. In publications directed at general audiences, such as <i>Time</i>, <i>Newsweek</i>, and <i>U.S. News and World Report</i>, large insurers' ads rarely trumpet cost saving or financial stability of the company anymore. Instead, they plump the insurer as caregiver, as medical innovator, as savior. </font></p>
<p><font class="nonprinting">In October CIGNA changed its logo from a lifeless rectangle to a stylized tree, and in place of the motto, "We get paid for results," the company now dubs itself "a business of caring." Following a month of ads about the change of logo and the move toward caring, the November 1993 ads in various newsweeklies featured a photo of an infant and two pages of text about the company's dedication to caring for individuals. </font></p>
<p><font class="nonprinting">An Aetna full-color, double-page spread pictures a child's drawing. The text begins: "When Lisa was born, her kidneys didn't work. So we helped Lisa's mother learn to care for her. It saved $200,000 in hospital costs. And let Lisa grow up at home." Another in the Aetna series pictures a wolf, a witch, a pair of beady eyes, and something labeled "the measles." The text begins: "Not all monsters are make-believe. Measles is a very real threat to your child. . . . That's why we're giving millions of dollars . . . to educate and encourage parents to vaccinate their kids." One might think Aetna was a medical research center. Indeed, James McLane, chief executive of Aetna Health Plans, recently told the <i>Boston Globe</i>, "We're not an insurance company; we're a managed-health care company." </font></p>
<p><font class="nonprinting">Prudential is on the same bandwagon. One of its ads shows an anatomical drawing of a heart along with a valentine-like heart, under the headline: "If you ever need one, there's an insurance company that has one." A smaller headline claims credit for "a life insurance innovation that has made eleven life-saving heart transplants possible." Another Prudential ad in this vein shows a test tube and a pen: "It wasn't a breakthrough in medical science that saved his life. It was one in life insurance." Like Aetna, Prudential is asserting its identity as a medical provider rather than an insurer. "We've increasingly moved from being a traditional health insurer to being a manager of care, which means we're in a different business than most of HIAA's members," a spokesman told the <i>Boston Globe</i> by way of explaining the company's withdrawal from HIAA. </font></p>
<p><font class="nonprinting">Thus the very companies that strongly positioned themselves to gain from national health reform by growing large managed-care operations are now conducting a massive public relations campaign to get people to accept the idea of medicine and insurance rolled into one package. Until now, Americans have prized a system where the doctor was your advocate and thought only of your health, not somebody else's pocketbook. That system got us into trouble, and of course, doctors were thinking about pocketbooks all along, not least their own and their hospital's. </font></p>
<p><font class="nonprinting">Still, for two centuries we have believed it's a good idea to insulate medical decision-making from financial concerns. It doesn't matter whether the incentives are to do unnecessary or harmful things to patients or to withhold necessary and beneficial care. There's something precious and right about striving to keep medical decisions untainted by money. If medicine really offers valid ways to relieve illness and suffering, people should receive precisely the care that is medically appropriate for them. Now we are plunging into a system that institutionalizes the blurring of medical and fiscal boundaries by merging the payer and the provider into one entity. One hopes it's not so easy to change our gut instincts, but this is exactly what one stream of insurance advertising is designed to do. </font></p>
<hr /><h3><font class="nonprinting"><i>Choosers and Losers</i></font></h3>
<p><font class="nonprinting">Small insurers can't hope to play the managed-care game in the tough world of managed competition envisioned by the Clinton plan. No surprise, then, that their strategy has been to discredit the plan. Enter Harry and Louise. In the ad that most rankled the Clintons, an announcer intones, "The government is going to force us to pick from health care plans designed by government bureaucrats," to which Louise laments, "Having choices we don't like is no choice at all." This is the ad that provoked Hillary's angry outburst at a meeting of pediatricians. "One of the great lies currently afoot in the country," she insisted, "is that the president's plan will limit choice." </font></p>
<p><font class="nonprinting">Whether the president's plan will limit choice, as Harry and Louise fear, is a matter of perspective: Whose choice, and compared to what? Universal coverage, if it ever does get phased in, would open up choices to people who never had any for lack of health insurance in the first place. The restraints on risk-rating as well as the subsidies for small businesses and the poor would surely increase options for people whose health insurance is now non-existent, tenuous, or partial. The Clinton plan undeniably widens choice compared with the status quo in the sense that many people will gain insurance, and all will have some choice of plans, limited, of course, by what they can afford. </font></p>
<p><font class="nonprinting">Still, the very essence of the Clinton plan is to rein in health care costs by constraining patients' ability to choose their doctors, their specialists, and their therapies--the things that matter once people have insurance. Managed care abrogates many of these choices and gives them instead to managers. In turn, managers choose providers and therapies on the basis of economic criteria and studies of efficacy. True, the world of medical insurance is taking the express train to managed care even without Clinton's reforms, but it's disingenuous to pretend the plan won't accelerate the trend to managed care, or that managed care doesn't limit patients' choices compared with fee-for-service medicine. </font></p>
<p><font class="nonprinting">Thus the HIAA ran the ads and the Clinton team got so exercised because both know the ads are potentially so effective at scaring the middle class. For any reform to happen, the middle class--people like Harry and Louise who have some form of health insurance--need to be convinced to pay more to help those who don't have insurance. And for a managed-competition reform to win, Harry and Louise need to be persuaded to give up a lot of freedom to choose their doctors, and a lot of their doctors' freedom to choose their treatments. The Clinton reform will be lost if the middle class can be whipped up over choice. </font></p>
<p><font class="nonprinting">Meaningful choice would be far greater under a single-payer system in which all doctors and all hospitals would be universally accessible. Single-payer plans, and the plans of most other developed countries, limit their health spending by setting national or regional spending targets, by limiting investment in expensive technology, and by negotiating fee and expenditure limits with physicians and hospitals. </font></p>
<p><font class="nonprinting">In the end, patients in those systems have their choices limited, too, but they don't see or feel the limitations so directly. They can visit doctors of their choice, they don't have to get every referral and procedure authorized by an insurer--or worse, their own doctor who is now a "gatekeeper" for an insurance company--and they hear only about the choices their doctors think are available within the overall budget constraints. Ironically, the greatest foe of a single-payer system--the insurance industry--is invoking in its ads the kind of free choice that would be truly available only if the industry went out of business. </font></p>
<p><font class="nonprinting"><font size="+2">T</font>he Big Five companies know that choice is the hot-button for the public. An ad by their Alliance for Managed Competition lists ten reasons why "Americans will have more choice of health care providers" under managed competition. Among the reasons: people will be able to change doctors within plans, to switch plans, to choose from a menu of plans, and to seek providers outside their plans. </font></p>
<p><font class="nonprinting">In fact, most managed-care plans limit patients to a very small cluster of doctors within the plan's larger roster. Frequently, a patient's menu of specialists is limited to those in the same cluster as his or her primary care physician, and the menu can be tiny, with only one or two specialists in any field. Those plans that do allow use of doctors and hospitals outside the plan charge a lot more for the privilege and may require the patient to get the out-of-plan care authorized by an in-plan doctor. And of course, patients can switch plans only if they can afford to, and usually may switch only during the one month of the year that is defined as an "open enrollment" period. </font></p>
<p><font class="nonprinting">While these companies are telling the general public that managed competition means choice of doctors, they promote their image as tough cops and good cost-controllers to employers and insurance salesmen. In business-oriented publications such as <i>Business Week</i>, <i>National Review</i>, <i>American Enterprise</i>, and <i>National Underwriter</i> (an insurance trade weekly), their ads feature their prowess in keeping down the employer's benefit costs through HMOs, preferred provider networks, utilization review, and managed care--all devices that do limit patients' choices about doctors, hospitals, tests, and treatments. According to an article in <i>National Underwriter</i>, "gatekeeper-model, network-based managed care" is the wave of the future. HMOs that allow members to see non-plan doctors use these "opt-out" provisions only to gain enrollments now and plan to close this option after a few years. Insurers do seem to be willing to play the freedom-of-choice message in whatever key their audience wants to hear. </font></p>
<p><font class="nonprinting"><font size="+2">C</font>hoice may be the political symbol everybody is claiming for themselves in this debate, but there is yet another subtext of ads by both the small and big insurers: private enterprise is better than government. The Harry-and-Louise ads sneer at government bureaucrats, coercion ("the government may force us to pick"), and incompetence ("the government sets a ceiling on spending and says, `that's it'"). One of Prudential's ads pictures an X-ray of the American flag, labeled "Patient: The United States" and captioned "What this country needs is health care that's been given a thorough examination." The ad thus turns a private corporation into the country's doctor, as Prudential sets forth guidelines for sensible reform. A joint ad of the Association of Health Insurance Agents and the National Association of Life Underwriters (both groups of salespeople who would be made obsolete by health alliances) portrays a prescription form for a patient named the United States of America. The doctor (Uncle Sam, M.D.) has prescribed, "Take away competition, take away choice, take away personal service, add 1-800 hotline." Government, it seems, can only mangle the marvelous market. </font></p>
<p><font class="nonprinting">Even when not explicit, the private-enterprise-can-do theme is there in most commercial insurance ads, simply because the companies and organizations are all touting their own strengths. How often do we see advertisements for Medicare or the Veterans Administration health system? </font></p>
<p><font class="nonprinting">If we are to have any kind of health insurance reform, and certainly if we are to move toward universal coverage, we need a public ethic of solidarity. Citizens must believe that relieving illness is a community responsibility, rather than an individual misfortune, or worse, that it's just another taste for goods, to be satisfied with private purchases. We also need public confidence in institutions of governance. Pure market competition is the way to the present morass. Only public institutions--legislatures, agencies, and courts--can restrain for-profit insurers from competing in ways that undermine their ability to provide health insurance for all. On the surface, insurance ads pitch companies, their products, and their preferred political programs. We are the dupes, though, if we hear only their jingles and miss the deeper cultural messages debasing solidarity and public enterprise. </font></p>
</div></div></div>
Wed, 19 Dec 2001 19:04:08 +0000 | 141415 at http://prospect.org | Deborah Stone
Making the Poor Count
http://prospect.org/article/making-poor-count
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><font class="nonprinting">The poverty line is a stock figure of American statistical and political culture. Most of us are at least dimly aware that it is an official level of income used to separate the poor from the non-poor, that the government somehow sets it, that it changes from time to time, and that it is the topic of periodic political fights. Even those who know its arcane details, however, usually forget that our official measure of poverty was once crafted by a real person with a passion and a pencil. </font></p>
<p><font class="nonprinting">The person was Mollie Orshansky. In 1963, she published an article about poverty that set the U.S. government on a new path. "Children of the Poor" ran in the <i>Social Security Bulletin</i>, an otherwise rather dry publication of the Social Security Administration's Division of Research and Statistics. By the time the decade was out, Lyndon Johnson had included her poverty estimates in his 1964 Economic Report of the President, the Office of Economic Opportunity had adopted her methods for determining eligibility for its programs, and the Bureau of the Budget had mandated that the entire federal statistical establishment use her definitions and methods in reporting poverty. </font></p>
<p><font class="nonprinting">The story of Mollie Orshansky and poverty measurement is a case study in what makes political arithmetic persuasive. "I wanted to show what it was like not to have <i>enough</i> money," she told me in an interview last summer. "It's not just that the poor had less money--they didn't have enough. I knew you couldn't spend for one necessity without taking away from another." </font></p>
<p><font class="nonprinting">Orshansky knew whereof she spoke. The child of Russian Jewish immigrants, she grew up in Brooklyn in the 1920s. Sometimes her father couldn't support the family of six daughters, even when he had work. Sometimes she went with her mother to apply for relief. ("If they gave you pork, you took it even though you couldn't eat it, because you didn't want to seem ungrateful.") She learned easily to forgive the other women in line from her neighborhood who lowered their eyes and pretended not to see her and her mother. </font></p>
<p><font class="nonprinting">The core of Orshansky's idea was to use the cost of a nutritionally adequate diet as the basis for a cost-of-living estimate and to calculate a cost of living for families of different sizes and composition. Because she had come to Social Security from the Bureau of Home Economics and Human Nutrition in the Department of Agriculture, Orshansky was eminently familiar with the department's food plans. These plans set forth the quantities of various food groups--meat, bread, potatoes, fats, fruits and vegetables, and so forth--necessary to sustain adults and children, and, using current prices, the department calculated the cost of the food necessary to feed families on these rations. The department calculated several levels of food plans, which were more or less generous but whose names were as opaque as canned-olive sizes: instead of jumbo, colossal, and extra-large, the Department of Agriculture called its food plans liberal, moderate, low-cost, and economy (in descending order), the latter renamed thrifty in 1975. </font></p>
<p><font class="nonprinting">Orshansky came up with 62 family types based on the number and ages of children, and for each she created a farm and non-farm version, making 124 categories. For each of these, she calculated a food budget based on the low-cost plan and one based on the economy plan, giving 248 categories in all. Here's where the pencil came in. </font></p>
<p><font class="nonprinting">Food is not the only thing a family needs to live, but Orshansky chose it as the basis for her measure because there were relatively good studies of diets and their cost. She still needed a way to relate the cost of food to the cost of a family's total needs. A 1955 Department of Agriculture survey had shown that the average family spent one-third of its income on food. Orshansky assumed that families who must spend more than a third of their income on food were probably giving up other necessities--or food. So she made the elegant simplification of multiplying the cost of a family's necessary food basket by three, to arrive at an imputed total income that would put that family on the threshold of poverty. </font></p>
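Orshansky's simplification reduces to a one-line formula: multiply the annual cost of a family's food basket by three. A minimal sketch of that arithmetic follows; the $1,055 food cost is a hypothetical figure, back-calculated so the result matches the $3,165 economy-plan threshold the article cites, not a number from her published tables.

```python
# Sketch of Orshansky's threshold formula: the cost of a family's
# necessary food basket, multiplied by 3, because the 1955 USDA survey
# found the average family spent one-third of its income on food.

FOOD_MULTIPLIER = 3  # income / food spending, per the 1955 survey

def poverty_threshold(annual_food_cost):
    """Imputed total income that puts a family at the poverty threshold."""
    return annual_food_cost * FOOD_MULTIPLIER

# Hypothetical economy-plan food cost for a non-farm family of four,
# back-calculated from the article's $3,165 threshold:
print(poverty_threshold(1055))  # → 3165
```

Because she computed a separate food cost for each of her 124 family types under two food plans, the same one-line formula, applied 248 times, generated the entire schedule of thresholds.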
<p><font class="nonprinting">For a non-farm family of two adults and two children, the low-cost food plan (multiplied by three) yielded an annual income of $3,995, enough to keep the family above poverty, and the economy plan yielded a figure of $3,165. Though Orshansky was dogged in creating 124 family types and income requirements, the media latched on to the archetypal family of four (minus cat or dog) as the magic number that would be called "the poverty line." In fact, the poverty line is the entire set of threshold incomes for different types of families. </font></p>
<p><font class="nonprinting">Now that Orshansky had a way to measure poverty--to decide <i>whether</i> to count each household as poor--she needed a way to find out <i>how many</i> households were poor. For that, she needed to apply her definition to the whole population. With a $2,500 budget from the Social Security Administration, she went to the Census Bureau to get data on family and household income. The bureau had regularly calculated income for families of different sizes, but had never looked at income in relation to family composition: the number of children and adults and the ages of children, and the sex of the head of household. Orshansky used her $2,500 to buy these calculations, and when all was said and done, she concluded that somewhere between 17 million and 23 million children were poor. </font></p>
<p><font class="nonprinting">"Children of the Poor" did for Washington bureaucrats what Michael Harrington's <i>The Other America</i> did for the broad reading public. Harrington had a journalist's flair for revealing the poverty Americans had ignored and for evoking empathy and outrage. Orshansky had the economist's capacity to give quasi-scientific backing to political will. </font></p>
<p><font class="nonprinting">Preprints of "Children of the Poor" were circulated widely before publication and reached Sargent Shriver and others in the Office of Economic Opportunity. Legal advocates for the poor found inspiration in her measures as well. In 1964, the Justice Department called on her to help fight the poll tax; she fashioned the argument that even a $2 poll tax could deprive a poor family of a day's worth of food. In 1965, the OEO adopted her measures; in 1969, after a slight revision in the method of setting poverty thresholds, the Bureau of the Budget directed all federal agencies to use the Orshansky measures. In six years, her methods had become the indispensable tools of the poverty agencies and the standardized procedures of the statistical agencies. </font></p>
<p><font class="nonprinting">To be sure, Mollie Orshansky was not the first to relate poverty measurement to the cost of food. The English town of Speenhamland in 1795 instituted a relief system that made up the difference between a worker's wage and the cost of bread sufficient to feed him and his family. In the early 1900s, several American private charities and public agencies had calculated family budgets and used them to determine relief amounts. But in the early 1960s, everyone else was counting poverty by seeking a single income level for families of every size and description. The 1964 Report of the Council of Economic Advisers, for example, used a figure of $3,000 as a threshold for <i>all</i> families. </font></p>
<p><font class="nonprinting">Asked why her work was so influential, Orshansky usually cites three reasons. First, there was an accident of timing. "It appeared when needed," she wrote in 1988. She credits Lyndon Johnson for being ready to do something about poverty. Second, by linking poverty measures to dietary requirements and household consumption surveys, she provided a seemingly scientific rationale for poverty thresholds. And last, her estimate of $3,165 for a family of four gained plausibility by being very close to the Council of Economic Advisers' "one-size-fits-all" figure of $3,000. </font></p>
<p><font class="nonprinting">There are no doubt additional reasons why her work was so influential. It was innovative, argumentative, and aggressively political. It was arithmetic from the heart. It was meant to tug at our consciences, and it gained its leverage from having its feet planted in poor people's lives. </font></p>
<p><font class="nonprinting">Orshansky insisted that family size and composition made a difference in people's living standard. She refused to think in abstract aggregates of households or families. She knew she "needed to prove" that it cost more to feed more kids, and that it cost more to feed teenagers than infants. "This was the female side of research," she said. </font></p>
<p><font class="nonprinting">Others looked at data showing that large families spent less per person on food than smaller families and called it economies of scale. Orshansky looked at what food these large families actually bought and saw that it was often nutritionally inadequate. "It appears that what passes for 'economy of scale' in the large family may in part reflect a lowering of dietary standards enforced by insufficient funds," she noted in "Counting the Poor." </font></p>
<p><font class="nonprinting">Many economists said that if a family has insufficient funds, the housewife will just stretch the food dollar. But "economizing inside the family doesn't go even-Steven," Orshansky insisted. She explained this comment to me by spinning a hypothetical tale that is only thinly disguised autobiography: Father brings a couple of unexpected guests home to dinner, confident that Mother will just stretch the food, as she is so good at doing. Mother pulls her daughters aside and tells them, "When the meat comes around, you take one small piece and pass the dish on. Same with the potatoes." And when the meat dish comes to Mother, she takes nothing. </font></p>
<p> </p>
<p><font class="nonprinting">Orshansky refused to act as if a family is a family is a family. She looked at families by sex of the head of household, because she thought it would make a difference. It did. Mother-child families had more children on average than what she called husband-wife families. Mother-child families of every size also had dramatically lower incomes than husband-wife families with the same number of children. For example, mother-child families with two children had a median income of $2,390 while husband-wife families with two children had a median income of $6,615. </font></p>
<p><font class="nonprinting">One of Orshansky's "Aha!" moments came when she found that in 1961, the median income for a non-farm family headed by a woman was about $2,340, and 40 percent of these families had less than $2,000 a year to live on. She remembers rushing into her boss's office, interrupting a meeting to proclaim her horror: "These women have to live a whole year on no more than I paid for one statistical table!" One of the women at the meeting chided her not to get so excited: "If you didn't use the money for your tables, Mollie, those women still wouldn't get any of it." </font></p>
<p><font class="nonprinting">Orshansky understood that having a <i>plausible</i> measure was more important than pinpoint accuracy. She dedicated herself to making poverty measures reflect the reality of poor people's lives, but for her, the purpose of research was action, not "producing an intellectual treatise." "I don't think we should sit around debating whether we should do something for 32 million children or 26 million children when we haven't even done anything for 10 million." </font></p>
<p><font class="nonprinting">Orshansky made her measure something people could feel. She related it to food and peppered her studies with images of what it's like to be poor. Washington was talking in terms of a minimum annual income for a family of four--Johnson's 1964 Economic Report of the President cited her $3,165 figure. Orshansky broke down that annual income into the 23 cents per meal per person it allowed, and even some of the food budget-makers in Agriculture were stunned. No one could fix a meal for that, they protested. </font></p>
<p><font class="nonprinting">Orshansky's poverty thresholds were often criticized by the Left for being too stringent, and especially for using the Department of Agriculture's economy plan instead of its low-cost plan. Orshansky, though, was more cognizant than anybody of just how stringent the economy plan was. She publicized the Agriculture Department's own caveat that the economy plan was sufficient only in emergencies and on a temporary basis. She didn't remain abstract in her criticism, either. In "Children of the Poor," she invited middle-class readers to ponder the assumptions of the economy plan budgets: "They assume the housewife will prepare all the family's meals at home. There is no additional allowance for snacks or the higher cost of meals away from home, or meals served to guests. Nor is there extra allowance for the ice-cream vendor or the soda pop so often a part of our children's daily diet." </font></p>
<p><font class="nonprinting">In "Counting the Poor," Orshansky put a day's food allowance into the figurative housewife's pocket and asked readers to share her plight: "The poverty line would allow a housewife with a husband and two kids about 70 cents a day per person for food. For a meal all four of them ate together, she could spend only 95 cents, and to stay within her budget, she must allow no more a day than a pound of meat, poultry, or fish altogether, barely enough for one small serving for each family member at one of the three meals." </font></p>
<p><font class="nonprinting">I asked Orshansky if she had always wanted to do something for children or against poverty. But my question came from a different generation and a different economic experience. "I never thought of what I wanted to be. . . . You didn't plan to be anything--you planned to get a job." </font></p>
<p><font class="nonprinting">She was the only one of the "girls in the neighborhood" to go to high school because when she graduated from eighth grade at 14, she was not old enough to work. Since she had younger sisters at home to support, she would never have gone to college but for two scholarships from Hunter College. She graduated with a major in mathematics and statistics and a minor in physics. Her first position was as a statistical clerk in the New York Department of Health. After a year, she was recruited to the Children's Bureau in Washington, D.C. Then she worked in the Department of Agriculture, and eventually came to the Social Security Administration in 1958. In all these jobs, she did what she was assigned. She was not a crusader. </font></p>
<p><font class="nonprinting">Perhaps she was as much fired up by the times as she herself fired up the poverty warriors around her. She came to work on children in poverty serendipitously, after being dropped from another project that she found pretty boring anyway. Looking at children was her idea. Her first excursions into that $2,500 worth of data resonated deeply. "I don't need a good imagination when I write about the poor," she told me. "I have a good memory." </font></p>
<p><font class="nonprinting">Talking about the difficulties of lifting kids out of poverty, she reflected: "You're asking someone to go alone, to break away in a sense, from the family, and to move to a group where they're not necessarily accepted. If you're 13 or 14 or 15 . . . ," her voice trails off for a moment. "It's difficult to alienate children from what they know. The things parents want for their children--the things they do out of love--make a gap." </font></p>
<p><font class="nonprinting">Perhaps because she knows what it's like to have lived in that gap, she looks back today and says, "I wasn't doing a poverty line--I was doing what was necessary." </font></p>
<p><font class="nonprinting">Orshansky's necessity is still the foundation of today's poverty measurement, though her measures have been much distorted from what she originally intended. Orshansky shares the main criticisms that have been leveled. According to Gordon Fisher of the Social Security Administration, people in the SSA, the Census Bureau, the Council of Economic Advisers, and the OEO all knew by 1967 that food had gone from one-third to one-fourth of the typical family's expenditures. Orshansky proposed using a multiplier of four, but by then, the OEO was already using the old thresholds based on the one-third figure, and it opposed any change. Similarly, Orshansky would not have chosen the economy plan as the basis for an official measure, but the OEO did, and OEO's poverty definition set the terms for the future. </font></p>
<p><font class="nonprinting">Food now constitutes only one-fifth of non-poor families' budgets, but no agencies have been willing to use a multiplier of five. To do so would send the official poverty rate way up and create political pressure for dramatic increases in poverty spending. In <i>The Forgotten Americans</i>, John Schwartz and Tom Volgy show that if the official poverty measure were adjusted to stick more closely to Orshansky's original principles, the poverty rate for 1989 would have been 25.6 percent instead of 12.8 percent. </font></p>
<p><font class="nonprinting">The poverty measure has not kept pace with rising real income standards, even though, since 1969, the thresholds have been adjusted by the Consumer Price Index. As Schwartz and Volgy show, any realistic measure of the minimum income necessary to cover basic necessities is now about 50 percent higher than the official poverty thresholds. In the opening paragraphs of "Children of the Poor," Orshansky foresaw the problem: "As the general level of living moves upward and expands beyond necessities, the standards of what constitutes an irreducible minimum also change." </font></p>
<p><font class="nonprinting">The poverty line illustrates how a measure can take on a life of its own. Agencies become invested in protecting it from change. The current poverty lines based on Orshansky's measures are certainly too low and therefore fail to acknowledge millions of people who lack a decent standard of living. Yet despite numerous attempts during the Reagan and Bush years to manipulate the measures so as to make poverty seem to disappear, Orshansky's concepts protected a large core of the poor from statistical and political invisibility. Her thresholds were there not only when they were needed, but also when they were distinctly unwanted. </font></p>
<p><font class="nonprinting">For all its independence, the poverty line also reflects the life of its creator. Perhaps every social measurement should be designed by someone for whom the job is necessary. </font></p>
<p> </p>
<p> </p>
<hr size="1" /><p><font class="nonprinting">READING </font></p>
<p><font class="nonprinting">MOLLIE ORSHANSKY </font></p>
<p> </p>
<p><font class="nonprinting">"Children of the Poor," <i>Social Security Bulletin</i>, July 1963 </font></p>
<p> </p>
<p><font class="nonprinting">"Counting the Poor: Another Look at the Poverty Profile," <i>Social Security Bulletin</i>, January 1965 </font></p>
<p> </p>
<p><font class="nonprinting">"Who's Who Among the Poor: A Demographic View of Poverty," <i>Social Security Bulletin</i>, July 1965 </font></p>
<p> </p>
<p><font class="nonprinting">"Recounting the Poor," <i>Social Security Bulletin</i>, April 1966 </font></p>
<p> </p>
<p><font class="nonprinting">"How Poverty Is Measured," <i>Monthly Labor Review</i>, 1969 </font></p>
<p> </p>
<p><font class="nonprinting">"Measuring Poverty: A Debate," <i>Public Welfare</i>, Spring 1978 </font></p>
<p> </p>
<p><font class="nonprinting">Gordon Fisher, "The Development and History of the Poverty Thresholds," <i>Social Security Bulletin</i>, Winter 1992 </font></p>
</div></div></div>Wed, 19 Dec 2001 18:48:05 +0000141393 at http://prospect.orgDeborah StoneWhen Patients Go To Market: The Workings of Managed Competitionhttp://prospect.org/article/when-patients-go-market-workings-managed-competition
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><font class="nonprinting"><font size="+3">A</font>fter a generation of deadlock, there is finally a broad consensus that the health system is broken, and a rare political opportunity to fix it. The present system manages to be simultaneously inflationary, arbitrary, cumbersome for providers, and unreliable for consumers. But despite the opportunity for reform, we are on the verge of a disastrous mistake. Increasingly, it appears that the Clinton administration will embrace some variant of "managed competition." The strategy seeks to achieve universal coverage and cost containment while simultaneously avoiding either public financing of the entire scheme or a bruising political confrontation with the health-industrial complex. </font></p>
<p><font class="nonprinting">This logic seems clever, but managed competition is a therapy based on the wrong diagnosis. It doesn't begin to address the deep structural and cultural causes of our health system breakdown. And even though many of managed competition's intellectual creators draw on lessons from failures of past health system reforms, they often don't go far enough in breaking from past orthodoxies. </font></p>
<hr /><h3><font class="nonprinting">PATENT MEDICINE</font></h3>
<p><font class="nonprinting">Managed-competition advocates offer a simple diagnosis of the problem: consumers and providers of health care are not sufficiently cost-conscious, nor have they any reason to be. Since most users of health care pay only a fraction of the cost of their services (in the form of fixed health insurance premiums), they have no reason to shop for the most efficient providers. People who receive health insurance as a tax-free employee benefit don't even feel the true cost of their insurance. And since hospitals and doctors have been the largely unrivaled arbiters of what care is medically necessary, they have every incentive to offer generous, even marginally effective treatments to patients and to stick insurers with the bill. With that diagnosis of the problem, it makes sense to restructure the health care market to create cost-consciousness on both sides. And that's what managed competition aims to do. </font></p>
<p><font class="nonprinting">The failure of patients and providers to factor cost into their decision making is an old diagnosis. It is precisely the one that led to the expansion of Health Maintenance Organizations (HMOs) with a big boost from the federal government in the 1970s. </font></p>
<p><font class="nonprinting">HMOs were designed to receive a flat rate per patient-member, known as a capitation fee; with their budget thus essentially fixed, they would have every incentive to avoid unnecessary care and to provide necessary care as efficiently as possible. They would save money, among other ways, by emphasizing preventive care and by using primary care physicians as gatekeepers to hospital and specialist care. Once HMOs got their prices down, competition between them and conventional fee-for-service plans would force the latter to become more efficient and lower their prices as well. </font></p>
<p><font class="nonprinting">But HMOs failed to cure the disease. The federal government sought to control costs and increase coverage of the poor by stimulating the growth of HMOs and encouraging competition among them and between them and other providers. In 1973, Congress passed a law requiring employers to offer multiple insurance options to their employees if they offered any health insurance at all. The unanticipated effect of this requirement was to divide every employee group into smaller groups for insurance purposes. </font></p>
<p><font class="nonprinting">This was the beginning of the erosion of broad risk-pooling in health insurance. Employers, who had been the primary mechanism for aggregating individuals into large insurance risk pools, were now forced to disaggregate their employees, or at least allow their employees to disaggregate themselves. Once insurers could compete for partial segments of an employer's work force, the competitive strategy of selecting the healthiest risks to insure was greatly intensified. Commercial insurers had already been using medical questionnaires and previous claims records to find firms with healthier-than-average employees, offer them lower prices, and attract them away from Blue Cross-Blue Shield plans. HMOs, while prohibited by the federal HMO law from using these same direct medical underwriting techniques, could still use marketing techniques and benefit design to target healthier groups and could make their services particularly inhospitable to people with expensive chronic illnesses. The one undisputed success of HMOs is that they cut the rate of hospitalization of their members, with no apparent loss of health. It is unclear, though, how much of HMOs' success in controlling costs was achieved by selecting (or being chosen by) healthier groups of patients. </font></p>
<p><font class="nonprinting">In many ways, managed-competition advocates have simply redesigned the HMO strategy to correct its mistakes, without comprehending the underlying flaw in their premise. Understanding that a lone consumer is never a potent bargainer, they would aggregate consumers into Health Insurance Purchasing Cooperatives (HIPCs) to create a strong countervailing force. They would also extend the HMO idea of packaging physicians together with insurance, so that all physicians would be bundled into "Accountable Health Plans" (of course there had to be a new moniker). Some versions would go so far as to forbid physicians from belonging to multiple plans. A new national Health Standards Board would design a uniform benefit package and set standards for the plans, and the purchasing cooperatives would certify plans, negotiate with them, purchase group contracts on behalf of consumers, and monitor the plans' performance. </font></p>
<p><font class="nonprinting">Because its advocates understand all too well how unrestrained market competition in health care can destroy the medical commons, managed competition seeks to use the authority and clout of government to harness the efficiency-producing potential of the market and to curb its community-destroying potential. Health Insurance Purchasing Cooperatives, despite the populist-sounding name, are in fact government agencies, akin to a regulatory agency. They are modeled after the German Bodies of Public Law, a form of quasi-public agency that is chartered by government and invested with the power to implement government programs and regulate relevant parties as necessary to fulfill their public purposes. HIPCs are the "managed" part of managed competition; indeed, many conservatives and insurance company executives oppose managed competition because they understand that the term "managed" is an appropriation of corporate-speak for what is really "regulation." </font></p>
<p><font class="nonprinting">The target of all this countervailing pressure is no longer just the providers, but also insurers, and with good reason. Under the competitive reforms begun in the 1970s, insurers turned competition into a game of avoiding the sick instead of--as economic theory promised--a game of winning market share by producing health services more efficiently and comprehensively. Managed competition in any of its variants is designed, among other things, to thwart the ability of insurers to "cherry pick." All proposals have some limitations on risk selection, such as a ban on pre-existing condition clauses, on discriminatory pricing according to individual health risks and on selective enrollment. To avoid the common practice of repelling high-risk consumers by clever design of benefits, most proposals would set a standard benefit package and require that health plans offer it, and then vary the HIPC payment to the plan according to the risk-profile of its members. </font></p>
<hr /><h3><font class="nonprinting">THE SUPPLY SIDE OF MEDICINE</font></h3>
<p><font class="nonprinting">Like the HMO strategy, managed competition is an effort by people who view health care as a marketplace to make health care function more like a textbook market. In a textbook market, consumer demand drives the system. But a sphere where citizens receive services according to need rather than ability to pay cannot be understood as just another marketplace. Medical care, moreover, is not an area where consumers typically are able to make well-informed choices ("Say, Doc, I think I have a touch of amyotrophic lateral sclerosis") or are guided by what economists delicately call "tastes" ("I feel like having an appendectomy today"). It is hard to credit the claim that soaring medical costs and plummeting access to care are being driven by a consumer failure to choose wisely. </font></p>
<p><font class="nonprinting">Health care costs are "pulled" by capital and labor, not "pushed" by consumers. A famous law of health economics, coined by Milton Roemer in the late 1960s, holds that the demand for hospital beds expands to fill the supply. In today's cost-conscious, cost-shifting environment, where hospital CEOs scramble to boost occupancy rates, the law seems archaic. But it contains a deeper wisdom: give a boy a hammer and he'll find things that need pounding. </font></p>
<p><font class="nonprinting"><font size="+3">M</font>edical technology is the great hammer of the health care system. Technology broadly defined--not only MRIs and lithotriptors but also expensive drugs, organ and tissue transplants, expensive diagnostic tests, and experimental treatments such as genetic therapies--is where physicians and hospitals get their financial, psychological, and status rewards. Much medical research will lead to lower health care costs by preventing disease or inventing simple and cheap cures. But there is much research that leads only to expensive diagnostic tests for diseases we can't treat anyway, or to what Lewis Thomas calls halfway technologies--ones that don't cure but require months or years of extremely expensive life-prolonging therapies. Trying to hold down medical costs while massively funding this kind of medical research is like trying to curb smoking while subsidizing tobacco production. We would do better to choose research projects more carefully, thus enabling us to redistribute the effective technologies we already have. </font></p>
<p><font class="nonprinting">Another key to curbing medical costs is controlling capital investment in medical equipment. A government agency could do a lot more fiscal good by using its expertise and muscle to guide capital planning than by generating reams of information to set payment levels and inform individuals about health plans. Restrictions on physician ownership of capital equipment are important too. Physicians who own their own equipment refer patients more frequently for procedures using just that equipment than do non-owning physicians. And until very recently, as Marc Rodwin reports in his new book, <i>Medicine, Money and Morals</i>, much financing of physician-owned medical facilities was structured to ensure that participating physician-investors referred certain quotas of patients. Under these circumstances, expecting consumers to police and discipline medical markets is naive to say the least. Rather than relying on price signals to consumers, we should restrain incentives to the makers and users of medical technology. </font></p>
<p><font class="nonprinting">Instead of trying to teach the middle class the "true" costs of health insurance for everyone, the Clintons and their appointees might have a lot more impact on costs by using their moral authority to change the culture of prolonging life at any cost. This is a touchy subject, and not one that will go down well with the religious right and the anti-abortion crowd. Nevertheless, we are spending so much on health in part because we are approaching the technical capacity, in the words of one observer, to "keep a severed head alive," but we lack the moral capacity to stop ourselves. </font></p>
<p><font class="nonprinting">Finally, we have to look realistically at the monetization of caring work as a cause of expenditure growth. One big--but unacknowledged--source of cost increases in health care is that we are now paying for labor we used to get for free or for cheap. The costs of home care were not even a category of health expenditure when fewer women were in the work force and more provided full-time family care for free. Hospital costs now include decent salaries for nurses; gone are the days when hospitals ran on the cheap, and frankly exploited, female labor. These are new costs that we cannot and should not suppress. </font></p>
<p><font class="nonprinting"><font size="+3">O</font>ur health system creates voracious beasts in the forms of medical technology and medical research. Then we create jobs--and whole industries--to tame the appetites of the beasts: utilization reviewers, case managers, technology assessors, information systems designers and managers, training and accrediting organizations for utilization reviewers and case managers, and all the rest. There is a parallel development in insurance, with its proliferation of occupations in underwriting, product design, sales, claims management, information systems, and training and accrediting organizations for each occupation. </font></p>
<p><font class="nonprinting">Health care expenditures are eating up our GNP and raising the cost of American goods, but in a perverse way the health sector is the strongest part of the economy. Jobs in health care grew 43 percent between 1988 and 1992, while jobs elsewhere in the private sector inched up a paltry 1 percent. There's a nasty double bind here. Every health expenditure is income to somebody employed in the health sector, or to somebody who makes things or does things for the health sector. We can't restrain health care costs without putting a lot of people out of work. </font></p>
<p><font class="nonprinting">The good news is that many of the people who would be displaced by genuine health care cost controls--the cost-controllers and the insurance armies, as well as some medical researchers and research administrators--are highly educated and well-positioned to apply their skills in other productive occupations. They are the very "symbolic analysts" Labor Secretary Robert Reich has extolled as the adaptable labor force of the future and the key to global competitiveness. </font></p>
<hr /><h3><font class="nonprinting">CONSUMER CHOICE OR PRICE-RATIONING</font></h3>
<p><font class="nonprinting">In theory, consumers shopping around among plans will yield efficiency, cost-control, and quality. But that theory falsely assumes equal buying power; the approach under consideration will give most consumers access only to a stripped-down, basic plan. Supposedly, consumers with "tastes" for enhanced services beyond the basic package will satisfy their "preferences" by shopping among more costly plans. </font></p>
<p><font class="nonprinting">All it takes is money. </font></p>
<p><font class="nonprinting">The language of consumer preference is grossly misleading in health care. The whole argument disguises a call for distributing health services according to ability to pay--the very thing we are trying to get away from. To be sure, any health reform, including a Canadian-style single-payer system, would have room for the affluent to buy services not covered in the universal program. But managed-competition schemes would likely have a much larger proportion of health care defined as extras and limited by ability to pay. </font></p>
<p><font class="nonprinting">Managed-competition proposals envision that everyone will be funded, in one way or another, to purchase the lowest cost plan. Some people, such as the self-employed, might pay the entire sum themselves; some, such as workers in large companies, might have their employer pay a premium on their behalf (which in some views comes out of the workers' wages anyway); and some, such as the poor, might have the government pick up the tab in the form of subsidies or tax credits. Most proposals, whether or not they would require employers to provide at least the standard package, would tax any employer contributions above the level of the lowest cost, standard package plan. Everyone would have the option of purchasing a higher cost plan by paying the added costs themselves, probably in fully taxed dollars. Nominally, all Accountable Health Plans would be required to offer a uniform benefit package covering medically necessary hospitalization, physician services, and so forth. In practice, the benefit package will vary, because higher cost plans will define "medically necessary" more liberally. </font></p>
<p><font class="nonprinting">The imagery in managed-competition proposals suggests that medical care can be divided into a core of necessary items--the standard benefit package at the lowest price--and a periphery of incidentals. What is an incidental? The same few examples keep cropping up, and they are very telling. Consumers, we are told, will choose among plans according to their tastes for aggressive or conservative practice styles, freedom of choice of provider, length of waiting time for non-emergency appointments, and level of amenities. On close inspection, very little in this periphery of incidentals is really a matter of taste. </font></p>
<p><font class="nonprinting">In the managed-competition literature, the term "aggressive practice style" is always contrasted with "conservative" practice style and seems to be a code word for physician disposition to intervene quickly and early when therapies, or even tests, might not be necessary. The very placement of "aggressive practice style" in the implicit category of unessential incidentals <i>defines</i> this kind of style (whatever it is) as non-essential care. In practice, there are many medical situations when an aggressive style is called for--when, if money were no bar, most consumers would want their doctors to run diagnostic tests and be prepared to treat immediately. It is hard to imagine, for example, that any woman would choose to wait a month or two to see how a breast lump develops instead of having a mammogram immediately. It is easy to imagine a health plan, on the other hand, that provides more cost-efficient care by refusing early mammograms, since most lumps are benign. (Far-fetched? Lisa Belkin of the <i>New York Times</i> tells of a woman who thought she had Lyme disease--and in fact did--but whose managed care plan refused for many months to authorize the diagnostic test for it. The plan insisted it did not deny testing for financial reasons. The world of managed care is full of such tales.) </font></p>
<p><font class="nonprinting">Under any version of managed competition, the more money you have to spend, the more likely you are to receive early diagnosis and aggressive treatment when those are appropriate. Of course, all the proposals give lip service to the need for close monitoring, but the incentives in the low-cost plan for "conservative" style and undertreatment are overpowering. And there are so many medical situations requiring nuanced judgment about appropriate care that effective monitoring is unlikely. </font></p>
<p><font class="nonprinting">The same sort of argument can be made for waiting time for non-emergency appointments. The rub is all in how each plan defines an emergency. Plans will have lots of discretion, and in any case, the triaging will be done, as now, mostly by receptionists, not medical personnel. Ideally, we would like to think that waiting time for an appointment would be determined by medical criteria. In a system where waiting time is defined as a matter of consumer taste, however, if there is a great difference among plans in average waiting times for physician appointments, managed competition will reinforce the ability of people with more money to buy their way to quicker visits. For most people, a long wait will not harm their health, but the poor will bear a disproportionate share of the harm done by the cost-saving mechanisms of managed competition. </font></p>
<p><font class="nonprinting">The danger is that the basic, mandated plan will be a fairly minimal one, and the new world of managed competition will have insurers and other plan sponsors marketing blue-chip supplemental plans to wealthier and more attractive sub-markets--exactly the kind of inequitable and inefficient fragmentation universal health care should eliminate. </font></p>
<hr /><h3><font class="nonprinting">THE DOCTOR IS OUT</font></h3>
<p><font class="nonprinting">Freedom to choose one's doctor is probably the aspect of medical care people care most about, yet managed-competition proposals define it, too, as a luxury to be purchased by those who can afford it. Managed competition offers choice in one very restricted sense--the chance to choose a health plan once a year, assuming one has the disposable income to purchase anything other than the lowest cost plan. Proponents argue that consumers can make much better (well-informed and rational) choices about medical care when they are not sick and when the trade-offs between costs and benefits are presented in the abstract instead of when they are facing a particular disease. Most patients, by contrast, would probably consider their decisions better, more informed, and more comfortable when they are able to decide contextually rather than abstractly, knowing as much as possible about the specifics. In any case, most citizens care much more about being able to choose their doctors and having some say in therapeutic choices when they are ill than about being able to choose their health insurance plan. The freedom of choice that managed competition offers is simply not worth much to most of us. </font></p>
<p><font class="nonprinting">Many existing managed care plans restrict patients' choice of doctors more than meets the eye. Though they advertise thousands of participating physicians, each patient is limited to a very small network of specialists in the same "referral circle" as his or her primary care physician. That circle may include only 100 doctors out of the thousands of participating physicians in a metropolitan area. Within a referral circle, there are usually only about 5 to 15 doctors per specialty, and in some circles, there are specialties with just one doctor whom the patient is allowed to consult. When families move into managed care plans, all members have to be in the same plan, or even referral circle, so some of them will have to sever links with personal physicians to consolidate their care. Though a plan touts its large network of member hospitals, patients are usually required to receive all their care in one hospital. </font></p>
<p><font class="nonprinting">When "amenities" truly refers to just incidental niceties--private hospital rooms, gourmet meals--it is reasonable to ask consumers to pay extra. But in a climate of scarcity, we had better all be watching to see what the health policy architects slip into the category of amenities. </font></p>
<hr /><h3><font class="nonprinting">HOW TO FRUSTRATE SMART SHOPPING</font></h3>
<p><font class="nonprinting">Consumer cost-consciousness in a competitive market is supposed to drive down costs, but managed competition isn't even true to its own theory. Instead of separating out the different features of health care and health insurance and truly giving consumers a chance to shop, managed competition bundles everything together in large packages. Consumers will not be able to mix and match different styles of medical care, waiting times, doctors or other personnel, and financing features. All of these will be lumped together in a very few take-it-or-leave-it plans. It is as if we could choose between blue four-door sedans with air conditioning and standard transmissions, or red two-door compacts with power steering and automatic transmissions. With that kind of "stickiness" of the goods, consumers can't possibly send useful information signals to the market about their preferences on individual features. </font></p>
<p><font class="nonprinting">If we have learned anything from two decades of competition in health care, it is that providers and insurers will compete on everything but better information, better service, and more efficient care. Employers, as payers of health care, have relied chiefly on shifting costs back to employees and cutting benefits. Insurers have relied heavily on selecting healthy customers and using pre-existing condition clauses and other small print to wriggle out of payment. Managed-competition advocates understand this problem but don't make clear just how large a regulatory task is required to prevent it. If we allow insurers to offer different benefit packages, we have to ensure that the benefits don't become marketing strategies to attract only healthy customers. If the purchasing cooperatives will adjust payments to health plans according to the risks of their members, they need much of the same underwriting or claims information that insurance companies collect for their risk assessments. Such risk adjustment will be enormously intrusive; it will require that personal medical information be semi-public, available to a government agency, and permanently attached to each person's record. For all the expensive monitoring, plans will still find ways to select their risks and optimize their payments. </font></p>
<hr /><h3><font class="nonprinting">INEQUALITY IN YOUR FACE</font></h3>
<p><font class="nonprinting">Americans want universal health insurance, and they consistently say they are willing to pay for it. The Clinton administration has so far done a smashingly good job of creating a vision and enlisting both ordinary citizens and business leaders to support it. Offering the country managed competition would wreck that alliance. </font></p>
<p><font class="nonprinting">Budget concerns ensure that the basic package in managed competition will be stripped down, leaving much of what people consider important to be purchased with tax-eaten dollars. Managed competition will remove the choices people care about. Promised a slightly restricted choice of physicians, they will be furious when they find out how binding the constraints really are. They will have very little choice among specialists once they choose their primary care physician. If managed competition is set up the way most of its proponents advocate, with physicians allowed to belong to only one plan, citizens will find they have no choice among plans if they want to stay with their current physician, and no ability to choose a physician they know if they want a plan with different features from the one their physician is in. They will be even more furious when they are rendered powerless to choose among diagnostic and therapeutic options because of restrictions of their plans. There is already a huge undercurrent of resentment among members of existing managed care plans. Imagine the rumble when all but the rich are squeezed into managed care. </font></p>
<p><font class="nonprinting">The worst political mistake of managed competition is that it will put flashing neon price tags on levels of health care, extending, fine tuning, and making utterly explicit the price rationing we now have and want to escape. Americans want universal coverage. They are asking for a restored sense of community. Although they generally believe money should be able to talk, they think access to the House of Medicine that is our collective creation should not depend very much on wallet size. Managed competition is a grand betrayal. In another decade, medical care and its administration will still be eating up the GNP, leaders will still be blaming the people for their undisciplined shopping habits, and at best we will have nominal universal access with severe price rationing for anything but a place in the cellar. Even if the Clinton Health Reform Task Force is merely using an ambiguous label for tactical advantage, the perverse logic of the underlying model will inexorably hijack health policy. </font></p>
</div></div></div>
Mon, 26 Nov 2001 21:04:20 +0000
141484 at http://prospect.org
Deborah Stone
Sex, Lies, and The Scarlet Letter
http://prospect.org/article/sex-lies-and-scarlet-letter
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><font class="nonprinting"><font size="+2">O</font>nce when I was about nine, I wandered into my aunt's kitchen during Thanksgiving to find all the grown-up women whispering, hugging, and crying. When they explained to me what was going on (Auntie Cookie had just found out she was going to have another baby and they were crying from happiness), they confirmed a story I already knew--the one about how babies just happen, and women to whom they happen are considered very lucky. How else to explain the crying? A few years later, when my friend Phyllis told me her parents were "trying to make" another baby, I had the crashing revelation that human actions create babies. </font></p>
<p><font class="nonprinting">Reading about unwed mothers and welfare these days, I can't help but think the nation is in need of a crash course in sex education. According to an article by Barbara Dafoe Whitehead in the <i>Atlantic Monthly</i> last October, we've got lots of public school sex education programs, but they're teaching the wrong thing. Under the guise of "family life education," she wrote, the programs are just ideological bearers of the sexual revolution of the sixties, encouraging anything-goes sexuality for young people. Whitehead thinks there should be more straight talk about the downside of teen pregnancy and illegitimacy, especially for girls, because "girls bear the burdens and penalties of nonconjugal sex." </font></p>
<p><font class="nonprinting">But Whitehead herself betrays some of the mindset that generates this unequal burden. She describes all the bad consequences for teenage girls who "get pregnant," "find themselves pregnant," and "experience pregnancy." The boys and men in this article "have sexual encounters" and "experiences." She thinks (quite sensibly) it's important to understand what motivates teen girls to get pregnant, but utters not a word about what motivates their male partners to want or not want to make children.</font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">W</font>hitehead's article got me thinking about that great American textbook of sex education, <i>The Scarlet Letter</i>. Even those who haven't read it know its central image: An adulterous woman is forced by her community to wear a scarlet letter "A" as a sign of her depravity and a foreboding lesson to other women. </font></p>
<p><font class="nonprinting">Nathaniel Hawthorne's parable about hypocrisy and punishment is set in the early days of Puritan America, two centuries before its publication in 1850. It opens with one of the most memorable images of social stigma ever printed on the page: Hester Prynne is led from a prison door, carrying an infant and wearing a scarlet "A" she has meticulously embroidered. She mounts a platform in the public square--Hawthorne pointedly calls it a "scaffold"--where she is reviled and remonstrated by the townspeople. Although the punishment might have been seen as mere ridicule in his day, Hawthorne says, in Puritan New England it was "invested with almost as stern a dignity as the punishment of death itself." And indeed, adultery was punishable by death in those days, had the officials really wanted to throw the book at her.</font></p>
<p><font class="nonprinting"><i>The Scarlet Letter</i> is much more than a metaphor for searing stigma. Hester Prynne and her daughter Pearl are the archetypal unwed mother and illegitimate child in American social history. Before the story begins, we learn, Hester had been married in Europe to a dried-up, pretentious, academic sort who sent her ahead to America, intending to follow. He got hung up pursuing his fruitless studies, and after a couple of years, everyone, including Hester, presumed he lay dead at the bottom of the sea. Hester and her minister--yes, Puritan minister--Arthur Dimmesdale, had fallen in love and had relations. Hester had Pearl. Mr. Dimmesdale had a crisis of conscience. What Mr. Dimmesdale never does have as the story progresses is the courage, or necessity, to own up to his adultery or his fatherhood.</font></p>
<p><font class="nonprinting">While Hester is forced to stand for hours before the censorious community, Governor Bellingham directs Dimmesdale to use his priestly persuasive powers on Hester to make her name the child's father. According to the notes in my edition, Hawthorne's prototype for his fictional governor and upholder of the law was a real Massachusetts governor of the same name. In 1641 Bellingham married a woman already betrothed to a friend of his and performed the ceremony himself in a rush job, so as to avoid going through the required publication of marriage intentions. When asked to step down from the bench during an inquest about his breach of law, he refused. </font></p>
<p><font class="nonprinting">Thus, Hawthorne shows us "a people amongst whom religion and law were almost identical," inflicting a punishment equivalent to death on a woman, through the offices of their minister and their governor, each of whom has transgressed the same laws for which Hester is to be banished from human society. </font></p>
<p><font class="nonprinting">Hester pays dearly for her and Dimmesdale's love. Unlike him, she cannot conceal the fact of her adulterous sex because she cannot hide her pregnancy. She cannot flee from the fact of her motherhood because the child is in her and issues from her. And she cannot escape parenthood, because no one else is going to take care of the child and child abandonment is frowned upon. Dimmesdale pays, too, but his is a very private penance. He is eaten by guilt and dies near the end of the novel. </font></p>
<p><font class="nonprinting">What about Pearl? She is marked from the get-go, presumed by the Puritans to be the child of the devil. Even Hester absorbs the social view that nothing good can issue from a woman who was in a state of sin when the child was "imbibing her soul." So naturally, Pearl turns into a child who "cannot be made amenable to the rules." She is wild and seems to be part animal, part demon, all of which is to say she is definitely not fully human. She does eventually grow up to lead an apparently prosperous life--but only by escaping from her home and living in England.</font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">S</font>o it is today with what is now called the illegitimacy problem: The stigma of nonmarital sex, the identity as biological parent, and the work of child rearing almost always fall on the women. In the absence of an omniscient narrator, the fathers often remain invisible, at least to the public eye. Like Pearl, illegitimate children are regarded as predestined to a life of waywardness. Now, however, we cite statistical probabilities instead of the devil as the cause of their propensity to crime, drug abuse, dropping out of school, going on the dole, and having more out-of-wedlock children.</font></p>
<p><font class="nonprinting">Many conservatives seem to have adopted <i>The Scarlet Letter </i>as a primer on what to do about illegitimacy. Mothers of illegitimate children should be heaped with scorn for neglecting, abandoning, and abusing their children. They are irresponsible and immoral for "getting pregnant," as though they did it all by themselves. (In Hawthorne's Puritan Salem, at least, Dimmesdale would have been held equally responsible and immoral, had he been found out.) The way to deter people from having illegitimate children is to do what Salem did to Hester: prevent the mothers from receiving any social succor. Thus, the Republican Personal Responsibility Act would eliminate AFDC eligibility for young women who bear children outside marriage, and it would preclude any additional monies for women already on AFDC who bear another child. </font></p>
<p><font class="nonprinting">The double standard of <i>The Scarlet Letter</i> still prevails. Both the Republican and Democratic versions of welfare reform pay lip service to holding fathers more accountable, but both treat mothers far more harshly. Mothers on AFDC will be required to work at paid jobs, anywhere from 18 hours a week (Clinton's Work and Responsibility Act) to 32 or 35 hours (the Republican Personal Responsibility Act). Both plans, like Governor Bellingham, talk tough about establishing paternity. Mothers will have to cooperate with the state in identifying fathers and establishing paternity. The Republican bill, strikingly, does not add a thing to existing child support enforcement tools or provisions. Neither bill sets up work requirements, much less job programs, for fathers.</font></p>
<p><font class="nonprinting">So beyond identifying more fathers, what will welfare reform do to men? At its toughest, it might succeed at getting the courts to order more child support, but whether it will get more money to kids is another question. Nothing in the Republican reforms creates more jobs, more job stability, or higher wages for men. (States would, however, be allowed to use money they would otherwise spend for food stamps to subsidize private sector jobs.) And under the current system of child support enforcement, which both welfare bills would merely extend, all but $50 of any child support paid by fathers goes to the state, not to the mother or children. No matter how much fathers contribute under this system, the financial position of their kids does not improve by more than $50 a month. And perhaps even more important, nothing in the contemplated welfare reforms is addressed to increasing fathers' involvement with their kids. Because most of the father's payments go to the state, the system doesn't even give dads the psychological satisfaction of helping their kids. </font></p>
<p><font class="nonprinting"><font size="+2">P</font>art way through <i>The Scarlet Letter</i>, Hester and Pearl have one of those quintessential conversations about where Pearl "came from" that might have been a lesson in family values, had Hester not felt the pressing need to protect Pearl's father. Hester drills Pearl: "Tell me then, what thou art and who sent thee hither?" Pearl demurs, so Hester offers the correct answer: "Thy Heavenly Father sent thee." Pearl is having none of it: "He did not send me. I have no Heavenly Father." She begs her mother, "Tell me, tell me."</font></p>
<p><font class="nonprinting">Then Hester gets wind of a plan to take Pearl away and put her in the care of the state. Some of the good Christians of the town, it seems, had concluded that "if the child were really capable of moral and religious growth . . . then surely it would enjoy all the fairer prospect of these advantages by being transferred to wiser and better guardianship than Hester Prynne's." Hester takes Pearl to the Governor's mansion to plead her case. There she has an audience with Governor Bellingham, Arthur Dimmesdale, and another minister named Wilson. </font></p>
<p><font class="nonprinting">Bellingham commands Wilson to determine whether Pearl has had a Christian upbringing. Wilson quizzes her: "Canst thou tell me, my child, who made thee?" Pearl knows the correct answer as well as she knows the rest of the catechism, but she also knows it isn't true. In a moment of impish perversity, she says her mother plucked her from a rose bush. That does it. She is obviously "unsocialized," as the current rhetoric would have it. She will be taken from Hester and put in care of the state. And here's the pain of it: The very lie that Hester has maintained to preserve the authority of church and state and to protect the good name of Dimmesdale becomes the source of Pearl's resistance and the evidence of Hester's unfitness as a mother. </font></p>
<p><font class="nonprinting">Dimmesdale, true to character, remains silent during this little child welfare hearing--until, that is, Hester rises up in a fury and commands him to speak on her behalf. He has the gall to bring the authority of the church down on Hester once again, this time to her advantage. He speechifies about God's purpose in sending this "child of its father's guilt and its mother's shame" as retribution and even a "torture" to the mother, to remind her of her sin. Hester gets to keep the kid because the church, the minister, and the dad all say punishment is good for her soul. </font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">T</font>he great lie here--that bad children were created by bad mothers and that fathers and social policies bear little responsibility--is the same lie that justifies taking children away from their mothers. It's bad enough that these unwed mothers take support from the government. (Never mind, as Katha Pollitt said so eloquently in <i>The Nation</i>, that AFDC is merely replacing the cash support the fathers ought to be providing.) But many of them turn out to be bad mothers to boot. Even with all the money we taxpayers give them, they still don't feed their children properly, supervise them, discipline them, or give them quality time. Their kids would be better off in the care of the state. Better an orphanage than a neglectful and abusive mother. </font></p>
<p> </p>
<p><font class="nonprinting">There are, certainly, a whole lot of children who are ill cared for, neglected, and abused, and who would probably be better off in some kind of group home for young unwed mothers or boarding school for kids. But why are their mothers--the ones who do feed them, watch them, and spend time with them at all--the only parents who are bad? In most cases, if unwed mothers spent as little time with their kids as unwed fathers do, we would call it abandonment. Why do we look for solutions by focusing on the character and behavior of the mothers, while ignoring the fathers?</font></p>
<p><font class="nonprinting">Lest anyone doubt how lax our norms for fatherhood are, let them look at child support awards among divorced couples. Fathers are generally ordered to pay only a small proportion of their income in child support, and the portion declines as the man's income rises. Around half of fathers who are ordered to make child support payments do not make them after the first year or so, and courts do next to nothing about enforcing the awards. Since we don't hold middle class and affluent fathers to any standard of decent support for their children, how do we expect to convey norms of financial responsibility to the poor? Apparently, through brute force. We have a much more aggressive child support enforcement system for poor men, and we exact a much higher portion of their incomes than we do for middle- and upper-income men in divorce cases. </font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">B</font>y the time she is seven, Pearl comes to know on some level that Dimmesdale is her father. Once, Hester and Pearl come upon Dimmesdale in the middle of the night. He is standing on the scaffold where the three of them once stood together. He beckons them to join him, and they all hold hands in a moment of electric intensity. "Minister," implores Pearl, "wilt thou stand here with mother and me, to-morrow noontide?" "Nay, not so," replies Dimmesdale, backpedaling furiously as the import of public recognition hits him. "I shall indeed stand with thy mother and thee one other day, but not to-morrow." Pearl tries to pull her hand away, but Dimmesdale hangs on. She begs for acknowledgment and commitment, for a promise that Dimmesdale will take her and her mother's hands in public. She tries to pin him down to a date. Pushed into a corner, he names "the great judgment day." "The daylight of this world shall not see our meeting," he says.</font></p>
<p><font class="nonprinting">Near the end of the novel, Hester meets Dimmesdale in the woods and tries to persuade him that the three of them should return to Europe, where they could live out the love that "had a consecration of its own." She tells him he has repented enough, and casts off her patch with the scarlet "A." Then she begins performing that primal task of motherhood--helping members of a family to get along, to care for one another, to love each other. "Thou must know Pearl, our little Pearl," she tells him. Dimmesdale worries that Pearl won't warm up to him or trust him. "She will love thee dearly, and thee her," Hester assures him. But Pearl, summoned now to join Hester and Dimmesdale, goes into a "fit of passion" and refuses to come until Hester dons the scarlet "A" again. </font></p>
<p><font class="nonprinting">Hester gives a classic speech, the one women always give their children when bringing a new man into the family or when trying to reintegrate a prodigal father: "He waits to welcome thee. . . . He loves thee, my little Pearl, and loves thy mother too. Wilt thou not love him? Come! He longs to greet thee!" Pearl has been burned before. If he really loves her, she wants proof. She wants Dimmesdale to act like a father and husband. "Doth he love us?" she asks, staring into her mother's eyes. "Will he go back with us, hand in hand, we three together, into the town?" Once again, the adults tell her a deeper truth that contradicts all their previous words: "Not now, dear child." </font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">B</font>arbara Whitehead says that a "truly fact-based approach" to sex education would have to teach some hard truths. Schools would have to teach that unwed teenage parenthood is often bad for kids, that "not all families are equally capable of caring for children," and that love cannot make up for a lack of long-term commitment, responsibility, and sacrifice on the part of parents. Whitehead glimpses the dilemma here: how to teach such lessons without stigmatizing children who do grow up in broken homes or in unwed teenage families?</font></p>
<p><font class="nonprinting">The dilemma is much more profound than Whitehead imagines, though, because the facts are far more cruel than she acknowledges--and crueler than children ought to bear. Are we really willing to admit to ourselves, let alone teach our kids, that some parents are less fit than others? That poor and less-educated parents are not as capable of giving their kids a good life as those in a higher socioeconomic station? That all children are not born equal? That some adults beat their kids and are terrible parents in this and other ways, but they're allowed to have kids anyway?</font></p>
<p> </p>
<p><font class="nonprinting"><font size="+2">W</font>e can't teach children these lessons, not so much because they would stigmatize some kids, as Whitehead says, but because they would challenge some fundamental liberal principles about equal opportunity and about the sacrosanct privacy of the family. But we can, I think, try to teach adults a few things.</font></p>
<p><font class="nonprinting"><b>Lesson One:</b> Children are not (<i>pace</i> Dimmesdale) to be used, or worse, brought into existence, as punishment for their sinful parents and object lessons to other errant souls. Unfortunately, this seems to be the premise behind state laws requiring pregnant minors to get parental permission for abortions. If we think minors are too immature to make a good decision about whether to have a child, they are surely too immature to be a good parent. So why make them have a child, if not to teach them a lesson? ("She made her bed, now let her lie in it," is an all-too-frequent adult answer.) If we truly want parents to make commitments and take responsibility for their children, why do we place so many obstacles in the way of abortion for young girls and women who know they and their children's fathers can't be responsible parents?</font></p>
<p><font class="nonprinting"><b>Lesson Two:</b> Supporting and caring for children are two different things, and in many ways incompatible. One requires earning money to buy food, clothing, and shelter. The other requires cooking and feeding, doing the laundry, cleaning the floors, never letting an infant out of your sight, cooing and cuddling, and numerous other activities not calculated to get you in good with your employer. We have historically had a division of labor in two-parent households because it's pretty near impossible to be out earning money and in minding the kids at the same time. Working moms make a go of it nowadays only by farming out much of the caring part of the job to someone else--their mothers and sisters, preschools, day care, and nannies. But we fault poor single mothers for not doing either thing well--supporting or caring--when doing both well is next to impossible and when middle-class and married mothers don't do it all themselves anyway.</font></p>
<p><font class="nonprinting">Work requirements are counterproductive to welfare reform's professed goal of improving parenting. Moreover, giving poor mothers a little help with child care is not, as many Republicans would have us believe, going to undermine Western civilization, or even motherhood.</font></p>
<p><font class="nonprinting"><b>Lesson Three: </b> DNA does not a father make. Current welfare reform proposals would beef up state bureaucracies for producing more DNA tests, more paper paternity acknowledgments, and more paper designations of fathers' wages as child support. This system creates no incentives for biological fathers to act like fathers. We need to restructure the child support system so that mothers, fathers, and kids all know and see how fathers' economic contributions help the kids. This may mean letting fathers' contributions make an AFDC family much better off if they have a contributing father (or two) than if they don't. It might mean giving fathers credit for time they spend with kids, as well as for the cash they contribute. (Perhaps once they recognize the value of men's caring time, legislators will be forced to credit women for caring time, too.) And it might mean sacrificing some of the budget relief provided by the current system of siphoning off fathers' payments for the state. But if the theory of economic incentives that now drives so much of welfare reform is applied with equal rigor to mothers and fathers, we will have to make these changes. </font></p>
<p><font class="nonprinting">As it stands, the Personal Responsibility Act encourages states to spend money on mandatory parenting and money management classes for mothers. A welfare reform bill that was serious about repairing the fractured family would also encourage states to spend money on programs to teach responsible fatherhood. Such programs would have to emphasize the personal satisfactions that come with knowing and raising your children, instead of preaching a financial obligation devoid of personal relationships. They would also have to confront honestly the problem of domestic violence, since violence is a major reason why many mothers want neither time nor money from their children's fathers.</font></p>
<p><font class="nonprinting"><b>Lesson Four:</b> Babies don't just happen. It takes a male and a female. . . . In the search for people whose motivations we might better understand, and whose character and behavior we might reform, there are two places to look. </font></p>
</div></div></div>
Mon, 19 Nov 2001 23:19:52 +0000, Deborah Stone
Bedside Manna: http://prospect.org/article/bedside-manna
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><table align="left" border="0" cellpadding="5" cellspacing="5" valign="top" width="150"><tbody><tr align="left" valign="top"><td><font class="nonprinting"><font size="-1">EDITOR'S NOTE: This article draws on material that will appear in more extended form as "The Doctor as Businessman: Changing Politics of a Cultural Icon," <i>Journal of Health Politics, Policy and Law</i>, vol. 22, no. 2, 1997.</font> </font></td>
</tr></tbody></table><p><font class="nonprinting"><font size="+2">F</font>or more than 150 years, American medicine aspired to an ethical ideal of the separation of money from medical care. Medical practice was a money-making proposition, to be sure, and doctors were entrepreneurs as well as healers. But the lodestar that guided professional calling and evoked public trust was the idea that at the bedside, clinical judgment should be untainted by financial considerations. </font></p>
<p><font class="nonprinting">Although medicine never quite lived up to that ideal, the new regime of managed care health insurance is an epic reversal of the principle. Today, insurers deliberately try to influence doctors' clinical decisions with money—either the prospect of more of it or the threat of less. What's even more astounding is that this manipulation of medical judgment by money is no longer seen in policy circles as a corruption of science or a betrayal of the doctor-patient relationship. Profit-driven medical decisionmaking is extolled as the path to social responsibility, efficient use of resources, and even medical excellence. </font></p>
<p><font class="nonprinting">How did such a profound cultural revolution come about? What does the new culture of medicine mean for health care? And where does it leave the welfare state and the culture of solidarity on which it rests when the most respected and essential caregivers in our society are encouraged to let personal financial reward dictate how they pursue patients' welfare? </font></p>
<p> </p>
<hr /><h3><font class="nonprinting">MONEY AND MEDICINE</font></h3>
<p><font class="nonprinting">Before the mid-nineteenth century, the business relationship between doctors and patients was simple: The patient paid money in exchange for the doctor's advice, skill, and medicines. However, to win acceptance as professionals and be perceived as something more than commercial salesmen, doctors needed to persuade the public that they were acting out of knowledge and altruism rather than self-interest and profit. Organized medicine built a system of formal education, examinations, licensing, and professional discipline, all meant to assure that doctors' recommendations were based on medical science and the needs of the patient, rather than profit seeking. </font></p>
<p><font class="nonprinting">In theory, this system eliminated commercial motivation from medicine by selecting high-minded students, acculturating them during medical training, and enforcing a code of ethics that put patients' interests first. In practice, medicine remained substantially a business, and no one behaved more like an economic cartel than the American Medical Association. The system of credentialing doctors eventually eliminated most alternative healers and, by limiting the supply of doctors, enhanced the profitability of doctoring. Nonetheless, medical leaders espoused the ideal and justified these and other market restrictions as necessary to protect patients' health, not doctors' incomes. </font></p>
<p><font class="nonprinting">It took the growth of health insurance to create a system in which a doctor truly did not need to consider patients' financial means in weighing their clinical needs, so long as the patient was insured. As Columbia University historian David Rothman has shown, private health insurance was advertised to the American middle class on the promise that it would neutralize financial considerations when people needed medical care. Blue Cross ads hinted darkly that health insurance meant not being treated like a poor person—not having to use the public hospital and not suffering the indignity of a ward. Quality of medical care, the ads screamed between the lines, was indeed connected to money, but health insurance could sever the connection. </font></p>
<p><font class="nonprinting">By 1957, the AMA's Principles of Medical Ethics forbade a doctor to "dispose of his services under terms or conditions that tend to interfere with or impair the free and complete exercise of his medical judgment or skill. . . ." This statement was the apotheosis of the ethical ideal of separating clinical judgment from money. It symbolized the long struggle to make doctoring a scientific and humane calling rather than a commercial enterprise, at least in the public's eyes if not always in actual fact. But the AMA never acknowledged that fee-for-service payment, the dominant arrangement and the only payment method it approved at the time, might itself "interfere with" medical judgment. </font></p>
<p><font class="nonprinting">Meanwhile, as costs climbed in the late 1960s, research began to show that fee-for-service payment seemed to induce doctors to hospitalize their patients more frequently than other payment methods, such as flat salaries, did, and that professional disciplinary bodies rarely, if ever, monitored financial conflicts of interest. Other research showed that the need for medical services in any given population was quite elastic, often a matter of discretion, and that doctors could diagnose enough needs for their own and their hospitals' services to keep everybody running at full throttle. </font></p>
<p><font class="nonprinting">Still, the cultural premise of these controversies expressed a clear moral imperative: Ethical medicine meant money should not be a factor in medical decisionmaking. The new findings about money's influence on medicine were accepted as muck that needed raking. Occasional exposés of medical incentive schemes—for example, bonuses from drug and device companies for prescribing their products or kickbacks for referrals to diagnostic testing centers—were labeled "fraud and abuse" and branded as outside the pale of normal, ethical medicine. </font></p>
<hr /><h3><font class="nonprinting">THE PATH NOT TAKEN</font></h3>
<p><font class="nonprinting">Sooner or later, the ideal of medical practice untainted by financial concerns had to clash with economic reality. Everything that goes into medical care is a resource with a cost, and people's decisions about using resources are always at least partly influenced by cost. By the 1970s, with health care spending hitting 9 percent of the gross national product (GNP) and costs for taxpayers and employers skyrocketing, America perceived itself to be in a medical cost crisis. Doctors and hospitals, however, resisted cost control measures. By the late 1980s, neither the medical profession, the hospitals, the insurers, nor the government had managed to reconcile the traditional fee-for-service system with cost control, even though the number of people without health insurance grew steadily. </font></p>
<p><font class="nonprinting">During these decades, a pervasive antigovernment sentiment and a resurgence of laissez-faire capitalism on the intellectual right combined to push the United States toward market solutions to its cost crisis. Other countries with universal public-private health insurance systems have watched their spending rise, too, driven by the same underlying forces of demographics and technology. But unlike the U.S., they rely on organized cooperation and planning to contain costs rather than on influencing individual doctors with financial punishment or reward. Some national health systems pay each doctor a flat salary, which eliminates the financial incentive to over-treat, though it might create a mild incentive to under-treat. Systems with more nearly universal health insurance schemes also eliminate expensive competition between insurers, because there is no outlay for risk selection, marketing, or case-by-case pretreatment approval, and far less administrative expense generally. </font></p>
<p><font class="nonprinting">Countries with comprehensive systems typically plan technology acquisition by doctors and hospitals to moderate one of the chief sources of medical inflation. Most also limit the total supply of doctors, or of specialists, through higher-education policy. They may restrict doctors' geographic location in order to meet needs of rural areas and dampen excess medical provision in cities. Most countries with universal systems have some kind of global budget cap. But the difficult medical trade-offs within that budget constraint are made by clinicians under broad general guidelines, and not on the basis of commercial incentives to individual doctors facing individual patients. Significantly, although government is usually a guiding force in these systems, planning is done by councils or commissions that represent and cooperate with doctors, hospitals, other professions, medical suppliers, insurers, unions, and employer associations. </font></p>
<p><font class="nonprinting">The distinctive feature of the emerging American way of cost control is our reliance on market competition and personal economic incentive to govern the system. For the most part, such incentives are contrived by insurers. In practice, that has meant insurers have far more power in our system than in any other, and it has meant that they insert financial considerations into medical care at a level of detail and personal control unimaginable in any other country. </font></p>
<p> </p>
<hr /><h3><font class="nonprinting">RECONFIGURING THE ROLE OF MONEY</font></h3>
<p><font class="nonprinting">The theorists of market reform reversed the traditional norm that the doctor-patient relationship should be immune to pecuniary interests. Law professor Clark Havighurst, HMO-advocate Paul Ellwood, and economist Alain Enthoven and their disciples celebrated the power of financial motivation to economize in medical care. In the process, they elaborated a moral justification for restoring money to a prominent place in the doctor's mind. </font></p>
<p><font class="nonprinting">In what is probably the single most important document of the cultural revolution in medical care, Alain Enthoven began his 1978 Shattuck Lecture to the Massachusetts Medical Society by explaining why he, an economist, should be giving this distinguished lecture instead of a doctor. The central problem of medicine, he said, was no longer simply how to cure the sick, eliminate quackery, and achieve professional excellence, but rather how people could "most effectively use their resources to promote the health of the population." Enthoven dismissed government regulation as ineffective. The key issue was "how to motivate physicians to use hospital and other resources economically." It was time, he concluded, for doctors to look beyond the biological sciences as they crafted the art of medicine, and to draw on cost-effectiveness analysis. </font></p>
<p><font class="nonprinting">In Enthoven's vision, researchers would incorporate cost-benefit calculations into clinical guidelines; health plans would give doctors incentives to follow these guidelines; and if patients were allowed to shop for plans in an open market, the most efficient plans would win greater market share. We could succeed in "Cutting Costs Without Cutting the Quality of Care," as the title of his lecture promised. The ultimate safeguard against financial temptations to skimp on quality or quantity of care, according to Enthoven, was "the freedom of the dissatisfied patient to change doctors or health plan." </font></p>
<p><font class="nonprinting">In market theory, consumers are the disciplinary force that keeps producers honest. In applying classical market theory to medicine, theorists such as Havighurst and Enthoven confused consumer with payer. By the late 1970s, when medical care was paid for by private and public insurers or by charity, patient and payer were seldom the same person. </font></p>
<p><font class="nonprinting">Precisely this ambiguity about the identity of the consumer gave market rhetoric its political appeal. It papered over a deep political conflict over who would control medical care: insurers, patients, doctors, or government. Market imagery suggested to insurers and employers that they, as purchasers of care, would gain control, while it suggested to patients that they, as consumers of care, would be sovereign. For a brief while in the 1970s and 1980s, the women's health movement and a Ralph Nader-inspired health consumer movement adopted market rhetoric, too, thinking that consumer sovereignty would empower patients vis-à-vis their doctors. For their part, many doctors came to accept the introduction of explicit financial incentives into their clinical practice, because, they were told, it was the only alternative to the bogey of government regulation. ("Health care spending will inevitably be brought under control," warned Enthoven in his Shattuck Lecture. "Control could be effected voluntarily by physicians in a system of rational incentives, or by direct economic regulation by the government.") </font></p>
<p><font class="nonprinting">Enthoven's early approach relied only partly on the discipline of personal reward or punishment for doctors. He also advocated doing more research on cost-effectiveness and educating of doctors to make better use of scarce resources. And like Ellwood, Havighurst, and most advocates of market competition in medicine, Enthoven recognized the differences between medicine and ordinary commerce when he argued that competition had to be regulated in order to limit opportunism and enable patients to discipline insurance plans. But the heavy overlay of regulation originally envisioned by Enthoven and others was not established. While some HMOs have been more diligent than others in bringing quality and outcomes research to bear on medical practice, monetary incentives have become the paramount form of cost discipline. </font></p>
<p> </p>
<hr /><h3><font class="nonprinting">REMAKING THE DOCTOR AS ENTREPRENEUR</font></h3>
<p><font class="nonprinting">Today, financial incentives on doctors are reversed. Instead of the general incentives of fee-for-service medicine to perform more services and procedures, contractual arrangements between payers and doctors now exert financial pressures to do less. These pressures affect every aspect of the doctor-patient relationship: how doctors and patients choose each other, how many patients a doctor accepts, how much time he or she spends with them, what diagnostic tests the doctor orders, what referrals the doctor makes, what procedures to perform, which of several potentially beneficial therapies to administer, which of several potentially effective drugs to prescribe, whether to hospitalize a patient, when to discharge a patient, and when to give up on a patient with severe illness. </font></p>
<p><font class="nonprinting">In most HMOs, doctors are no longer paid by one simple method, such as salary, fee-for-service, or capitation (a fixed fee per patient per year). Instead, the doctor's pay is linked to other medical expenditures through a system of multiple accounts, pay withholding, rebates, bonuses, and penalties. Health plans typically divide their budget into separate funds for primary care services, specialists, hospital care, laboratory tests, and prescription drugs. The primary care doctors receive some regular pay, which may be based on salary, capitation, or fee-for-service, but part of their pay is calculated after the fact, based on the financial condition of the other funds. And there's the rub. </font></p>
<p><font class="nonprinting">Studies of HMOs by Alan Hillman of the University of Pennsylvania found that two-thirds of HMOs routinely withhold a part of each primary care doctor's pay. Of the plans that withhold, about a third withhold less than 10 percent of the doctor's pay and almost half withhold between 11 and 20 percent. A few withhold even more. These "withholds" are the real financial stick of managed care, because doctors are told they may eventually receive all, part, or none of their withheld pay. In some HMOs, the rebate a doctor receives depends solely on his or her own behavior—whether he or she sent too many patients to specialists, ordered too many tests, or had too many patients in the hospital. In other plans, each doctor's rebate is tied to the performance of a larger group of doctors. In either case, doctors are vividly aware that a significant portion of their pay is tied to their willingness to hold down the care they dole out. </font></p>
<p><font class="nonprinting">Withholding pay is itself a strong influence on doctors' clinical decisions, but other mechanisms tighten the screws even further. Forty percent of HMOs make primary care doctors pay for patients' lab tests out of their own payments or from a combined fund for primary care doctors and outpatient tests. Many plans (around 30 percent in Hillman's original survey) impose penalties on top of withholding, and they have invented penalties with Kafka-esque relish: increasing the amount withheld from a doctor's pay in the following year, decreasing the doctor's regular capitation rate, reducing the amount of rebate from future surpluses, or even putting liens on a doctor's future earnings. A doctor's pay in different pay periods can commonly vary by 20 to 50 percent as a result of all these incentives, according to a 1994 survey sponsored by the Physician Payment Review Commission. </font></p>
<p><font class="nonprinting">Of course, not all HMOs provide financial incentives that reward doctors for denying necessary care. In principle, consumers could punish managed care plans that restricted clinical freedoms, and doctors could refuse to work for them. But as insurers merge and a few gain control of large market shares, and as one or two HMOs come to dominate a local market, doctors and patients may not have much choice about which ones to join. The theorists' safeguards may prove largely theoretical. </font></p>
<p><font class="nonprinting">In the early managed-market theory of Enthoven and others, the doctor was supposed to make clinical decisions on the basis of cost-effectiveness analysis. That would mean considering the probability of "success" of a procedure, the cost of care for each patient, and the benefit to society of spending resources for this treatment on this patient compared to spending them in some other way. But in the new managed care payment systems, financial incentives do not push doctors to think primarily about cost-effectiveness but rather to think about the effect of costs on their own income. Instead of asking themselves whether a procedure is medically necessary for a patient or cost-effective for society, they are led to ask whether it is financially tolerable for themselves. Conscientious doctors may well try to use their knowledge of cost-effectiveness studies to help them make the difficult rationing decisions they are forced to make, but the financial incentives built into managed care do not in themselves encourage anything but personal income maximization. Ironically, managed care returns doctors to the role of salesmen--but now they are rewarded for selling fewer services, not more. </font></p>
<p> </p>
<hr /><h3><font class="nonprinting">WHO CARES?</font></h3>
<p><font class="nonprinting">Because doctors in managed care often bear some risk for the costs of patient care, they face some of the same incentives that induce commercial health insurance companies to seek out healthy customers and avoid sick or potentially sick ones. In an article in <i>Health Affairs</i> last summer, David Blumenthal, chief of health policy research and development at Massachusetts General Hospital, explained why his recent bonuses had varied: </font></p>
<p> </p>
<blockquote><p><font class="nonprinting">Last spring I received something completely unexpected: a check for $1,200 from a local health maintenance organization (HMO) along with a letter congratulating me for spending less than predicted on their 100 or so patients under my care. I got no bonus the next quarter because several of my patients had elective arthroscopies for knee injuries. Nor did I get a bonus from another HMO, because three of their 130 patients under my care had been hospitalized over the previous six months, driving my actual expenditures above expected for this group.</font></p></blockquote>
<p><font class="nonprinting">Such conscious linking of specific patients to paychecks is not likely to make doctors think that their income depends on how cost-effectively they practice, as market theory would have it. Rather, they are likely to conclude, with some justification, that their income depends on the luck of the draw—how many of their patients happen to be sick in expensive ways. The payment system thus converts each sick patient, even each illness, into a financial liability for doctors, a liability that can easily change their attitude toward sick patients. Doctors may come to resent sick people and to regard them as financial drains. </font></p>
<p><font class="nonprinting">Dr. Robert Berenson, who subsequently became co-medical director of an HMO, gave a moving account of this phenomenon in the <i>New Republic </i>in 1987. An elderly woman was diagnosed with inoperable cancer shortly after she enrolled in a Medicare managed care plan with him as her primary care doctor, and her bills drained his bonus account: </font></p>
<p> </p>
<blockquote><p><font class="nonprinting">At a time when the doctor-patient relationship should be closest, concerned with the emotions surrounding death and dying, the HMO payment system introduced a divisive factor. I ended up resenting the seemingly unending medical needs of the patient and the continuing demands placed on me by her distraught family. To me, this Medicare beneficiary had effectively become a "charity patient."</font></p></blockquote>
<p><font class="nonprinting">Thus do the financial incentives under managed care spoil doctors' relationships to illness and to people who are ill. Illness becomes something for the doctor to avoid rather than something to treat, and sick patients become adversaries rather than subjects of compassion and intimacy. </font></p>
<p><font class="nonprinting">Here is also the source of the most profound social change wrought by the American approach to cost containment. Health insurance marketing from the 1930s to the 1950s promised subscribers more reliable access to high-quality care than they could expect as charity patients. But as it is now evolving, managed care insurance will soon render all its subscribers charity patients. By tying doctors' income to the cost of each patient, managed care lays bare what was always true about health insurance: The kind of care sick people get, indeed whether they get any care at all, depends on the generosity of others. </font></p>
<p><font class="nonprinting">Insurance, after all, is organized generosity. It always redistributes from those who don't get sick to those who do. Classic indemnity insurance, by pooling risk anonymously, masking redistribution, and making the users of care relatively invisible to the nonusers, created the illusion that care was free and that no one had to be generous for the sick to be treated. It was a system designed to induce generosity on the part of doctors and fellow citizens. But managed care insurance, to the extent it exposes and highlights the costs to others of sick people's care, is calculated to dampen generosity. </font></p>
<p> </p>
<hr /><h3><font class="nonprinting">PUTTING THE DOCTOR-BUSINESSMAN TO WORK</font></h3>
<p><font class="nonprinting">The insulation of medical judgment from financial concerns was always partly a fiction. The ideal of the doctor as free of commercial influence was elaborated by a medical profession that sought to expand its market and maintain its political power and autonomy. Now, the opposite ideal—the doctor as ethical businessman whose financial incentives and professional calling mesh perfectly—is promoted in the service of a different drive to expand power and markets. </font></p>
<p><font class="nonprinting">Corporate insurers use this refashioned image of the doctor to recruit both doctors and patients. The new image has some appeal to doctors, in part because it acknowledges that they need and want to make money in a way the old ethical codes didn't, and in part because it conveys a (false) sense of independence at a time when clinical autonomy is fast eroding. Through financial incentives and requirements for patients to get their treatments and tests authorized in advance, insurers are taking clinical decisions out of doctors' hands. Hospital length-of-stay rules, drug formularies (lists of drugs a plan will cover), and exclusive contracts with medical-device suppliers also reduce doctors' discretion. </font></p>
<p><font class="nonprinting">In contrast to this reality of diminished clinical authority, images of the doctor as an entrepreneur, as a risk taker, as "the 'general manager' of his patient's medical care" (that's Enthoven's sobriquet in his Shattuck Lecture) convey a message that clinical doctors are still in control. If they practice wisely, in accord with the dictates of good, cost-effective medicine, they will succeed at raising their income without cutting quality. HMOs have long exploited this imagery of business heroism to recruit physicians. Here's Stephen Moore, then medical director of United Health Care, explaining to doctors in the<i> New England Journal of Medicine</i> in 1979 how this new type of HMO would help them fulfill "their desire to control costs" while keeping government regulation at bay: </font></p>
<p> </p>
<blockquote><p><font class="nonprinting">Incentives encourage the primary-care physician to give serious consideration to his new role as the coordinator and financial manager of all medical care. . . . Because accounts and incentives exist for each primary-care physician, the physician's accountability is not shared by other physicians, even among partners in a group practice. . . . Each physician is solely responsible for the efficiency of his own health care system. . . . In essence, then, the individual primary-care physician becomes a one-man HMO.</font></p></blockquote>
<p><font class="nonprinting">The image of entrepreneur suggests that doctors' success depends on their skill and acumen as managers. It plays down the degree to which their financial success and ability to treat all patients conscientiously depend on the mix of sick and costly patients in their practices and the practices of other doctors with whom they are made to share risks. </font></p>
<p><font class="nonprinting"><font size="+2">T</font>he once negative image of doctor-as-businessman has been recast to appeal to patients, too, as insurers, employers, and Medicare and Medicaid programs try to persuade patients to give up their old-style insurance and move into managed care plans. Doctors, the public has been told by all the crisis stories of the past two decades, have been commercially motivated all along. They exploited the fee-for-service system and generous health insurance policies to foist unnecessary and excessive "Cadillac" services onto patients, all to line their own pockets. Patients, the story continues, have been paying much more than necessary to obtain adequate, good-quality medical care. But now, under the good auspices of insurers, doctors' incentives will be perfectly aligned with the imperatives of scientifically proven medical care, doctors will be converted from bad businessmen to good, and patients will get more value for their money. </font></p>
<p><font class="nonprinting">If patients knew how much clinical authority was actually stripped from their doctors in managed care plans, they might be more reluctant to join. The marketing materials of managed care plans typically exaggerate doctors' autonomy. They tell potential subscribers that their primary care doctor has the power to authorize any needed services, such as referral to specialists, hospitalization, x-rays, lab tests, and physical therapy. Doctors in these marketing materials "coordinate" all care, "permit" patients to see specialists, and "decide" what care is medically necessary. Meanwhile, the actual contracts often give HMOs the power to authorize medically necessary services, and more importantly, to define what services fall under the requirements for HMO approval. </font></p>
<p><font class="nonprinting">In managed care brochures, doctors not only retain their full professional autonomy, but under the tutelage of management experts, they work magic with economic resources. Through efficient management, they actually increase the value of the medical care dollar. "Because of our expertise in managing health care," a letter to Medicare beneficiaries from the Oxford Medicare Advantage plan promised, "Oxford is able to give you <i>100% of your Medicare benefits and much, much more</i>" [emphasis in original]. Not a word in these sales materials about the incentives for doctors to deny expensive procedures and referrals, nor in some cases, the "gag clauses" that prevent doctors from telling patients about treatments a plan won't cover. </font></p>
<p><font class="nonprinting">In an era when employers and governments are reducing their financial commitments to workers and citizens, the image of the doctor as efficient manager is persuasive rhetoric to mollify people who have come to expect certain benefits. To lower their costs, employers are cutting back on fringe benefits and shifting jobs to part-time and contract employees, to whom they have no obligation to provide health insurance. The federal and state governments are similarly seeking to cut back the costs of Medicare and Medicaid. The image of the doctor as an efficient manager—someone who can actually increase the value to patients of the payer's reduced payments—helps gain beneficiaries' assent to reductions in their benefits. Thus, the cultural icon of doctor-as-businessman has become a source of power for employers and governments as they cut back private and public social welfare commitments. </font></p>
<p><font class="nonprinting">The old cultural ideal of pure clinical judgment without regard to costs or profits always vibrated with unresolved tensions. It obscured the reality that doctoring was a business as well as a profession and that medical care costs money and consumes resources. But now that commercial managed care has turned doctors into entrepreneurs who maximize profits by minimizing care, the aspirations of the old ideal are worth reconsidering. </font></p>
<p><font class="nonprinting">In trying to curb costs, we should not economize in ways that subvert the essence of medical care or the moral foundations of community. There is something worthwhile about the ideal of medicine as a higher calling with a healing mission, dedicated to patients' welfare above doctors' incomes and committed to serving people on the basis of their needs, not their status. If we want compassionate medical care, we have to structure both medical care and health insurance to inspire compassion. We must find a way, as other countries have, to insure everybody on relatively equal terms, and thus divorce clinical decisions from the patient's pocketbook and the doctor's personal profit. This will require systems that control expenditures, as other countries do, without making doctor and patient financial adversaries. There is no perfect way to reconcile cost containment with clinical autonomy, but surely, converting the doctor into an entrepreneur is the most perverse strategy yet attempted. </font></p>
</div></div></div>
Sat, 17 Nov 2001 01:12:41 +0000 | Deborah Stone | http://prospect.org
Care and Trembling | http://prospect.org/article/care-and-trembling
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><font color="black"><font size="2">U</font>ntil fairly recently, most caregiving was invisible to the public eye. Caring was done informally in the private sphere, mostly by women, for relatives, neighbors, and friends; and it was mostly unpaid. Gradually over the course of the twentieth century, caring has gone public. As middle-class women moved into the labor force, society responded by creating new entitlements to care for children, the sick and disabled, and the elderly. And as needs and entitlements expand, so does the provision of care. </font> <font color="black">Much care is now provided by people who care for a living and by professionals who gain their work identity from caring. Caring is the business of firms that care for profit, and of nonprofit agencies that care as a mission. And much care is now paid for by public subsidies, tax credits and exemptions, and by private and public insurance plans that tightly monitor and regulate the caregiving they fund. </font></p>
<p><font color="black">When care goes public, there is a clash between the norms of caring in private relationships and the norms of efficiency, professionalism, and accountability in business and government. Care gets reframed as a problem of bottom lines and cost containment. This cost squeeze comes from private enterprise, of course, but also from the public sector: legislators slash caregiving budgets to calm taxpayers and Wall Street, just as executives pinch caregiving outlays to protect their shareholders and profits. Since medical professionals, government, and business all use standards to define quality, care tends to get standardized—and restricted to what fits in the boxes of printed forms. As care is reduced to instrumental tasks, human relationships can get lost, and caregiving can be depleted of its emotional force and spiritual meaning. Most troubling of all, when care is recast as an economic and bureaucratic good, deep norms about altruism, generosity, and cooperation are reshaped as well. </font></p>
<p><font color="black">Though cost-containment pressures have shifted more caregiving out of hospitals and into homes, Medicare's home health budget was nonetheless slashed by the Balanced Budget Act of 1997. For two years the federal government has been running a cloak-and-dagger antifraud initiative known—quite inaptly—as Operation Restore Trust. To find out how recent Medicare and managed care changes are affecting caregiving at the bedside, I have been interviewing home care workers in New England and observing one nonprofit home health care agency in depth. The more I talked with people, the more I saw how financial tightening and the ratcheting up of managerial scrutiny are changing the moral world of caregiving, along with the quantity and quality of care. </font></p>
<p> </p>
<p><font color="black"><b>The Kindness of Strangers </b></font></p>
<p><font color="black">Caregivers work under a constant moral burden. They cannot give enough to satisfy the needs they think people have or the care they think people deserve. Frequently, the rules of insurance plans or agencies prohibit them from doing what they think is right. Formal policies sometimes clash with what they understand to be the rules of ordinary human morality. When that happens, they are caught in a moral double bind, forced to violate one set of norms or rules in order to abide by another. </font></p>
<p><font color="black">When care is done by strangers instead of by family and friends, strange things happen. Some of the home health aides I've talked with have pointed out the odd disparities between what they're allowed to do as certified and licensed paraprofessionals—in other words, as strangers—and what they're allowed to do as ordinary citizens, relatives, or friends. They can't give prescription medicines. They may dispense, apply, or feed any nonprescription lotions or potions, but they cannot even rub on prescription cream for a client who can't reach her own feet or back. They are not allowed to pick up a person who falls, because they might cause more injury if the client sustained a fracture. Certified nursing assistants, as these home health aides are called, are required to work under the supervision of a registered nurse, and as these examples show, they must observe strict boundaries between medical and personal care. Meanwhile, any child, or any ignorant but well-meaning neighbor, friend, or relative, may dispense prescription medicine or help someone up from a fall. </font></p>
<p><font color="black">If a client asks a certified aide to just sit and talk, the aide cannot comply, because she is there to do bodily care. If the client doesn't want any type of bodily care, such as a shower, shampoo, or fingernail cutting, one aide said, "I try to coax them into something, because . . . in order for the agency to get paid, to pay me, I have to do some type of hands-on [care]." Of course, care providers usually do chat for a few minutes, because they understand how intrusive and demeaning needing help can be, and they know you can't just start doing tasks for people without establishing some kind of human connection. But they live with an undercurrent of tension between their clients' desires for company and companionship and the more instrumental care they are trained and paid to provide. </font></p>
<p><font color="black">In some agencies, if a client asks an aide to do certain household chores, the aide is supposed to refuse. An aide in New York told researchers Ruth Glasser and Jeremy Brecher, "You're not going to do something that gets you in trouble, but sometimes the rules have to be bent a little bit. . . . Like for example, we're not allowed to get up on ladders and change curtains. But sometimes you have clients who don't have a relative or somebody to come over and do it, and if you think you're capable of doing it, you do it." When caregivers are asked for kinds of help they aren't supposed to give, they have to negotiate between their job definitions and their personal assessments of the care they think people want and need. </font></p>
<p><font color="black">When I ask aides to tell me how they know when they have done a good job, they almost always say something like, "The client smiled," or "The client is all pretty and looks nice—you've made her happy," or "They say 'thank you.' " When I asked one aide to tell me about a time she felt she gave really good care, she replied, "It always makes you feel good when somebody says, 'I'm really glad to see you. I haven't seen anybody all day.' And you know you're bringing a little spark into them." They talk about how rewarding it is to make a difference—client happiness is their barometer. </font></p>
<p><font color="black">Some aides told me that Medicare clients often ask them to continue care on weekends or on a private basis after the agency discharges them. The clients would love to keep the same aide, and often the aides have grown very fond of a client and would love to continue caring. But the agency maintains separate staffs for Medicare and non-Medicare clients, and aides are keenly aware that if they went to someone's home on a private basis, they wouldn't have workers' compensation or any benefits, and they would be wide open to suits if a client or family became disgruntled with their work. </font></p>
<p> </p>
<p> </p>
<p><font color="black"><b><font size="2">O</font></b>nce they are paid and licensed, caregivers' concept of caring work changes subtly. They now think of caring work as something for which they need the benefits and legal liability protections that people expect in paid jobs but don't even think about when caring for friends and relatives. For their part, agency managers understand and regret the disruption in caring relationships when the division of labor is determined by the client's funding source. But if the agency were to allow more fluidity between its divisions, or between its services and its aides' private-duty services, it would engender suspicion that it was playing fast and loose with Medicare rules. </font></p>
<p><font color="black">Also, when caregiving becomes formalized, it is controlled by someone who is neither the caregiver nor the cared for—it is no longer a matter of private obligations and gifts, personal affection and tiny personal struggles. Once a business or government agency gets involved, it has interests, concerns, rules, laws, and reputations to worry about. </font></p>
<p> </p>
<p><font color="black"><b>Bureaucracies of Care </b></font></p>
<p><font color="black">When organizations provide care as their <i>raison d'être</i>—whether they are for-profit firms, nonprofit agencies, or private or public financing programs—they must be concerned with things other than the people getting and giving the care. They must pay attention to productivity, costs, waste, fraud, and solvency. They must worry about pleasing their shareholders, boards of directors, bill payers, insurance companies, Medicare and Medicaid offices, auditors, accrediting agencies, and congressional oversight committees. All of a sudden, the goals of staying in business, balancing the books, and lowering costs displace the goals of making patients feel cared for and improving their well-being. </font></p>
<p><font color="black">This new reality reflects three distinct trends. First, with more women in the paid labor force—either out of choice or economic necessity—fewer are available to do the traditional woman's unpaid work of caregiving. And fewer extended families live in close proximity to one another, which makes it even harder for relatives, male or female, to be caregivers. Second, society has created a partial entitlement to paid caregiving by strangers. And third, this entitlement has given rise to severe cost-containment pressures that in turn intensify the bureaucratization. All of these dilemmas would arise in any system that shifted from unpaid to paid caregiving. But the contradictions surely would be less acute if there were more adequate financing and more respect for the caregivers' professionalism. Greater subsidies for paid family leave would help, too, so that more care could be done according to traditional norms but without economic sacrifice. </font></p>
<p> </p>
<p> </p>
<p><font color="black"><b><font size="2">S</font></b>itting in on case conferences, I'm struck by how much talk there is of goals: "What are our goals here?" is the question about every case. Even more surprising is the most common answer: "Our goal is to be out by the end of the week," or "to be out by the end of the month." With the Balanced Budget Act's drastic cutbacks for home health care, and managed care's stringency as well, agencies seem to have had to shift their goals from providing care to terminating care. They can't provide care if they don't stay in business, and they can't stay in business on lower revenues unless they terminate unreimbursed care. Cost control, then, introduces a perverse logic. </font></p>
<p><font color="black">If the focus on goals shows up in an emphasis on time frames and "getting out" of the client's home, it also shows up in the content of caregiving. Short-term goals and long-term goals must be defined and documented. Nurses and physical therapists tell me their care has had to become much more narrowly focused in the last few years. They have to document a specific medical problem for each patient, such as an unhealed wound or a knee injury, and they're not supposed to spend any time addressing other problems—even other medical problems—that are not listed as the reason for needing home care. </font></p>
<p><font color="black">Caregivers also have to document patients' progress toward goals. If the reason for care is an unhealed wound, nurses must write down a date when the wound is going to heal. "This is such a crapshoot," said one nurse. "Would you mind telling me how in hell I'm gonna know when this wound's going to heal?" She and other nurses said they feel they are allowed much less time to teach patients how to care for themselves. They are allowed time to explain something only once—even if the patient forgets or needs more explanation. "Prior to all this push and shove," one nurse said, "[teaching] was more tracked in the patient's learning capacity. You didn't feel like if you told them something, whether they got it or not you just wrote down you told it to them." </font></p>
<p><font color="black">Medicare requires that patients show progress toward their goals in order to remain eligible for services. In team conferences, I heard a fair amount of talk about patients who "weren't progressing" toward their goals, always followed by some discussion of whether there were legitimate reasons for this lack of progress, or whether it was a matter of what the caregivers call "noncompliance." The agency has to cover itself with Medicare, so the caregivers are forced into making uncomfortable judgments. Do they find a way of showing progress to protect patients' eligibility? Do they abandon patients who they feel are to blame for their own lack of progress? Should they be in the business of judging patients and assigning blame in the first place? </font></p>
<p><font color="black">When ordinary people care for their own family members, they don't think about progressing toward goals. They think about their relationship. And if you ask them to articulate their goals, they are likely to say that they want to make their relatives as happy and as comfortable as they can, and preserve their dignity. This is especially true in the case of adult daughters caring for parents. Historian Emily Abel, author of <i>Who Cares for the Elderly?</i>, found that adult daughters are overwhelmingly concerned with reciprocating the care they received as children, but without reversing roles in a way that demeans their parent. They're acutely aware of how hard it is for a parent to be cared for <i>like</i> a child, <i>by</i> a child. </font></p>
<p><font color="black">In the private sphere, the goal of home health care, if we were to put it in these terms, might be to express love and gratitude, and to preserve identity and meaningful relationships in the face of one person's decline. In the public sphere, the imperatives of organizational survival displace these private purposes and motivations for caregiving. Yet the caregivers still care, and they care intensely; the emotions and the attachments are not diminished. </font></p>
<p> </p>
<p><font color="black"><b><font size="2">T</font></b>he main strategy of keeping costs down in home health care is to limit care to medical needs and medically related tasks, and to eliminate any care that is merely social. Medicare and most of the insurance companies will now pay only for skilled medical services and for care that is necessary to cure a medical problem. In a case conference, I heard about a woman who was just home from a rehabilitation center after an accident that left her paralyzed from the neck down. She and her husband, who was caring for her full-time, were overwhelmed with the problems of how to cope with everything from her bowels, to getting her from bed to wheelchair, to running their business. They'd already used up the three home health care visits their insurance company had allowed. When the agency called to get more visits authorized, the insurer at first refused, saying her medical condition was now stable. All their difficulties were only "emotional adjustment problems." </font></p>
<p> </p>
<p><font color="black"><b>How Caregivers Cope </b></font></p>
<p><font color="black">Payers, naturally, use the medical/social distinction as a way of limiting their expenditures. But agencies that provide caring services find it hard to sustain that distinction. As one aide put it, "You can't just go in and get out. I'm sorry. You know, my grandmother had people taking care of her. . . . I wouldn't want them to do the same—you know, just come in and wash her up and leave. They have to have some kind of relationship going." The caregivers are keenly aware that home health care is very intimate and very personal, and that it's very intrusive to have someone come into your home, much less touch your body. Caregivers must build rapport and trust before they can take care of their patients; and that's not just something that has to be done on the first few visits, when a client is new to home care. Making a social connection has to precede every episode of caregiving. "Even now," one aide said, "I still have patients that'll say, 'Come sit down and let's talk for a few minutes.' You know, they don't want to get right to business, they want to visit for a few minutes." </font></p>
<p><font color="black">The line between medical and social, emotional, or spiritual evaporates in practice. Yet because both public and private payers use this line to try to manage their costs, agencies and individual caregivers are pressured into pretending they don't cross it. They joke about how they treat other problems in addition to the official reason for care; and they even sometimes care for other members of the patient's family. Good care often requires observing a patient's spouse and attending to his or her health problems as well, for the spouse is often the caregiver when the agency people aren't around. The agency caregivers know what not to write down, and they are frustrated by payers' blindness to the realities of human life. </font></p>
<p><font color="black">Virtually all direct care providers I have talked with—all the nurses, physical and occupational therapists, and home care aides—say they've visited clients on their days off. They consider it normal, and when I ask about this practice, they tell me, "Oh, everybody does it." One woman said, "I have on occasion run into somebody's home when I wasn't treating the person. Or, we are treating but I wasn't normally going." I asked what she did on these occasions. She said she never gave a treatment, but she would just check on how someone was doing. "But I think that's called being a neighbor." </font></p>
<p><font color="black">It is also, in effect, a subsidy from the caregiver's scarce free time. Caregiving bureaucracies are betting that the caregivers will dip into the well of their own humanity to offset the budget constraints and the stifling rulebooks. </font></p>
<p> </p>
<p> </p>
<p><font color="black"><b><font size="2">B</font></b>ecause Medicare, the largest payer of home care services, is currently scrutinizing and cutting back home care services, many long-term clients are in jeopardy of losing their home care benefits. Other insurance plans are tightening up as well. Increasingly, caregivers feel that insurance rules contradict their own sense of common decency and morality. Visiting clients on their time off becomes a form of resistance to new insurance rules—a form of protest. </font></p>
<p><font color="black">When team members heard about the quadriplegic woman I mentioned earlier, one nurse burst out: "Well, I'll just go visit her <i>as a friend</i>. And if I happen to have a few little things in my pocket. . . ." Another team member chimed in, "Yeah, I told her I go to the market all the time, and if she needs anything, she should give me her list." A couple of different caregivers told me about a patient with a chronic crippling disease who'd been getting home care for ten years, but who they feared was going to be cut off by Medicare. One of his aides told me she had reassured him, "'Hey, if they cut you off, you won't go without a bath.'. . . I told him I'd stop after work and do it. They can't stop you on your own time. After 3:30, you're a private citizen." </font></p>
<p><font color="black">Though agencies and their staff are under tremendous pressure to cut back the formal, reimbursed care they provide, nobody, as far as I can tell, is stopping caregivers from voluntarily picking up the slack. Aides and nurses tell me that the agency discourages them from getting too emotionally attached, but they get attached anyway. "You can't help it," they say. "If you're human," or "if you have any human compassion, you just do." The case managers and supervisors know this. They know that nurses and aides sometimes accept phone calls at home from their patients, even though they're not supposed to give out their numbers. They know that nurses and aides often visit clients on their days off and do special favors, even though they're not supposed to. The supervisors, who are nurses themselves, look the other way. They've been there. </font></p>
<p><font color="black">Of course, everybody—taxpayers, insurance companies, agencies—benefits from the free, volunteer labor that caregivers provide when they do it on their own nickel. No doubt, supervisors and managers don't actively discourage this sort of thing, because it helps keep their clients happy at lower costs. But they also know the staff will do it anyway, and to a large extent, they share the direct caregivers' moral commitments. In this sense, the exploitation of women's generosity that characterized traditional, informal caregiving systems continues in the new systems of formal, socially financed care. </font></p>
<p> </p>
<p><font color="black"><b>Compassion and Suspicion </b></font></p>
<p><font color="black">The caregivers talk about their personal attachments to clients and their "extras" as though their deep affection, fierce loyalty, and overflowing generosity were somehow illicit. When I asked about ways they go beyond their duties, some of them, especially the aides, lowered their voices to a whisper. Some got visibly uncomfortable. They all said something to the effect that "the agency discourages" getting emotionally involved and doing things outside the job description or outside the care plan. </font></p>
<p><font color="black">Several aides said they sometimes pick up milk, bread, or other items for a client. Such favors are against the agency's rules, but I didn't know this the first time an aide mentioned it. I only figured it out when she hesitated to talk about it, and when I realized she was trying to justify herself to me. The man lives three miles from the store, she told me, and his daughter lives even further away in a neighboring town. "It's very inconvenient for her to bring him milk. So how's he going to get milk? How's he going to get a loaf of bread? I'm going right by. It's on my way home." She then asked, "How can you say no? You can't. You're a human being. I go out of my way to do things for people. . . . I guess I'm guilty." </font></p>
<p><font color="black">She used the word "guilty" three times when talking about her favors and kindnesses. Finally I asked her, "You're doing something nice for these people. Why should you feel guilty?" "They tell you you're not supposed to have contacts with patients outside of work," she answered. </font></p>
<p><font color="black">Medicare's antifraud program and its stepped-up eligibility reviews create a climate of intimidation, fear, and guilt. Several caregivers said they believed there has been abuse in the home care industry, and that sometimes perhaps they or their agencies have kept clients longer than they should have. But they all talked about how the new climate "makes you feel like a criminal." "Now you feel like you can't make a mistake—you're being watched," one physical therapist told me. With Medicare reviewing and questioning more and more cases, "You feel like a crook from the start," said another. "I feel that I'm short-shrifting the patient," a nurse said about the speed-up in teaching. "We feel like we're abandoning our patients," another nurse said about Medicare's refusal to continue to pay for chronically ill patients. One nurse told me about a patient who benefits greatly from home care, but who, she thought, she was able to see only "because nobody's denied it yet. But I am feeling very guilty about it." They're waiting for the ax to drop on their patients—and meanwhile, they're feeling guilty for caring. </font></p>
<p><font color="black">These are women who overwhelmingly love their work. Almost all of them volunteered that they find their work incredibly rewarding, and that they wouldn't want to do any other kind of work. When I asked explicitly how cutbacks have affected them, a few mentioned reduced client loads and working hours, but almost all of them used the word "sad." They still love their work, but more and more often they are having to watch their clients get hurt by the system. And they are forced to participate in the withdrawal of care. </font></p>
<p> </p>
<p><font color="black"><b><font size="2">P</font></b>aradoxically, public accountability has brought moral confusion to the world of care. When people care in a public context as employees of an agency and, in effect, as agents of insurance companies or Medicare and Medicaid, they are made to feel that their personal relationships with clients are illegitimate—something to be hidden, kept in check, restrained, best left unspoken. Not only their professional judgment but their compassion comes under suspicion. They talk about caring and their caregiving work almost as if they were engaging in civil disobedience. They frequently justify themselves by using the terms "human," "friend," or "citizen" and insisting that insurance or agency rules shouldn't prevent them from doing what <i>any</i> human being or <i>any</i> citizen would and could do. </font></p>
<p><font color="black">This dampening of generosity, this suppression of altruism, this check on the formation of social connections—these are the real dangers of making caring work formal and public. We are in a political time when a mean spirit dominates our public philosophy, and an obsession with cost control dominates our public policy. When these values infuse the work worlds of people who care for a living, they may become all citizens' way of seeing what is morally proper in their relations with fellow citizens. When empathy, generosity, and reaching out to your fellow human beings are perceived as civil disobedience, we are a society in trouble. </font></p>
<p><font color="black">In grappling with the issue of long-term care for the elderly and chronically ill, most politicians and policy analysts have worried primarily about one danger—that if public programs pay for long-term care, they will displace much of the voluntary care currently offered. They worry that public responsibility for long-term care will erode family responsibility, and that hordes of people will crawl out of the woodwork to devour any new care that is subsidized by public funds or private insurance. </font></p>
<p><font color="black">Policymakers ought to worry instead about a different kind of displacement: the displacement of caring relationships and social connections by narrow, task-oriented bodily maintenance; the displacement of empathy and affection by cool professionalism and calculated fiscal prudence; and the displacement of an ethic of responsibility for one's neighbors by an ethic of working-to-rule. </font></p>
<p><font color="black">If we care about preserving the norms of reciprocity, trust, and mutual aid that make us a community, we had better take better care of our caring work. The solution to these dilemmas is not to shove caregiving back into the invisible world of women's unpaid, unacknowledged work. Instead, we need to look into the heart of caregiving, to remind ourselves what we truly value. Before we can have intelligent conversations about how much to socialize the costs of caregiving—how much for paid family leave, how much for paid professional care—we need first to decide that humane caring matters. </font></p>
</div></div></div><p><em>Deborah Stone, "Why the States Can't Solve the Health Care Crisis," The American Prospect (http://prospect.org/article/why-states-cant-solve-health-care-crisis), Fri, 16 Nov 2001</em></p>
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>One of the enduring metaphors of American federalism is that states serve as laboratories for the federal government. States are the basement tinkerers that generate ideas to solve big national problems. They are the crucibles for testing the safety and efficacy of new ideas before the whole country adopts them. State leaders, the argument goes, are closer to the people, more sensitive to local conditions, and more attuned to real social problems than are national officials.</p>
<p>With medical costs zooming, over 60 million people uninsured at some time over a two-year period, and the federal government slow to act, many people -- state officials foremost among them -- are looking to the states to lead us out of the mess. Indeed, some states have tried to seize the initiative, and many are debating new measures. Hawaii and Massachusetts, which both have enacted mandates on employers to provide insurance, and Oregon, with its widely publicized but not yet implemented rationing plan, are closely watched as testing grounds for the nation.</p>
<p>The states cannot do it alone, however, because of fundamental impediments in the federal system. When it comes to health care financing, the states lack sufficient autonomy from the federal government, on the one side, and sufficient power over private insurers, doctors and hospitals, and businesses, on the other. Federal laws governing Medicare, Medicaid, and employee benefit plans limit the options available to the states and handicap them in dealing with health care providers and employers. Achieving both universal access and cost control -- the crux of the challenge in health policy -- is simply too big a problem to be handled at the state level, given current impediments to innovation. Despite valiant efforts, most state and federal policy makers now realize that major federal reform is a sine qua non, even for state-based solutions. <font class="headline">Fingers in the Dike </font><br />State initiatives to deal with the uninsured are "slowly but surely filling the gaps," according to a 1990 report from the Intergovernmental Health Policy Project at George Washington University. Maine put together a package with a private health maintenance organization (HMO) that now provides insurance for about 15 percent of small businesses that previously had not offered it. Connecticut recently created a plan enabling small firms to purchase lower-cost health insurance; the price reductions come mainly from lower provider fees. Twenty-four states have created high-risk pools, which insure anywhere from 2,000 to 10,000 people who have been rejected by private insurers as too sick to insure. However, while these and other state reforms provide health insurance for several thousand formerly uninsured people, they do not touch the structural features of health insurance that create -- and will continue to create -- gaps in the first place.</p>
<p>At the heart of the problem is a medical insurance market that gives everyone incentives to withdraw from sharing risks. The nonprofit Blue Cross-Blue Shield plans used to treat all employee groups in a region as if they were one giant group; in effect, they aggregated and spread the cost of high-risk subscribers. In recent decades, however, insurers have increasingly sought to escape from high risks by dividing up the market. They might have competed by offering better benefit packages, claims servicing, or cost-control features. Instead, they have competed primarily by judiciously selecting their "raw materials," which in the insurance business means seeking out healthy policy-holders and avoiding sick ones.</p>
<p>Seeking the lowest-cost insurance plans, many employers have withdrawn from Blue Cross-Blue Shield. Instead of pooling the risks of their employees with those of other, often less healthy groups, these firms "self-insure" (that is, they assume the risks themselves and use insurance companies just to process claims), or they purchase commercial policies at prices specifically reflecting their employees' risks. Commercial insurers target young, healthy groups in their marketing efforts and, to protect themselves further, often screen potential subscribers and impose limitations in coverage, such as exclusions of preexisting conditions. Increasingly, businesses are subdividing their employees into groups with more homogeneous risks, thus reducing risk-sharing even within the firm. Premiums for employee categories within the same firm can now vary by $2,000 or more. Firms are also shifting more of their work to part-time employees and contractors, for whom they are not obligated to provide any health insurance, and they are cutting back insurance for employees' dependents. As a result of all these measures, growing numbers of people are forced to pay very high prices for health insurance and are often unable to obtain any at all.</p>
<p> </p>
<p></p><center>* * *</center>
<p>In one of the most widely touted initiatives to deal with this problem, twenty-six states have created high-risk pools especially for people denied private insurance for medical reasons. Since high-risk pools group together people who have or are likely to have costly diseases, the premiums and uncovered out-of-pocket costs are necessarily high. Without big subsidies, the pools simply cannot provide affordable insurance for the average person. Moreover, because the pools are expensive to operate, most states limit admissions and have long waiting lists.</p>
<p>The insurance pools do not avoid costs to the public. Some states subsidize the high-risk pools from state revenues; others assess insurance companies based on their pro-rata share of business in the state; and some states use a combination of these two methods. Most states allow insurance companies to credit their high-risk pool assessments against their state taxes. Since state tax revenues are thereby reduced, the net effect is the same as subsidizing the pools from state general revenues.</p>
<p>High-risk pools are, quite unintentionally, another force behind the collapse of risk pooling. Although subsidies from state general revenues do broaden risk-pooling (by requiring all citizens in a state to contribute to the costs of medical care for the very sick), other features of high-risk pools effectively narrow risk-pooling. First, these pools by definition admit only people who are likely to have very high medical expenditures; under this arrangement, sick people share their high costs with other sick people. Second, under the federal Employee Retirement Income and Security Act of 1974 (ERISA), employers who self-insure are exempt from state insurance regulation; the states, therefore, cannot require them to contribute to high-risk pools. Nor do these firms pay any premium taxes to state coffers, another exemption courtesy of ERISA. Hence, employers and their employees do not share in the medical care costs of the high-risk pool, except insofar as they contribute in other ways to state general revenues.</p>
<p>Third, by limiting entry into the pools to a fixed number of places, states are limiting the number of people who are allowed to pool their risks with the general public via the state subsidies and tax forgiveness. Last, and most important, the very existence of these pools as a lifeboat for people who have been rejected by private insurers reduces political pressure on private insurers to relax their criteria for providing coverage. By creating high-risk pools, states actually make it easier for insurance companies to continue their cream-skimming. <font class="headline">Reforming the Small-Group Market</font><br />Many state efforts to deal with the access problem go under the heading of "small-business market reform," a strategy with support from just about everybody -- even the commercial insurance industry -- because it appears to solve the problem without costing a penny. "It's the easiest target Congress has," Senator Jay Rockefeller told the <i>National Journal</i>. "It's a wonderful, glorious, multicolored, brilliant, magnificent sitting duck, and it's all free."</p>
<p>One approach is to exempt insurers from state legislative mandates to include certain benefits, such as mental health care, treatment for substance abuse, or maternity coverage, when they sell policies to small businesses. Conservative free-market advocates have long argued that the private insurance market could provide coverage to more people if insurance companies were not hamstrung by legislative requirements to include an enormous array of non-essential benefits. Some twenty-three states have passed permissive legislation of this sort, and another seventeen are considering bills. Of course, the bare-bones policies fail to provide coverage for services the state has otherwise deemed essential. Of fifteen states surveyed by the Intergovernmental Health Policy Project in 1991, only six would require prenatal care in a basic benefit package and only seven maternity care; only two would require coverage of Pap tests and only five mammograms; only four would require coverage of newborn children, only three adopted children, and only one children's preventive health services.</p>
<p>Advocates of the bare-bones approach have sold it politically by emphasizing that the savings for small employers come from eliminating frills such as in vitro fertilization, herbal therapy, and expensive cures at substance abuse rehabilitation centers. In fact, the bare-bones policies eliminate a great deal of primary and preventive care. Virtually all the stripped-down plans reduce the number of physician office visits and hospital days that insurance policies must cover, and they increase the deductibles and copayments borne by policy-holders. State health insurance officials estimate that two-thirds of the savings in these plans come from higher out-of-pocket costs for the insured.</p>
<p>In the six states where marketing of these plans is already underway, employers are distinctly unenthusiastic. Blue Cross-Blue Shield introduced a stripped-down plan in Virginia in July of 1990, and by the end of 1991, only twenty-five firms with a total of 100 employees had bought it. In Washington State, 2,300 employees are covered under bare-bones plans, but half of these are in firms that were downgrading their plans rather than buying coverage for the first time. All in all, stripped-down benefits policies are unlikely to make a dent in the problem of the uninsured.</p>
<p>Yet another strategy is to subsidize the purchase of health insurance. Some states, such as Maine, provide subsidies to small businesses that purchase health insurance, usually restricting the subsidies to first-time buyers. Others, such as Washington, contract with a provider to offer subsidized insurance to the working poor. Still others market subsidized policies to special groups, such as pregnant women and children. Boosters of state-based solutions to the health insurance crisis often point to programs like these as successful demonstrations of what states can do.</p>
<p>Obviously, though, state fiscal realities limit the potential subsidies. With twenty-eight states now running in the red and governors everywhere cutting back services and laying off workers, subsidies for health insurance can hardly be expected to grow. (In fact, Michigan cancelled its demonstration program in 1991 because of the state fiscal crisis, and Massachusetts' program is in a stall.) Like the other state solutions, subsidies do not alter the market so as to make insurance affordable in the long run.</p>
<p>Some states have established reinsurance mechanisms whereby the state or a private insurer picks up the costs of expensive medical care for individuals or for employees of small-group plans. Connecticut assesses all sellers of small-group policies to finance the reinsurance, but many other states use state subsidies in addition. Reinsurance does ultimately spread the risk of catastrophic illness in small groups, but via an administratively complex (and thereby expensive) route. Reinsurance simply fragments the market, rather than aggregating expensive risks with cheaper ones.</p>
<p>Some state and local pilot projects take on and pay for the administration and marketing of small-group insurance in order to enable private insurers to charge small groups the same low premiums as large employee groups. This practice, too, merely subsidizes the profits of private insurers, without changing their incentives to segment the market into smaller groups.</p>
<p>None of these reforms addresses the real problem of the small-group market: smallness. Insurance works by aggregating risks into large pools and spreading the costs widely. Each of the so-called small-group reforms in fact enables insurers to keep the market disaggregated, and to make profits on the administration of an inherently inefficient structure.</p>
<p>Most of the other state innovations to deal with the access problem follow similar lines. Typical strategies, each used by several states, include creating special state pools for some uninsured workers (but not non-workers), poor pregnant women, the disabled, or children; establishing trust funds or special accounts to cover hospitals' costs of uncompensated care; and providing tax credits or subsidies to small firms who offer health insurance. Because each of these strategies addresses only part of a large systemic problem, each stopgap measure lets the overall system continue to operate -- and to continue excluding those with high risks from full protection.</p>
<p>The cure for lack of insurance has to be risk-pooling. Unless we create mechanisms to re-aggregate people into large groups to share the costs of health care, we will continue to siphon money into unnecessary and wasteful insurance contraptions. <font class="headline">Barriers to State Solutions </font><br />The big problems of health care transcend state boundaries and require more political power than state governments have. The federal Medicare program is the payer for about 40 percent of hospital costs. States and community coalitions can try to do something about controlling hospital costs, but the lion's share of the costs is controlled by a lion outside their jurisdiction. Indeed, the federal government's chief cost-containment strategy for Medicare has been to use price controls and other methods to curtail its own costs, and to withdraw from sharing the costs of uncompensated care with other payers.</p>
<p>Medicaid accounted for nearly 14 percent of state budgets in 1990, the second biggest line item after elementary and secondary education. Although it is nominally a federal matching program for expenditures the states decide to make, in practice the states have less and less autonomy to decide what they will spend on Medicaid, let alone how they will manage the program. Federal mandates have increased the types and income levels of people states must cover, first through the federal Supplemental Security Income (SSI) program, then through mandates built into budget acts of the 1980s. What started out in 1965 as a physician and hospital insurance program for the poor has become, through the SSI program, primarily a funder of long-term care and other services for the elderly, disabled, and blind. These three groups account for about 30 percent of the Medicaid population but 75 percent of Medicaid expenditures. Poor adults and children on Aid to Families with Dependent Children (AFDC) comprise about 70 percent of the Medicaid population but account for only 25 percent of the expenditures.</p>
<p>As Medicaid expenditures have grown, states have covered a declining proportion of the poor. The ratio of Medicaid enrollees to the poverty population dropped from 65 percent in 1976 to only 42 percent in 1989. As a result, states and their county and local public hospitals are forced to pick up the tab for uncompensated care. Meanwhile, the federal government no longer shares in those costs through Medicare.</p>
<p>No wonder, then, that state officials feel powerless to control their programs and their budgets. An official in Tennessee complained of "state programs being turned into federal programs," and many speak of the "federalization" of Medicaid. The executive director of the National Governors Association, commenting on its most recent survey of state budgets, says that Medicaid requirements imposed by the federal government are "devouring virtually every new dollar of revenue and leaving little money for new programs." The survey found that state Medicaid budgets are expected to rise by 116 percent in only five years. <font class="headline">Mandates without Power </font><br />Federal rules simultaneously require states to expand their coverage of people and services and constrain their ability to control costs. States could try to squeeze hospital and physician reimbursements. But a 1981 federal rule, known as the Boren amendment after Senator David Boren of Oklahoma, guarantees hospitals and nursing homes "reasonable and adequate rates," and a 1990 amendment requires that nursing home rates take into account the cost of services necessary to provide the "highest practicable" well-being. Also in 1990, the U.S. Supreme Court interpreted the law to allow facilities to sue states for adequate reimbursement. Such suits are on the increase, and the mere threat has made states more cautious about holding down reimbursement. States are left to find budget cuts elsewhere -- in physician services, outpatient care, immunization programs and other health services not protected under the Boren amendment, and in AFDC and General Assistance programs.</p>
<p>For many state officials, federal restrictions on their capacity to control reimbursement are not nearly as annoying as the federal propensity to issue mandates and then fail to provide regulations for carrying them out. A section of the 1987 Omnibus Budget Reconciliation Act, for example, requires states to monitor nursing home performance by assessing residents, staff, and facilities. But even though states were required to implement the program by October 1990, the Bush Administration did not issue final regulations by then, and, even now, there are no final or even proposed regulations for some of the provisions. States must operate in the dark.</p>
<p>When states want to experiment with innovative ways of managing their health expenditures, they need to get a waiver from normal federal program rules. Precisely because Medicaid is a joint federal-state program, designed originally to induce states to make greater fiscal efforts on behalf of health care for the poor, it has certain national standards for state programs. These include not only eligibility conditions and minimum service packages, but other design requirements, such as offering recipients a free choice of medical provider, making all program rules applicable across the state, and using particular forms of provider reimbursement.</p>
<p>While often laudable and highly effective, national standards also seriously constrain the ability of states to experiment. If a state wants to conduct any kind of an experiment on a community or county level, for example, it needs a waiver from the "statewideness" requirement. If it wants to experiment with more centralized budgeting and planning by combining all revenue sources for health care, it needs waivers to include Medicare and Medicaid in its plans. Moreover, the federal government requires that all state experiments be budget neutral for the first year to qualify for a waiver.</p>
<p>The federal waiver process has, by all accounts, been at best a discouragement and at worst an obstacle to state innovation. Even though the 1981 Omnibus Budget Reconciliation Act encouraged states to experiment with different cost-containment strategies and authorized waivers to permit states to limit recipients' choice of providers, subsequent congressional acts and amendments gave conflicting signals to the states. The 1985 Consolidated Omnibus Budget Reconciliation Act permitted states to provide case-management as an optional service, without seeking federal waivers, and gave an extraordinarily broad definition of case-management. But when the National Governors Association surveyed state officials in 1986 about their plans to implement case-management experiments, many indicated they were "waiting for guidelines and regulations" and that they were uncertain how the federal government might interpret statutory language and administer its review of state plans.</p>
<p> </p>
<p></p><center>* * *</center>
<p>Perhaps the most vivid example of how the federal waiver process puts the brakes on state innovation is the current Oregon plan to deny Medicaid coverage for certain medical procedures deemed not cost-effective. Under the plan, Oregon would be the first state to manage the cost/access dilemma by explicitly refusing to fund some expensive procedures and using the savings to insure more people for primary care. Oregon leaders conceived of the plan as incremental. Initially, the service exclusions would apply only to poor adults and children in Medicaid, but not to the elderly, disabled, and blind recipients of Medicaid (nor to state employees or anyone else in the state). Since the Oregon plan calls for eliminating some services from the federally specified basic Medicaid benefit package, the state must have a waiver to implement it. After the Health Care Financing Administration (HCFA) indicated its reluctance to use its administrative authority to grant a waiver for such a major change, the state turned to Congress for a legislative waiver. Oregon hired ICF/Lewin, a national consulting firm, to write its waiver application, and turned to Senator Bob Packwood for help in Congress. The Senate Finance Committee voted in favor of a waiver, and the debate moved to the full Senate for consideration as part of the 1989 budget bill.</p>
<p>Once the waiver request entered the congressional arena, it became highly visible. Sara Rosenbaum, then Director of the Health Division for the Children's Defense Fund, saw inequities for poor women and children and teamed up with Representative Henry Waxman of California to oppose the waiver. National organizations such as the American Academy of Pediatrics, the National Association of Community Health Centers, and the Association of Catholic Hospitals lined up with opponents of the Oregon plan. The Oregon waiver was dropped like a hot potato from the 1989 budget bill and continues to be embroiled in national politics.</p>
<p>There is much to criticize in the Oregon plan [see Bruce Vladeck, "Unhealthy Rations," TAP, Summer 1991], but holding aside its merits, one dramatic lesson is surely that states are not their own masters in the making of health policy. Medicaid is the biggest state program addressing the health problems of the poor, yet federal regulations ensure that states cannot innovate without national political support.</p>
<p>States are so hungry for solutions to the problems of health care for the poor that they sometimes pick up an innovation before it has gotten off the drawing boards in its home state. For example, the Colorado, Ohio, and Michigan legislatures have debated proposals patterned on Oregon's. "We recognize that Oregon hasn't finished the process," an aide to a Michigan state legislator told Linda Demkovich of the Intergovernmental Health Policy Project, "but it's important that we start dealing with those same issues." <font class="headline">The Big Sleeper: ERISA </font><br />Although never conceived as a piece of health legislation, the Employee Retirement Income Security Act of 1974 indirectly had a major impact on the American health insurance system, perhaps more than any other legislation since Medicare and Medicaid. By exempting firms that self-insure from state insurance regulation, as well as from state premium taxes, ERISA gave employers a strong incentive to exit from the commercial and Blue Cross-Blue Shield insurance markets. In 1974, self-insured plans covered only 5 percent of people with employee health insurance; now they cover over 50 percent.</p>
<p>The very same ERISA exemption that promoted the break-up of large risk pools in health insurance also prevents state governments from rectifying the disintegration. Because they cannot regulate self-insured businesses, they cannot reach the major insurers. This exemption has drastically curtailed the possibilities for state reforms.</p>
<p> </p>
<p></p><center>* * *</center>
<p>Hawaii offers the crucial lesson here. In 1974, slightly before ERISA was passed, Hawaii passed its Prepaid Health Care Act requiring employers to provide coverage at least as good as a state-defined benefit package, and to pay at least half the cost of coverage for their employees. When the state tried two years later to increase the benefit package by adding treatment for substance abuse, Standard Oil Company of California sued, claiming ERISA preempted the states from regulating self-insured companies. The U.S. Supreme Court agreed with Standard Oil in 1981. Hawaii managed to negotiate a special congressional exemption from ERISA permitting it to keep its original plan and benefit package in place, but it can make no further changes in requirements on self-insured companies. At the time, Congress made clear, too, that its one-time exception for Hawaii would be unavailable to other states.</p>
<p>The Supreme Court interpretation of ERISA, combined with congressional unwillingness to extend the Hawaii precedent, means that states have almost no leverage over employers. A few states are trying to craft legislation to require employers to provide health insurance without violating ERISA, by offering employers the option of "playing" or "paying." (Employers may either buy health insurance that meets state criteria, or pay into a state pool to cover people without health insurance.) So far, only Massachusetts and Oregon have play-or-pay laws on the books, and both have delayed carrying them out until at least 1995. No play-or-pay law has yet undergone the judicial scrutiny sure to come from a business challenge.</p>
<p>As more states have run up against ERISA in their attempts to extend health insurance, Congress has begun to face up to the problems it unwittingly created with ERISA. The HealthAmerica proposal introduced by Senate Majority Leader George Mitchell, though primarily relying on a play-or-pay approach to expand insurance coverage, includes an option for states to experiment with a single-payer system, and even offers some federal planning money and technical assistance. The bill would override ERISA. (President Bush's "Comprehensive Health Reform Program," by contrast, would actually extend the ERISA exemptions to small businesses that buy commercial insurance, thus further undercutting states' ability to regulate insurance.) <font class="headline">Big Problems Need Big Innovators</font><br />Even if the federal government were not an obstacle, there are still reasons why states might not be able to craft and implement successful solutions to the challenges of health policy.</p>
<p>Probably the most widely used metaphor in health policy is the balloon. Squeeze health care costs in one part of the system and they whoosh to another. If one payer musters the power to constrain its hospital costs (say, by firmly fixing its hospital rates, as Medicare did, or by getting a state to cap its hospital rates as Blue Cross-Blue Shield did in New Jersey), hospitals shift their costs by charging more to other payers and self-paying patients. If a state manages to establish an effective means of hospital cost controls, physicians move more of their work out of the hospital to offices, walk-in clinics, and outpatient surgery centers. If states try to increase employers' share of health care costs and citizens' access to services by requiring all insurance policies to include certain benefits, employers switch to self-insurance, where, because of ERISA, they are not subject to state regulations. The lesson of the balloon metaphor is that it is impossible to regulate the health system effectively if you can only regulate a part of it.</p>
<p>In health policy, the fates of the key interests -- hospitals and physicians, commercial and non-profit insurers, business and state government -- are inextricably intertwined, and each player is exquisitely sensitive to proposed policy changes. Because the stakes for each group are so high, even a temporary loss seems unthinkable. From the point of view of state governments, permitting temporary losses might mean destruction of institutions -- hospitals that go out of business, physicians who flee the cities, insurers who stop writing business in the state, or employers who move their operations and their jobs out of state. In this type of political contest, one player in the system can block a proposal and effectively bring the situation to a stalemate.</p>
<p>States are hamstrung in part by being "only" states. In a federal system, political actors who are unhappy with a state regulation always have the possibility of exit, and the threat of exit is developed to a fine art. If a state tries to regulate insurers (and states are the only jurisdiction with authority to regulate insurers), insurers can and do threaten to withdraw from the state. Insurers used this tactic when a few states and Washington, D.C., tried to prohibit health and life insurers from using AIDS antibody tests, and succeeded in rolling back every state prohibition except California's ban on using AIDS tests in health insurance underwriting. When Massachusetts was trying to legislate its play-or-pay law in the late 1980s, the threat of exit by both business and insurers was an omnipresent, if usually unspoken, factor in the bargaining. States simply do not have the clout to push business and insurers around.</p>
<p>Threats of exit can be so potent that state policy makers are discouraged from even attempting reforms. Even states with healthy economies that have the fiscal potential and political will to increase their taxes feel impotent to proceed. The director of Maine's state planning office noted that even though states have the formal power to raise property and sales taxes, they are "constrained in how aggressive [they] can be. We can't go to a 6 percent sales tax when New Hampshire doesn't have one."</p>
<center>* * *</center>
<p>The corollary of the exit threat in a federal system is the "magnet fear." States fear that by offering more generous benefits to the poor than neighboring states, they will actually induce more poor people to move into the state. Indiana and neighboring Illinois seem to be a case in point. Officials in both states agree that some people have moved to Indiana because its high-risk pool is easier to join than that of Illinois. Indiana has no waiting list, no ceiling on enrollments, and only a ninety-day residency requirement. Observers of Hawaii's unique program invariably note that because the state is in the middle of the Pacific Ocean, legislators did not need to worry about attracting uninsured poor people from the continent.</p>
<p>The striking thing about all the "universal access" reforms is that they are conglomerations of different insurance plans, with multiple insurers, eligibility rules, benefit packages, and arrangements with providers. It is common wisdom now among health policy analysts that such aggregations generate huge administrative costs to pay for all the personnel and paperwork necessary to keep everybody and everything sorted into its proper compartment. Steffie Woolhandler and David Himmelstein estimate that as of 1987 $96 billion to $120 billion, or 19 to 24 percent of annual health expenditures, went to administrative expenses, including insurance overhead, hospital and nursing home administration costs, and physician billing costs. These estimates do not include the costs of burdensome paperwork that patients, especially Medicare recipients, must perform.</p>
<p>Large public insurance programs are notably more efficient than the fragmented U.S. industry. While overhead for the U.S. private industry has been estimated at 11.9 percent of premiums, the Social Security Administration spends about 2 percent of its revenues on overhead, and the national insurance system in Canada only 1.2 percent. Other nations with unified insurance programs (Canada, Britain, Sweden, Japan, France, and even Germany with its 1,100 sickness funds) manage to provide greater access to health services for far less money.</p>
<p><font class="headline">Who Champions the States?</font><br />Our health policy system is federally dominated, notwithstanding the reigning ideology that celebrates state and local innovation. All the vibrant hustle-and-bustle of health insurance reform at the state level is testimony to the optimism and dedication of state officials, not to mention dire necessity. But no one should be lulled into thinking states can control all the pockets of the health care cost balloon or reconstitute the splintered insurance market into a viable, large risk pool.</p>
<p>States cannot hope to curtail harmful insurance underwriting practices unless they band together -- or unless the federal government does it for them. As long as they continue to fund high-risk pools, subsidies to small businesses or uninsured individuals, and special insurance plans for special constituencies, they only contribute to the fragmentation of risk pools, thereby enabling insurers to continue using risk-selection as their prime cost-saving strategy, and fostering the expansion of administrative costs.</p>
<p>Even the President's plan, otherwise an ode to free markets, recognizes the inability of states to halt the erosion of risk pooling. The plan proposes a federal prohibition on some of the worst industry risk selection practices: cancelling policies once the holders become sick; refusing to insure sick members of small employee groups; and excluding coverage for preexisting conditions. The plan would also put some limits on insurers' ability to differentiate prices according to people's health status, although it is vague on how the limits would work. Still, the transfer of even that much jurisdiction over insurance to the federal government is remarkable for an administration committed to reducing federal regulation.</p>
<p>If the Bush plan recognizes the need for greater state clout over insurance industry practices, it is not inclined to enhance state power vis-a-vis business. As already noted, it would extend the ERISA exemptions to small businesses, removing more of the insurance market from the reach of state regulation. Of course, conservatives think state-mandated benefits are the major cause of lack of coverage, so releasing business from their grip should be a good thing.</p>
<p>However, the President's plan is nearly silent on what benefits would have to be included in the basic package it would extend to people through tax credits and small group reforms. The few illustrative examples of basic benefit packages, which are carefully billed as examples, not requirements, include plans that provide only three physician visits a year or only fifteen hospital days a year, and plans that make no mention of maternity and prenatal care or of prevention. Under the President's plan, states may not be able to guarantee access to insurance worth having.</p>
<p>In theory, states should be an ideal jurisdiction for large health insurance risk pools, but to carry out serious reform in the face of the existing insurance system, the states need federal legislation to empower them. Without jurisdiction over self-insured employers or the clout to clamp down on insurance selection practices, the states can only tinker. The idea that the solution to the health care crisis will appear in the states, as if they could act on their own like true "laboratories of democracy," is a fantasy.</p>
</div></div></div>
<p><em>Deborah Stone</em>, Tue, 05 Dec 2000</p>
<p><font class="headline">Race, Gender at the Supreme Court</font> (http://prospect.org/article/race-gender-supreme-court)</p>
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The confirmation hearings of Clarence Thomas were a great national Rorschach test. The lesson, some say, is that the United States has made great progress in race relations. Or, is it that racism is alive and well? Some concluded that women gained a new place in politics, so that even an issue as threatening to men as sexual harassment can no longer be swept under the rug. Others learned that women are still not taken seriously by a male power establishment and it doesn't pay to speak up. For a few, the Thomas affair demonstrated the strength and adaptability of our political institutions. For many, it revealed rot at the core.</p>
<p>Whichever interpretations ultimately dominate the nation's collective self-understanding, politics after Thomas will never be the same. The hearings not only changed the way we will frame issues of race and gender, but also the institutional machinery with which we will resolve them. Lost in all the rumbling about race and gender and party politics is the most profound transformation of all: the gradual erosion of the Supreme Court's moral authority as it becomes less a co-equal branch and more explicitly a creature of presidential ideology and policy strategy.</p>
<p>Political analysts of every stripe immediately recognized the nomination as a brilliant maneuver to split the traditional liberal alliance between the civil rights movement and the women's movement. By naming an anti-affirmative-action black man to fill the ninth seat on an otherwise all-white court, Bush forced liberals to choose between black representation on the court or public policy efforts to create opportunity for minorities. By naming an opponent of choice on the abortion issue, he forced liberals to choose between their black constituents, who strongly favored a black replacement for Justice Thurgood Marshall, and their female constituents, who for the most part favored leaving abortion decisions in the hands of individual women. In short, liberals had to choose between the potent symbolism of demographic representation and the pragmatic reality of policy substance.</p>
<p>The real comeuppance for liberals is that they will have to stop relying on crude symbols of race and gender, and instead develop policy positions that speak to women and blacks in all their diversity about issues of well-being, work, and family. This means going beyond the civil rights agenda of the sixties, and even the social equality agenda of the seventies and eighties, to a deeper understanding of how discrimination, subjugation, and exclusions work -- and work differently -- in different social institutions.</p>
<p><font class="headline">Affirmative Inaction</font><br />Republicans successfully maneuvered the confirmation process so it became a parable about the dangers of affirmative action as conservatives have portrayed it. Liberals, so the conservative story goes, in their efforts to provide equal opportunity for the disadvantaged, might ignore competence, sacrifice quality, and destroy organizations in the process. The Democrats fell right into the conservative trap and played out the script.</p>
<p>First George Bush, a steadfast opponent of affirmative action who sees quotas lurking everywhere, made a nomination that looked an awful lot like filling a black quota on the Court. He named a black man who had less than a year and a half of judicial experience; lacked any coherent judicial philosophy; and was in all probability willing to lie to get the job, since it is unlikely in the extreme that a lawyer of his generation never discussed <i>Roe v. Wade.</i></p>
<p>Next, Clarence Thomas, who has insisted blacks don't need special consideration, that they should earn their positions the hard way, invoked racism as a special consideration the moment he got into trouble, precisely so he wouldn't have to defend against the harassment charge the hard way. ("I will not get into any discussion about my private life," he said, and Democrats on the Judiciary Committee obliged him. Only a week earlier, he and the White House had peddled his private life as his main qualification for the job.) Even though Thomas is black, and pejorative racial stereotypes about sexuality do exist, does that mean his behavior cannot be examined and held to the standards of the law of the land? Thomas seemed to think so.</p>
<p>Thus, conservatives capitalized on the very brand of affirmative action policy they nominally reject: fixed quotas and lowered standards applied on the basis of skin color. The southern Democrats, too, used Thomas as a cipher; if voting for him would get them kudos from their constituency, they would support him, no questions asked. Sad to say, many liberals participated in this form of deference to skin color, though it is not the brand of affirmative action most would otherwise defend. Hobbled by the Dixiecrats, by their own unwillingness to play hardball politics, by Senator Ted Kennedy's personal troubles, and by a general squeamishness about confronting racial issues head on, liberals on the Judiciary Committee did exactly what many people most fear and resent about affirmative action: They brushed aside the question of the candidate's competence.</p>
<center>* * *</center>
<p>Although the American Bar Association rated Thomas as only "minimally qualified" for the Supreme Court, the Judiciary Committee failed to investigate his competence in any serious way. They deferred to him when he insisted he had no opinion on issues of jurisprudence or specific cases, or when he said it would be "inappropriate" or "improper" for him to comment on recent cases. Improper for someone applying for a permanent job on the Supreme Court? When the committee questioned Thomas about legal views he had expressed in speeches, he often replied that his statements weren't really his positions, that they were thoughts of the moment, and that he hadn't really understood the implications of decisions about which he had offered strong opinions. His strongest defense was that his critics had mistaken mere opportunism for extremism.</p>
<p>The Judiciary Committee largely ignored all these signs of his inability to articulate a coherent position, and assumed instead that he was stonewalling to avoid giving opponents anything to use against him. But it was entirely possible and plausible that Thomas simply didn't know constitutional law and didn't follow the jurisprudential disputes about recent cases of the Supreme Court. No one was willing to push very hard to find out.</p>
<p>The Democrats' great political failure on affirmative action went virtually unnoticed. They allowed the conservatives to act out a bankrupt version of affirmative action, one that ought to get elected representatives into trouble with both black and white voters.</p>
<p>Democrats might have started by forcing Thomas to address his views on affirmative action in the context of his own life. They could have used the Thomas family story [See article, page 78] to show that access to jobs in the privileged, primary labor market is largely through expensive credentials and personal networks. Lacking these credentials and networks, most of the working poor, like Thomas's sister and mother, participate in a secondary labor market where the jobs are underpaid and carry no pensions, health insurance, unemployment benefits, job security, or pathways to better jobs. Economic security and upward mobility through hard work -- the great social backdrop against which affirmative action seems unnecessary -- are simply not there for many Americans.</p>
<p>Democrats might have used the hearings to challenge the conservative portrayal of affirmative action as a departure from the "normal" merit-based system of job recruitment, promotion, and pay allocation. They could have asked Thomas whether he supports veterans' preferences and seniority, two major departures from merit in the normal labor market that overwhelmingly disadvantage women and blacks respectively.</p>
<p>They could have shown how the notion of individual achievement used by conservatives to promote Thomas and debunk affirmative action profoundly oppresses women. It labels men like Thomas, supported at every stage of his life by female relatives, as products of their own efforts, while it denigrates women like his sister, who work at taking care of their families, as dependent scroungers.</p>
<center>* * *</center>
<p>The hearings were, at bottom, a political default. Many Democrats have come to accept the tacit premise that a President is entitled to his Supreme Court nominees, no matter how scantily qualified, no matter how extreme their views. The Senate was moved to vote down Robert Bork not because of his extreme views, but because of his extreme arrogance. Thomas in the end received forty-eight negative votes rather than the anticipated thirty to thirty-five, only because of the sexual harassment charge. Democratic senators seem to accept that as long as a candidate has no overt prejudices, no criminal record, and -- better yet -- no record of jurisprudence, controversial or otherwise, they are obliged to vote for him. They seem to accept that if they turn down scholarly right-wing judges, the corollary is that they must vote to confirm mediocre ones.</p>
<p>These assumptions are, of course, preposterous. The Democrats ought to demand that the President's judicial nominees be both judicially distinguished and ideologically moderate, not one or the other. This is, after all, the all-time record era of divided government. It is only reasonable that a President who shares power with a Democratic Senate should not be able to insist on nominees well to his own right -- men whom he has been nominating mainly to curry favor with the Republican party's extreme right wing. Bush has no respect for either the Senate's advise-and-consent function or for the Court's stature as an institution. Dwight Eisenhower, who had both, nominated William Brennan -- William Brennan! -- with the full knowledge that he was a liberal Democratic state judge, as well as Earl Warren, a moderately liberal Republican governor.</p>
<p><font class="headline">Supreme Courtship</font><br />It was a failure of politics in the first set of hearings -- a failure to challenge the candidate's temperament, philosophy, and qualifications -- that led indirectly to the bungled attempt in the second set of hearings to challenge the candidate's character. If the hearings united everyone against the <i>idea</i> of sexual harassment, they also exposed profound disagreement over what it is. The term is nowhere mentioned in Title VII of the Civil Rights Act of 1964, but since 1986, the act's prohibition of "discrimination on the basis of sex" has been interpreted by the Supreme Court to include two types of sexual harassment: "Quid pro quo" harassment, when a supervisor or employer makes sexual favors a condition of the job or promotion; and "hostile environment" harassment, when an employer permits unwelcome remarks, pornographic posters, or constant attention to a person's sexuality that interferes with her ability to perform her job.</p>
<p>The treatment of Anita Hill demonstrated one of the inadequacies of formal civil rights law. When a woman comes forward with a sexual harassment claim in 1991, she is protected by a judicial doctrine that recognizes sexual harassment as a civil rights violation. But judicial doctrine is only as good as the way it is interpreted, and sexual harassment, like rape, has mostly been adjudicated from a male point of view which largely ignores realities of gender power.</p>
<p>Hill was verbally battered by older white men who asked her in a hundred ways why she hadn't behaved as they would have in such a situation. Why had she followed Thomas to another job and maintained good relations with him if she found his behavior so unbearable? They simply could not imagine what it is like to try to make it as a young, black woman in a racist, sexist world. As soon as they got close to understanding, they shivered at how the dirty little secrets of their own world of power would look to the American public. Perhaps, as elected politicians, they could imagine all too well what it is like to have to make nice to people you despise but whose support you need. But like Clarence Thomas, they pretended that individuals make their careers by themselves, and so refused to regard Anita Hill's situation from the point of view of someone who needs other people -- and knows and admits she needs other people -- to get anywhere.</p>
<p>Anita Hill's hearing was a kind of symbolic rape trial. Her virtue and character were challenged, while Thomas's behavior and motives were taken at his word. Her sexuality was examined and pontificated upon by witnesses-turned-pop-psychologists. Witnesses for Thomas were encouraged to speculate on her motivations for fantasizing the events she described. An acquaintance was brought in to testify to her proclivity to see romantic interest where there was none. She even underwent the ritual physical examination familiar to rape victims, this one in the form of a lie detector test. Though Senator Joseph Biden, the Judiciary Committee's chairman, didn't admit the test as evidence, it is a tribute to the power of the symbolic ritual that her lawyer advised her to take the test, while Bush publicly called it "a stupid idea" for Thomas.</p>
<center>* * *</center>
<p>For all the prurient interest that may have made people watch, listen, and read, the motive force for this national exercise was a clash of deep male and female anxieties. For women, it was the anger at being transformed into a raw sexual object and the powerlessness to stop or undo that transformation. For men, it was the fear of false accusation and of prosecution for a crime whose standards are not clear to anyone, least of all themselves.</p>
<p>The hearings, surveys, interviews, and polls dramatized this conflict without moving an inch toward resolving it. We were left with a host of questions. What are the limits of permissible courtship in the workplace? Have the boundaries of the workplace expanded to include the bar around the corner, the restaurant, the apartment near the office, the out-of-town conference hotel? What can a woman reasonably be expected to do to defend herself at the moment? Since sexual harassment, like rape, is usually an offense without witnesses, what will count as evidence? How can a man defend himself against harassment charges besides simply denying them?</p>
<p>For starters, to frame the issue as one of confusion over standards of permissible courtship is to miss the mark. True, harassment often includes activities that in another context would be courtship -- asking for a date, making flattering comments, touching, kissing -- but context is all the difference. Though the workplace is often the setting for social mixing, the job and particularly the supervisory relationship are not mixers. No woman or man can do his or her job, let alone be perceived as doing it well, while being treated as an object of sexual conquest. What may seem to a man a minor sexual comment, joke, or advance can assault a woman by abruptly shifting her mental focus from work tasks and temporarily casting her out of her work role. That's the mild form. Sexual demands, forced conversations about sex, or unwelcome touches do more than temporarily displace her identity; they suppress it and deny it by making her sexuality more important than her work. This is what women mean when they say harassment is about power, not sex.</p>
<p>This is also why styles of courtship are irrelevant. The issue is not, as Orlando Patterson wrote in <i>The New York Times</i>, whether people from different regions, social classes, or ethnic backgrounds have different styles of courtship. According to Patterson, regaling a woman with "Rabelaisian humor" is a normal part of Southern working-class courtship ritual, and Anita Hill, who surely understood that, was "disingenuous" when she displaced Thomas's behavior from its context and brought it into the white, upper-middle class work world of the senators. Patterson concluded that if Thomas had done exactly what Anita Hill said he did, he would be morally justified in lying because she had applied the wrong standards to his behavior and he didn't deserve the "self-destructive and grossly unfair punishment" that telling the truth would bring.</p>
<p>Therein lies the rub. Just whose standards should be applied to the kind of behavior at issue here? Sexual harassment, like rape, is a crime of coercion (though it is not strictly a crime, but a civil rights violation). Harassment is coercing someone into sexual contact they don't want to have and coercing them out of one identity and into another. Only genuine consent can render an activity non-coercive, and therefore the standard of judgment should reflect how the action looks to the weaker party, given the real disparity of power. It is a mockery of the liberal ideal of autonomy to interpret a potentially coercive relationship from the point of view of the person who has the power to coerce. The only just criterion in a harassment case is whether the woman felt she had the freedom to resist, without taking career risks.</p>
<center>* * *</center>
<p>Is that unfair to men? Are men supposed to be mind-readers, you ask? Well, yes. Parents, who exercise inordinate physical and psychological control over children, are morally and legally obliged to understand their children's needs, even when their children can't talk. They are not free to abuse children because the children don't protest. In any situation of power, the powerful have a moral obligation to see the world from the point of view of those they govern or control, and to exercise power in the interests of the governed. Just consent is what makes power legitimate instead of tyrannical.</p>
<p>Especially since most harassment takes place in private, with no witnesses, the weaker party needs the protection of a legal standard that says her "no" means "no." She can't enforce her "no." Sometimes, she feels too threatened even to utter her "no." As long as men are in positions of power, the burden is on them to anticipate how their actions affect weaker people. This is the burden that goes with the privilege of power.</p>
<p>There is work to be done to get this standard to prevail in courts as harassment cases are adjudicated, and even more important, in men's heads as they live their daily lives. The women's cause was enormously advanced by the outpouring of tales of harassment following the Hill testimony. Perhaps the next step should be "outing" -- telling stories with names attached. The fear of false accusations might just do wonders to get men to feel in their stomachs the vulnerability and powerlessness women live with constantly.</p>
<p><font class="headline">Fair Judging</font><br />It is in just such situations, where the points of view of the powerful can obliterate those of the weak, and where objective evidence is difficult if not impossible to obtain, that we most need judges we can trust. We need judges who have the capacity to empathize, to evaluate evidence and arguments from multiple points of view, and to suspend judgment while they move between different points of view. Clarence Thomas showed few of these qualities.</p>
<p>As Ronald Dworkin noted in the <i>New York Review of Books</i>, Thomas asserted views in a speech to the Heritage Foundation that would logically require the Supreme Court to outlaw abortions after conception. (In other words, the Supreme Court should not just roll back <i>Roe v. Wade</i> so that states may outlaw abortions if they wish, but it should revoke the states' current authority to permit abortions.) If, as he told the Judiciary Committee, he was merely trying to appeal to his conservative audience in that speech, had only skimmed the article whose ideas he endorsed, and had thought the ideas would be interesting "to play around with," then he has a rather cavalier attitude about the responsibilities of a federal judge to develop considered views on issues over which he will exercise great power.</p>
<p>Thomas gave us other glimpses of his cavalierness toward judging. In maintaining he had never discussed <i>Roe v. Wade</i>, he was saying he felt no need to engage with the legal community or anyone else about one of the major constitutional and political issues of his era. In endorsing the view that Anita Hill was part of a liberal interest group conspiracy to undo him, he showed a healthy disrespect for evidence. In announcing that he had not watched or listened to any of Anita Hill's testimony, he showed a disdain for the fact-finding process. It is not clear which is the scarier prospect: a Supreme Court justice who thinks abortion should be entirely outlawed, or one who thinks it is proper for a judge to decide without paying much attention to evidence or argument.</p>
<p><font class="headline">Equal Opportunity on Trial</font><br />In the background of the Thomas hearings was the paradoxical issue of affirmative action. Was he the ultimate affirmative action hire? Would he do the ideological bidding of his conservative sponsors and be the definitive fifth vote against affirmative action? And is affirmative action worth defending? This was the debate that the Judiciary Committee never quite had, and one that liberals ought to be leading. In the hearings themselves, the Democrats failed to use the confirmation process as a venue to dissect the symbol affirmative action has become, and to defend a coherent affirmative action policy aimed at making formal legal equality a reality.</p>
<p>In the university, ordinarily a bastion of liberal values, the scramble to recruit black university professors from a very small pool of qualified applicants has created a mentality of grudging tokenism in many academic departments and has left a residue of bad feeling among the professoriate of both races. Many white college professors feel coerced into hiring colleagues of seemingly lower formal qualification, while many highly qualified blacks resent the presumption that they were hired only because of their race. This dynamic has left some liberal intellectuals particularly skeptical of the whole approach. However, it is wrong to project the college experience onto affirmative action generally. Affirmative action is not simply, or even mostly, for professional elites. The real action is out there in construction, manufacturing, clerical jobs, unionized public-sector jobs, transportation, and the like. It is in these sectors that formal qualifications matter less, yet oddly, minorities and women have been excluded from the better paying manual jobs.</p>
<p>Affirmative action has also been criticized for giving disproportionate help to relatively advantaged blacks, while ignoring masses of poor blacks. But affirmative action was never intended as a means to improve jobs at the lower end; it couldn't possibly do anything to increase pay, benefits, job security, or advancement opportunity in the secondary labor market. Of course, we ought to make bad jobs better through other policies such as the minimum wage, tax credits, health insurance, and unemployment insurance; and we ought to make paid employment less hostile to family life.</p>
<p>Liberal leaders need to explain that the working poor are poor and sometimes unemployed because their government and business leaders don't provide a stable economy and a decent safety net, not because unqualified women and members of minority groups are taking their otherwise terrific jobs. But affirmative action shouldn't be blamed for these broader economic failures. Rather, affirmative action was intended and designed to improve access to better jobs and careers, and to do so by altering systemic barriers to entry. In that, it has succeeded.</p>
<center>* * *</center>
<p>Liberals need to distinguish between the caricature of affirmative action exploited by the conservatives and the original spirit of affirmative action. They would do well to remind citizens -- and themselves -- of the social circumstances under which the Johnson administration devised affirmative action and the Supreme Court approved affirmative action in the first place. In 1965, under Executive Order 11246, the administration required federal contractors to take affirmative steps to overcome past patterns of racial exclusion. In 1969 the Nixon administration's pilot "Philadelphia Plan" added the requirement of specific "goals and timetables" to overcome persistent racial exclusion in skilled construction work. In the 1978 <i>Bakke</i> case, involving a University of California minority admissions plan, a fragmented Court concluded that minority representation goals could be constitutional. In 1979 the Supreme Court approved a voluntary affirmative action plan adopted by United Steelworkers and Kaiser Aluminum. The company hired only people with prior experience for its skilled crafts positions. On its face, the prior experience requirement was neutral, but since black workers had long been excluded from craft unions, few had any experience in skilled craftsmanship. To address this problem, the union and company created an on-the-job training program for all its employees. Entry into the program was determined by seniority (which again gave white workers an advantage), but half the slots were reserved for black employees, even if they had less seniority than white applicants.</p>
<p>The Supreme Court's first explicit approval of a court-imposed plan with preferential hiring goals came in 1986, in a case filed against a local union of the Sheet Metal Workers International Association by the Equal Employment Opportunity Commission. The union had barred black workers from its apprenticeship program until 1964, and after that, continued to award apprenticeship positions primarily on the basis of "sponsorship" by current union members. Obviously, the sponsorship requirement, although it never mentioned race, had the effect of keeping out non-whites. By the time the case reached the Supreme Court, the union had ignored several court orders enjoining it to stop its discriminatory practices and increase its hiring of non-whites.</p>
<p>Also in 1986, the Court approved a voluntary affirmative action program in a government agency. In this case, the Santa Clara County (California) Transportation Agency was trying to increase the number of ethnic minorities and women in professional, administrative, technical, and skilled craft positions. In fact, at the time of this case, there were no women in the 238 skilled craft jobs, although women were 36 percent of the area labor force. Despite the nominal openness of traditionally male jobs to women, deep and long-standing patterns of hostility to women prevented them from seeking these jobs or succeeding in them. So the agency set long-term hiring and promotion goals based on percentages of ethnic minorities and women in the area labor force, but didn't reserve any fixed number of slots for these groups. Instead, its plan called for taking sex and ethnicity into account as additional factors when there were several qualified applicants for a position. On that basis, the agency promoted a woman to the job of road dispatcher, from a pool of seven applicants who were deemed qualified after a first interview. The plan was challenged by a man who had received two points more than she -- on an eighty-point scale -- in the initial interview. (Think about the validity of a two-point difference in anything so subjective as an interview.) The Court allowed her promotion and the plan to stand, noting approvingly that the agency's plan created no "absolute bar" to men, set no quotas, and used sex and ethnicity criteria only in addition to job-related standards.</p>
<p>These cases established the broad outlines of affirmative action policy. Neither these nor other affirmative action plans approved by the Supreme Court were cases of someone arbitrarily seeking to fill a statistical quota for women or minority workers. They were cases where simply changing the formal rules and nominally opening up jobs and training programs to previously excluded groups was patently insufficient to establish genuine equal opportunity.</p>
<p>Affirmative action is sometimes necessary to enforce formal civil rights. Deeply rooted patterns of racial and gender exclusion, harassment, and discrimination have not been eliminated by one generation of civil rights law. The Supreme Court has approved race-conscious remedies when ostensibly race-neutral selection procedures either deliberately or inadvertently perpetuate the effects of prior discrimination. The Court has shown an increasing preference for race-neutral remedies, but has never said that race-conscious remedies for prior discrimination would be impermissible when race-neutral remedies are ineffective.</p>
<center>* * *</center>
<p>There is still plenty of room for this kind of affirmative action, if only liberal leaders dared articulate a rationale. Defensible affirmative action programs do not, as the caricature suggests, put people in skilled positions for which they are not qualified. They put sufficiently qualified people in a position to acquire more skills and knowledge, and to be eligible for further upward mobility genuinely based on their achievement. These programs recognize that when there is a surplus of qualified people for any job or training position, it is permissible to take into account other standards, such as ethnicity or sex, in making a selection from a pool of qualified people. The Supreme Court has consistently endorsed this kind of affirmative action, as long as the plan is temporary and doesn't entirely exclude whites or males from the opportunities.</p>
<p>Of course, an increasingly conservative Court may well pull back from the brand of affirmative action that seeks to broaden minority representation on the job, and narrow permissible affirmative action to cases of individual remedy rather than redresses of social patterns of exclusion. But that is no reason for liberals to give up on the affirmative action ideal, any more than liberals should give up on reproductive rights because the courts have begun to erode the guarantees of <i>Roe</i>. As in the case of <i>Roe</i>, an increasingly hostile judiciary means precisely that liberals must win their case in the court of public opinion and electoral politics. The tellingly labeled "Civil Rights Restoration Act," opposed by the administration all the way to the signing ceremony in the Rose Garden, illustrates how strong political action can and should counteract backsliding by conservative courts.</p>
<p>Since 1989, attempts to overturn major Supreme Court rulings have been virtually permanently on the congressional agenda. The Thomas hearings brought an almost immediate White House "compromise" on the previously deadlocked civil rights legislation, which, for all its gaps, is a vigorous rejection of a major line of recent Supreme Court interpretation. Congress failed to overturn <i>Rust v. Sullivan</i>, the "gag rule" on publicly funded women's health clinics. Ironically, that failure will probably assure that abortion is a prominent issue throughout the 1992 presidential campaign, and therefore a constant reminder of just how far out of touch with the mainstream the Supreme Court has strayed.</p>
<p><font class="headline">Congress and Court in the Dock</font><br />By the end of the second round of hearings, nearly everyone had lost sight of the Supreme Court as an institution. The Senate didn't grapple in the slightest with the institutional questions raised by Anita Hill's testimony: In the adjudication of disputes, how should judges assist the weak? How, in other words, is a court to be more like an umpire and less like a hired thug? And getting down to the brass tacks of advice and consent, is Clarence Thomas a man who has any moral sense of how to handle his own power? The Senate failed as a body of public counselors. It behaved instead like a master of television ceremonies and submitted Hill's and Thomas's performances to the national clap-o-meter of a hasty public opinion poll.</p>
<p>Most senators framed the issue as a criminal trial where the decision was guilty or innocent. Senator Biden told <i>The New York Times</i>, "In my mind if there is substantial doubt, you resolve that doubt in favor of the accused." Beyond-a-reasonable-doubt is indeed the standard of justice courts apply when they are considering depriving someone of fundamental liberties -- sending them to prison, for example. But it is assuredly not the appropriate standard when a legislature is considering elevating someone to a position of great power, from which he can be removed only with tremendous difficulty, and in which he will decide on the liberties of every citizen. Senator Kennedy came close to the point when he said, "In a case of this magnitude, where so much is riding on our decision, the Senate should give the benefit of the doubt to the Supreme Court."</p>
<p>Lacking any standard by which to assess the rightness of political issues, our politicians grab at standards from other spheres of life, such as personal character or criminal trials. The important questions remain unasked: What kind of institution is the Supreme Court? How and to whom is it accountable? What are reasonable criteria by which to evaluate the qualifications of proposed justices? What makes for good judging, and how can the extraordinary power of judges contribute to democracy rather than erode it?</p>
<p>While the legal scholars are still debating whether judges decide by some neutral principles of legal reasoning or are mere mortals exercising power, the public and the politicians know the answer. Nomination politics over the last few years has made that clear to anyone who doesn't remember Franklin Roosevelt's court-packing scheme. The last two Presidents have nominated judges with extremist views, and used their appointment powers to gain control of the judicial branch and thereby implement their preferred policies, in open defiance of Congress. Congress and interest groups have responded by playing the same game -- though far less adroitly -- treating the federal courts, and especially the Supreme Court, as just another political institution to be "won."</p>
<p>Even the Supreme Court, in an otherwise unobjectionable spring 1991 decision holding that elections of state judges are subject to the Voting Rights Act, implied that courts are political bodies. But if judges are to have legitimacy as neutral umpires and if they are to decide conflicts on the basis of higher principles, then they must not be regarded as merely political creatures representative of particular constituencies or current ideological fashions. If courts become merely representative institutions rather than deliberative ones, they risk losing their ability to resolve conflict on the basis of principle rather than raw power, and the rule of law suffers.</p>
<p>The Supreme Court's moral authority may have taken particularly heavy blows with the Thomas appointment, but if so, these losses are only part of a larger trend. Supreme Court nominations are increasingly hard to distinguish from electoral campaigns. Large majorities of Congress have voted on several recent occasions to overturn Supreme Court rulings. Coming at a time when several Supreme Court rulings on civil rights and one on abortion were under siege within Congress, the Thomas hearings only further dramatized that the Court's decisions are the result of a highly politicized selection process, just like the decisions of the other branches of government. It may be harder and harder to sustain popular support for the least democratic branch. Not that anyone will attempt to do away with the Court, but the Court's only real enforcement power resides in its ability to command respect and exert moral suasion.</p>
</div></div></div>Tue, 05 Dec 2000 02:51:49 +0000
Deborah Stone
Fetal Risks, Women's Rights: Showdown at Johnson Controls
http://prospect.org/article/fetal-risks-womens-rights-showdown-johnson-controls
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Johnson Controls, Inc. manufactures batteries in sixteen states. Jobs on its production line are "good jobs," the kind rapidly disappearing from the American economy -- unionized, high-paying, skilled manual labor with good benefits and chances for advancement. They are also the kind of jobs that have never been widely available to women. Just as women began to gain entry into these jobs, employers discovered a medical problem: The lead used in battery production endangers not just adult workers; it can build up in human tissues, and if passed on to a fetus can cause serious mental and physical problems for the child.</p>
<p>Starting in 1982, Johnson Controls advised its hiring offices to tell women there were no openings for women capable of bearing children. The company's fetal protection policy applies to women, right up to age seventy, who cannot provide medical evidence of their sterility, regardless of their intention or desire to have a child. The company bars women not only from jobs with high lead exposure, but from all jobs with any possibility of transfer or promotion into a high-lead job -- effectively, all production jobs. At the company's Fullerton, California plant, for example, women are ineligible for all production jobs even though only 35 percent of the jobs are unsafe for pregnant women according to the company's blood and air sample standards.</p>
<p>In a case that comes before the U.S. Supreme Court this fall, Johnson Controls' policy of excluding all fertile women is being challenged by the United Automobile, Aerospace, and Agricultural Implement Workers of America (UAW), which represents the employees. The UAW lost in the federal district and appellate courts, which held that Johnson Controls was not guilty of sex discrimination under the federal Civil Rights Act of 1964. (In a separate case, a woman who had been denied work at the company's Fullerton plant sued successfully under the state's Fair Employment and Housing Act, but that decision applies only in California.) At the federal level, a veritable who's who of labor, women's, civil rights, and public health organizations have now joined the UAW in trying to overturn the appellate decision.</p>
<center>* * *</center>
<p>Johnson Controls has a legitimate concern. Lead can cause genetic damage prior to conception, and after conception can cause abnormal fetal development. Transferring a woman out of a high exposure job only after she became pregnant might not prevent damage to her fetus; lead is stored in human tissues for several months, and the woman might not realize she was pregnant until several weeks after conception. Beyond a concern with the health of employees and their offspring, firms have a legal obligation under the federal Occupational Safety and Health Act to provide a safe working environment.</p>
<p>The occupational fetal health issue thus appears to pose a genuine dilemma: Policies to protect the health of children seem to require policies that restrict employment opportunities for women. Indeed, courts that have handled the fetal protection issue have framed their analyses as a clash between public health and civil rights.</p>
<p>With the issue posed this way, the courts can see only two possible solutions. One is to put the health of future generations first when the needs of children and the rights of women conflict. The remedy, according to this perspective, would be to allow and perhaps even to encourage employers to use sex-based exclusionary policies to protect children's health.</p>
<p>Alternatively, some would let individuals make their own choices. That libertarian solution appears to absolve employers and governments of any moral responsibility. But, like many libertarian solutions, it presupposes free choice when, in fact, it abandons individuals to face terms of choice set by parties more powerful than they are. In this case, letting women decide individually means forcing them to choose between their livelihood and risk to their baby's health.</p>
<center>* * *</center>
<p>The scientific and social facts, however, do not point to so narrow a choice and such limited alternatives. Risks to fetal development and children's health arise not only from many industrial chemicals and workplace substances, but also from poverty and lack of health insurance, among many other things.</p>
<p>Keeping fertile women out of risky jobs may preserve them from one health risk only to expose them to others that are equally serious. Moreover, the occupational risks to normal fetal development stem not only from exposure of mothers, but also from the exposure of fathers. Many toxins, including lead, may cause genetic damage to the father's sperm. So excluding fertile women from the workplace is neither a necessary nor adequate measure to protect babies.</p>
<p>Once we step outside a mindset profoundly shaped by gender stereotypes, a third solution becomes apparent. If we no longer see a woman's body as the only aspect of her life relevant to her child's welfare, if we understand that a father's as well as mother's health affects their baby, and if we recognize that women can control pregnancy responsibly, we can choose to modify the organization of work to permit parents to have healthy babies and to ensure equality between men and women. In this case, as in so many others, changing the terms of choice will enable us to protect both civil rights and public health.</p>
<p>The workplace fetal protection issue is part of a much larger societal trend toward controlling and even punishing women in the name of protecting the next generation. Legislation to restrict abortions, court-mandated Caesarian sections against a woman's wishes, and criminal prosecution of women who take drugs while pregnant are other manifestations of this trend. A woman's health habits may even be considered in custody decisions. <i>The New York Times</i> reported in August, for example, that a California judge hearing a custody dispute ordered a woman not to smoke in the presence of her child until he turns eighteen.</p>
<p>Much, therefore, will hang on the Supreme Court's decision in the Johnson Controls case. While the decision will not apply directly to other areas in which women are paying high costs for their biological capacity to bear children, it will signal how the highest court in the land intends to adjudicate such conflicts.</p>
<p><font class="headline">The Emergence of Fetal Protection</font><br />Johnson Controls is not the first or only company to deny jobs to fertile women on grounds of possible harm to a potential fetus. In the late 1970s American Cyanamid made headlines when five of its female employees had themselves sterilized to keep their jobs. Numerous other companies, including General Motors, Gulf Oil, B.F. Goodrich, and Eastman Kodak have explicitly excluded women from jobs, claiming their exclusionary policies were designed to protect fetal health. Since 1978 the Equal Employment Opportunity Commission has received over 100 complaints charging that fetal protection policies were being used to discriminate against women.</p>
<p>A recent survey of chemical and electronics manufacturing firms in Massachusetts documents what many women's and civil rights advocates have long suspected. Exclusion of fertile women on grounds of fetal health is much more common among firms with mostly male workers than among those with mostly female employees. Twenty-four percent of male-intensive firms restrict all women while only 7 percent of female-intensive firms do. Nearly 20 percent of the firms surveyed have policies excluding fertile or pregnant women from some job categories.</p>
<p>The same survey revealed a disturbing lack of knowledge about reproductive hazards. Only 40 percent of the companies that reported using chemicals known to cause reproductive harms acknowledged that substances in their workplace might pose reproductive risks. In numerous instances, companies have acted on evidence that a substance is harmful to female reproductive health while ignoring long available evidence that the same substance harms the male reproductive system as well. Of the 37 firms with restrictions on women's employment, only one had any restriction on men.</p>
<p>Hundreds of thousands of jobs are already closed to women as a result of fetal protection policies. The Bureau of National Affairs, a private research organization that monitors federal agencies, has estimated that some 20 million jobs involve working with chemical fetotoxins and could be closed to women if fetal protection policies like Johnson Controls' are allowed to stand. And that estimate does not count all the jobs involving non-chemical potential hazards to fetal development, such as radiation from medical equipment and other sources and electromagnetic fields from video display terminals.</p>
<p>Many firms, including Johnson Controls, say they are fearful of economic liabilities, should they be sued by a child harmed <i>in utero</i>. In fact, current legal doctrine makes it unlikely that an employer would be forced to pay damages in these circumstances. Workers compensation law prevents most workers from bringing injury suits against their employers, and even if parents were to sue on behalf of their child, it would be extremely hard to prove that the employer's negligence caused their child's disability. There are no known cases where an employer has been held liable.</p>
<p><font class="headline">A Troubling Decision</font><br />Title VII of the Civil Rights Act prohibits employers from discriminating against any person on account of sex (as well as race, religion, and national origin). After the Supreme Court refused to consider different treatment of pregnant women as "discrimination on account of sex," Congress amended the act in 1978 to clarify that sex discrimination included discrimination on the basis of "pregnancy and related conditions." Since Johnson Controls employs men capable of fathering children and discriminates among female job applicants on the basis of the mere possibility of pregnancy, the company's policy would seem to be a <i>prima facie</i> case of sex discrimination.</p>
<p>But Title VII also provides for an exception. Employers may use a discriminatory rule if they can establish that sex is a bona fide occupational qualification (BFOQ) for the job. Courts do not grant this exception lightly. In fact, the argument that race is a bona fide occupational qualification is never permissible as a defense in a race discrimination case. To prevent invidious stereotyping, the BFOQ defense requires an employer to demonstrate that "all or substantially all" women could not perform the job safely and efficiently.</p>
<p>The UAW and the employees bringing suit against the company have several objectives. They want to establish that companies may not use fetal health as a pretext for continuing the job segregation by gender outlawed by the Civil Rights Act. Most of the women employees want to have their old jobs back or to be able to work in the higher-paying production jobs now off-limits to them. They argue that they are through with child bearing and do not want to be excluded merely because they are biologically capable of having another child. The men, for their part, are concerned about their health and their offspring. They do not believe that the company's standards adequately protect them.</p>
<p>The Court of Appeals for the Seventh Circuit ruled in 1989 that Johnson's policy did not violate Title VII. The majority said that Johnson's policy is not intentional discrimination because it is intended to benefit the offspring of both male and female employees.</p>
<p>The same logic would hardly be acceptable in cases of racial discrimination. Imagine a town government faced with interracial violence on public playgrounds. Town officials decide the best way to stop the violence is to segregate the playgrounds, with separate facilities for black and white children. The policy is not discriminatory, officials say, because it protects children of both races from injury.</p>
<p>Given the history of racial segregation in America and all the connotations of inferiority and subjugation that go with it, few people would agree that segregating the children was neutral and non-discriminatory. Likewise for women. Given the history of exclusion of women from so many occupations, the claims of benign intent are unconvincing.</p>
<p>Because the Seventh Circuit Court saw Johnson Controls' hiring rule as neutral on its face and only incidentally discriminatory, the company could use a weaker standard to defend its policy under Title VII. Under this "business necessity" standard, the employer has only to show that its policy serves "legitimate employment goals" in some significant way and that there is no less discriminatory way of accomplishing the same ends. The court found that protecting unborn children from disabilities was a legitimate employment goal, and it accepted Johnson Controls' claim that no other policy could protect unborn children.</p>
<p>The majority went on to say that even if, for the sake of argument, someone wanted to hold Johnson Controls to the higher BFOQ standard, the company policy would still pass muster. Making batteries safely, according to the majority decision, is part of the essence of Johnson's business operation. Women cannot make batteries without endangering any fetuses they might conceive while they have lead in their blood. For women, therefore, sterility is a bona fide occupational qualification for the job.</p>
<p>By concluding that the company can still make batteries safely as long as fertile women are excluded, the majority judges define safety by a male norm. A safe manufacturing process is one that is safe for men -- and for women who are biologically like men in their inability to bear children.</p>
<p><font class="headline">Dissents on the Right</font><br />Judges Richard Posner and Frank Easterbrook, both staunchly conservative Reagan appointees to the Seventh Circuit, dissented from the majority opinion, though they drew very different conclusions from the libertarian, free-market ideology they share. Their differences are especially interesting for civil-rights watchers, because the conservative majority on the Supreme Court may well be influenced by them.</p>
<p>Judge Posner chides the majority for "recast[ing] what is plainly a ... case of intentional discrimination against a protected group" and says the case must be decided under the BFOQ standard. But Judge Posner also thinks courts should give great credence to employers' judgments about how to run their businesses and to their fears of liability. The BFOQ standard, in his view, could excuse exclusionary fetal protection policies, as long as the exclusions were a bit more limited. He suggests that lowering the limit from age seventy and restricting the exclusions to jobs that are themselves dangerous would make the policy acceptable.</p>
<p>Both the majority opinion and Posner's dissent assert that when fetal health and women's civil rights conflict, the protections of Title VII must give way. Easterbrook's dissent, on the other hand, takes a more straightforward libertarian position. He faults Johnson Controls, and implicitly his fellow judges, for assuming that "women are less able than men to make intelligent decisions ..., that the interests of the next generation always trump the interests of living women, and that the only acceptable level of risk is zero."</p>
<p>"The purpose of Title VII," he says, "is to allow the individual woman to make that choice for herself."</p>
<p>Much of Easterbrook's dissent borrows from feminist criticism of exclusionary fetal protection policy. He expresses the sentiment of many feminists when he calls the Johnson Controls case "likely the most important sex discrimination case in any court since 1964." Because he is more respectful of women and of the evidence of reproductive harms to men, his dissent tempts feminists and civil rights advocates into a curious alliance with libertarians. But offering a spurious free choice to the individual woman worker would do nothing to promote safer workplaces for men, women, or fetuses.</p>
<p>The Seventh Circuit decision leaves Congress, the Supreme Court, the Equal Employment Opportunity Commission (EEOC), and the Occupational Safety and Health Administration (OSHA) still wrestling with the problem of occupational fetal health. The majority staff of the House Committee on Education and Labor quickly issued a report repudiating the decision; the report insisted that employers may not satisfy their obligations under federal law to provide a safe workplace simply by kicking out fertile women. Even the EEOC, which since 1981 has refused to act on complaints about fetal protection policies, issued new guidelines disagreeing with the Seventh Circuit and instructing its staff to follow the Easterbrook dissent when they investigate complaints.</p>
<p><font class="headline">How Not to Think About Safety</font><br />Johnson Controls, like many companies, assumes that lead can harm a developing fetus only through maternal exposure. That assumption is perpetuated by a gender bias in research. Since most people believe that only maternal exposure matters, far less research is done on the impact of paternal exposure.</p>
<p>Nevertheless, the Seventh Circuit Court had plenty of evidence available. By 1978, OSHA had reviewed the research and concluded that lead-caused genetic damage to both sperm and egg cells can be passed on to the fetus. OSHA, the Environmental Protection Agency, and the congressional Office of Technology Assessment have all reported that levels of exposure to most toxic agents high enough to harm a fetus probably harm adult males and females as well.</p>
<p>The California court, the Committee on Education and Labor staff, the congressional Office of Technology Assessment, and the EEOC have all found the OSHA evidence on lead compelling. But the Seventh Circuit majority, in a move that is certainly "making science from the bench," dismissed the evidence as "speculative and unconvincing" because it was based, they claimed, on animal studies. Animal studies are often the basis of risk evaluations; for example, they are the main sources of evidence used by the Food and Drug Administration to decide whether to test drugs in humans or to withdraw carcinogenic substances from the market. Moreover, the court ignored important studies of men with high lead exposures, showing adverse effects on their sperm as well as high rates of birth defects among their children.</p>
<p>As we think about ensuring safety in the workplace, we owe some credence to an agency that has expertise and experience in safety assessments. And OSHA, to its credit, assumes that a health hazard affects both men and women until there is evidence to the contrary. In fact, of 26 chemicals OSHA has examined for reproductive toxicity, 21 were found to cause male infertility or genetic damage.</p>
<p>We should think hard about other, less discriminatory ways to safeguard fetal development. On this question, too, the Seventh Circuit majority was rather cavalier with the evidence and the canons of scientific reasoning.</p>
<p>Until 1982, Johnson Controls advised women of the dangers of lead, suggested that those planning a family not work in jobs with significant lead exposure, and asked them to sign a release form. The company had a variety of safety programs, including monitoring employees' blood lead levels, monitoring the air in the workplace, using respirators, and installing powerful cleaning systems. Employees of either sex found to have high blood lead levels were transferred to other jobs, with their rate of pay and benefits partially protected. The company told the courts it had to introduce its exclusionary policy because the earlier one had failed.</p>
<p>What was the evidence that the earlier safety program had failed? Over four years, six women had become pregnant while their blood lead content was above the level considered safe. Six out of how many? Johnson Controls never said, and no one in the lower court seems to have asked.</p>
<p>Did any of the children born to these women suffer harms caused by lead exposure? Johnson Controls' chief physician recalled one hyperactive child who had an elevated blood lead level. On cross-examination, the physician acknowledged that the child continued to be exposed to lead after birth. Though no one has any way of knowing whether the child's hyperactivity was caused by lead -- let alone by lead in its mother's blood -- the judges accepted six pregnancies and one hyperactive child as evidence that a non-discriminatory policy had failed.</p>
<p>Other important questions relevant to evaluating the earlier program were never asked. Did Johnson Controls provide health insurance coverage for routine gynecological care to enable female employees to get sound advice and effective contraception? Were women workers adequately informed about the dangers of lead? The release forms they were required to sign made the risks sound slight and the evidence tentative. Might the six women have chosen to get pregnant because they took the risks of harm to be tiny? Had other companies tried less restrictive fetal protection policies with success?</p>
<p> </p>
<p></p><center>* * *</center>
<p>It would be one thing to adopt a zero-risk posture toward children's health throughout our social policy. But our society has scarcely done all it can to prevent children from suffering malnutrition, homelessness, lack of medical care, and abusive violence. No landlord or government agency has an obligation to provide a home for a child, and no physician or hospital has a duty to give prenatal care to a mother. Why are we collectively ducking our obligations to children and suddenly putting the onus of responsibility for <i>any</i> risk on potential mothers?</p>
<p>Johnson Controls and the courts have dismissed other less discriminatory alternatives largely because they are blinded by pejorative stereotypes about women. Excluding all fertile women assumes women are pregnancies waiting to happen. Judge Posner articulates the women-are-careless theory best: "There are many careless pregnancies, as is shown by the frequency of abortion and of illegitimate birth." Notwithstanding Posner's logic, most women exert conscious control over whether they get pregnant. Ninety-three percent of married fertile women and 80 percent of unmarried fertile women use contraception.</p>
<p>The exclusion of women all the way to age 70 ignores social reality and fairly mocks women with biological possibilities. Pregnancy among women aged 50 to 70 is virtually nil; the government does not even include women over 49 in its vital statistics on fertility. Pregnancy is rare (about 0.4 percent) among women aged 40 to 44, and rarer still among women aged 45 to 49 (0.02 percent). Even for women aged 35 to 39, the rate is only about 2 percent.</p>
<p>To be sure, unplanned pregnancies happen, but rather than assume women are incapable of controlling their fertility, why not ensure that they can by making the relevant knowledge and medical resources available? A less discriminatory alternative to Johnson Controls' policy would provide women workers with good gynecological care, coverage of contraceptives, and coverage of elective abortions if a woman should accidentally become pregnant while her blood level was high.</p>
<p><font class="headline">Equal Opportunity (Making Batteries Not Included)</font><br />Another persistent stereotype is that women who work do so for pin money. That assumption allows the Seventh Circuit majority to see the women's jobs as expendable. The judges take this view so much for granted that they slip it into a dependent clause: "Since [women] have become a force in the workplace as well as in the home because of their desire to better the family's station in life...." No one would feel it necessary to explain why men work. Men work because that is what men do, because they have to. Women, in the court's view, work not out of need, compulsion, or societal pressure, but out of mere "desire" to add something extra to the family's resources. In this picture, all the women have a male breadwinner to bring home the bulk of the bacon.</p>
<p>That picture is sociological nonsense. The kind of women who are likely to hold or seek production jobs at Johnson Controls have <i>always</i> been a force in the workplace. They used to work in the textile mills, clothing and shoe factories, laundries and hospitals, and small goods assembly lines, where they got paid far less than men plying equivalent skills in the chemical and metal industries. They have become a force in workplaces like Johnson Controls' battery-making plants because the Civil Rights Act opened up these higher-paying jobs to them. Many of them cannot or do not rely on a man to support them or their children. They also know that they cannot rely on the government, should they even want to. Welfare policy now expects women, even mothers of small children, to work. Moreover, according to affidavits collected by the American Civil Liberties Union, jobs at Johnson Controls would provide many women with health insurance coverage that they could not obtain through other jobs available in their area.</p>
<p>The majority judges on the Seventh Circuit think that since women work to better their station in life, a female employee might "rationally discount" the risks of lead exposure "in her hope and belief that her infant would not be adversely affected...." Clearly, these judges think a woman's material and status ambitions are likely to cloud her judgment, and that a woman who decides to work in high-lead jobs and have children is actually irrational. That is why the majority is not willing to follow Judge Easterbrook and leave the decision to women.</p>
<p>Occupational safety experts agree that blood lead level can be largely controlled by personal hygiene (wearing a respirator, changing clothes, and showering after work). Why not let women try to keep their blood levels down and treat those with low blood levels differently from those with high levels? To justify the exclusion of all fertile women from the workplace, the court has to assume all women are incapable not only of controlling their fertility, but also of protecting themselves.</p>
<p>To expose all these stereotypes at the heart of current fetal protection policies is not to argue that women (or men) should be offered a Hobson's choice in lieu of genuinely safe jobs. Rather, shedding false presumptions reveals more options -- less discriminatory ones -- for assuring safety of adults and children. We have only to recognize that female and male workers can participate intelligently in making work safe. Courts, of course, cannot create these options, but they can oblige businesses to consider the alternatives by denying them the exclusionary option.</p>
<p><font class="headline">A Strategy for Fetal Protection</font><br />A less discriminatory way to protect fetuses from workplace risks rests on three assumptions. First, most men and women want to have healthy kids, and if informed and given the resources, they will take steps to ensure the best for their offspring. Second, pregnancy is eminently controllable -- not completely, of course, but enough to make family planning part of a fetal protection strategy. Third, lead exposure (and many other kinds of toxic exposure) can be significantly controlled with personal hygiene, industrial engineering, and carefully planned job rotation.</p>
<p>Occupational fetal protection is eminently suited for a regulatory strategy. Neither employers nor courts are in as good a position as OSHA and its scientific arm, the National Institute for Occupational Safety and Health (NIOSH), to assess the scientific evidence necessary for good policy making or to sponsor research that is still needed. Moreover, employers are apt to be overly influenced by their fear of tort liability and their stereotypes about women's suitability for certain jobs. They are apt to discount the reproductive harms to men and the economic harms to women, in their hope and belief that excluding all fertile women prevents disabled babies. They are apt to show heightened concern for fetal safety when women are only marginal to their work forces, and to be less interested when women are predominant.</p>
<p>Congress should clarify and strengthen OSHA's power to regulate fetal protection. OSHA should have authority to promulgate levels of exposure that are unsafe for pregnant women and men planning to father. These standards should be based on solid research evidence assessed by OSHA and NIOSH. No standard could be established without adequate research on the male transmission route as well as the female, and where male transmission of a hazard is possible, protective regulations must be designed for men as well as women. Giving OSHA this authority would help ensure that risks of different jobs are evaluated consistently and that decisions to protect pregnant women are based on the seriousness and likelihood of the dangers, not on how central women are to the job category.</p>
<p> </p>
<p></p><center>* * *</center>
<p>This approach would allow employers -- indeed, even require them -- to remove pregnant women from jobs with exposures on OSHA's danger list, transfer them to safe jobs, provide them with the same rate of pay and benefits, and guarantee the old job when a woman is no longer medically vulnerable. Such medical removal protection, as it is called, is already required under OSHA's general lead regulation. However, OSHA's blood level standard of 50 micrograms per deciliter is somewhat higher than the level of 30 micrograms that Johnson Controls uses as a safety threshold for women and that some experts think should trigger removal for men as well as women trying to conceive. (Despite its avowed concern for absolute safety, Johnson Controls does not provide any job transfers for men planning to father.)</p>
<p>Mandatory transfers of employees trying to conceive a child would entail an unacceptable loss of privacy, since they would require people to disclose their child-bearing plans to an employer. Still, men and women trying to conceive a child should be able to transfer to safe jobs if they wish.</p>
<p>To protect infants, a woman might have to stay out of a high-exposure job throughout the period of nursing. NIOSH should seek research on whether toxic substances are transmitted through breast milk. Mothers who choose to nurse should be given medical removal protection for a reasonable period after delivery, perhaps nine months.</p>
<p>Medical removal protection does not work perfectly. For example, employees transferred to low-lead jobs usually do not earn as much overtime pay in the new jobs, and they may lose out on some of the potential incentive bonuses available in high-lead jobs. But the policy does largely protect workers' economic security without endangering their health.</p>
<p>Occupational fetal protection policy should never be based on mere fertility, documented or assumed. Such a broad rule imposes severe limitations on all women to protect the extraordinarily small fraction who develop high blood lead levels and then want to get pregnant or to continue an unplanned pregnancy. Though we cannot know in advance who those women are, we can do a lot to make sure that women who do get pregnant -- and men who father -- have the best possible opportunities to keep their children healthy.</p>
<p>We should use creative job rotation strategies to ensure that men and women in their reproductive years can have long periods in low-exposure jobs when they can plan families. Companies with OSHA-designated fetotoxic jobs should be required to provide comprehensive health insurance to employees, including coverage for routine monitoring of reproductive health, contraception, prenatal care, prenatal diagnosis, and abortion. If such companies cannot get health insurance because insurers refuse to insure their employees at affordable rates, the federal government should create a special program to guarantee insurance to this category of worker until we resolve these failures in the private insurance market.</p>
<p> </p>
<p></p><center>* * *</center>
<p>What about corporate liability? Johnson Controls asserts that it must adopt a zero-risk posture because of its potential economic liability if children born to employees suffer lead-caused damages. The liability issue is probably a red herring, and in any case, federal law is clear that any added costs of employing women or minorities cannot excuse discrimination.</p>
<p>But to ensure that liability concerns are not a factor in women's employment, OSHA's fetal protection policy could include immunity from tort liability for employers with designated fetotoxic jobs, as long as the employer is in full compliance with OSHA's regulations, including the provision of health insurance. Such a grant of immunity would make it difficult for companies to claim "business necessity" as the justification for excluding fertile women. It would also give employers real incentive to comply with OSHA's safety regulations, and thereby give OSHA more clout.</p>
<p>In principle, the threat of legal liability deters employers from perpetuating unsafe working conditions and assures financial compensation to people who are injured as the result of an employer's negligence. In practice, however, only a minute fraction of occupational injury cases ever go to court and produce big awards. Therefore, any deterrent effect of liability is greatly weakened, and tort damages are not a reliable safety net for people disabled by occupational injuries. Granting immunity as a tradeoff for a comprehensive and non-discriminatory fetal protection policy might be a worthwhile bargain. To aid disabled children, we are better off continuing to use existing programs, such as the Education for All Handicapped Children Act, Supplemental Security Income, and Medicaid, which are designed to take care of children's medical needs.</p>
<p>This regulatory plan is not perfect. Job rotation, for example, may be impossible for small employers, seasonal labor, or self-employed women. The plan cannot guarantee that no person will ever conceive a child while exposed to fetotoxic substances, but the quest for absolutely no risk is illusory anyway. Parents are exposed to many non-occupational risks to their fetuses that exclusionary employment policies will only aggravate.</p>
<p>Perhaps the most important of these risks are inadequate nutrition and prenatal care. Expanding women's access to high-paying jobs with health insurance can only reduce these risks to their children. And we ought to be more concerned about the estimated two million preschool children -- real children, not biological possibilities -- whose blood lead levels are already in the danger zone, due mostly to lead paint poisoning. If we truly want to protect children from disabilities caused by toxic exposures, we should grant women full citizenship in the economic sphere, not restrict their employment opportunities.</p>
<p>Protecting the health of future generations is a noble idea, but less so when the urge to eliminate all risks is used selectively to control women's behavior and enforce the primacy of their role as mothers. How else can we explain a fetal protection policy that puts women in their fifties and sixties in safe jobs, but lets young men hold jobs that endanger their offspring? How else explain the prosecution of new mothers for drug addiction, when almost no drug treatment facility will accept pregnant women as patients? How else explain the courts' willingness to force Caesarean sections on pregnant women, when courts otherwise refuse to compel people to undergo medical procedures for someone else's benefit -- even a child's?</p>
<p>Increasingly, our social policy imposes a standard of perfect motherhood on the individual woman, but provides few of the supports parents need to raise healthy children, such as parental leaves, day care, health insurance, affordable housing, and job guarantees. Focusing on the health of the individual woman's womb may do more harm than good if it blinds us to the larger social causes of poor health in our children.</p>
</div></div></div>
Tue, 05 Dec 2000 02:51:49 +0000
141634 at http://prospect.org
Deborah Stone
AIDS and the Moral Economy of Insurance
http://prospect.org/article/aids-and-moral-economy-insurance
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><span class="dropcap">W</span>hen a blood test to detect AIDS antibodies was first announced in 1985, the ensuing controversy over the use of the tests by insurance companies seemed to take a familiar shape. On one side were civil rights advocates claiming discrimination if the insurers were permitted to use the tests to screen applicants for life and health insurance. On the other side was an industry insisting on its right to be free of government regulation. But despite the seemingly familiar pattern, the conflict over AIDS testing really concerned a novel problem with repercussions for many people who do not see themselves as having any stake in the issue.</p>
<p>The AIDS antibody test is only one of a growing number of predictive diagnostic tests. These tests tell whether a person is highly susceptible to a disease or, very rarely, whether someone is certain to get sick. Other predictive tests include the recently identified genes and genetic markers for Alzheimer's disease, manic-depressive disorder, multiple sclerosis, muscular dystrophy, cystic fibrosis, and some forms of cancer, diabetes, and heart disease. Public health research has also identified numerous risk factors for chronic disease, such as high cholesterol, high blood pressure, obesity, and smoking. These risk factors are even easier than genetic abnormalities to detect in individuals.</p>
<p>Predictive diagnostic tests have created a new medical limbo between health and illness. Now it is possible to be labelled "at risk" without being ill or ever developing the disease in question. In fact, many predictive tests in clinical medicine are designed to be overinclusive to minimize the number of cases of illness missed by physicians, allowing them to treat or at least detect medical problems as early as possible. But when used by private insurers, predictive tests are turned to a contrary purpose: denying insurance coverage, or charging more for it, and thereby causing fewer people to receive the health care they need. The aim of preventive medicine is to extend the benefits of modern science. Yet, ironically, the more predictive tests there are available, and the more broadly risk categories are drawn by cautious clinicians and epidemiologists, the fewer people private insurance will serve.</p>
<p>In the United States, private insurance is the primary means that Americans use to pay for health care and to provide for needs that are too big to meet through normal work income and savings. It is also the principal vehicle for fulfilling family financial obligations. Through health insurance, life insurance, disability insurance, pensions, and their related dependent benefits, Americans create their own networks of social aid within the larger society of strangers. Social insurance programs, such as Social Security pensions, disability insurance, and Medicaid, are designed only to be safety nets, not primary sources of support.</p>
<p>The private insurance industry, operating under rules set by law and public policy, controls access to the vital first line of defense against financial catastrophe. By determining who gets privately insured, for what misfortunes, and at what price, the insurance industry also affects the balance of responsibility -- and costs -- between the private sector and government. The more the industry uses predictive testing to limit access to private insurance, the more people and troubles fall to public programs. Furthermore, just as insurers want to limit their payouts, so employers want to limit their insurance premiums. They may increasingly adopt predictive tests to screen job applicants.</p>
<p>The use of predictive testing thus raises major questions about the future of access to good jobs, health care, and financial security in America. The United States already has a patchwork system of health insurance that omits coverage of nearly one of every six citizens. If used to their full potential, predictive tests may relegate even more Americans to the ranks of the excluded. But the current framework of insurance, which concentrates costs on people with high health risks, is not the only possible design. We do have alternatives for creating a system that protects those at high risk -- and all of us -- from financial devastation and exclusion from health care.</p>
<p> </p>
<h3><font class="headline">Why Insurers Want to Test</font></h3>
<p>The battle over AIDS testing initially seemed to go in favor of the opponents. Within a month after scientists announced the test, gay advocates in California had obtained state legislation to prevent insurance companies from using it; in the next two years Wisconsin, New York, Florida, Massachusetts, and the District of Columbia adopted similar restrictions. The commercial insurance industry, however, mounted an all-out effort to repeal the regulations, and by the end of 1989 only California's ban on testing health insurance applicants remained in place.</p>
<p>The money immediately at stake for insurers was not the chief reason for their concern. Insurance companies mobilized their political influence because they feared losing their ability to screen applicants and set rates according to the health risks that the applicants appear to represent. The companies consider control of those decisions crucial to their competitive strategies, even their financial survival.</p>
<p>In the jargon of the insurance business, the process of selecting risks is called "underwriting." Underwriting involves determining whether an applicant's likely loss experience is similar to that assumed by an insurance company in setting its standard rates. If the applicant represents a greater risk, the company may offer a "substandard" (that is, higher) rate or deny coverage altogether. By accurately predicting losses and setting premiums accordingly, insurers seek to maintain their solvency and profitability. They also use underwriting to design policies with specialized features for carefully selected groups of people. Indeed, life insurers compete not so much on price or service as by marketing special policies to target groups.</p>
<p>In health, disability, and life insurance, insurers use medical information and other factors such as age, gender, and occupation to determine coverage and rates. Besides asking applicants (and sometimes their physicians) to fill out questionnaires about their health, they may also require a physical exam, including laboratory and clinical tests, such as urinalyses or electrocardiograms. Tests are used in part to find out whether applicants are concealing important facts. For example, insurers sometimes screen blood for prescription drugs to determine whether applicants are being treated for a disease they did not disclose.</p>
<p>This information is not kept private by the insurance firm checking out an applicant. The industry maintains a central laboratory, the Home Office Reference Laboratory (HORL), to perform most medical tests. The HORL shares its results with a central data bank, the Medical Information Bureau, which is a membership organization of about 700 companies. An applicant for health, life, or disability insurance to any of these companies must sign a consent form allowing the company to report its findings to the bureau. Once an applicant has filled out a questionnaire or had blood sent to the HORL, the results are available to other insurers. So despite the appearance of a highly competitive industry, the prospective purchaser of an individual policy effectively faces only one supplier.</p>
<p> </p>
<p></p><center> </center>
<p><strong>BEFORE THE ADVENT OF</strong> the AIDS antibody test, insurers were already using supposed indicators of homosexuality as underwriting criteria. Single men between 25 and 45, particularly if employed in occupations deemed stereotypically gay, were being denied individual policies for life and health insurance. In one sense, the blood test was a blessing to both insurers and gays. It enabled insurers to rely on objective medical evidence and took the focus off sexual orientation. But gays still perceived a threat because the blood test was likely to be imposed selectively on the basis of presumed sexual orientation. Moreover, those who tested positive were not certain to get AIDS, for the test does not disclose who has the disease. It discloses only the presence of antibodies in the blood to the human immunodeficiency virus (HIV), which causes AIDS. Although scientists now think at least 75 percent of those who test positive will probably develop AIDS, it is uncertain exactly who will contract the disease.</p>
<p> </p>
<p></p><center> </center>
<p><strong>ONCE GAYS DEFINED THE</strong> issue as discrimination, the industry's strategy was clear. This was not the first time that insurers had to defend their use of classifications that public sensitivities no longer readily accepted. In the late nineteenth century, several states banned the use of race in setting insurance rates. More recently, the use of gender in pension, disability, and automobile insurance has faced attack. The industry's case for AIDS testing reflected its arguments over the last century in defense of its underwriting practices.</p>
<p>HIV testing, the industry maintains, has nothing to do with attitudes towards homosexuals. For insurers, discrimination is the essence of the business, not a dirty word. It means differentiating among policy holders according to their risk of incurring the loss for which the policy will pay out. HIV tests are just one more means of determining risk status, and they are no more discriminatory than blood pressure readings. According to the industry, it would be unfair <i>not</i> to use the HIV tests.</p>
<p>Insurers distinguish between fair and unfair discrimination. As Spencer Kimball, a leading professor of insurance law, puts it, fair discrimination means measuring as accurately as possible "the burden shifted to the insurance fund by the policy holder and charging exactly for it." Fairness, in this view, means ideally that no one pays for anyone else. That, however, is not the only definition of equity.</p>
<p>By its very nature, insurance is redistributive. We could theoretically squirrel away our individual savings to provide financial security for any of the contingencies we commonly insure against. Through insurance, however, we join with others to "pool" our risks and our savings. Only some people in the pool will experience the insured harm (say, fire, theft, or illness). Since only those who experience the harm will receive a payout, the others necessarily pay to help them.</p>
<p>Some advocates for the industry call this redistribution built into insurance a "cross-subsidy" and deem it anathema to fairness when it can be foreseen. In a 1987 <i>Harvard Law Review</i> article, Karen Clifford and Russell Iuculano, who represent the American Council of Life Insurers, argue that insurers have a legal duty to separate policy holders with serious, identifiable health risks from those without such risks. "Failure to do so represents a forced subsidy from the healthy to the less healthy."</p>
<p>The argument makes sense only if we understand the purpose of insurance as allocating costs to the people who generate them, rather than spreading the costs of misfortune and thereby making them more manageable. All insurance entails cross-subsidy. That is what makes it insurance instead of personal savings. Insurers typically put the adjective "forced" in front of "subsidy" when defending an underwriting criterion against regulatory challenge. But there is no reason why any <i>particular</i> cross-subsidy is more coercive than all the other cross-subsidies that insurance entails.</p>
<p>The debate about HIV testing in insurance, then, comes down to a fundamental disagreement about the purpose of insurance, regardless of whether an insurance fund is operated as a commercial enterprise, a social program, or some hybrid. Ultimately, the disagreement concerns whether to distribute the benefits of insurance according to prior contributions or according to need. Medical testing of any kind is valid only to the degree that we want our insurance system to minimize redistribution from the healthy to the (potentially) sick. With enough predictive tests of sufficient accuracy, insurers could virtually eliminate risk-sharing and redistribution. We would each pay strictly for ourselves. The industry argument about fair discrimination assumes a vision of insurance as a personal savings plan operated by insurance companies instead of banks.</p>
<p> </p>
<h3><font class="headline">Fairness in Insurance</font></h3>
<p>What kinds of differentiation are fair? The industry answer is not helpful: Fair discrimination is what each state's Unfair Trade Practices Act allows, and unfair discrimination is what it forbids. These laws were all adopted at the behest of industry between 1947 and 1960. They typically define unfair practices as "making unfair discrimination between individuals of the same class," "discrimination between similarly situated individuals," or -- one of my personal favorites for its tautological brilliance -- "discrimination between insureds having like insuring characteristics."</p>
<p>How does one know whether people belong to the same class, are similarly situated, or have like insuring characteristics? Sesame Street, that universal mentor of the preschool set, has something to say on the matter. In one segment, the kids are shown three cardboard stars and a moon and asked, "Which one of these things is different?" The lesson is not that the moon is different, but that several equally valid answers depend on which criterion a person uses to differentiate -- say, color, shape, or size. The moon is different only if the children select by shape.</p>
<p>In health and life insurance, many different factors could be used to answer the question, "Which people are most likely to get sick?" The industry most commonly uses age, gender, medical history, and occupation, but it could avail itself of others. Why not use race? Blacks have higher rates of heart disease and kidney disease and lower life expectancy than do whites. Insurers could also use residence. Cancer rates vary by state, and people living near major medical centers are at greater risk of expensive surgery than others living farther away. The industry could use veteran status. Vietnam veterans have higher rates of accidents and premature death than do nonveterans. The industry could use marital status. Illness is much greater among the widowed and divorced than among the married or single. On what basis, then, do we say that an insurer's classifications are fair?</p>
<p>The classifications used by the industry are dictated by neither medical science nor financial principles. They are a policy choice. The industry cannot use race because it is legally forbidden to do so as a result of a political choice made outside the industry. It does not use veteran status because it does not dare to penalize political heroes. But it does use medical criteria.</p>
<p>Insurers have decided that certain diseases render people ineligible for life insurance. These generally include diabetes, leukemia, schizophrenia, emphysema, coronary artery disease, and now AIDS. People with these diseases are deemed "medically uninsurable." Risk factors, such as uncontrolled high blood pressure, are also a basis for exclusion. The rationale is that people with these diseases and risk factors have a very high probability of early death. If they die prematurely, they will not pay enough money in premiums to cover the losses that they will generate, unless the insurer were to charge them such high rates as to make the insurance unaffordable.</p>
<p>According to the industry view, admitting people at high risk to a general insurance pool would be unfair to the other, lower-risk policy holders whose premiums would go up. Industry representatives portray any effort to ban the use of HIV tests by insurance companies as granting "favored status" to carriers of one disease. Since the industry already screens applicants for heart disease, cancer, stroke, and other diseases, why should AIDS be privileged?</p>
<p> </p>
<p><strong>ONE CAN SEE HOW</strong> policy holders would not want to be burdened with the costs of people likely to be very sick or to die prematurely. Indeed, the industry tries to foster this lifeboat mentality by running advertisements explaining why we should not want to pay for people who run high risks. "If you don't take risks, why should you pay for someone else's?" asks one such advertisement, showing a man high up on a steel scaffolding. Never mind that the man is building an office tower that presumably contributes to our economy. (He's wearing a hardhat and a tool belt, so he's probably not climbing for the thrill of it.) In the insurance industry's view, fairness means concentrating the costs of accidents and illnesses on the individuals who bear the risks.</p>
<p>What if we step outside the privileged circle of people protected by private insurance policies? From that vantage point, equity might seem to require that those who are ill or at risk for illness and injury ought to have greater access to insurance, not less. If they face high medical expenses, they need health insurance coverage all the more. If they have dependents, they need life insurance all the more to protect their family's well-being. From a societal perspective, the people who require protection the most are precisely those whom commercial insurance companies find it economically necessary and fair to exclude.</p>
<p>Outside the privileged circle, people with diabetes or high blood pressure might feel that they have been singled out only because they have a condition that scientists and insurers now recognize as leading to early death or disease. But each of us is a living bundle of risk factors. We all have a multitude of characteristics -- socioeconomic status, heredity, race, gender, education, residence, family status, occupation, degree of happiness, eating habits, driving habits, work habits, and who knows what else -- that are or could be associated with illness and premature death. Scientists have investigated only some of these factors, and insurers have chosen to use only a few in setting rates. People with recognized risk factors are disadvantaged by our scientific knowledge. Given medical progress in identifying the precursors of disease, the number of people facing these new penalties of predictive knowledge can only grow. Groups of policy holders and the companies that insure them have every incentive to determine who is likely to be sick, disabled, and prematurely dead and to exclude these people from their risk-sharing plans or set higher insurance prices for them.</p>
<p>Many would argue that insurance companies are justified in charging more to high-risk people to encourage them to lead safe and healthy lives. When a person can reasonably be expected to understand the dangers of an action and refrain from it -- say, speeding, hang gliding, or smoking -- using that behavior as a basis for setting insurance premiums might serve the goals of education and prevention.</p>
<p>While incentives are sometimes a reasonable concern in designing insurance classifications, we ought to be wary of using them to set health insurance prices. Few health risks are truly voluntary. Even smoking, the favorite candidate, is doubtful. Nicotine is addictive, and the decision to start smoking, usually made when people are quite young, is heavily influenced by societal pressures, such as commercial advertising. Most known risk factors, including smoking, are heavily concentrated among the poor and less well educated. Many reasons for this disparity have their origins outside the sphere of individual choice. Alternative sources of satisfaction and stress reduction are less available to the poor. The poor tend to have more dangerous jobs. Even though we don't understand all the causal mechanisms, virtually every risk factor for disease has a high correlation with poverty. To increase health insurance prices for people already disadvantaged by poverty and poor health is to penalize them triply.</p>
<p>There is nothing wrong with creating incentives for healthy behavior, but health insurance is simply the wrong place for society to conduct its education in good habits. Health insurance should guarantee access to health care. Health is essential to life, happiness, and productivity. No matter whether people may have inflicted illness or injury upon themselves, we ought not to withhold compassion or medical care once they are sick. And denying health insurance on the basis of disease risk factors -- even the most controllable actions -- effectively denies care to the sick.</p>
<p>Here another tension of insurance becomes evident. Insurance is about sharing risks within a community. Underwriting is about exclusion. The industry term "uninsurable" applied to people deemed to be at very high risk suggests that insurability is a quality of individuals. In fact, insurability is the set of policy decisions by insurers about whom to accept. It is not a trait, but a concept of <i>membership</i>. It expresses the criteria used by a group to decide whom to include and exclude from its redistributive system. Treated as a scientific fact about individuals, the notion of insurability disguises fundamentally political decisions about membership in a community of mutual responsibility.</p>
<p>A system of competitive insurers based on medical underwriting guarantees that as insurers scramble for customers and seek to control their risks, society will be divided into more homogeneous risk classes, and more people will be left out of insurance pools altogether. From a commercial insurer's perspective, that may be good business practice. But from a social perspective, the splitting up of insurance pools means the erosion of mutual aid.</p>
<p> </p>
<h3><font class="headline">How Predictive Testing May Affect Health Insurance</font></h3>
<p>The insurance industry's use of HIV tests and other new predictive tests will significantly affect access to health care. In the United States, eligibility for health coverage depends on work, age, or disability. Employee group plans, the most common form of protection, cover approximately 60 percent of the population. Through Medicare, Medicaid, the Veterans Administration, the Indian Health Service, and various other federal and state programs, government provides coverage for an additional 20 percent. Of the remaining population, some 5 percent obtain individual insurance policies, and 15 percent have no coverage at all.</p>
<p>Many of those lacking insurance either have no way to obtain it or face much higher insurance premiums than do the typical members of large employee groups. About two-thirds of the uninsured are employees or their dependents, but the smaller firms where they tend to work can purchase group insurance only at high rates. Individual policies are often prohibitively expensive. Moreover, of applicants for individual health insurance, around 8 percent are rejected outright as medically uninsurable. Commercial insurers designate another 9 percent as substandard risks and charge them even higher premiums than are normal in the individual market, while Blue Cross-Blue Shield plans rate about 20 percent of applicants as substandard. Of course, anticipating rejection or higher rates, many of the uninsured who are sick or disabled do not bother to apply for individual policies.</p>
<p>Until recently, only applicants for individual policies and groups under 25 or so employees were subject to medical underwriting. But a survey by the Office of Technology Assessment recently found that more health insurers are beginning to screen group applicants for high-risk status. Three of every four commercial and Blue Cross companies were either screening or planning to screen for high-risk applicants in small group plans. For large groups, 58 percent of commercial insurers and 7 percent of Blue Cross-Blue Shield plans were using or moving toward screening. Medical underwriting in the group market will raise greater obstacles to employers wishing to provide all their employees with health insurance.</p>
<p>Employers themselves may also now increasingly take health risks into account when deciding whether to hire a prospective employee. In recent years firms of all sizes have faced staggering increases in health insurance costs; one way to keep those costs down is to avoid employing people with high risks of illness. In countries with national health insurance, employers have less incentive to exclude the potentially sick from jobs. But employers in the United States pay directly for the health costs generated by their own workers. The group plans sold by insurance companies are typically "experience" rated; that is, the premiums charged by the insurer are based on each employee group's profile. Moreover, a growing number of employers operate their own health insurance plans, chiefly to take advantage of a provision in the 1974 Employee Retirement Income Security Act (ERISA). Under that law, if a firm "self-insures," its health plan is exempt from state regulation as well as state taxes on insurance premiums. Today more than half of all workers are covered by employer self-insurance arrangements. As a result, employers today <i>are</i> insurers, and all the difficulties surrounding the use of testing by insurers come up with employers, too.</p>
<p>Rising costs and self-insurance give employers strong incentives to use predictive testing to screen out high-risk applicants for jobs. State and federal handicap discrimination laws have begun to protect employees from being fired simply because they have some potentially costly health risk that does not affect their current job performance. But these laws do not necessarily bar employers from refusing to hire an applicant who appears to be a health risk according to one of the new tests. Thus people deemed "uninsurable" may also become "unemployable," at least at firms with good jobs that carry health insurance as a fringe benefit.</p>
<p> </p>
<p><strong>AS SCREENING AND UNDERWRITING</strong> exclude more people from private health insurance, the costs of their care fall primarily to Medicaid and public hospitals. Recent studies have estimated that of all people with AIDS, Medicaid covers 40 to 50 percent nationwide, while private insurance pays for only about 17 percent (and that share is probably declining). Taking all sources into account, hospitals are being paid only around 80 percent of their costs for AIDS cases in the Northeast, Midwest, and West and a mere 45 percent in the South. Public hospitals face the greatest burden. In cities hard hit by AIDS, such as New York, San Francisco, Newark, and Miami, municipal hospitals are having to shift resources from other services to AIDS care. This pattern is symptomatic of the wider problem. As private employers and insurers avoid the sick and high-risk individuals among us, they displace the costs onto the public sector, which simply lacks the resources to meet those demands on top of the others it already faces.</p>
<p>One possible response, advocated by many people in the insurance industry, is to create state high-risk pools to cover people whom insurance companies turn down. Such pools already exist in some fifteen states. Since people relegated to the pools are by definition (or at least assumption) at high risk for disease, they pay up to twice the usual rate for health insurance and face very high deductibles. These costs put high-risk pools out of the financial reach of most people. Still, the pools run at a loss. Insurance companies are then assessed to subsidize the pools, based usually on their <i>pro rata</i> share of business in the state. Some states also subsidize the pools out of state revenues.</p>
<p>High-risk pools permit insurance companies to continue skimming off the people who are least likely to become expensively sick and to shunt the others into a public program. Those people lucky enough to gain regular insurance protection pay cheaper premiums because they share their expense with others who also are unlikely to get sick or die early. True, they may face higher taxes to help pay for those who depend on public programs and public hospitals. But precisely because they enjoy the privilege of cheaper and better private insurance, they are not likely to be strong advocates for improved public services. The political effects of segmented insurance pools thus reinforce the economic forces at work when insurers are able to take the best risks and exclude the bad ones.</p>
<p> </p>
<h3><font class="headline">Life Insurance and the Bottom Line</font></h3>
<p>Most medical testing by insurance companies occurs in the sale of life insurance, where the monetary stakes are much greater for the insurance industry. Insurers are particularly worried about people buying insurance policies when they know themselves to be at high risk for early death, while the insurer does not. Such purchases produce the phenomenon that insurers dread more than any other: adverse selection, that is, a skewing of policy holders toward those with heavier than expected losses. Insurers raise the specter of legions of people exposed to HIV taking out large life insurance policies. Without HIV testing, these policies would be priced at standard rates, but the policy holders would likely die in a few years after having paid only a fraction of the premiums on which the companies were counting. Thus, without testing, the companies say their solvency is in jeopardy.</p>
<p>The little information currently available about life insurance payouts to AIDS victims suggests that the industry has not yet suffered major losses. The reason is most likely that few people at high risk of AIDS had life insurance policies in the first place. Historically, life insurance has been sold primarily to married people with children, and then only to those with enough regular disposable income to make monthly payments. That set of people does not include large numbers of gay men and needle-sharing drug users -- the two largest risk groups for AIDS. Nonetheless, insurers worry that if they were now to be denied the ability to require tests, carriers of the AIDS virus -- and other people identified as high-risk by predictive testing -- would sign up for policies and produce big losses.</p>
<p> </p>
<p><strong>THE ADVERSE SELECTION</strong> argument serves an important rhetorical function in the debate. It casts moral doubt on people with high risk for AIDS or any other life-threatening disease. The unspoken message is that people who buy insurance knowing they are high-risk are social parasites. Adverse selection, Clifford and Iuculano say, "unfairly burdens other policy holders who must support the increased claims through higher premiums." Thus the insurers' representatives subtly turn sick people into moral outcasts to justify excluding them from risk-sharing arrangements.</p>
<p>However, while they are healthy, many people simply have no opportunity to contribute to an affordable health insurance plan. When the uninsured later face extraordinary medical expenses, the only way they can spare their friends' and family's assets is to make them beneficiaries of life insurance policies. Of course, some people who discover they have a fatal disease might try to buy as much life insurance as they could afford, not only as a hedge against their own catastrophic expenses, but also to enrich their family or partner, or even to borrow against the policy and enhance their own current consumption. Insurance companies and other policy holders should be protected from such knowing exploitation of the risk-pooling mechanism. Though breadwinners have a legitimate interest in a moderate amount of life insurance, society need not create an entitlement to a huge payout when a person dies. But predictive medical testing by insurers has so many bad consequences that we ought to find other ways to prevent life insurance abuses.</p>
<p>Other mechanisms besides medical testing could ameliorate the problem. Insurance regulators could redesign "incontestability clauses" in life insurance policies. In most states, these clauses allow an insurer to refuse to pay if the policy holder dies within two years of the policy's issue and has misrepresented information on the application. After two years, the insurer may no longer contest the validity of the policy. Since the latency period for AIDS is considerably longer, we could extend these clauses for AIDS to some reasonable length, perhaps five years.</p>
<p>If we remember that life insurance is primarily a mechanism to strengthen family income security, we could first ensure that basic levels of insurance are available to everyone. In addition to providing a guarantee of basic health insurance, we might expand the survivorship component of Social Security (broadening the concept of survivors to allow benefits to be paid to people with long-term commitments regardless of marriage, blood ties, or sexual orientation). With universal health insurance in place, the need to take out life insurance at the moment of illness would diminish, thereby lessening concern about insurers' access to predictive medical information. With such strengthened arrangements for financial security, we should then permit private life insurers to test applicants for policies with large face values, perhaps all those higher than three times the median income.</p>
<p> </p>
<h3><font class="headline">Some Political Lessons</font></h3>
<p>If ever there were an issue that ought to have propelled us to national health insurance, the AIDS epidemic should have been it. No recent experience so graphically demonstrates the limits of private health insurance as a method of paying for sickness.</p>
<p>The insurance industry made clear its concern to escape as much of the cost as possible. Nonetheless, efforts to stop insurers from testing for AIDS and excluding the victims proved a failure. What can we learn for the next round?</p>
<p>Perhaps the biggest mistake in the HIV testing controversy was the failure to grasp the full import of medical underwriting and predictive testing. Representatives of public hospitals, Medicaid agencies, and state health and welfare departments were nowhere to be seen in the legislative and regulatory hearing rooms as HIV testing was debated. Nor did any of the disease-based groups, such as the American Heart Association and American Cancer Society, see their stake in the testing issue. Gay rights and AIDS advocacy organizations were left to do battle alone.</p>
<p>As a result, HIV testing was treated solely as an issue of discrimination and privacy, not as the profound structural issue it also is. Because gays do face substantial discrimination, and because prejudice against people who test positive was running rampant, the advocacy groups focused their arguments on the injustice of burdening a minority, the insurers' use of crude stereotypes, and the lack of counseling and confidentiality for people tested by insurance companies.</p>
<p>The charge of discrimination is often a powerful political resource in American politics, but it backfired here. Insurers were able to trump the charge with their own wild card: they threatened not to write business in states that restricted testing. Moreover, because opponents of testing framed the issue as discrimination against gays, they lost the opportunity for alliances with other groups whose members stand to lose from increased medical underwriting but do not see themselves as victims of discrimination.</p>
<p> </p>
<p><strong>THE UNDERWRITING ISSUE</strong> is bound to come up again, given the rapid pace of discovery of genetic markers for disease and the near-weekly announcements of environmental, dietary, and behavioral hazards to health. If we understand the broad values at stake, we should be better prepared to frame these emerging issues and mobilize alliances to defend a wider vision of social protection.</p>
<p>In the struggle over AIDS testing, the insurance industry adopted the argument used by all industries seeking to resist public regulation: we cannot operate efficiently, perhaps not even at all, if we are burdened with social objectives. This argument might be persuasive if there were a wall between the public sector and private sector. But that is not our world. Private and public forms of social aid are intimately entwined, since the people and troubles that commercial policies do not cover get pushed into the public sector. Either public insurance programs fill the gaps in private insurance, or the victims wind up on the streets, the welfare rolls, the doorsteps of voluntary agencies, and the beds of public hospitals.</p>
<p>Private insurance companies might point out that their policy holders still pay taxes to care for the "uninsurables," but that argument misses an important point. Why should medical underwriting be used at all for health insurance? Why should certain diseases, such as AIDS, be socially financed, while others are privately financed? And why should insurers be the ones to decide, through their underwriting policies, which diseases taxpayers will have to finance and which ones private insurers will cover?</p>
<p>The political debate is likely to focus on which risk factors and tests insurers ought to be permitted to use in selecting applicants. The choice of permissible underwriting factors is not neutral. It defines a set of people likely to be excluded from the better coverage of most private programs. Medical underwriting on the basis of diagnoses has a particularly cruel and perverse result. After determining that a person is sick or at high risk, insurers turn around and deny aid for exactly that need. True, if insurers accepted an applicant knowing he or she was already sick, they would no longer be insuring but simply providing a payment mechanism. But that point merely illustrates the limits of private insurance as a method for financing care of the sick. Private insurers do not hide their interest in denying coverage to the high-risk; they insist it is their obligation to turn their backs on people once it is clear that they are or will become expensively sick.</p>
<p>The problems in health insurance are so severe that many major insurance companies are beginning to realize they must reform their practices or have their business either taken over or regulated by government. Small business associations and even some insurance trade associations are actively pressing for state and federal laws that would stop some of the cream-skimming. As Robert Laszewski, executive vice president of Liberty Mutual, recently told <i>The New York Times</i>, "The notion that an insurance company should be making a profit by figuring out which Americans not to cover is no longer viable."</p>
<p>When used to exclude people from such basic services as health insurance, predictive testing divides our society in dangerous and undesirable ways. The debate should focus, not on which tests insurers should use, but on how medical testing undermines our institutions of social protection.</p>
<p> </p>
<p><strong>TO TACKLE THE INSURANCE</strong> conundrum, we need to take community as our starting point. From there, it is clear that the purpose of health, disability, and life insurance, at least at levels providing security against devastating losses, is precisely to distribute according to need. An effective campaign for broader risk sharing has to demonstrate that insurance practices are issues of membership, and that the predictable result of medical underwriting is to exclude those people who need help the most. Such a perspective should help build coalitions among all the disease and disability groups who are similarly affected by insurance underwriting. Finally, we must reveal the distinction between private and public social aid systems as wholly artificial. The selection of people and troubles to be covered by each sector ought to be a matter of conscious public policy, not the result of efforts by the insurance industry to skim off the good risks. That process inevitably puts the high-risk and the poor into a public sector lacking both adequate resources and majority political support.</p>
</div></div></div>
Tue, 05 Dec 2000 02:51:49 +0000 | Deborah Stone
A Darker Ribbon (http://prospect.org/article/darker-ribbon)
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even">
<p>I've always wondered where the money goes when I pay extra to the U.S. Post Office for a sheet of breast cancer stamps, when I buy daffodils from the American Cancer Society, or when I pledge a donation to someone running a race for the cure. Or, for that matter, when I give money to a Boston breast cancer center, which I do ever since I was terrorized by a lump that mercifully turned out benign. </p>
<p>For a long time, I've had a sneaking suspicion that the money goes to support business as usual for whatever institution collects it, and that there's little-to-zero connection between charitable giving and anybody's cancer. I've thought about trying to designate my donations for care of women who don't have insurance or for the grossly under-appreciated nurses at the hospital where I had my surgery. I've done none of these things, nor have I ever turned my investigative talents to tracking the stamp and daffodil money.</p>
<p>My ignorance and passivity are a small part of the larger phenomenon Ellen Leopold tries to explain in her social history of breast cancer. Why did the radical mastectomy persist for nearly a century as the gold standard treatment when it was so devastating and so ineffective? Why do so many women submit to medical regimes that do violence to themselves? Why does society charge individual women with lifestyle modifications and lump hunting and call it prevention? And why has there been so little progress in preventing and curing breast cancer?</p>
<p>In a curious way, the angry impulse that drives this book off course also powers it to a bigger critique than Leopold knows she's got. What's wrong is that Leopold seems to think breast cancer is unique among cancers and that, consequently, only a radical feminist analysis can explain why the fight against breast cancer was so half-hearted and why it took the shape it did. She sees the history of breast cancer as a series of battles between women patients and their male doctors, and these battles, in turn, as part of a larger cultural oppression of women. Profound gender discrimination, stereotyping, patriarchy, and cultural misogyny are her explanatory tools. </p>
<p>Of course, the premise is silly. There's nothing special about breast cancer, either in "the failure of the twentieth century to abolish" it (a fact she complains of several times) or in the general contours of our strategies against it. However, bag much of Leopold's gender analysis (though not all), and you get a path-breaking inquiry into the sociopolitical history of cancer writ large. </p>
<p><i>Time</i> magazine's recent cover story on Katie Couric's crusade against colon cancer makes the point, for it fits Leopold's description of the breast cancer campaign to a T. First comes public education about how to spot the tell-tale signs, followed by exhortations to get screened, unpleasant though the tests may be. Next, there are morality tales about people who failed to help themselves and paid with their lives. (Cartoonist Charles Schulz, we are told, "resisted being tested despite the fact that his mother, two uncles, and an aunt died of colon cancer. By the time physicians discovered his tumor ... there was little they could do.") Then there's advice to individuals to eat right, take vitamins, exercise, and perform other lifestyle talismans to ward off cancer. There is only glancing mention of probable environmental causes of cancer with no corresponding advice to public officials to address them. Typically, too, <i>Time</i> soft-pedals the lack of effective treatments while hyping the "exciting" developments that are barely out of the test-tube and--oh, by the way--"not all patients can tolerate" (ponder that little euphemism for a moment). Above all, the story chirps cheery advice on early detection ("your best bet to beat colon cancer today is to catch it early") with nary a word about the nasty business of treatment and the often poor prognoses.</p>
<p>Leopold gets tremendous leverage on her social analysis of cancer because she dares utter the unspeakable question, the one pondered silently by everybody who's ever been through cancer treatment or nursed somebody through it: Why, pray tell, are devastating, excruciating, gut-wrenching, slice-and-dice, quality-of-life-destroying, sometimes-lethal cancer treatments an acceptable form of medical care? What much cancer therapy does to human beings--deliberately and knowingly--would be considered torture, poisoning, and even murder, if it weren't done by people with an M.D. after their names to people with a lot of informed consent mumbo-jumbo over theirs. People routinely describe cancer treatment as going through hell. Why do they submit themselves and their children to it? Of course, a few cancer patients don't, and some elect to stop treatment before their doctors would cease. But they are the exceptions.</p>
<p>The simple answer, the one we usually tell ourselves, is that people submit to treatment because they see themselves in a struggle with death. "In extremis, they were willing to be guinea pigs," Leopold says about indigent women in late nineteenth-century charity hospitals. She might be talking about anyone who's been given the medical sentence of "terminal."</p>
<p>But that's too simple. Outside of medicine, the proverbial "remedy worse than the disease," with its implied calculus, is an injunction to refrain from the remedy. In the most famous of the Federalist Papers, for example, James Madison pondered how to cure the ills of faction with a new constitution and used precisely this metaphor to reject the option of curtailing liberty. When we understand something as a cure worse than the disease, we don't go there. </p>
<p>How, then, does a whole society come to accept the deluded combination of cancer prevention by private detection and cancer treatment by harrowing "cures"? We ask a similar question about all domination and all submission to governance that is patently destructive to a people and a way of life. It is a deeply political question. The great insight of this book is to put it to the American cancer regime. (It should be said, though, that Leopold applies the question only to the breast cancer regime; the reader has to do the rest.)</p>
<p>About breast cancer per se, Leopold has two answers. The small one is this: Radical mastectomy is just another act of violence against women, and the persistence of mastectomy is made possible by male domination of women. (She's more subtle, but the story comes down to this.) The big answer, though, gets it right on the money: "Once the basic treatment paradigm (radical surgery) had been put in place and become more readily available, its position could only be consolidated with the active support of the population it was designed to serve." Read the book as a story about how the cancer establishment got the active support of the American population, and you've got a new window on twentieth-century medical history. </p>
<p>Thus, Leopold says, "women had to be educated to accept treatment... . They had to become patients, to demonstrate, by their willing submission to treatment, their absolute faith in the therapeutic value of surgery." The organization we now know as the American Cancer Society was set up for just this purpose--to "influence public opinion" through a "massive public relations campaign" designed to reassure them with hope of a cure. "Surgeons were cast as heroic lifesavers, rescuing women from the brink of death." The campaign continues unabated in places like <i>Time</i>, though now specialists in chemotherapy and bone marrow transplants have edged out scalpel-wielding surgeons in the pantheon of heroes.</p>
<p>If some of Leopold's gender analysis is misplaced (therapeutic violence and the failure to prevent or cure are not unique to breast cancer), some of it is understated. In effect, for a long time all cancer patients were treated like women--dumb (as if their cancer could be hidden from them), passive (the doctor informed them what he would do to them), and submissive (blanket permission forms are still the norm--I was asked to sign two for my biopsy). </p>
<p>Moreover, a mostly male medical establishment could, and still can, organize such sick-making, debilitating, life-disorganizing therapies because it knew, and knows, that women will do all the necessary nursing--from holding the vomit pans and cleaning up the incontinence, to calming the terrified and soothing the agonized. This is not to say that men don't do some of this nursing, but if women didn't exist, most of it wouldn't get done and cancer treatment could never be what it is. </p>
<p>Think I'm exaggerating? One of my friends told me how she was screened along with her husband when he needed a bone marrow transplant. She was asked all kinds of questions about her capacity to provide a support network. Suddenly it dawned on them that <i>he</i> would not be eligible for the transplant unless <i>she</i> could provide satisfactory caregiving. Next time I saw my internist, I asked her if it's true that having a family support system is an eligibility criterion for bone marrow transplants. She fairly interrupted me to affirm, "Oh yes, if someone doesn't have a support system, the transplant is doomed to fail." Her answer, a clinical rationale for hospital policy, reveals just how much the medical profession takes for granted a whole social system of formal and informal nursing by women.</p>
<p>I would be remiss if I didn't tell you that the book contains two extraordinary correspondences between breast cancer patients and their physicians. With Leopold's skillful glossing, the letters speak volumes about cancer care, and these two chapters alone are worth the price of admission. </p>
<p>One woman, Barbara Mueller, was a patient of William Halsted, creator of the radical mastectomy; their letters date from 1917 to 1922. After Mueller's mastectomy, Halsted would never have told her that she indeed had cancer but for the fact that she needed to know the diagnosis in order to file her health insurance claim. The other woman, Rachel Carson, corresponded with Dr. George Crile, Jr., in the early 1960s. Carson, of course, was the scientist and writer who launched the modern environmental movement. Crile was an early and outspoken opponent of radical mastectomies and couldn't get his views published in the <i>Journal of the American Medical Association</i>. Carson knew him socially and turned to him when she realized the surgeon who did her mastectomy had lied about her cancer. </p>
<p>How, you might wonder, can a surgeon convince a brilliant, constitutionally skeptical scientist that she doesn't have cancer--after he's done a mastectomy on her? One reason everyone should read this book is to find out.</p>
<p>But don't stop there. While you're at it, find out about the stamps and the daffodils, would you? </p>
</div></div></div>Thu, 30 Nov 2000 18:47:35 +0000140637 at http://prospect.orgDeborah StoneRationing Compassionhttp://prospect.org/article/rationing-compassion
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even">
<p>"<i>I had begun to feel that we were part of some psychology experiment whose design was to see how quickly we could abandon our humanity</i>."</p>
<p>--Dr. Linda Peeno, an ex-medical director and claims reviewer for HMOs, confessing why she quit, in <i>U.S. News and World Report</i>, March 9, 1998.</p>
<p><font size="+2" color="darkred"><b> B</b></font>ack in the twentieth century, the United States of America enjoyed the most extraordinary economic growth, the most incredible scientific advances, and the most successful global empire. Nothing could threaten the mighty nation--nothing, that is, except an enemy within. The nation's health care system grew and grew, until finally the nation's leaders realized that it meant to devour the entire national economy. </p>
<p>After decades of failed reforms proposed by economists, policy analysts, and other social-science wizards, the leaders turned at last to the Grand Psychologist. He alone, they thought, might understand the psyche of the health care monster and know how to tame it. The problem, he explained to the leaders, was people. Human beings have an unlimited, incurable desire to help sufferers. When humans take care of other people--especially vulnerable, frightened, sick people--a strange love overcomes their reason. They develop bottomless sympathies, delusions of kinship, and fierce loyalties to the people they care for. They will do anything to help. </p>
<p>The Grand Psychologist believed he could save the country if he could conduct a bold experiment. He would need to stifle human compassion. But he knew the leaders would be upset by his plan, since they needed to present themselves as compassionate in order to get elected. So he told them instead that he would use simple economic tools called incentives.</p>
<p>First, he asked for authority to change the way healers get paid. He would pit the financial self-interest of healers against their emotional impulses. He would penalize them for excess generosity. </p>
<p>He started with the hospitals, the biggest gobblers of medical dollars. He outfitted Medicare--the biggest supplier of hospital cash--with a new reimbursement system. Henceforth, every patient was assigned a diagnosis (never mind that most people have more than one thing wrong with them). For each diagnosis, Medicare set a firm limit on how many days of care it would pay for. If hospitals can't stand to see their patients suffer when the Medicare jig is up, the Grand Psychologist said, let <i>them</i> eat the costs of their soft-heartedness. Having to think of each patient as a diagnosis with a price tag instead of a person with a life, he reasoned, would soon dampen hospitals' compassion.</p>
<p>And sure enough, hospitals reacted by evicting Medicare patients before their Medicare money ran out. They also tried to squeeze higher rates out of other insurers. But very quickly, these other insurers felt suckered--so they, too, adopted tougher controls on what sorts of treatments could be lavished on the sick. Nurses and doctors had no choice but to stop doing as much for sick people. They shuttled sick patients off to somewhere and somebody else--to nursing homes, rehab centers, or relatives--and they called it "discharge planning" to convince themselves they were actually still doing something for the patients. </p>
<p><font size="+2" color="darkred"><b> R</b></font>iding high on his success making hospitals less hospitable, the Grand Psychologist urged the nation's leaders to use his new "prospective payment" plan everywhere that people take care of the sick--nursing homes, clinics, mental health hospitals, private homes, and even doctors' offices. The leaders obliged.</p>
<p>But the Grand Psychologist soon realized that financially squeezing medical institutions didn't pinch healers hard enough to curb their costly compassion. Everywhere, the healers were breaking rules, committing minor acts of fraud or disobedience in order to get their patients' care covered. They were lying to insurers, fudging patients' records, exaggerating symptoms, upcoding illnesses, stretching the truth to fit reimbursable diagnoses, concealing either too much progress ("insurer has determined that care is no longer necessary") or too little ("hopeless case--care not medically effective"). Sometimes, they just gave care without billing for it. In poverty-stricken neighborhoods and public hospitals, doctors and nurses even knowingly allowed people to use other people's Medicaid cards to get treatment. Home care nurses and aides visited their clients on their own time, after hours, to provide help and comfort that insurers refused to pay for. Primary care doctors, who were supposed to act as gatekeepers to other services, were advocating for their patients instead. </p>
<p><font size="+2" color="darkred"><b> T</b></font>he Grand Psychologist saw these healers as so many addicts to altruism, unable to stop caring and abide by the law. So he ratcheted up the pressure. The government slashed its Medicare budget, ensuring that hospitals, nursing homes, and home care agencies had to slash theirs, too. Meanwhile, he proselytized about unnecessary care, waste, and inefficiency, so the people would believe their healers were doing something wrong by caring so much. He planted the seeds of distrust between sick people and their healers. </p>
<p>As institutions cut personnel to stay within their straitened budgets, caseloads for the remaining staff increased. Nurses and aides were allowed less time with each patient. Managed care companies began paying doctors on a per capita basis, no matter how sick or time-consuming the patients. They said they were adjusting payments to account for the complexity and severity of cases in each doctor's practice, but they used a formula they knew accounted for only a tiny fraction of the differences. They started giving doctors bonuses for denying care. HMOs, whose doctors were captive employees, imposed productivity quotas: See so many patients per hour, or else. </p>
<p>Having little time to spend with each patient, healers were less likely to develop those intractable feelings of affection, compassion, and dedication to which they were unfortunately so prone. </p>
<p>Just to sever the bonds between primary care doctors and patients a little more cleanly, managed care plans required patients who needed hospital care to be treated by full-time hospital staff doctors instead of their primary care doctors. These "hospitalists," as they were called, were sold to the medical profession and the public as specialists in complex and acute cases--better qualified and more efficient than mere generalists. Family doctors were no longer paid for visiting their patients in the hospital, providing the reassurance of a familiar face and the expertise of a long-term relationship. Hospitalists, after all, could treat--or not treat--with detached objectivity. And anyone who complained about the new worship of efficiency was tarred as a lazy slouch or a parasite on the social good.</p>
<p><font size="+2" color="darkred"><b> S</b></font>till, there were complaints. People resented being treated like parts on an assembly line or debits on a financial statement. Their legislators posted daily sob stories on the congressional video monitors. The newsweeklies profiled victims of insurance denial and screamed things like "HMO HELL" across their covers. </p>
<p>But the Grand Psychologist had other strategies to discredit healers and turn professional altruists into deviants. He declared that doctors hadn't sufficiently proven the effectiveness of their treatments, then accused them of practicing by habit, tradition, conventional wisdom, sympathy--everything but science. He espoused "evidence-based medicine," as if it were something new and most of what the healing professions had been doing before his dictatorship was witchcraft. He pumped ever more funds into randomized clinical trials to expose placebo effects. (Never mind that the placebo effect might represent the healer's true power, the power to give patients hope and enable them to heal themselves.) He aimed to deny reimbursement for any care that hadn't been scientifically proven in a large-scale randomized study. By challenging the scientific basis of medicine, the Grand Psychologist undermined people's trust in medical care and justified insurers' refusal to pay for it. </p>
<p>Alas, people continued to seek treatment for their suffering, and healers refused to give up on their patients. They continued to offer unproven therapies and comfort measures with no medical impact. The Grand Psychologist had no choice. He had to make compassion a crime.</p>
<p>He mounted a campaign against fraud and abuse, sending government auditors into hospitals, nursing homes, and home care agencies to scrutinize their records. The auditors found some genuine fraud--indeed, plenty of it. But predictably they also found zillions of clerical errors, incomplete forms, and other technical irregularities, for which the institutions were slapped with humongous fines and punitive damages. Inevitably, too, the auditors found zillions of cases where applying insurance rules called for professional judgment. They used the authority of the federal government to declare healers' pro-patient decisions wrong--and fraudulent. At last, those overly compassionate healers were exposed as the criminals they were. And finally this little reign of terror made healers think twice about helping their patients.</p>
<p>After a long day, the Grand Psychologist tidied his lab, sank into his armchair, lit his pipe, and admired his work. The experiment had been a complete success. ¤</p>
</div></div></div>Thu, 30 Nov 2000 18:47:35 +0000140763 at http://prospect.orgDeborah Stone