CURRENTS
Volume 15, Number 4, 1992.

An Open Letter to the President-Elect

Dear Mr. Clinton:

As an economist, I recognize that, at the margin, free advice
is probably worth what it costs. At the risk of overloading your
agenda, however, let me suggest four specific near-term actions
affecting federal regulation that would increase economic growth.

First, appoint a responsible official as administrator of the
Office of Information and Regulatory Affairs. The gross annual
cost of federal regulation is now about $600 billion and has been
an increasing constraint on economic growth. The office with the
primary responsibility to review federal regulations, however,
has not been led by a political appointee for four years, and
your friends in Congress have seriously limited the effectiveness
of this office. You will find, as did President Carter, that an
effective review of federal regulations is an important part of a
coherent economic program.

Second, review the Basle standards on bank capital. Those
standards on bank capital, implemented over the past several
years without congressional review or approval, have seriously
limited the supply of credit, primarily to small business.
Moreover, those standards may be the primary common reason for
the recession or slow growth in all of the major economies. As
long as the government insures deposits, some regulation of bank
capital is required, but those standards are badly designed and
merit careful review and possible change.

Third, propose a repeal of the Davis-Bacon Act. The
regulations under that act, approved in 1931 to limit the
employment of blacks in the construction trades, substantially
increase the cost of construction contracts financed with federal
money. You have proposed a substantial increase in federal
spending for public infrastructure. The value of that initiative,
however, will depend on the amount and type of construction
completed, not on the amount spent. You are well-advised to
propose a repeal of that act as part of your infrastructure
program.

Finally, above all, do no harm. The major potential new
regulatory burden is your tentative early endorsement of a
"play or pay'' plan for health insurance. The major burden
of that measure would be on small businesses and the low-income
employees of those firms. There are much better ways to address
the problems of those without health insurance. Review the health
care problems and policy alternatives very carefully before you
commit your administration to a major health care initiative.

W.N.

Insurance and the Regulation of Medical Care

For many years the government has subsidized the demand and
restricted the supply of medical care. One consequence has been a
rapid increase in both the relative price and real expenditures
for medical care. The relative price of medical care has
increased at an increasing rate; in 1991 it increased 5.8
percent--a rate that, if sustained, would double the relative
price in the next twelve years. Total
expenditures for medical care have increased from 5 percent to 13
percent of GDP over the past thirty years and are now the most
rapidly growing component of both private payrolls and government
budgets.
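
A quick check of the doubling-time arithmetic confirms that
figure: a price rising 5.8 percent a year doubles when

\[
(1.058)^t = 2, \qquad t = \frac{\ln 2}{\ln 1.058} \approx 12.3
\text{ years}.
\]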

The primary cause of this rapid escalation of medical care
prices and expenditures has been the progressive broadening of
the number of people and the range of medical services covered by
health insurance. The share of personal health care expenditures
paid directly by patients has declined from about 50 percent to
about 20 percent since 1960. Given the dominance of third-party
payments, neither patients nor physicians now have an adequate
incentive to control the costs of medical care.

One other consequence has been a progressive broadening of the
regulation of the medical sector in an attempt, so far
ineffective, to constrain the escalation of medical care prices
and expenditures. The compensation rates to hospitals by
Medicare, for example, have been controlled since late 1983, and
control of the compensation rates to physicians by Medicare was
implemented in 1992. Congress is now considering a major bill
that would extend the Medicare compensation rates, with limits on
balance billing, to all private payers.

Those issues--the interaction of insurance and the regulation
of medical care--were the focus of the third annual Regulation
conference on April 20 and May 1 of 1992. Most of the articles in
this issue are selected from papers presented at that conference.
As has been the case with prior Regulation conferences,
the selection of papers to publish was not easy and does not
reflect on the quality of the other papers. A brief summary of
the other papers is a useful complement to those published in
this issue.

The first panel addressed the effects of incomplete health
insurance coverage. George Fisher, a practicing physician,
summarized the characteristics of the uninsured. Around 35
million Americans do not have health insurance; most of this
group, however, is only temporarily uninsured, usually between
jobs, and the characteristics of the group are very
heterogeneous. Arnold Epstein, a physician at the Harvard Medical
School, summarized the results of a major study of the medical
care and health status of the uninsured. The major results of
that study are that the uninsured use about two-thirds as many
medical services as the insured (normalized for age, race, and
sex but not income) and that the health status of the uninsured
is somewhat worse than that of the insured both before and after
the use of a medical service. Thomas Hoerger, an economist
at Vanderbilt, summarized the evidence on who pays for the
uninsured. As of 1989, hospitals supplied about $11 billion of
uncompensated care, primarily to the uninsured. Services provided
to the uninsured are financed, roughly, 26 percent by
the uninsured directly, 20 percent by state and local governments
and private donations, 10 percent by Medicare, and the rest
(roughly 44 percent) by cost-shifting from charges to the
insured. Survey evidence
suggests that the average physician provides about $25,000 of
uncompensated care per year, again primarily to the uninsured.
The general lesson from the panel is that the lack of health
insurance leads to significant problems, in terms of both the
health status of the uninsured and the costs of uncompensated
care, but that these are relatively small and manageable problems
in a medical sector in which total expenditures are now about
$800 billion a year and increasing rapidly.

The second panel summarized the effects of broader health
insurance coverage on the relative prices and regulation of
medical care. Gary Robbins, an economic consultant, summarized
the results of a study of the effects of tax-subsidized and
tax-financed health insurance on the relative price and
utilization of medical care. At the margin of current conditions,
over 50 percent of additional subsidies are reflected in higher
prices and less than 50 percent in increased services. John
Goodman, an economist at a Dallas policy institute, attributes
most of the increased regulation of the medical sector to an
attempt to control the increased prices and expenditures that are
primarily the consequence of broader health insurance coverage.
For the most part, those regulations have not been effective in
offsetting the "moral hazard" problems caused by insurance
and have themselves become a source of increased costs. The
general lesson of the panel is that broader health insurance has
been a major source of the perceived problems of American medical
care.

Other conference papers not published in this issue addressed
a range of topics. Richard Scheffler, an economist at Berkeley,
reviewed the record of hospital reimbursement under Medicare; one
important conclusion of his paper is that the prospective payment
system has increased the inflation rates on services covered by
private insurers. Sheila Shulman, a public health specialist at
Tufts, reviewed the record of the Orphan Drug Act; she proposes a
change in test criteria to better focus on valuable drugs that
are not likely to be commercially viable. John Holahan, an
economist at the Urban Institute, summarized the effects of
federal mandates on Medicaid spending; he concludes that those
mandates have had a significant but small effect, with the
largest effect on spending by the southern states. Katherine
Swartz, also an economist at the Urban Institute, evaluated the
"pay or play'' proposals and concluded that they are best
perceived as a transition to a national health insurance program.
Gerald Musgrave, an economic consultant and the author of a
different article in this issue, summarized the case for
"Medisave'' accounts as a means to increase the incentives
of patients to control the use of medical services. The
conference participants were also treated to the informed
political realism of luncheon addresses by Rep. Willis Gradison
of Ohio and Gail Wilensky, the health policy adviser in the Bush
White House.

Where do we go from here? The major lesson that we should
have learned from this conference is that the inflation in
medical care prices and expenditures can only be reduced by
reducing the growth of demand for and increasing the supply of
medical care. There are specific, relatively small policy
problems resulting from the lack of health insurance, but any
measure to broaden health insurance to those who would otherwise
be uninsured should be part of a broader reform to reduce the
growth of the total demand for medical care. Direct regulation of
medical prices and expenditures, as in any other market, would
surely lead to rationing of access to medical care by other
criteria.

The more things change, however, the more they stay the same.
President-elect Bill Clinton campaigned as the candidate of
change, but his tentative proposals on health policy ignore the
major lessons of recent decades. The uninsured would be provided
health insurance by mandates on employers or additional
government expenditures, and total expenditures would somehow be
limited by direct controls. Those measures would exacerbate the
primary conditions that have led to a perception of a health care
crisis--higher prices, higher expenditures, and broader
regulation. Clinton has promised to propose a major health
program in the next few months. Congress would be well-advised to
act more deliberately. Genuine change implies a change of
direction, not of magnitude. More of the same would only increase
the magnitude of our major health care problems.

W.N.

Chipping Away at Industrial Policy

It is time to gloat a little. I refer to the November 20,
1992, Washington Post, specifically to the front-page story
headlined "U.S. Again Leads in Computer Chips." The Post
story reports preliminary 1992 sales-figure estimates showing
that, for the first time since the mid-1980s, American
semiconductor manufacturers will edge out their Japanese rivals
for largest share of the world computer chip market. The article
might just as well have been titled "Industrial Policy
Misses Boat Again."

The decline and fall of the American semiconductor industry
have been part of the economic nationalist-industrial policy
mantra for years. That collapse was supposedly most clearly
revealed in the figures for aggregate world market share: in 1982
the U.S. industry had 54 percent of the world chip market,
compared with 34 percent for the Japanese; by 1989 those numbers
had basically flipped, with Japanese market share rising to 52
percent and the U.S. share dropping to 35 percent. It was a case,
in the words Clyde Prestowitz used to title his book, of
"trading places.''

Somehow, though, the sky has managed to recover from its fall
over the past couple of years. In 1990, for the first time since
1979, American aggregate market share gained on the Japanese. And
now, according to estimates by VLSI Research, American firms will
actually edge out their Japanese competitors in 1992, 44 percent
to 43 percent. It is a bitter setback for the doom-and-gloomers.

The story behind the rebounding numbers provides a valuable
lesson on the perils of government's intervening to assist
so-called strategic industries. Panic over the declining fortunes
of American chipmakers provoked Washington to attempt a number of
protectionist and industrial policy responses. All of them
failed, yet the industry recovered anyway. With the benefit of
hindsight, it is clear that the industrial policy crowd
completely misdiagnosed what was happening in the semiconductor
industry.

In fact, the reports of American chipmakers' demise
were always greatly exaggerated, those apocalyptic aggregate
market share statistics notwithstanding. In the first place, the
dramatic reversal of fortunes that the 1982 and 1989 figures seem
to reveal is largely an artifact of currency fluctuations. Using
a constant 1990 yen-dollar exchange rate, the apparent 37-point
swing in market share differentials (the U.S. industry up 20
percentage points in 1982, down 17 in 1989) reduces to a much
more modest 12-point swing (the U.S. industry down 3 points in
1982, down 15 in 1989).
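
In those figures the swing is the change, between 1982 and 1989,
in the gap between the U.S. and Japanese shares:

\[
\underbrace{(+20)-(-17)}_{\text{current exchange rates}} = 37
\text{ points}, \qquad
\underbrace{(-3)-(-15)}_{\text{constant 1990 yen}} = 12
\text{ points}.
\]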

Furthermore, the figures cited do not include the American
"captive producers''--companies (most prominently IBM) that
produce semiconductors for their internal use but not for sale.
Japan doesn't have captive producers. In 1987 IBM and the other
captives produced an estimated $5.1 billion worth of computer
chips, as compared with $12 billion in sales by U.S.
"merchant'' producers. Include the captive producers in the
market share statistics and the U.S. industry never lost its lead
over the Japanese.

Even so, the trends for the U.S. semiconductor industry were
decidedly negative for much of the 1980s. The reason for that is
that American firms were more or less routed in one particular
product market: high-volume, standardized "commodity"
memory chips, the biggest-selling of which are known as DRAMs
(dynamic random access memory, pronounced dee-ram). Here a few
words of oversimplified explanation are in order. Semiconductors
may be divided into two broad classes: "commodity" chips,
which are mostly memory chips that store and retrieve data, and
"design-intensive" chips, which generally are logic chips
that process and manipulate data. Commodity chips are
standardized in design and function and are sold in high volumes;
design-intensive chips have specialized or even customized
applications and are sold in lower volumes. The market for
commodity chips, as their name implies, is price-driven;
design-intensive chips, by contrast, are highly differentiated
with high "proprietary" content.
commodity chips is a matter of manufacturing efficiency;
competitiveness in design-intensive chips depends primarily on
innovative design.

During the first half of the 1980s, one American chipmaker
after another left the commodity memory business in the face of
relentless Japanese competition. By 1986 only two U.S. firms
continued to make DRAMs for sale; the U.S. share of the merchant
market plunged from nearly 100 percent a decade before to less
than 10 percent. In the design-intensive segment of the industry,
though--most notably in microprocessors--American firms
maintained their preeminence.

The evacuation from the DRAM market was widely regarded as a
catastrophe for American microelectronics. It was conventional
wisdom that competitiveness in commodity memory chip
production was the key to competitiveness in semiconductors
generally. According to the influential 1987 report on defense
semiconductor dependency by the Pentagon-sponsored Defense
Science Board, DRAMs "in many respects represent the
bellwether of the semiconductor industry." They "are the
most challenging semiconductor chips to manufacture
competitively, and their development establishes the pace for
progress in semiconductor technology."

Thus, it was thought that losing out in DRAMs had broad
negative implications for American competitiveness. To quote from
A Strategic Industry at Risk, a 1989 report by the National
Advisory Committee on Semiconductors, "[t]he loss of
position in memory is particularly disturbing because
leading-edge memory drives technological advances in a broad
range of process and manufacturing areas." The fear was that
falling behind in DRAMs would start a chain reaction of
subsequent losses in other areas of the semiconductor industry,
not to mention the upstream semiconductor equipment industry and
the downstream electronics industry.

In particular, the American lead in design-intensive chips was
seen as unsustainable unless the commodity memory market could be
reclaimed. The Defense Science Board report stated: "In the
absence of a domestic mass-production revenue base needed to
preserve a viable domestic production equipment industry, the
specialty [that is, design-intensive] producers themselves
may become dependent on foreign suppliers for their materials,
equipment and fabrication technology, and would then be at a
disadvantage when under competitive assault by firms controlling
the access to those resources."

In the larger picture the Japanese ascendancy in DRAMs looked
like yet another advance in Japan's bid to replace the United
States as the world's top economic superpower. In the 1950s and
1960s Japanese companies took the lead in basic industries like
steel and textiles, moved on in the 1970s to consumer industries
like automobiles and consumer electronics, and then cracked into
high tech with DRAMs, machine tools, robotics, and flat video
screens. American economic superiority was fast disappearing as
Japanese firms continued relentlessly "up the food chain."

Such fears gave rise to a number of interventionist government
policies designed to rescue the semiconductor industry. First, as
a settlement of antidumping cases filed against Japanese chip
producers, the U.S. and Japan entered into a 1986 agreement to
establish price floors on Japanese chip exports not only to the
United States, but around the world. The agreement further set
a target of reserving 20 percent of the Japanese semiconductor
market for U.S. and other foreign suppliers. Next, starting in
1988, the Pentagon began bankrolling Sematech, a consortium of
the largest U.S. chip producers engaging in cooperative R&D.
The Pentagon's annual contribution is $100 million, or roughly
half of Sematech's budget. Finally, the government offered
special antitrust immunity to U.S. Memories, another consortium
that was supposed to enter commercial production of
DRAMs. There was also talk of $500 million in federal loan
guarantees for U.S. Memories, but the consortium fell apart
before ever getting off the ground.

Notwithstanding all the federal aid, the American
semiconductor industry never really staged a comeback in DRAMs.
U.S. firms in 1991 still had only about 18 percent of the world
merchant market. What, then, has accounted for the American
success story of recent years? Basically, there was a major
strategic shift in the course of the semiconductor industry
during the late 1980s that the conventional wisdom, and its
slavish followers in industrial policy circles, never saw coming.

According to the conventional wisdom, by the early 1980s the
semiconductor industry had reached "maturity"; it had
outgrown its raucous entrepreneurial youth and settled into a
strategic model based on mass production of standardized
products. Within that model, economies of scale and low-cost
manufacturing efficiency dominated; the future of the industry
lay along a predictable path of incremental improvements in
manufacturing technology. Yes, there were segments of the
industry where specialized designs could yield high profits, but
they were niche markets that were becoming progressively
marginalized. At any rate, producers specializing in filling
those niches could never hold their own against the leaders in
the high-volume commodity products.

The mature mass-production model explains why DRAMs were
considered so important. Developing each new generation of
chip--64 kilobits of memory, 256 kilobits, 1 megabit, 4 megabits,
and so on--did indeed drive progress in manufacturing processes
and equipment. If success in semiconductors was simply a matter
of squeezing ever more transistors onto ever smaller slivers of
silicon at ever lower cost, then indeed it was difficult to
picture American companies' keeping up with the Japanese without
keeping up in DRAMs.

Furthermore, it is understandable that people thought the
Japanese would dominate a mature industry. They had all the
advantages to play that game. The huge vertically integrated
Japanese producers had resources that dwarfed most American
producers. Less expensive capital, long-term cooperative
relationships with suppliers, and mastery of quality control all
gave Japanese firms enormous advantages in low-cost
manufacturing. By contrast, the Silicon Valley start-up business
culture was condemned by Robert Reich and other industrial policy
types as "chronic entrepreneurialism.'' Unless American
producers consolidated or joined together in consortia (as in
Sematech and U.S. Memories), that is, unless they became more
like the Japanese, they would always be outmatched.

As events would have it, though, the supposedly mature
semiconductor industry has had an unexpected regression into
adolescence. At present, in fact, it looks like a Peter Pan that
may never grow up. A shakeup in technology has turned the
predictable world of standardized mass production upside down.

Because of revolutionary new design tools, chipmakers can now
design new chips at a pace that would have seemed inconceivable
only a few years ago. The number of new chip designs has
exploded from 10,000 a year in the mid-1980s to over 100,000 a
year currently. As a result, the cost of producing specialized,
design-intensive chips has plummeted, and the market for them has
expanded correspondingly. The old combination of generic chips
and customizing software is giving way to already-customized
hardware. In other words, there has been a major shift in the
industry's balance of power away from standardized commodity
memories and in favor of specialized, design-intensive products.

While commodity memory chips remain a major segment of the
industry, growth has been flat, and opportunities for profit have
been few and fleeting. Japanese firms still dominate the
commodity market, but vanquishing the Americans proved a Pyrrhic
victory. The departure of U.S. companies did not relax the
competitive pressure; competition among Japanese companies
remains ferocious, and now upstart Korean producers are doing to
them what they once did to the Americans: undercutting them on
price at every turn.

Amazingly, the massive investments that Japanese companies
have made in commodity memory--which once inspired such awe and
fear--are now being dismissed as a costly mistake. The recent
Washington Post story, in explaining the U.S. industry's rebound,
had this to say: "The Americans were aided by a Japanese
blunder in choosing to invest heavily in making computer memory
chips, a relatively simple type of chip that is no longer so
profitable." If only Clyde Prestowitz, Charles Ferguson, Robert
Reich, Lester Thurow, and others could be made to write that
sentence on the blackboard a hundred times.

The U.S. industry, meanwhile, has concentrated its resources
on design-intensive chips and has ridden their wave back to
market share preeminence. Leading the way have been smaller,
innovative, entrepreneurial companies that no one had ever heard
of or that did not even exist back when the DRAM market was
being lost.
Many of those upstart companies--with names like Cirrus Logic,
Altera, Xilinx, and Weitek--have enjoyed rapid growth and huge
profits without even manufacturing anything; they design the
chips and then farm out production to subcontractor foundries.

The new perpetual-adolescence strategic model is perfectly
suited to American strengths. Design innovation has always been
an American strong suit, and in the current alignment creating
value with a better design is much more important than cutting
costs marginally on the manufacturing floor. Moreover, with a
profusion of new products and the rapid acceleration of product
life cycles, flexible customized production replaces standardized
mass production. In such an environment the smaller, more nimble
companies nurtured in the American entrepreneurial system are
much better adapted than are the lumbering vertically integrated
dinosaurs of Japan.

Of course, there is no guarantee that American companies will
continue to exploit their advantages, or that future developments
in the industry will not turn the tables back in favor of the
Japanese. However the tale continues, its twists and turns up to
this point have very clear public policy implications. Namely,
trying to second-guess the marketplace through industrial policy
interventionism is a losing proposition, especially in a
turbulent high-tech marketplace where obsolescence is measured in
months.

Industrial policy types have been railing for years about how
the Reagan and Bush administrations, wearing laissez-faire
ideological blinders, allowed America to cede leadership in
microelectronics to Japan (whatever that means). It is now
apparent that everything they were saying was 100 percent wrong.
They claimed to be forward-looking supporters of a
"strategic'' sunrise industry; in fact, they turned out to
be reactionary defenders of a sunrise industry's sunset sector.
Fortunately, the various interventionist policies launched in
response to their pressure proved more irrelevant than harmful;
American chipmakers blithely ignored everything the industrial
policy "experts'' (including some of their own CEOs) were
telling them and prospered accordingly.

Unfortunately, many of the principal figures of the industrial
policy crowd are now preparing to assume important positions
within the incoming Clinton administration, an administration
enthusiastically committed to governmental intervention on behalf
of "strategic'' industries. The parade is not likely to be
canceled on account of a few drops of fact.

B.L.

Is Lead a Heavy Threat?

In 1983 the Environmental Protection Agency forcibly evacuated
Times Beach, Missouri, at a cost to the American taxpayer of over
$60 million. The goal was to eliminate the threat posed by
dioxin, then viewed as the most potent of human carcinogens. Now
the U.S. Centers for Disease Control acknowledge that the risks
posed by exposure to dioxin, while present, were overestimated.

Today lead has replaced dioxin at the top of the federal
government's list of toxic threats, and in fall 1991 the Centers
for Disease Control lowered the safety threshold for blood-lead
content from twenty-five to ten micrograms per deciliter. Feared
for its potential impact on child development, lead has rapidly
given impetus to a vast array of government programs, from
required lead testing for all children receiving
Medicaid assistance to the creation at the Department of Housing
and Urban Development of the Office on Lead-Based Paint Abatement
and Poisoning Prevention, which will spend almost $75 million in
fiscal year 1993 to assist in the removal of lead from homes.
Although the use of lead in consumer paints was banned in 1973,
it is estimated that as many as 57 million homes still have
lead-based paint. According to HUD, the cost of safely removing
that paint will average more than $7,000 per unit. Testing alone
can cost several hundred dollars per unit. Mandating either or
both at the point of sale for housing will increase the cost of
purchasing a home. The Environmental Defense Fund estimates
that deleading the 24 million homes most in need of abatement
will cost approximately $240 billion.
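
The Environmental Defense Fund figure implies an average cost of

\[
\frac{\$240 \text{ billion}}{24 \text{ million homes}} =
\$10{,}000 \text{ per home},
\]

roughly in line with HUD's estimate of more than $7,000 per unit
for removal plus several hundred dollars for testing.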

Those policies are in addition to state and local actions,
such as New York City's decision to spend $3 million to reduce
lead levels in the city's water supply, and other federal
actions, such as an education project of the President's
Commission on Environmental Quality (spearheaded by the
Environmental Defense Fund) and the cleanup of lead-contaminated
Superfund sites. Despite such intense activity, leaders within
the environmental establishment claim that existing efforts are
inadequate. That has led some observers to speculate that the
actual goal is to remove lead from the periodic table of
elements.

One would expect such drastic action to rest on a scientific
consensus about the threats of lead exposure, but there is none.
While scientists nearly universally agree
that lead is a dangerous neurotoxin capable of stunting childhood
development at moderate to high blood levels, they do not agree
on the threshold at which those effects begin. Indeed, the
history of lead research is marked by an acrimonious debate over
the relative risks posed by lead poisoning and the levels at
which public action is warranted because, as with most toxic
substances, the damage from lead is a function of the dosage.

The concern that lead exposure could retard the intellectual
development of young children arose in 1979 as a result of a
study conducted by the University of Pittsburgh's Herbert
Needleman, now chairman of the board of the Alliance to End
Childhood Lead Poisoning, a Washington, D.C.-based advocacy
group. The study purported to show that children with relatively
high, but nontoxic, levels of lead had demonstrably lower IQs
than their less-exposed counterparts. Although the IQ drop was
relatively small--three to four points--the result caused
significant concern: if millions of children were afflicted,
early exposure to low levels of lead could impair a substantial
portion of America's youth.

Needleman's data did not go unchallenged, however. Beginning
in 1981, several experts began to question the methodology Dr.
Needleman used to get his results. At issue was whether he had
sufficiently controlled for confounding variables such as
schooling, socioeconomic background, child age, and parental IQ
that could significantly affect children's IQ scores. One critic,
University of Virginia psychologist Sandra Wood Scarr, claimed
that a cursory review of Needleman's original data revealed that
they supported none of the conclusions he wished to draw. Critics
not only accused Needleman of failing to account for confounding
variables, but questioned whether he, intentionally or not, had
manipulated his data to arrive at the conclusion that low-level
lead exposure could harm children.

The challenges to Needleman's research--hardly conclusive, as
no expert has been afforded the opportunity to evaluate his raw
data thoroughly because Needleman now claims to have lost
them--prompted an investigation by the National Institutes of
Health's Office of Scientific Integrity and the University of
Pittsburgh. The report has yet to be released pending an appeal
by Dr. Needleman, and so the validity of his research remains
in doubt. Given that Needleman's research lies at the
heart of the federal government's lead risk assessment, the
charge of scientific misconduct is cause for concern.

Federal health officials claim that Needleman's study is not
the sole basis for federal policy, however. Dr. James Mason, head
of the Public Health Service, cites eighteen "sophisticated
studies" from around the world that justify public concern about
lead: "It is on these studies--not on any single study--that
the federal policy has been based." What is interesting is that
the particular studies Dr. Mason cites do not support the
government's reducing the standard for safe lead exposure levels
to ten micrograms per deciliter. Several of the studies cited do
not analyze lead levels that are low enough to be relevant to the
debate. Moreover, others show no statistical correlation between
low-level lead exposure and IQ levels. For example, Dr. Mason
cites a study from England, although its author, Dr. Marjorie
Smith, has claimed that it provides no such evidence about lead
exposure. Dr. Smith asserted that "it is
still not possible to conclude with any certainty that lead at
low levels is affecting the performance or the behavior of
children." She contended that "parental IQ is the most
important influence on child IQ," though she noted that several
other factors such as family size, social class, and quality of
marital relationships were also significantly related to a
child's IQ. Smith concluded that there was "no overall
evidence that tooth lead concentrations were related to child IQ
once these other factors were taken into account."

Of the international studies, the only reliable one that found
any connection between low-level exposures to lead and reduced
IQs was conducted in Edinburgh, Scotland. After adjusting for
confounding variables, the study found an IQ decline of less than
1 percent, an effect barely perceptible given the margin of
error in childhood IQ tests of plus or minus 3 to 5 percent. As a
result, it is difficult to argue, on the basis of those results,
that lead is a primary factor in determining a child's IQ. As Dr.
Smith noted in an analysis of a study by the Institute of Child
Health in England, "moderate elevations in body lead burden
play only a minor role, if any, in determining a child's IQ when
compared with parental and socioeconomic factors."

Other studies that are frequently cited include
longitudinal studies conducted in the United States and
Australia. In those studies the scientific evidence does not
appear to support the policy of lowering the lead threshold to
ten micrograms per deciliter. Those studies do, however,
reinforce the scientific consensus on the potential damage to
childhood intellectual development from moderate lead
levels--twenty to forty micrograms per deciliter. Nonetheless, as
noted in the recent Port Pirie, Australia, cohort study published
in the New England Journal of Medicine, "the
deleterious effects of lead are not large, and ... only a small
fraction of the overall variation in IQ can be attributed to lead
exposure."

The threat of lead poisoning, while real, is not so severe as
many activists assert. At worst, it is only a minor player in
retarded childhood development. There is no conclusive evidence
to suggest that the detrimental effects of lead exposure occur at
blood lead levels below twenty micrograms per deciliter. Claims
of an international consensus on the threat of lead at levels
well below that are further belied by the fact
that the U.S. threshold is lower than Canadian and European
thresholds. Even the lead poisoning document in which the Centers
for Disease Control defended the lower standard, Preventing
Lead Poisoning in Young Children, fails to recommend any
remedial action other than continued monitoring for lead levels
below twenty micrograms per deciliter.

Lead poisoning is often viewed as primarily a problem of the
inner city. All estimates concur that inner-city black children
are among those at greatest risk from lead poisoning, as they
are for many other afflictions, from malnutrition to urban
violence.
It should be noted that not only can malnutrition increase the
absorption of lead into the bloodstream, but the lack of proper
nutrition and other factors have their own impact on childhood
IQ.

Sadly, many of the efforts to help those children may only
compound their suffering. Lead programs, such as the testing and
remediation of low-income dwellings, will only increase the cost
of housing for that segment of the population. Some cities are
already experiencing that phenomenon. Certainly, the threat of
being homeless is greater than the threat of living in a home
with lead paint. Moreover, a deleading policy that relies
primarily on testing and remediation at the point of sale would
take more than two decades to reach the majority of dwellings.

An EPA representative says that the lessons from past
experience in regulating asbestos should be particularly
instructive in the case of lead. Indeed they should, as millions
of dollars were spent on what was often an inconsequential or
even a nonexistent health risk. Unfortunately, it is not clear
that those lessons have been learned. In New York City a
Greenwich Village public school was shut down temporarily after
a lead abatement contractor raised the alarm about potential
lead poisoning from flaking paint. According to the New York Times,
abatement costs for the school will exceed $500,000--money that
will not be spent on textbooks, teachers' salaries, or school
lunch programs.

Moreover, as with asbestos, improper remediation actually
increases the threat of lead. A 1990 Boston study found that the
blood lead levels of children in deleaded homes went up, even
though the children had been relocated during the lead-removal
process. While deleading methods have improved since then, the
study points to the danger of improper deleading. Should existing
and proposed lead "education" efforts result in a scare
similar to that with asbestos, it is reasonable to expect that
much of the deleading will do more harm than good.

Because money spent on lead abatement and remediation programs
cannot also be spent on other, more serious problems, the
opportunity costs of lead programs are outrageously high, and the
potential lead hysteria created by government "education"
programs is unconscionable. As with asbestos, efforts aimed at
reducing threats to children could actually have the opposite
effect; and as with dioxin, many of the efforts may be based on
faulty assessments of the potential risks. Before the government
embarks on yet another series of environmental risk initiatives,
it should ensure that its priorities are grounded in sound
scientific evaluations of risk. Unfortunately, with lead, that
does not appear to be the case.