Tackling 21st-Century Tech Risks

I was privileged to have the philosopher and critic Richard Rorty as a colleague for a short time at the University of Virginia. Rorty, who died in 2007, was about as sophisticated a cultural observer as there can be among us American provincials. When I visited him in his office one day, he handed me a book that, he said, someone had commended to him, though he confessed he hadn’t gotten around to reading it. “It looks interesting,” he said with a characteristic shrug. “Maybe you’ll find it useful.” I’m not sure why he thought so, but any book given by Dick Rorty needed to be read.

That was 1999, and the book was a translation of a 1992 work by the German sociologist Ulrich Beck, called Risk Society. One of Beck’s key premises is that the modern world is typified by the exportation of risk. His examples include the industrial disaster in Bhopal, India, in which thousands of people died from a gas leak at a Union Carbide pesticide plant. Doing a little background work on Beck, I came to understand that his ideas were hugely influential in Europe, especially among advocates of the “precautionary principle,” who would place the burden of proof on those who claim that an action or policy will cause no harm.

With some exceptions, the idea of managing risks proactively and proportionately, let alone embracing the precautionary principle, hasn’t caught on here as it has in Europe. In the U.S., technological risk management has been seen as too much government intrusion that threatens innovation. Instead, as Harvard’s Sheila Jasanoff has observed, the American approach has been to “normalize” new technologies like certain genetic manipulations as merely extensions of what already happens in nature.

But that approach puts a lot of weight on distinguishing genuinely new wrinkles from what’s been done before, and it doesn’t help sort out high- and low-risk products. So genuinely innovative technologies that cannot plausibly be classed as minor variants of existing, proven technologies have little choice but to bear the full force of the U.S. regulatory hammer. Instead of restraining government oversight, the result is often sweeping and unnecessary bureaucratic obstacles to innovation that aren’t well matched to actual threats. The case I know best is that of human experiments, which are overseen by the same process whether they are paper-and-pencil surveys or phase-one cancer trials.

Happily, albeit with little notice, the way the federal government regulates the risks of innovative science and technology is undergoing substantial change, one that started in the George W. Bush years and is gaining momentum under Barack Obama. The challenge is to reshape the regulatory philosophy of the 20th century to fit 21st-century research and development. Rather than the one-size-fits-all approach of the post-World-War-II era, government is trying to fit the oversight of emerging technologies — such as nanotechnology, synthetic biology and genetic engineering — to their risks.

What’s intriguing about this shift is that the idea of managing technological risk has been quietly embraced not only by both parties and across ideological lines but also across many diverse scientific and technological fields. Last March the White House told federal agency heads that “regulation and oversight should avoid unjustifiably inhibiting innovation, stigmatizing new technologies, or creating trade barriers.” The statement by several presidential office heads emphasized principles like scientific integrity, public participation, benefits and costs of federal oversight, communication, flexibility, risk assessment, and risk management.

Take the case of nanotech. The last three presidential administrations have supported a National Nanotechnology Initiative (NNI) that has invested $14 billion in research since 2001 to speed the development of materials and devices with novel and valuable electronic, chemical, mechanical, and optical properties. At the same time, however, recognizing that public and business confidence in nano products could be threatened without good evidence of safety, the federal government, at the urging of the president’s science advisors, significantly increased research on the health and environmental implications of nanotechnology. The idea is to gain information while an industry is still young to avoid unnecessary, knee-jerk, and obstructive regulation later on. Along the same lines, in December the administration announced that it will develop a National Bioeconomy Blueprint, including “regulatory reforms that will reduce unnecessary burdens and impediments while protecting health and safety.” In another example, the human research oversight system is now being revised to concentrate oversight resources on the riskiest studies.

All this is a good start toward replacing a one-size-fits-all regulatory philosophy with one that focuses on evidence-based risk. So far, though, as planners have sought to predict on the basis of solid science the actual risk posed by various new technologies, they have underappreciated one important potential problem: the threat that emerging technologies like nano or synthetic biology could be put to illicit “dual use” by terrorist groups. A single documented biological attack using new technology could result in a huge blowback that would undermine investment and public trust for years, a consequence the then-nascent field of gene therapy research suffered after the death of a clinical trial subject in 1999.

The dual use problem came up just a few weeks ago at the end of 2011, when bird flu experiments aimed at bolstering public health efforts raised concerns about the virus’s potential misuse by bioterrorists. Of course, not all risks can be anticipated, but we can surely learn from our mistakes. As the president’s bioethics commission concluded in its synthetic biology report in 2010, “risk assessment activities across the government need to be coordinated and field release permitted only after reasonable risk assessment.” Coordinated risk regulation in advance of product development would encourage America’s innovators to explore low-risk, high-potential-value terrain more aggressively while freeing up resources to move dual use oversight into the 21st century.

Jonathan Moreno is a Senior Fellow at American Progress and Editor-in-Chief of Science Progress. This article is reposted from the Huffington Post technology page.
