Magnus on ethics in bioengineering research: ‘Society is…pushing science to go faster and faster, and is also frightened of what might come.’

David Magnus Ph.D. ’89 is the director of the Stanford Center for Biomedical Ethics and the co-chair of the Ethics Committee for Stanford Hospital. The Stanford Center for Biomedical Ethics conducts bioethics research focused on a range of fields, including genomics, end-of-life care, cultural diversity, neuroscience, the healthcare marketplace and technology development. The Center also contributes to national and international policy discussions. The Daily sat down with Magnus to discuss the integration of ethical considerations into bioengineering research and how ethics is incorporated into the bioengineering major.

The Stanford Daily (TSD): Where is your research focused, and how do you ensure that your research is ethically conducted?

David Magnus (DM): All of my work I would describe as related to social, ethical or policy implications around the topics at hand. The technical, empirical parts of research that we do are really empirical social science studies … We do things like surveys, focus groups, interviews … things that create data that’s relevant for assessing the ethical, social implications. All of my work is in that dimension. It might be empirical, and it might be technical, but it’s definitely all focused on ethics.

TSD: How does BIOE 131: Ethics in Bioengineering (ETHICSOC 131X) teach students about ethics?

DM: When they were first creating the bioengineering major, the biggest feedback they got from main campus was [to] have a major for bioengineers at Stanford and have ethics be a main component of what they do and what they learn. Training a bunch of people to use this incredible technology and build new things without having a background in ethics struck people on main campus as a bad idea.

Given that feedback, Russ [Altman, who co-teaches the course] was very keen on making sure that there would always be at least some ethics component in the bioengineering major. I think the original vision was that there would also be ethics modules that show up throughout the curriculum in different ways, and that’s happened, but maybe not as much as we had originally envisioned.

With a little help from me, Russ was able to procure a grant to help fund the creation of the course as well as to help introduce ethics into bioengineering more generally. The course has gotten bigger every year … to the point where now we have around 180 students signed up this quarter.

TSD: Do you think that bioengineering research is outpacing the rate of discovery that society is comfortable with?

DM: I would say it’s more complicated than that. I would say that society has a schizophrenic attitude toward new technology … Part of society loves the fact that there are new treatments. They love cutting-edge things that are going to benefit them. I spend time in the hospital; I run a clinical service for ethics, and we do rounds in the Intensive Care Unit every week — trust me, the patients that we see are desperate for new things that are going to come along and make a big difference in their lives where nothing else works.

So, we love new technology and we have a huge bias toward new things that we hope are going to save us, make us better, make the world better and make us happier. But, at the same time, there are the Promethean concerns which are expressed in a million different ways [whenever new technologies come out], like imagery of Frankenstein, such as “frankenfish” or “frankenfoods.”

I don’t think it’s as simple as saying that the science has outpaced society’s willingness to accommodate. I think society is both pushing forward and actually pushing science to go faster and faster, and is also frightened of what might come.

TSD: What kind of advice do you have for researchers? Are there things that you would say every engineer needs to do in their research to remain ethical?

DM: There are some very obvious and clear guidelines that everybody has to know and be aware of. Some of those are regulatory in nature, but those are often very narrow and don’t address the big picture concerns.

If you’re doing something with human subjects, you have to make sure risks are reasonable in relation to benefits, that risks are minimized, that participants are informed, things like that. But those, I think, don’t go far enough in addressing the larger issues.

For that, I think that a couple of things are really important. One, always remember the public. We don’t do enough to recognize the importance of research participants, or the public who’s going to be bearing the benefits and the burdens of different treatments or technologies that we develop. [We need to ensure] that they have some say earlier on in the development process. What has been happening for a long time is scientists do work, and each step builds on the previous step, and we’re going down a certain path, [but] the public is largely unaware of what that path is. And then it crosses a boundary that is from the scientists’ point of view just the logical next step in the development of the research, but now it suddenly crosses some threshold where society becomes aware of it and everybody goes nuts.

What’s problematic about that is the fact that we’re doing a terrible job of educating the public and having the public involved. We haven’t done an adequate job of [considering] prophylactic ethics, of thinking things through in advance.

We at Stanford have actually been on the cutting edge of trying to get very embedded in the research so that we know what’s going on. [We] have much more prophylactic ethics, including public involvement much earlier in the development of those pathways. A good example of that is what we’ve done in synthetic biology.

Synthetic biology is an incredibly powerful new set of technologies. [But] before we had synthetic biology … we had a team of ethicists and people in religious studies working closely with scientists to see what the research was before it had even gotten successful, so that we were already thinking about the ethics of it, thinking about the religious implications and writing about that earlier in the development … We published a paper in the journal Science back in 1999 called “Ethical Considerations in Synthesizing a Minimal Genome,” and it accompanied research by Craig Venter and folks at The Institute for Genomic Research.

And it was just a good illustration of this collaborative project that we had been working on together, so that several years later, when [Eckard] Wimmer’s group actually created a synthetic virus, we were able to publish things that said, “Here are the ethical issues, here are the challenges; this isn’t a threshold that you should be that freaked out about, but there are some issues, and here’s what we need to do, how we need to regulate it, how we need to think about it,” and we could start to get the community of scientists together to think through some of those issues and do some self-regulation.

So when it got announced and the media initially got hysterical, we said, “Calm down, here’s an article from a few years ago. We’re already dealing with it. It’s all sorted out.” We need more of that. It’s not enough just to have ethicists doing that. Since a lot of these technologies are going to involve risks and benefits to the public, and gambles in a way, especially for technologies with a dual-use capacity, it is important to have some sense that there has been a significant public engagement process so that we can say that there is public buy-in.

TSD: Can you give me an example of what the public engagement process might look like?

DM: So there are a couple of different approaches that have been taken. One is called deliberative democracy … You bring in a bunch of people from the public, and you make decisions about how your sampling is going to work. You might oversample for certain parts of the population that you’re particularly concerned about being vulnerable. You spend a certain period of time educating them about the issues and the topics, let them deliberate and then let them give feedback or even ultimately make decisions about what your policies are going to be.

There is another approach, called the deliberative stakeholder engagement process, where … the stakeholders themselves actually work together, meeting and arriving at a consensus about policies. That has been happening much more frequently in the last five or ten years.

It’s still completely alien to the way most scientists think. I’m still always shocked when I run into groups who say, “We had a hard enough time getting bioinformaticians, clinical geneticists and basic geneticists to all meet and agree on something. How could we possibly also bring in the patient voices?” But in the end, that’s a really bad idea, and they need to bring those voices in earlier.