This course seeks to create an informed public, aware of the technical, medical, legal, and ethical issues associated with implantable medical devices. The course features conversations with experts from a variety of relevant fields to discuss the present and future technological, ethical, legal, and social challenges associated with implantable medical devices.
DISCLAIMER: This course is intended for educational purposes only. None of the information provided in this course constitutes medical or legal advice. If you have medical or legal questions, contact your physician or attorney.

Taught by

Lauren Solberg

Assistant Professor

Michael Bachmann

Associate Professor

Adam Shniderman

Assistant Professor

Molly Weinburgh

William L. & Betty F. Adams Chair of Education

Brittany Bachmann

Transcription

Yeah. That's a really hard question. There are two things we haven't talked about that I think are important. One is notice. Because at some point, right, we're talking about medical devices. So how did the medical device get into the body of the person who's now worried about its hackability? At some point, they had a discussion with a medical professional about having the medical device. And given the risks we're talking about, I would want them to have a full understanding of those risks. If we're talking about somebody who never travels, who's never going to leave the US, that doesn't mean their risk is zero. But it might mean they're less at risk than somebody who is globetrotting and exposing themselves to lots of different states, or to a greater number of potential attackers, let's say. So, notice about the risks involved, which is not a simple question, I think is critical in this area, along with giving people the option of turning off the wireless function or choosing a device that doesn't have wireless capabilities. You asked what we could do. And after talking about all the bad things that could go wrong, I'm still hesitant to say, this is what we must do to regulate these things now. Because overregulation, or the unintended consequences of regulating new technologies, can in some instances be much worse than the harm the regulation was meant to prevent in the first instance. I can give some examples in the medical context. I want to be very clear about the risks, and I want people to know those risks going into their use of medical devices. And we need to study the issue. But I'm not sure that we're ready to say these five things must be stopped, whatever they are.
And that question comes up in technology all the time: how much do we regulate? You could see it in the context of Uber. People want to protect the taxicab industry, but they also want the ease and benefit of having Uber. Is there a way to have both? In some instances there is; in some instances there isn't. Maybe you don't want to protect the taxicab industry; maybe it's a bad example. But the overregulation of technology, especially regulation based on fears rather than on hard data about the risks, or on an attempt to quantify the risks, is itself risky. In this context especially, because privacy is such an abstract concept. And privacy advocates, often rightly, are often quite absolutist about drawing lines in the privacy context. Unfortunately, that leads to policies that are not always the wisest, or that don't maximize welfare. So, for example, it would be easy to imagine or endorse a rule that says that if person A collects your personal data, they can't share it with anybody. It makes sense. Somebody collects your medical information, and you would say, please keep that safe and don't share it with anybody without my permission. You cannot share it. But then, what if it turned out that they were a medical researcher and could do all sorts of really useful things with that data, the data they've collected about 200,000 patients, let's say? And what if they didn't have an easy way of getting in touch with you to get your permission to use the data? Some people would still say they can't use it. But other people would say that if there's a lifesaving application of the data, and as long as it's used in an anonymized fashion, it's totally fine to go ahead. The example I'm thinking of is the Vioxx study that Kaiser did.
They looked at hundreds of thousands of patient records, maybe even millions, and could see that patients in department A who were receiving this drug were showing up over in department B. And under many people's understanding of privacy rules, it may not have been appropriate for department A to be sharing that data with department B. But the end result was that they discovered an unintended consequence of the drug, Vioxx, which was leading to many, many heart attacks and ultimately, probably, hundreds of thousands of deaths. So in that instance, it seems to me that the privacy concern is real. We want to safeguard people's privacy. But you have to balance that against the lifesaving applications that can be discovered by pooling data, sharing data, and cross-referencing data. And so drawing hard lines without thinking about the trade-offs, the costs and benefits, or having a real understanding of what might be done with that data, both for good and for bad, is unfortunate.