Regulation

The thalidomide tragedy, which resulted in thousands of deaths and disabilities in the late 1950s and early 1960s, changed medicine forever. One of its outcomes was the establishment of more robust mechanisms for the regulation of medicines and medical devices.

There is no doubt that the tightening of regulation has prevented countless deaths and disabilities, and saved many lives. But regulation cannot always protect us from harm, and events disturbingly similar to the thalidomide tragedy continue to occur. Let’s look at two recent examples.

Vioxx pain drugs

In the 1990s, a new class of anti-inflammatory medicines emerged – the “COX-2 inhibitors”. These pain drugs were touted as being less likely to cause gastric ulceration than existing treatments.

One of these, rofecoxib (Vioxx), manufactured by Merck, was later withdrawn from the market, when it emerged that it increased the risk of myocardial infarction (heart attacks).

It emerged that the company had deliberately misinterpreted and concealed some of the information it had about these risks, thus delaying the withdrawal of Vioxx from the market.

Questions were also raised about conflicts of interest – on the part of academic researchers who collaborated with Merck in running trials of Vioxx, members of the data safety monitoring board whose job it was to monitor trials of Vioxx, and members of FDA committees who assessed Vioxx.

A number of class action lawsuits have followed, including one in Australia in 2010, which ruled against Merck. This decision was subsequently reversed, but this was because the judges decided it was not possible to causally link the particular claimant’s heart attack to his use of Vioxx.

Merck has subsequently come to a settlement agreement with Australian patients.

DePuy hip replacements

Yet another class action lawsuit concluded in Australia this June. The action was brought against DePuy International Ltd and Johnson & Johnson Medical Pty Ltd, which were accused of being negligent in their design, manufacture and supply of a particular kind of hip implant.

The story leading up to this will sound familiar: a promising new medical device – the DePuy ASR hip implant – was developed and marketed in the mid-2000s. The company claimed these implants would reduce friction and wear, and improve patients’ mobility.

Complication rates soon proved to be much higher than expected. Around 2,000 of the 5,500 Australians who received the device have required, or are expected to require, revision surgery.

The device was finally withdrawn in Australia in 2009 and worldwide in 2010.

The company has subsequently been accused of not testing the implant adequately, and of knowing – and denying – that its device did not meet manufacturing specifications.

As with the Vioxx case, concerns have been raised about possible conflicts of interest on the part of some of the surgeons who recommended the implant to their patients, and the regulators who evaluated it.

Is there more to come?

These two eerily similar events raise the question: can we do anything to reduce the likelihood of similar occurrences in future?

There is certainly scope to tighten our governance of the pharmaceutical and medical device industries, and the behaviour of those who interact with them. We can also make our regulation of new medicines and devices – and our surveillance of existing products – more robust.

There are, however, several important limits to our capacity to prevent harms from medicines and medical devices – all of which help to explain why history keeps repeating itself.

First, pharmaceutical and medical device companies are commercial entities which invest billions of dollars in developing new medicines and devices. Tight regulations are in place and outright fraud is fortunately very rare.

The commercial imperative is, however, powerful. As a result, there is always the possibility that studies of new medicines and devices will be designed, and their results interpreted and disseminated, in a manner that overstates their benefits, and underplays their risks.

Second, most patients who are injured by medicines and medical devices sustain these injuries in the course of routine medical or surgical therapy – either because of unpredictable adverse events, such as allergic reactions to antibiotics, or because of unintended medical errors.

The adage that “all medicines are poisons” is, unfortunately, true, and we need to accept that even the best physicians and surgeons are only human and will inevitably make mistakes.

Third, we need to balance our desire for innovation and access to new technologies against our desire for safety and control. While there is definitely room to improve regulation and surveillance, we don’t want our clinicians and regulators to be so risk-averse that health technologies cannot make it onto the market or survive once they get there.

Finally, while we might like to think that academic researchers, clinicians and regulators are committed solely to the pursuit of knowledge and the welfare of patients and the general public, the reality is they all need to earn money, and attract funding for their work. This inevitably creates a situation in which their “primary commitments” compete or conflict with other loyalties or with self-interest.

We need to accept that “conflicts of interest” are part and parcel of all social roles. Therefore, there will never be a group of people whose only commitment is to protect patients.

When this sobering fact of human nature is combined with the dangers of the commercial imperative, the inevitability of unpredictable side-effects and medical errors, and the need to balance our desires for safety against our desire for innovation, the future looks uncertain.

The best we can hope for is that our systems of checks and balances will continue to be refined so that the “thalidomides of the future” will be caught and addressed as early as possible.

Pinker’s article begins with an explicit mention of the CRISPR-Cas9 technique for “editing” genomes, and a brief tour through the many diseases which modern medicine and biotechnology are seeking to treat. He goes on to say that research in these areas is essential, a point he frames in terms of reducing the global burden of disease. He then says:

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

One point of common ground here, and indeed with Pinker, is that there really is a problem with much bioethics regulation – the processes of research governance and ethics committee oversight. It can be slow, cumbersome, unpredictable, perverse, contradictory and so on. But even so, no one is suggesting we dispense with it altogether – only that it be improved. Pinker himself says:

Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

He also points out, later on in his article, that biomedical technologies which sometimes seemed very promising often fail:

Biomedical research in particular is defiantly unpredictable. The silver-bullet cancer cures of yesterday’s newsmagazine covers, like interferon and angiogenesis inhibitors, disappointed the breathless expectations, as have elixirs such as antioxidants, Vioxx, and hormone replacement therapy.

It is interesting that several of the technologies he mentions didn’t just fail; their failure was often covered up by manufacturers, and the regulatory system designed to catch these failures arguably didn’t intervene quickly enough. He doesn’t mention this, however; his argument is the quasi-libertarian one that scientists should be allowed freedom to explore and innovate without excess regulation. He is less good at noticing that such innovation takes place in economic conditions which make regulation essential to cope with market failures.

He has two basic arguments for the need for bioethics to get out of the way:

First, slowing down research has a massive human cost. Even a one-year delay in implementing an effective treatment could spell death, suffering, or disability for millions of people.

And:

Second, technological prediction beyond a horizon of a few years is so futile that any policy based on it is almost certain to do more harm than good.

As to the first: while on its face it looks plausible, if his second argument is valid, then the problem is that we just don’t know whether the research we are doing now actually will save all those lives. Indeed, by investing in this research rather than that, we may be wasting all those lives he points to. We just don’t know. As to the second, it is almost trivially true, but these predictions are nonetheless what policy-makers, research funders and investors (not forgetting the rest of us) have to make. But he says:

In the other direction, treatments that were decried in their time as paving the road to hell, including vaccination, transfusions, anesthesia, artificial insemination, organ transplants, and in-vitro fertilization, have become unexceptional boons to human well-being.

In other words, some of our bets paid off, some didn’t, and sometimes some of us bet the wrong way on the basis of moral objections which in hindsight look ridiculous. It’s not clear to me why bets made the wrong way on grounds other than moral ones get a free pass, while only moral qualms get mocked in this way. I am sure we can make a list of wrong-way bets where moral qualms ought to have played a part and didn’t, so we ended up with disastrous consequences for millions. Again, he doesn’t talk about that. The trouble with consequentialist (or decision-theoretic?) reasoning of Pinker’s type is that you have to count all the consequences of all the options, not just the ones which favour your own biases. Indeed, if you are going to be a consequentialist you have to be rather good at predicting outcomes (not as bad as he says we are, in other words). But in any case, he reserves special opprobrium for moral reasoning, thus:

Biomedical advances will always be incremental and hard-won, and foreseeable harms can be dealt with as they arise. The human body is staggeringly complex, vulnerable to entropy, shaped by evolution for youthful vigor at the expense of longevity, and governed by intricate feedback loops which ensure that any intervention will be compensated for by other parts of the system. Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.

My initial reaction to the Pinker article was heated. I have spent my entire working life as a bioethicist (roughly speaking, from my first postdoc, as I was not in bioethics as a graduate student or before) contending with people claiming that what I do is variously a waste of time, a waste of money, ideologically suspect (from any and all directions), intellectually sloppy and so on. Sometimes these criticisms have been levelled at me personally (fine, I can bite back if I need to, and I’m not perfect and sometimes the criticisms have been fair), sometimes at my work (that’s the academic life, I can take it), and sometimes they have been levelled at me and my peers _merely_ because of presumed attitudes, beliefs and values I must have simply because I am a “bioethicist” and “this is what bioethicists think”. Sociologists often do this (not all sociologists…) and historians of medicine often do this (not all historians…). It’s tiresome. If someone wants to know what _I_ think there are various ways of finding out, but a priori judgements of what I “must” think because I am a bioethicist really… get my goat.

My initial reaction to Pinker’s article was that it was an egregious piece of grandstanding which if it had come from the wilder shores of twitter we’d call trolling. However, that would be to attribute motives and intentions to Pinker I cannot verify. What I can say is that it is in a reasonably well established genre of writing which appears quite frequently in the professional medical press (for example, an unsigned editorial about 15 years ago in the Lancet titled “the ethics industry”), and it is perfectly reasonable and sensible to look at the writing as a genre piece, focus on its rhetoric, implied audience and so on. We should also look directly at the arguments, and when I’d cooled off and read Julian and Alice’s pieces I can see that there are arguments in Pinker’s article which have merit and I agree with them. But still, the rhetoric matters. Just as if I am parking my car and someone comes up in my face and shouts “get out of the way” I am liable to take that as a verbal assault even if he then gives me some good and compelling reasons why I might like to move my car a little. This person knows the effect of getting in my face and shouting at me, and it has little to do with the merit of his argumentation. Authors of style guides are pretty good at knowing how rhetoric works too. So are linguists and psychologists.

Never mind. I will get over it. I don’t have to take it personally, after all I don’t recognise myself in his description of bioethicists, so presumably he’s not talking about me anyway. (He wouldn’t know me from Adam).

Turning once more to the arguments, the one argument which has not been touched on directly, and I think is important, is that he says bioethics should not thwart:

…research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

Several commentators have questioned whether we can be that sanguine about how well bioethics does the latter. But it’s the former that bothers me. He’s making an asymmetrical argument: we should discount hypotheticals about bad things, but instead be guided by hypotheticals about good things. To my way of thinking they are no less hypothetical. There’s a branch of the sociology of knowledge which explores this in some detail, the sociology of expectations, and one of the main findings in that field is that biomedicine relies on creating narratives about plausible social futures of technologies in order to attract investors and research funders, persuade regulators and so on. No one, nowadays at least, ever chucks money at researchers saying, go and do something interesting, tell us when you’re done.

So my more general point is that bioethics is _precisely_ a way of telling stories about new technologies and exploring them and seeing what we make of them. There are other ways to do this – for example, as Pinker’s own example shows, making films, writing novels and stories, and so on. Or indeed writing business plans, IPOs and suchlike. This is how humans think stuff through. It’s part and parcel of how we make technologies. It’s not an extraneous factor, that can be shoved out of the way. So, once again, rhetoric matters, and not as a bit of optional packaging, but as part of the real work itself.

Some day I am going to write a book about this. Maybe I should thank Pinker for getting my introduction going with a bang.

Richard visited VELIM in February 2005 while he was on an Australian Bicentennial Fellowship, visiting the Centre for Applied Philosophy and Public Ethics, University of Melbourne. He and Dr Ainsley Newson were colleagues at Imperial College London.