Postmarketing Surveillance Is Good And Normal

Scientific American notes a recent study saying that a third of drugs approved by the FDA over the past ten years have since been recalled, been given new boxed warnings, or been given new “safety communications”.

A few people have asked me whether this means the FDA is too lax and needs to tighten its standards. Let’s look at this in more detail.

What the study actually says: of 222 drugs approved by the FDA in the last ten years, only 3 (1.3%) were taken off the market. Another 30% or so received “boxed warnings” or “safety communications”, basically the FDA’s way of adding new rules to their safety information. For example, the FDA might approve a drug for the general population, then issue a boxed warning saying “actually, we noticed that this drug can cause seizures as a side effect, don’t take it if you have a history of seizure disorder”.

The most serious category is the drugs taken off the market. As mentioned above, these are about 1% of the total. This doesn’t sound like the story of a weak regulatory agency failing to do its job. This sounds like a really impressive success rate. In fact, if our standards are so stringent that we’re insisting on a 1% false positive rate, I’m kind of horrified thinking of all the false negatives we must be throwing out.

But doesn’t even a 1% false positive rate mean the FDA failed in some way?

No. Let’s consider the most recent psychiatric drug to be withdrawn. This is nefazodone, an antidepressant withdrawn after the discovery that it causes liver failure once every 300,000 patient-years. That is, if 300,000 patients took it for one year, there would be one extra case of liver failure.

How, exactly, do you want to discover this in pre-approval studies? An average drug study has maybe 500 patients. Do you want to run an average drug study for six hundred years? Or do you want to figure out how to run a drug study that’s six hundred times bigger than average? And these numbers are underestimates – one extra liver failure might be a coincidence. You’d want to see two or three extra liver failures before you start thinking the drug might be involved. The average clinical trial costs something like $50 million. Are you sure you want to multiply this number by six hundred to catch a side effect that will affect one out of hundreds of thousands of people?
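To make the arithmetic concrete, here’s a back-of-envelope sketch using only the figures quoted above (the 300,000 patient-year rate and a 500-patient trial):

```python
# Back-of-envelope numbers from the post: liver failure once per
# 300,000 patient-years, and a typical pre-approval study of 500 patients.
rate_per_patient_year = 1 / 300_000
trial_patients = 500

# Expected liver-failure cases in a one-year, 500-patient trial:
expected_events = rate_per_patient_year * trial_patients  # ~0.0017

# Years you'd have to run that trial to expect even ONE case:
years_for_one_event = 1 / (rate_per_patient_year * trial_patients)  # ~600

print(expected_events, years_for_one_event)
```

And as noted, one case proves nothing; you’d want two or three, which multiplies the required exposure again.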

So what we actually do is post-marketing surveillance. The FDA demands an average drug study on 500 people to make sure that there aren’t any common problems. Then over hundreds of thousands of patients over the space of decades (nefazodone was out almost ten years before it got withdrawn) people collect records and see if there’s any unusual disorder that happens more often when people are on the drug.

Here’s another example from earlier this year: the FDA issued a safety communication that a new drug called Viberzi can cause pancreatitis in people who don’t have a gall bladder. I’m not sure how many patients in the original approval study didn’t have gall bladders, but I bet it wasn’t enough to draw any useful conclusions from. Should all drugs be delayed until they can do separate studies in the gall-bladder-less population? And how would we know beforehand that gall bladders were the problem? Why not separate studies in people with one kidney? People taking antidepressants? Red-haired people? Chinese people? Don’t laugh, Tegretol has a fatal side effect that’s only been observed in Han Chinese.

And then you do the kidney study, the antidepressant study, the red-haired study, and the Chinese study, and darnit, you forgot to look at people who eat way more sauerkraut than any normal person. There you go, linezolid just killed your patient.

You are never going to be able to figure out everything pre-approval. At best, you can prove that the drug is reasonably safe for the vast majority of people. Then later you find something that only happens once in a hundred thousand patient-years, or only to people without gall bladders, or only to Han Chinese, or only if you eat too much sauerkraut. And then the FDA issues a safety communication about it. That’s what it looks like when the system works.

II.

Except I want to look a little further into FDA safety communications and boxed warnings, the large majority of the events found in the study. Some of these are important updates about things like the drug being dangerous to people without gallbladders. Others are…well, the FDA really likes warning people about stuff.

This February, the FDA issued a safety communication about chlorhexidine, aka antiseptic soap. This has been used since the 1950s by loads of surgeons, doctors, and random people who need antiseptic soap for something, and it’s currently a WHO Essential Medication. The FDA wanted us to know that, during fifty years of worldwide use of this product, about one person a year had experienced a severe allergic reaction (by comparison, there are 200 deaths a year from peanut allergies). The safety communication said that if you found yourself having a severe allergic reaction to antiseptic soap, you should call 911.

Last summer, the FDA added a boxed warning to all benzodiazepines (Valium, Xanax, Ativan, Librium, etc) and all opiates (morphine, Norco, Percocet, Vicodin, Oxycontin, etc) warning doctors that it could be dangerous to prescribe those two classes of drugs together. Doctors, who had been warned against prescribing those two classes of drugs together for fifty years, collectively said “well, duh”, and then continued prescribing those two classes of drugs together the same as always, because dealing with anxious people who have chronic pain is really hard.

In 2006, the FDA added a boxed warning to warfarin, saying it could cause major bleeding. Warfarin is a 50-year-old medication taken by millions of people each year, which has probably saved hundreds of thousands of lives over the past half-century. It’s an anticoagulant, which means the whole point is to make your blood clot less and bleed more. Mentioning that warfarin can cause major bleeding is a lot like mentioning that sleeping pills might cause tiredness, or weight gain pills could make you fat. Nevertheless, after fifty years and tens of millions of patients, the FDA decided to issue a boxed warning about this. The medical community collectively said “Well, duh” again and got on with their lives.

Also in 2006, the FDA added a boxed warning to Ritalin, saying it might slightly increase the risk of heart disease in children. Everyone kept prescribing it anyway, and later on some better studies showed that it might not slightly increase the risk of heart disease in children. I’m not sure what the current status of this debate is, but it sure hasn’t stopped like half the children I meet from being on Ritalin.

In 2004, the FDA added a boxed warning to every single antidepressant – yes, every single one – warning that they might increase suicide risk in teens. There is still a heated debate about this, with some recent review articles seeming to confirm, and other people pointing out that, when the FDA warning discouraged people from giving antidepressants to teens, teen suicide attempts suddenly went way up. Anyway, antidepressants are hardly alone here – other psychiatric drugs that received boxed warnings include all typical antipsychotics, all atypical antipsychotics, all benzodiazepines, all stimulants, lithium, Depakote, Lamictal, and I think literally every single psychiatric drug except buspirone.

What I’m saying is – the FDA issuing a safety communication or boxed warning doesn’t mean the drug was a mistake, or that you should be scared that something went wrong. It’s a routine part of the pharmacological monitoring system. This doesn’t mean it should feel routine to your doctor – they should get worried every time a new one comes out and make sure they’re not inadvertently harming their patients without gall bladders – but it should feel routine to somebody looking at this on the institutional/systems level.

III.

So does this study mean that the FDA is too lax and needs to tighten its standards?

I’m not sure. Maybe the best answer is “not necessarily”. We definitely shouldn’t be aiming for a 0% post-marketing event rate. Is a 33% post-marketing event rate too high?

I’ve seen a lot of discussion on this recently which I think takes shortcuts. It points out that the FDA has a higher/lower post-marketing event rate than European agencies. Or that the FDA takes more/less time to approve drugs than it did a couple of years ago. Or that its standards are stricter/looser than some comparable area of the federal bureaucracy.

None of this matters. What actually matters is the number of people helped by incentivizing new drug development and getting it to market quickly, versus the number of people harmed by the safety problems that slip through the cracks.

It looks like some of the post-marketing surveillance events here were pretty silly, while others were pretty serious – two people died from that drug that causes pancreatitis in people without gall bladders. Are those two deaths justified in the context of saving thousands of other people who had whatever condition that drug treats? I don’t know without a lot more work. The only study I’ve ever seen on this is stuff in the vein of Isakov, Lo, and Montazerhodjat, which always finds the FDA is too conservative. Maybe they’re wrong, but if someone wants to prove they’re wrong, they should do the same kind of cost-benefit analysis and let us know that they came to a different result.
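The shape of that calculation can be sketched in miniature. Only the approval count and the event rate below come from the study; every other number is a made-up placeholder purely for illustration:

```python
# Toy cost-benefit sketch. ONLY drugs_approved and event_rate come from
# the study discussed above; every other number is a made-up placeholder.
drugs_approved = 222
event_rate = 0.32                  # ~1/3 had a post-marketing safety event

patients_helped_per_drug = 10_000  # hypothetical net benefit per approval
harms_per_safety_event = 50        # hypothetical serious harms per event

total_helped = drugs_approved * patients_helped_per_drug
total_harmed = drugs_approved * event_rate * harms_per_safety_event

# The policy question is whether (total_helped - total_harmed) goes up
# or down as approval standards loosen or tighten -- not what the raw
# event rate is by itself.
print(total_helped, total_harmed)
```

With different placeholder values the comparison could flip either way, which is exactly why the event rate alone settles nothing.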

The finding that 33% of approved drugs get post-marketing safety events may factor into such a calculation. But without the rest of the calculation, it’s just a meaningless number.

74 Responses to Postmarketing Surveillance Is Good And Normal

The FDA wanted us to know that, during fifty years of worldwide use of this product, about one person a year had experienced a severe allergic reaction (by comparison, there are 200 deaths a year from peanut allergies).

I would expect that the number of people who are exposed to peanuts (note that you have to count incidents, not the actual number of people) is a lot bigger than the number of people who are exposed to such soaps, even if “loads” of doctors use them.

My wife got to be the one person experiencing a severe allergic reaction a few years ago. Still, it was _obviously_ an allergic reaction (hives everywhere the chloraprep was, and nowhere else), and I wonder if a specific warning really does more than add to the noise.

Peanut oil is used widely in all kinds of preparations, including creams and emollients, and these can be prescribed to treat eczema. For someone with an allergy (particularly a baby/young child), “this contains peanut oil” on the box might be very important information to communicate.

Anyway, antidepressants are hardly alone here – other psychiatric drugs that received boxed warnings include all typical antipsychotics, all atypical antipsychotics, all benzodiazepines, all stimulants, lithium, Depakote, Lamictal, and I think literally every single psychiatric drug except buspirone.

Wait, 200 deaths/year from peanut allergies worldwide? That sounds incredibly low, based on how many people have nut allergies, and how easy it is to accidentally eat something with nuts in (or in comparison to how many people die from really stupid things like being crushed by fridges).

This also seems like a great example of the noncentral fallacy at work. Saying that one out of every three drugs has had a “major safety action” such as withdrawing the drug or issuing a warning is not that different from talking about “criminals such as the Unabomber and Martin Luther King Jr.” (to stick with the example from the linked post).

Even if it’s not literally saying that withdrawing a drug is the same as issuing a warning, it’s putting the two in the same category and implicitly suggesting that they are similar–they both belong in the category of “major safety action,” after all! In fact, I’m a bit suspicious that the underlying study might have done this intentionally. I can’t think of a good reason to include all these events in one gerrymandered category other than to make the (very common) warnings scarier by associating them with the (very rare) withdrawals. But maybe that’s too cynical.

I’m even more suspicious now that I look at the headline of the Scientific American article: “Nearly 1 In 3 Recent FDA Drug Approvals Followed by Major Safety Actions: The withdrawal of these drugs poses concerns about a push for less regulation.” That headline pairs “major safety action” very strongly with “withdrawal.” If I were to read that headline uncritically, I would get the extremely clear impression that one in three drugs were withdrawn or had something equally bad happen. That can’t be accidental.

Rare (but important) events suck because of the crazy amount of evidence you need to draw conclusions.

The usual way this is dealt with is to find a proxy for the rare events that is good enough, or to deal with it in layers. Rather than testing that self-driving cars result in fewer deaths (requiring 10-100 billion miles), we can test that they result in fewer accidents (10-100 million miles of testing). Rather than testing that our storage system can store your data for thousands or millions of years, we look at the disk failure rate, assume that each of our redundant copies is independent enough, and extrapolate.
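The commenter’s mileage figures can be sanity-checked with a quick sketch. Both rates below are rough order-of-magnitude assumptions, not measured statistics:

```python
# Order-of-magnitude assumptions only: roughly one fatality per 100
# million vehicle-miles, and (assumed) one reportable crash per
# 500,000 miles. Neither figure comes from a real dataset.
fatality_rate = 1 / 100_000_000
crash_rate = 1 / 500_000

# You want several observed events before a rate comparison means much.
events_wanted = 10

miles_to_measure_deaths = events_wanted / fatality_rate    # ~1 billion
miles_to_measure_crashes = events_wanted / crash_rate      # ~5 million

print(miles_to_measure_deaths, miles_to_measure_crashes)
```

Merely observing events already takes on the order of a billion miles; actually demonstrating a *reduction* relative to the human baseline takes another order of magnitude or two, which is presumably where the 10-100 billion figure comes from.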

Unfortunately independence tends to break down at low probabilities. What if self-driving cars malfunction catastrophically every 100 million miles and result in near-certain death? What if all your data is stored in the northern US and Yellowstone erupts? What if you are using the same software system to host all of your backups and it malfunctions? What if there is a nuclear war? There are all sorts of potential problems and they only need to be true very rarely to blow up your model.

I think the Talebian view is to put an asterisk on any of these extrapolations or estimates based on models. I don’t think there is anything that can be done to do better; there simply isn’t enough data to draw fully meaningful conclusions.

The real issue with medicine here is that I don’t think there is an easy way to model these sorts of things and narrow the search space. I doubt we really understand all of the possible interactions that we are playing with.

Even if there were a way to model this perfectly, I think the hard thing is to properly assess the trade-off.

For instance, a drug is being studied that seems to have an X probability of curing [insert your favorite disease], but there’s a biomarker elevation in the clinical trials which has a Y probability of causing a Z% increase in myocardial infarction incidence. Do you approve it? Seems like it depends on the utility function of curing the disease vs having an extra myocardial infarction. But the utility function differs depending on who you ask. For the folks at the FDA, who may be called in by Congress to answer “why did you approve of this drug that has caused heart attacks in Americans”, myocardial infarction may carry such a large negative utility as to overwhelm everything else. For the patient who is suffering from the disease with very few options, the utility function may be rather different.

Unfortunately independence tends to break down at low probabilities. What if self-driving cars malfunction catastrophically every 100 million miles and result in near-certain death? What if all your data is stored in the northern US and Yellowstone erupts? What if you are using the same software system to host all of your backups and it malfunctions? What if there is a nuclear war? There are all sorts of potential problems and they only need to be true very rarely to blow up your model.

This seems really close to Nassim Nicholas Taleb’s “Extremistan vs Mediocristan” idea. Although I’ve only been exposed to the idea from watching Taleb talks and reading him on blogs (haven’t read The Black Swan, at least not yet).

This is a great essay, and one I’ll be linking to since I’ve seen people point to this study in multiple places already. (Well, to news articles overhyping the study…)

One point which I’m sure you know, but didn’t make explicitly, is that it’s worth differentiating the three(?) forms of safety communication. One outlines a known-and-accepted risk, like warfarin. These are probably more-or-less irrelevant – good for reminding doctors of things, but not a useful part of a drug-approval discussion. Another describes a general-and-new risk, like Ritalin and heart disease (maybe). This seems significant and hard to solve – it’s low-odds, but a broad risk factor that makes a drug less positive than previously thought. The third describes a population-specific risk, like Tegretol. This is perhaps most significant (to the people it affects), but doesn’t lower the general-population value of the drug – it only changes the NNT/NNH equation for that population. As far as I can see, only the second of those three categories represents a general mis-evaluation of a drug.

At best, you can prove that the drug is reasonably safe for the vast majority of people. Then later you find something that only happens […] to Han Chinese

Ok, 82% might normally be called “the vast majority” of whatever, but I’d kind of hope that even a normal drug trial could pick up on a drug that killed 18% of people. I want “the vast majority” in terms of drug trials to mean something safer than “you have no more than a one in five chance of dying”.

This drug gives you a 1 in 5 chance of dying in the next 4 months, and a 4 in 5 chance of living to see your kid graduate college in 5 years.

A friend of mine’s wife died because none of the chemo she was given did more than slow her cancer down a little bit. Her son was 16 at the time. I’m sure she would have been willing to risk dying a year early for the chance to live 5 more years.

There’s a difference between “the FDA should be less conservative for drugs that treat deadly diseases for which no good treatment already exists” and “the FDA should be less conservative for all drugs”.

Yes, I know there’s one exemption that a lot of HIV drugs got through. But is it sensitive enough to pick up things like Antistotle’s friend’s wife’s cancer, where chemotherapy might work in theory but didn’t in practice?

The Isakov, Lo, and Montazerhodjat paper Scott links in the second-to-last paragraph actually argues for such an approach (rather than less conservative generally) and provides a metric for it. Lung cancer is way towards the “should be less conservative” side (way more than HIV, in fact, though presumably HIV would have been higher before the treatments for it were invented and fast-tracked through the exemption you mentioned).

Lung cancer is extremely, excruciatingly, fucking awfully bad, and to those who like to estimate the risk of getting it from smoking and decide they’ll take the gamble, all I have to say is “Here’s a cliff, jump off it, I guarantee the end result will be the same and this is way quicker and less painful and less degrading”.

That kids (teenagers to young twenties) are now finding smoking tobacco to be cool and romantic and rebel-chic makes me want to tear my hair out >:-(

So that I don’t have to repeat everything, let me quote from my comments below:

I’m going out on a limb here and guessing that *most* new drugs are for stuff we can’t already treat, or that we can’t treat well. If you’re going to spend millions developing a new drug, then at least 50 million on a clinical trial, and the truckloads of documentation for the FDA, you’re not going to waste it on a drug that is only > < that much better. Instead you’re going to go for “we cured Metastatic Breast Cancer” or whatever.

Which means that a LOT of those drugs were going to people who REALLY needed something and the current somethings weren't doing it.

(more detail below, but that’s the gist).

I’m a really YUGE fan of putting the decisions in the hands of people who have the most stake in it.

Yes, I realize this. But I find it very jarring when someone can blithely write that the most numerous population in the world is so small that it shouldn’t even show up in moderately-sized trials. You could combine the populations of the US, Canada, Mexico, Australia, and Europe, including Han populations there, and they would all be (slightly) outnumbered by the Han Chinese just living in China and Taiwan (which is, to be fair, almost all of them). If your study didn’t pick up on a not-drastically-rare effect in Han Chinese, it’s not because it would be difficult to scrape up enough of them to see the effect (and it’s certainly not because you didn’t consider that they might be meaningfully medically different from whites — this is much more likely than a special effect in people taking antidepressants), it’s because you decided in advance that you weren’t interested in looking for an effect in Han Chinese.

So yeah, my comment was mostly motivated by the inclusion of “being Han Chinese”, an extremely common condition known to be very likely to have special medical interactions, in a list of random whacko categories that you’d have to be crazy to plan for in advance. It doesn’t belong.

To be fair, “unexpected side-effects in particular Han Chinese who carry these alleles” is a case of Elderly Hispanic Women syndrome, and it’s a little harsh to jump all over a Western-based manufacturer for not anticipating that a drug might have weird effects on one particular sub-section of a population, even granted that that sub-section is a very sizeable grouping.

How many Chinese drug researchers are contemplating “but will our new wonder drug we hope to market in Europe and the USA have a bad side-effect on the Irish, and those of Irish descent, who as a population have a higher-than-usual-for-white-people level of haemochromatosis”?

I see what you’re getting at though, Michael Watts, in saying that race, especially of large populations, should be part of the prespecified special population categories. But that really depends on where you live. In the US, for instance, black is a prespecified special population category in clinical trials, but you’d be stupid to include that if you did the study in China. If the company was trying to apply for a license in China, you bet the China Food and Drug Administration would ask for studies in Han Chinese.

…it’s because you decided in advance that you weren’t interested in looking for an effect in Han Chinese.

Seriously? You believe people sat around in a meeting and said “we’re not interested in looking for an effect in Han Chinese”?

The simple answer I think is that, as you note, most Han Chinese people live in China and Taiwan. Meanwhile drug development companies are mostly in Western countries where there aren’t that many Chinese, and the drug companies’ staff already have an awfully large number of problems to keep track of when doing a trial.

Not to mention, if you’re a Western drug developer and you want to scrape up enough Chinese to do a test, that means you’re probably going to be running that test in China (or Taiwan), requiring a whole new level of complexity (e.g. translating everything) and negotiating an unfamiliar business environment.

Scott also mentions red-haired people and people on antidepressants, so he isn’t talking exclusively about super crazy edge cases. Also, the drug didn’t have a 100% kill rate on the Chinese, so your one-in-five comment seems hyperbolic.

Could be that the FDA is too restrictive when you only take lives saved into account, but that lives taken by over-permissiveness are worse than lives taken by over-restriction. Cynical gut answer: the FDA gets blamed less when someone dies because of its over-restriction. Over-restriction also probably preserves people’s trust in medicine, whereas over-permissiveness erodes it. There are some externalities to this.

25 out of 30 were overly conservative, but that’s 25 out of the top 30 causes of death in the US, which is presumably a list that’s disproportionately likely to have deadly diseases on it. If you want to calibrate your expectations another way, the disease that’s nearest to the FDA status quo is HIV, so you might think of it as “overly conservative for diseases that are at least as deadly as HIV and overly aggressive for diseases that are less deadly than HIV”.

Part of the problem is that, as a state agency, the FDA is liable to lawsuits from grieving families who accuse them of “your carelessness in licencing this drug meant that we were deprived of the last few precious months we had with our family member who died sooner than need be due to the bad side-effects” and that it is no defence to say “Well, they were gonna die in eighteen months’ time anyway”.

It’s not the FDA that has sovereign immunity, it is the United States Government. And the United States Government has decided as a matter of law not to exercise sovereign immunity in a broad range of cases, and the FDA doesn’t have an independent right to say “No, we’re still immune, nyah nyah nyah!”

Hence, the FDA can be and sometimes is sued by grieving families, etc, for being insufficiently vigilant in its regulations. They almost always win, but arguably they could start losing if they adopted a stance as permissive as some here would prefer.

222 drugs approved by the FDA in the last ten years, only 3 (1.3%) were taken off the market.

That’s *it*?

I’m going out on a limb here and guessing that *most* new drugs are for stuff we can’t already treat, or that we can’t treat well. If you’re going to spend millions developing a new drug, then at least 50 million on a clinical trial, and the truckloads of documentation for the FDA, you’re not going to waste it on a drug that is only > < that much better. Instead you’re going to go for “we cured Metastatic Breast Cancer” or whatever.

Which means that a LOT of those drugs were going to people who REALLY needed something and the current somethings weren't doing it.

So how many more people would have been helped if we'd gotten 444 new drugs in that ten years?

How many more would have been saved? How many more productive hours would we have gotten?

Maybe instead of tightening up the FDA, we loosen its standards a bit. Allow it to tag drugs as "Provisional" and require they have a Neon Zombie Green label that says "This drug probably works sometimes, and maybe once in 500,000 times someone gets a shriveled testicle from it. Or dies." Then let the patients and doctors sort it out.

I'm not really in favor of the whole "right to try" legislation because that's being driven by both fear and scam artists. But I really think that if there's a drug that's been developed by reasonable rational scientists (of course, I only know three people in that industry: one is a radical organic vegan goth, one calls himself Merlin, and the third is an Egyptian Copt who married a cousin of mine. This makes the guy calling himself Merlin the *sane* one.) and gone through a raft of scientific studies and clinical trials[1], then maybe we admit as a society that it ain't all bunnies and unicorns out there, and sometimes there's tradeoffs.

I've been in "chronic pain" since the late 80s. Mostly knees, then back, then that sort of migrated to my feet. Now it's elbows and most recently wrists. Most of it is physical damage from trying to do interesting things, but at 50 most of it's not going to get better. Of course, it might help if I quit playing with guns and swords, riding bicycles, and letting people throw me to the mat. But that's boring, and boring is already death.

Would I trade say the next 18 years of being pain free for two years off the end of my life? I might be willing to make that trade. At the rate I'm progressing I'm almost certain I'll take that trade by the time I'm 60–even if it's down to "10 years of pain free for 2 years off the end".

So yeah, maybe the FDA is a little too risk averse, or a little too uncreative about managing that risk.

[1] And yes, I know "Big Pharma" likes to fudge and bias their results. If they're guilty of fraud hit them with the corporate death penalty and open source their patents. Make that stick twice and you'll get better behavior.

I’m not at all convinced that’s true. I’m guessing too based on limited personal experience, but here’s my anecdote.

I’ve had Crohn’s Disease for ~35 years. For 15 years I’ve been taking Asacol, which is a mesalamine drug packaged to release in the colon. One day the doctor told me he couldn’t renew my prescription because the drug was withdrawn from the market – something about the enteric coating that protects it until it gets to the colon being maybe bad for pregnant women (something which I’ll never be).

The replacement is something called Delzicol. This is also a mesalamine drug – the same exact active ingredient. The only difference from Asacol is that the packaging is different, so as not to possibly endanger unborn fetuses.

Oh, there are two other differences: $800/month, and a reset clock on the patent expiration.

I can’t help but think it’s no coincidence that the manufacturer decided to wait until just before the patent expired to pull the old drug and replace it with something that would give them a freshly-protected monopoly — without having to actually invent a new drug at all.

In my personal case, there’s actually a happy ending (so far) that might be worth keeping in mind for everyone. It seems that when pharma companies put out these “new” super-expensive drugs, they need to get doctors used to prescribing them. In some cases they set up marketing programs that you only need to call them and ask about (i.e., they’re not need-based or anything) where they’ll write off any of the cost above what your insurance is willing to pay. Apparently this isn’t unusual, as it’s the second drug that I’ve gotten such benefits for.

The biggest takeaway here for me was the casually mentioned fact that the FDA only approved 222 drugs in the past 10 years. I feel like I’d have guessed something like a couple thousand prior to seeing that number. I’d always known intellectually that drugs with a niche market had a harder time of things than they ought to, but this really underscored for me that if you aren’t a Prozac or a Lipitor or a Viagra you can go the fuck home, because there’s only 222 slots in a decade’s worth of drugs.

Don’t put too much stock in this, because I’m not a subject expert and I could be misunderstanding figures, but apparently the FDA only gets somewhere between 30-40 applications for new drugs each year.

That number’s a lot lower than I would have thought, and I wonder how much of it is due to the difficulty of making new drugs versus the fact that a company knows they face a tough regulatory process and so only bring really strong candidate drugs.

The benefits of the free market and competition under the system of capitalism!

I imagine that, the American market being the largest single market in the West, even a slice of the pie in the USA is very valuable by comparison to setting up elsewhere. And if you are the lucky company that hits on the one drug that will treat [common disease] (EDIT: treat, not cure; you don’t necessarily want to cure it, but if yours is the best treatment you have a dominant market share of hundreds of thousands of patients for years), you are quids in.

Also very important to note is that package inserts are, in general, mostly legal documents. Even if a study of some side effect comes out negative, if there’s even a 5% chance the effect is real and it could cost you millions, that’s still a lot of money on average. So you just write it down anyway. This is also the reason why “it’s written in the insert” is not proof of anything.

“Then over hundreds of thousands of patients over the space of decades (nefazodone was out almost ten years before it got withdrawn) people collect records and see if there’s any unusual disorder that happens more often when people are on the drug.”

Scott, Serzone was withdrawn in Canada and the USA in 2004 because of hepatic toxicity. I understand generic versions were still available in the US after that, but not in Canada. I think this was because Bristol-Myers Squibb voluntarily withdrew it from the market, so the drug wasn’t banned and generics could still be sold. From what you write I’m guessing the FDA has now banned nefazodone itself? I’m in Canada and not up to speed with your market. If that’s the case, the question is why the FDA allowed a product to remain on sale when the manufacturer of the branded product, with the research investment to recoup, regarded it as too risky to continue?

BTW, if you haven’t yet read ‘Bad Pharma’ by Ben Goldacre, you really must. It’s not a paranoid rant against big pharma, but an exposition of the less than perfect nature of the industry we must work with.

(a) voluntary withdrawal is not the same as “we were forced to withdraw this after losing a court case*” or “evidence of thousands of patients turning purple and exploding after starting a course of this”. Voluntary withdrawal does not necessarily mean that there is merit to claims of risk:

*The combination of doxylamine and vitamin B6 was first introduced to the US market as Bendectin in 1956. At that time, Bendectin was a three-ingredient prescription medication. The third ingredient, dicyclomine, a Pregnancy Category B anticholinergic/antispasmodic, was omitted from the formulation starting in 1976 due to its lack of efficacy. Bendectin (doxylamine/vitamin B6) was voluntarily removed from the market in 1983 by its manufacturer, Merrell Dow Pharmaceuticals, following numerous lawsuits alleging that it caused birth defects, although an FDA panel concluded that no association between Bendectin and birth defects had been demonstrated. In litigation, Bendectin was supposed to cause all kinds of fetal malformations and problems, including limb and other musculoskeletal deformities, facial and brain damage, defects of the respiratory, gastrointestinal, cardiovascular and genito-urinary systems, blood disorders and cancer. The most famous case involving the drug is Daubert v. Merrell Dow Pharmaceuticals (1993). These suits were led by celebrity plaintiff attorney Melvin Belli. The star witness for the case against Bendectin, William McBride, was later found to have falsified research on teratogenic effects of the drug, and was struck off the medical register in Australia.

The Bendectin case, and the subsequent removal of the drug from the US market, has had a number of consequences. Firstly, there was an immediate increase in the rates of hospitalization for nausea and vomiting in pregnancy. Secondly, it created a treatment void in terms of having a safe medication that could be used for alleviating morning sickness in US pregnant women, a condition which, in the most severe form, called hyperemesis gravidarum, could be both life-threatening and cause women to terminate their pregnancy.

The lack of availability of a safe and effective drug for the treatment of nausea and vomiting of pregnancy resulted in the use of other, less studied drugs in pregnancy. Thirdly, it has been claimed that subsequent to the Bendectin experience, drug companies stayed away from developing medications for pregnant patients. As a result, only a few medications were approved by the FDA for obstetrical indications in the past several decades. Lastly, the perception that all medications are teratogenic increased among pregnant women and healthcare professionals. The unfounded fear of using medications during pregnancy has precluded many women from receiving the treatment they require. Leaving medical conditions untreated during pregnancy can result in adverse pregnancy outcomes or significant morbidity for both the mother and baby. Ongoing education of physicians and the general public has resulted in improvements in the perception of medication use in pregnancy; however, further advances are required to overcome the devastating effects of the Bendectin saga.

So if the FDA puts a ban on a voluntarily-withdrawn drug, there’s a definite chance other pharma companies are going to drop research into drugs for that condition like a hot potato, leaving patients worse off.

(b) the yelling from people who will pop up and claim “this drug is the only one out of the scores we’ve tried that works for me/my husband/our cute moppet child and if you ban it we’ll start a campaign and put pressure on our congressperson to get it unbanned”, complete with sympathetic media stories featuring heartstring-tugging pictures of Cute Tousled-Haired Moppet and “why oh why do the heartless bureaucrats want this adorable child to suffer” copy.

No, I think it’s still available as generic in the US. I don’t think it was ever banned, just voluntarily withdrawn, which counts as a “drug withdrawal”.

It’s still available because there are some people for whom it’s the only antidepressant who works, who are happy taking the small liver risk, and who demand the right to continue taking it. This is a tiny demographic and it’s almost never prescribed for new people.

When I read the title of this post, for a moment I thought this was going to refer to all those updates that go “And after installation, do you mind if we scrape every single particle of private data forever and ever whenever you use your computer or device and send it back to our company where we will use it in vaguely unspecified ways that probably involve trying to monetise it and target you with crap ads?”, and Scott was arguing this was fine and no big deal and not a pain in the backside.

Then I saw it was about drugs and that was a relief 🙂

(1) Doesn’t the “mixing this antibiotic with eating bananas and sauerkraut could kill you” and the “whoops, we had no idea this affected Han Chinese only like this” cut against your Elderly Hispanic Woman argument? That is, sometimes an idiosyncratic effect in one person is an indicator of something damn serious/really great for that specific type of person and if you ignore it on the basis of “lightning can strike once”, you’re causing problems for yourself in the future?

(2) I’d be less dismissive of “the FDA is the Ministry of the Bleedin’ Obvious when it comes to telling doctors what they already know about drugs that have been around for fifty years”. Yes, anecdotes are not data, but I’ve had experience in my family of doctors having no bloody idea about possible drug interactions when prescribing stuff, and of doctors having to look a drug up in their pharmacopoeia in front of me when talking about prescribing it (fair enough, they can’t be expected to know every drug on the market). There was one particularly bad incident with my father, when he was switched from the medication he was taking to a different one which caused him, later on the night of the day he first took it, a heart rate so severely elevated that he thought he was having a heart attack (and he had long-standing mitral valve problems). So I think the FDA putting “don’t mix this with that” on the boxes is a good idea in general. Because never mind the patients who go “I had no idea I wasn’t supposed to get blind drunk/indulge in Harmless Recreational Drugs on this medication!”, there are doctors out there who will go “I had no idea warfarin might be a problem if my patient, who works around Sharp Bladed Machinery, cut themselves badly”.

Basically, this post makes me think we’ll never get the Miracle Individually Tailored Drugs that naive optimism about progress likes to invoke, because the more you find out about how the body and drugs work, the more you discover people are not a predictable mass of uniform units; there is so much variation and there are so many idiosyncratic reactions.

So the FDA may have genuine clunkiness problems but in general it does do a good job of regulating drugs!

Doesn’t the “mixing this antibiotic with eating bananas and sauerkraut could kill you” and the “whoops, we had no idea this affected Han Chinese only like this” cut against your Elderly Hispanic Woman argument?

Not really. Elderly Hispanic Woman could equally be called the Green Jelly Bean Effect from the relevant xkcd. To some extent, the Elderly Hispanic Woman effect is why this stuff is so hard to pick up. Everyone knows that if you run 20 tests at p<0.05, you'll get one hit by accident. In a world where you have exactly 20 potential effects, you'll need to test each one to p<0.0025 to be sure at p<0.05 that you don't have any bad ones. Now, in the real world, where there are thousands…
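To make the arithmetic in that comment concrete, here is a minimal sketch (my own illustration, not from the comment) of the multiple-comparisons problem it describes: the chance of at least one spurious hit across many independent tests, and the much stricter per-test threshold needed to keep the overall error near 0.05.

```python
# Family-wise error rate when running many independent tests at alpha = 0.05,
# i.e. the "Green Jelly Bean" problem described above.

def familywise_error(alpha, n_tests):
    """Probability of at least one false positive across n independent tests."""
    return 1 - (1 - alpha) ** n_tests

# With 20 tests at p < 0.05, a spurious hit is more likely than not:
fwer_20 = familywise_error(0.05, 20)   # ~0.64

# To keep the overall error at 0.05, each individual test must clear a
# much stricter threshold (the exact Sidak value; the simple Bonferroni
# bound 0.05/20 = 0.0025 quoted in the comment is nearly identical):
per_test = 1 - (1 - 0.05) ** (1 / 20)  # ~0.00256

# With thousands of potential effects, the threshold becomes tiny:
per_test_1000 = 0.05 / 1000            # Bonferroni bound: 0.00005
```

This is why a rare side effect hiding among thousands of possible adverse events is so hard to confirm statistically before approval.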

When I read the title of this post, for a moment I thought this was going to refer to all those updates that go “And after installation, do you mind if we scrape every single particle of private data forever and ever whenever you use your computer or device and send it back to our company where we will use it in vaguely unspecified ways that probably involve trying to monetise it and target you with crap ads?”, and Scott was arguing this was fine and no big deal and not a pain in the backside.

and send it back to our company where we will use it in vaguely unspecified ways that probably involve trying to monetise it and target you with crap ads?

At a considerable tangent, I am puzzled by this general concern. Companies want to target me with ads for things I am likely to buy–that’s the whole point of targeting. If companies have good information on what I am likely to buy, I am more likely to receive ads for things I might want, less likely to get “crap ads” for things of no interest to me.

So why do so many people see this as a problem? I can see other problems with other people getting information about me, but not that one.

It works like the AI convincing you to let it out of the box–ad companies are good at convincing you to buy things that you otherwise don’t want to buy–basically, they’re hacking your brain through software. I don’t want to end up buying something because ad companies took advantage of flaws in my wetware.

Furthermore, if companies can target you in detail, they can raise the price to the maximum you’re willing to pay and capture all the consumer surplus.
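The surplus-capture claim above can be illustrated with a toy sketch (entirely made-up numbers, just to show the mechanism): under a single posted price, buyers who value the good more than the price keep the difference; under perfect price discrimination, the seller charges each buyer exactly their maximum and that difference goes to zero.

```python
# Toy illustration of the consumer-surplus argument above.
# Five hypothetical buyers, each with a maximum willingness to pay.
willingness_to_pay = [10, 8, 6, 4, 2]
cost = 1  # seller's per-unit cost (assumed)

def uniform_profit(price, wtp, cost):
    """Profit at a single posted price: everyone valuing the good at
    or above the price buys one unit."""
    buyers = [w for w in wtp if w >= price]
    return len(buyers) * (price - cost)

# Uniform pricing: the seller picks the profit-maximizing posted price.
best_price = max(willingness_to_pay,
                 key=lambda p: uniform_profit(p, willingness_to_pay, cost))
uniform = uniform_profit(best_price, willingness_to_pay, cost)

# Buyers who pay less than their maximum keep the difference.
consumer_surplus = sum(w - best_price
                       for w in willingness_to_pay if w >= best_price)

# Perfect price discrimination: charge each buyer exactly their maximum.
# Consumer surplus falls to zero; the seller captures all of it.
discriminated = sum(w - cost for w in willingness_to_pay)
```

With these numbers the posted price ends up at 6 (profit 15, consumer surplus 6), while perfect discrimination yields profit 25 and no consumer surplus at all.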

They can also create a situation where you can only avoid being charged extra at the cost of an endless stream of trivial inconveniences, which ends up being deadweight loss. Imagine that you’re a Star Trek fan, and statistics show that Star Trek fans are more likely to default on their loans unless they submit 10 pages of documentation, which reduces their risk to that of an average person. You end up having to submit 10 pages of documentation just to be treated like you would if the company couldn’t make that analysis.

If it were “things I am likely to buy”, I wouldn’t mind. But it’s not “things I am likely to buy”, it’s “you clicked once on a link to this subject so forever and ever we are going to try and sell you fitted kitchens/world cruises/ex-Soviet army tanks” and no, I don’t want that 🙂

Mainly, it’s the intrusiveness of the advertisements, as companies get more desperate for clicks and views. This year is the first time I ever installed an ad-blocker and I was driven to it by one particular website where my options were either install an ad-blocker or never visit the site again, and the content there meant I did want to keep visiting and using that site.

Ads are no longer side-bar and/or top of screen; they’re shoved in between posts, they follow you down the page, they keep putting in sponsored content tailored to look like posts (the equivalent of newspaper advertorials or ‘this may look like an independent feature on holidays in Greece but it’s actually a puff-piece paid for by a travel agency that paid to fly out and put up in one of their hotels our reporter’ lifestyle articles). I know it’s how companies raise revenue, but ironically by making their ads more frequent and more intrusive and more ‘tailored to your interests’ (except they’re not), they forced me to switch off all ads completely by the ad-blocker, so they’ve shot themselves in the foot.

It’s not only that, it’s that you’ll get full-motion video ads that suck down bandwidth, or are coded like garbage and slow down the browser. Or, just autoplaying audio, though now at least the tabs show if they’re playing audio so at least you can find it now.

Also, since the ads often get Frankensteined together from a bunch of different places, you’ve got a larger chance of getting malware slipped in by the ads, even on nominally trustworthy sites.

I try to support sites I like with Patreon or donations, but I don’t turn off adblockers anymore.

Well, this just means their software is not that good yet 🙂 Maybe they could even try a direct approach, sort of like “what sort of advertisement would you like to see”. I guess it is so obvious that sites like Facebook have to be doing it already. Letting the users decide which ads they want to see seems very simple and cheap – you don’t need to collect huge amounts of data and have your (often rather bad) algorithms figure out what the people want, you just ask.

I let a couple of e-shops send me their newsletters (and Google’s e-mail handily sorts these out from regular mail, so it does not get too cluttered… or it would not, if I didn’t read my e-mails in Thunderbird, which puts it all back together), since I am actually interested in their offers (at least sometimes). I also find that Amazon’s “if you liked this you might like that” algorithm works quite well. I actually bought one or two books I did not know about based on its suggestions.

I think people are “creeped out” by for example Google using the information that they just purchased a flight ticket to Greece to offer them Greece-related ads. It feels like Google spying on your private conversations. Sure, very likely nobody actually reads those and they just search for keywords etc. but it does feel like a violation of privacy.

I am very happy there are instant messaging apps today which use end-to-end encryption (WhatsApp, Viber). I tried installing Facebook Messenger on my phone once, but when I noticed the ridiculous amount of things it wants access to, I stopped the installation.

“I see your phone is almost out of charge and it’s night time in the inner city. Your next Uber will cost $60 per mile.”

“I see your house is on fire. This fire extinguisher will cost $8,000 and an unbreakable lifetime 60-year subscription to Amazon Prime.”

Imagine there was a little hobgoblin on your shoulder that knew everything about you and that shouted to the world exactly what the maximum price was that you were willing to pay for a given good or service as you were shopping, so people can price at that level minus one penny.

Behavioral software can be immensely powerful at being that little hobgoblin. Software that is nominally my friend could be selling that data to merchants.

In some ways consumers today are spoiled by the “see a price at the store, pay it” model, which is the historical aberration from one-on-one negotiation. The thing is that sellers have never had the kind of insight into me as a consumer that they do now that the hobgoblin software has come around. What can I trust in my life? Do I need to write out scribbled notes to my wife that we’re nearly out of batteries so that Alexa doesn’t hear?

I’m not 100% sold on everything I’ve said here, and there are ways that consumers can band together and fight back, but I can see why people are worried about where this is going. Please drink a verification can.

“I see your phone is almost out of charge and it’s night time in the inner city. Your next Uber will cost $60 per mile.”

On the other hand, when the tables are flipped and we see a lot of consumer surplus and little producer surplus (there’s a bus route going where I want to go, gas is cheap, my next Uber had better cost 50 cents per mile or I won’t pay), we don’t see that as some horrible evil being perpetrated on poor, innocent Uber, now do we.

Your argument seems premised on the notion that producer surplus is inherently evil but consumer surplus is inherently good. And information works both ways. When there’s more competition and people are more aware of the competition, their bargaining position becomes stronger, too.

100% with David on this one. I LOVE targeted advertising. I see it as a huge boon for all of humanity. It has directly improved my life, and I’m looking forward to technology ushering in an age where my (hypothetical) son will be able to go his entire life without seeing a single ad for feminine hygiene products.

Mass market advertising is so grossly inefficient it’s almost scandalous. It essentially requires the same “double coincidence of wants” that caused the barter system to be replaced by currency. Budweiser pays for access to everyone watching show X, even though some percentage of them don’t drink – which is a waste of money for Budweiser. Meanwhile, you are forced to watch ads for a product you will not possibly consume, which is a waste of YOUR time. It’s a huge waste of resources on both ends, and any way we can solve it is definitely going to add a ton of value to society.

Most of the arguments against it I see (including here) seem to be some version of “muh privacy!” Get over yourselves. Nobody cares about you personally. They just want to make sure their precious advertising budget is being used on people who might actually want their product. And yes, “you clicked a relevant link once” is (for now), the BEST signal we have that you might want the relevant product. It’s certainly better than “our studies show that the average NFL fan likes beer therefore we should advertise our beer during NFL games even though we know a good 20% of the audience will almost certainly be non-beer drinkers”

Wait a minute. Not to be needlessly contrarian, but if there are only twenty-or-so new drugs approved per year, maybe we actually should expect doctors to have an encyclopedic knowledge of the ones they prescribe. These are after all people who got through pre-med o-chem classes, where they were mercilessly weeded out based on their ability to remember and regurgitate barely-systematic data with lots of idiosyncratic exceptions. Alternatively, we should expect them to (discreetly) Google new stuff before they prescribe it. I get that human memory is frail and fallible, but that level of detail mastery doesn’t feel much different than what academics, lawyers, business analysts or software developers do every day.

I think often it’s “wait a minute, the drug rep was round the other day with some new stuff for this, what’s it called again?” and they look it up to be sure it won’t turn you purple and make you explode if they prescribe it along with what you’re already taking 🙂

There is still a heated debate about this, with some recent review articles seeming to confirm, and other people pointing out that, when the FDA warning discouraged people from giving antidepressants to teens, teen suicide attempts suddenly went way up.

Wait… what?

I mean, sometimes we just plain have contradictory data. But usually a) there aren’t big meta-studies/systematic reviews / large-scale surveying / otherwise hard-to-deny evidence on both sides; and/or b), one could at least come up with a just-so story to explain why there’s an apparent discrepancy.

Perhaps the distinction between long-term and short-term effects? I seem to recall Scott (?) saying something about people who have only just gone on anti-depressants having a temporarily higher suicide risk, because the anti-depressants give them the energy to go through with it. Looking at teen suicide statistics would show the (loss of the) overall improvement, but depending on how you controlled for the fact that only depressed people are given anti-depressants, a direct study might see the short-term risk instead.

From a utilitarian perspective, a life lost because of a lack of treatment should be as bad as a life lost due to an unforeseen side effect. But could one argue that ill consequences from inaction are somehow better than the same outcome from action? I find that I hold this view when it comes to armed intervention in conflicts and the like (e.g. NATO in Afghanistan): unless we have a clear objective, and a pretty good idea on how to achieve it, I much prefer we don’t intervene – even if it means standing by and watching an unfolding catastrophe. Somehow, I feel more responsible for a decision to act, than for a decision not to (or for indecision).

Similarly, I think it can be easier to come to terms with suffering from a disease for which there is no cure than suffering from undocumented side effects. Perhaps this is a matter of having someone to blame? Or is it that the potential results of hypothetical action are much more abstract – the results of an armed intervention or more liberal pharmaceutical research are much less certain than the observed consequences of an action that was taken.

In the end, I think it is natural to err on the side of inaction, but perhaps not rational (except in the sense of self-interest for pharma and the FDA, who will get blamed more for unintended side effects than for a lack of new drugs). Is this yet another kind of cognitive bias?

But could one argue that ill consequences from inaction are somehow better than the same outcome from action?

It’s not clear why regulation (which prevents some treatment from being used) should be considered “inaction” on the FDA’s part, even as it causes inaction (or some other less optimal action) on the part of medical professionals.

But could one argue that ill consequences from inaction are somehow better than the same outcome from action? I find that I hold this view when it comes to armed intervention in conflicts and the like (e.g. NATO in Afghanistan)…

The issue with this analogy is that in foreign intervention, there’s actually a huge risk you make the situation much, much worse.

That’s sort of an issue with some disease treatments/medications, but certainly not all. Hence the popularity of “freedom to try” bills where if you have a terminal condition and have not responded to existing treatments, you are allowed to try non-FDA approved stuff. The idea being “we can’t possibly make things worse.”

There’s also the issue of informed consent. It’s not as if we had a popular vote in Iraq and 90% of people said “Yes, we want America to invade!” whereas with medicine, the patient typically WANTS to take the drug. Sure, maybe you can insist they don’t REALLY understand the risks and they aren’t REALLY adequately informed, but at some point, paternalism has to end…
