Psychiatric medications, science, marketing, psychiatry in general, and occasionally clinical psychology. Questioning the role of key opinion leaders and the use of "science" to promote commercial ends rather than the needs of people with mental health concerns.

Wednesday, March 26, 2008

Dr. John Kelsoe and Kurt May have fired the warning shot: Genetic testing for mental disorders is on its way. Like much else in the mental health field, I fear that marketing may yet again trump science. Kelsoe and May's new test is out, and it claims to assess the risk for bipolar disorder (sort of) for a fee of $399.

Both Furious Seasons and Daniel Carlat have already opined wisely on the topic. The first issue is the science behind such testing -- if the science does not support the validity of the test in determining if someone actually has a mental disorder, then the test is a sham. So what does the science say? According to an article in Science, one genetic variant used in the test was associated with a tripling of risk for bipolar disorder. The catch: The variant was only found in 3% of individuals with bipolar disorder and 1% of the people without bipolar disorder. A genetic variant that is only possessed by 3% of people with bipolar can hardly be considered widely useful. A combination of five variants in another study was found in 15% of individuals with bipolar disorder compared to 5% of those without the condition. As I understand it, the current test, as put forth by Kelsoe and May through the company Psynomics, tests for a combination of the previously mentioned variants. Again, the set of variants they are using are not very common even among people with bipolar disorder. So even if you are bipolar, the odds are high that this test would not label you as such. In the world of testing, this is called low sensitivity, and a test with low sensitivity is nothing to cheer about.
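A quick Bayes'-rule sketch makes the sensitivity problem concrete. The 3% and 1% figures below are the ones reported in the Science article; the 2% population prevalence is my own illustrative assumption, not a number from any of the studies.

```python
# Rough screening math for a Psynomics-style variant test.
# Sensitivity/specificity come from the figures reported above;
# prevalence is an ASSUMED, illustrative value.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disorder | positive test), by Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

sensitivity = 0.03   # 3% of people with bipolar disorder carry the variant
specificity = 0.99   # 1% of people without the disorder carry it
prevalence = 0.02    # assumed population prevalence, illustration only

ppv = positive_predictive_value(sensitivity, specificity, prevalence)
print(f"Cases the test misses: {1 - sensitivity:.0%}")
print(f"P(bipolar | positive result): {ppv:.1%}")
```

With these numbers, 97% of actual cases test negative, and even a positive result leaves the probability of bipolar disorder in the single digits.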

Additionally, according to the Science piece, other researchers were unable to replicate Kelsoe's findings, making the test yet more questionable.

The thing about bipolar disorder is that it can be diagnosed by (drum roll please)... interviewing a patient thoroughly! That's right, a well-trained interviewer can simply ask questions to determine whether an individual has bipolar disorder. Imagine that. There is often a hullabaloo made over patients with bipolar disorder being initially misdiagnosed as depressed -- the way to solve this problem is not to perform a fairly useless genetic test, but rather to actually spend time with patients, perform a thorough assessment, and listen to them. How's that for a wild idea? If your response is: "But there's no time to actually talk with the patients," then no cookie for you! It is likely true that many people later diagnosed with bipolar were initially seen in primary care settings for a brief appointment, in which they were diagnosed as depressed (the underlying bipolar piece was missed). Again, giving a scientifically dubious test because "Gee, it's based on genetics so it has to be accurate" rather than training physicians to improve interviewing skills will only worsen the problem.

When I have more time, I will post again on the topic. This idea of genetic testing for mental disorders certainly needs much more attention. When academics go into marketing, strange things can happen, as I have documented here on many occasions.

Tuesday, March 18, 2008

Since the Zyprexa trial is ongoing in Alaska, I thought I should return to the wonderful world of Zyprexa. I encourage readers to follow the Zyprexa coverage at Furious Seasons, Pharmalot, and PharmaGossip. Today, I will discuss the link between key opinion leaders and the marketing of Zyprexa. To preface, a coveted Golden Goblet Nomination could be handed out to several individuals based on their involvement in Zyprexa marketing...

In March 2000, Zyprexa received FDA approval for treatment of manic episodes. One document laid out the multipronged marketing maneuvers that Lilly utilized to move Zyprexa shortly after its approval. Some of the details of this document have been well-covered in a terrific piece of investigative journalism at Furious Seasons. This post will provide some coverage of the link between Zyprexa and the key opinion leaders who helped popularize the drug across the nation.

Once approved for bipolar disorder, Lilly utilized several tactics to market Zyprexa for the condition, including a satellite conference beamed to about 6,000 physicians and 8,000 treatment team members in 1,000 facilities. The faculty providing this educational service included many of the big names in academic psychiatry, including Paul Keck, Jan Fawcett, Hagop Akiskal, and Alan Schatzberg.

Lilly also bankrolled dinner meetings, anticipated to draw 150-400 physicians per sitting. Dr. Schatzberg was also listed as a speaker for such dinners. One mental health service provider was so impressed with receiving such excellent medical education that he listed it on his CV.

In the document outlining Zyprexa's big marketing launch, Paul Keck's name appears in the following contexts:

Satellite symposium provider

Trainer of "local speakers." I believe this means he would train local physicians in various markets to then discuss Zyprexa with their colleagues.

Faculty for bipolar weekend symposia

Faculty for audio conferences

Faculty for a satellite CME workshop

Faculty for "dissemination of Bipolar information to 30,000 customers"

Faculty on a "closed symposium" resulting in a CME newsletter and a CME audiotape, both of which were mailed to 30,000 individuals

Author of two journal supplement articles

Paul Keck was also a member of a task force chartered by the American Psychiatric Association that served to revise the organization's guidelines to provide a more favorable view of atypical antipsychotics (including Zyprexa) in the treatment of bipolar disorder. No conflict of interest there, eh?

"Often," Keck said, "patients with bipolar disorder require complex treatment regimens to manage all phases of their illness, creating a compliance challenge for patients and a management challenge for clinicians. These studies suggest that physicians may be able to use olanzapine as a foundation to simplify patients' treatment regimens, and the combination of olanzapine and fluoxetine could be an effective treatment choice."

It is likely that Keck was not performing all of his "educational" functions for Lilly in exchange for lollipops. He was likely receiving a healthy dose of cold, hard cash. Yet in the article, nothing is written about his financial links to Lilly. Keck has also appeared in press releases saying nice things about Symbyax (fluoxetine/olanzapine combination).

To be fair, Keck has also stumped for Pfizer's Geodon in press releases. Oh, and he also said nice things about Abilify in a press release. I suppose that if one is going to be a true key opinion leader, a real mover and shaker, one should be prepared to say nice things about whatever new drug is released, since each new drug naturally represents an "important" treatment option. Keck, like Alan Schatzberg and Charles Nemeroff, is also currently listed as a member of the clinical advisory board for Neuroscience CME, a for-profit entity awash in drug industry money. Dr. Daniel Carlat has previously written that the "educational" content produced by this organization is biased, and I find that easy to believe. It's not hard to find examples of poorly done industry-funded CME. In fact, you might be interested in reading about a CME activity in which Nemeroff seems to have pulled data out of thin air.

In sum, the usual fun and games were in play when Zyprexa was initially being pushed for bipolar disorder. Some of the biggest names in psychiatry left their fingerprints all over the marketing of Zyprexa and one of these key opinion leaders recently won the presidential election for the American Psychiatric Association. I suppose, then, that American psychiatrists are generally either unaware of conflicts of interest or don't care about them.

The beautiful thing about being a key opinion leader is that one's name recognition is huge. Among psychiatrists, I bet that Schatzberg's name is better known than that of Bill Clinton, since Schatzberg's byline appears on journal supplements and CME so frequently. That can't hurt when running for president of the national professional organization. I will be very interested to see how Schatzberg handles questions about conflicts of interest and drug industry influence on his profession. Don't be expecting any major efforts at reform in the near future.

Monday, March 17, 2008

Note: This is a guest post from Susan Jacobs (see byline at bottom of post)

It is easy to point our fingers at greedy pharmaceutical companies when it comes to the rising costs of our prescription meds. However, the average citizen probably isn't aware of just how much these companies control our lives.

A perfect example of this control can be found on every other page in a leading medical journal. I'm speaking, of course, about the copious amounts of ad space.

It is easy for people to presume that the scientific evidence presented in various medical journals is based on unbiased information. Nothing could be further from the truth, unfortunately. Just as a network television channel strives to please their sponsors at the expense of a program's content, a medical journal that is filled with ads will always be at the mercy of its financial backers.

Just as pharmaceuticals fund studies and pay doctors to give lectures, so too do they buy journal ads and reprints of favorable articles—lots of them. Often a drug company may find one of its products featured in a scientific article while another of its products is dolled up in a high-gloss ad a few pages later. Yet the journals keep quiet about these financial arrangements.

So, just how much money is the integrity of a medical journal (not to mention the mental and physical well-being of its readers) worth a year? According to The Social Policy Research Institute, the New England Journal of Medicine receives approximately $18 million a year from pharmaceutical companies, while JAMA, the Journal of the American Medical Association, receives around $27 million.

It was the New England Journal of Medicine that brought the most attention to this problem in recent years, after publishing a favorable study of the "safe" drug, Vioxx. Of course, we now know just how wrong they were. (It is worth noting that 2 of the 13 people involved with that study were actually employees of Merck.)

When Boston Magazine's Karen Donovan questioned the Journal's editor about the Vioxx scandal, he replied, “I am not a person who wants to make more rules. I just want people to behave.” It is particularly frightening to think that this increasingly corrupt industry is being held to an honor system of sorts, particularly one that is so indelibly damaged.

By-line:

Susan Jacobs is a teacher, a freelance writer as well as a regular contributor for NOEDb, a site helping students obtain an online nursing degree. Susan invites your questions, comments and freelancing job inquiries at her email address susan.jacobs45@gmail.com.

If you have feedback for Susan, please post a comment and/or send her an email.

Friday, March 14, 2008

I won't pretend that I have a lot to contribute to discussion about the Spitzer prostitution "scandal." It seems to be the biggest crisis to hit the USA since 9/11 according to the bizarre media fascination with the event. On this blog as well as many other blogs to which I frequently link, much more important issues are discussed, yet the media typically chooses to ignore such issues in favor of covering stories that feature more T & A.

For example, I'd say the growing number of states suing Eli Lilly over its marketing of Zyprexa and its alleged coverup of the drug's risks is more newsworthy than two adults having consensual sexual relations. Yes, Spitzer is a gold-medal winning hypocrite who damaged his family. His behavior was immoral. Duh. But why should a crime that included so very few victims become such a blockbuster media sensation when there are problems of much grander scale occurring at the same time about which the public is essentially unaware?

For more on the incredibly strange logic involved in all of the Spitzer rigamarole, please check out Glenn Greenwald's smokin' hot post.

Thursday, March 13, 2008

About a year ago, I wrote a bit about the case of British psychologist Lisa Blakemore Brown. She was being prosecuted by the British Psychological Society (BPS) at the time regarding her alleged lack of fitness to practice psychology due to "paranoia." The best source of info on the topic comes from two spots: Aubrey Blumsohn's collected posts at the Scientific Misconduct Blog and a transcript (at Furious Seasons) of a hearing involving the allegations against Blakemore Brown.

It would make sense that a professional society such as the BPS would take action in cases of serious misconduct, such as sexual relations with clients, fraudulent billing practices, or other forms of exploiting one's clientele. It makes much less sense to prosecute an individual on trumped-up charges of mental illness, particularly when a "star witness" is testifying against Blakemore Brown's mental state even though he never interviewed her. In one choice snippet of testimony that I noted several months ago, this witness claimed that if Blakemore Brown had actually experienced a significant degree of harassment and persecution, and then responded by becoming fearful and distrusting, she would be deemed paranoid in his judgment. In other words, regardless of circumstances, any type of fearful response to any type of situation, no matter how threatening, is indicative of paranoia. At the time, I wrote:

Hold the train. Seriously, STOP. So even if people really are out to get you, you are paranoid if you believe that people are out to harm you. Apparently the natural response of fear when one is objectively, realistically threatened, is now paranoia.

The Complaints Committee therefore found no evidence of professional misconduct on your part. The matter is now closed as regards to the Society.

The sham investigation concluded after many years with a finding of not guilty. But what about the cost to Lisa Blakemore Brown, personally, professionally, and financially? I don't know if a protracted apology was included in the findings, though I strongly doubt it. Regardless, no apology can make up for having one's name dragged through the mud, being compelled to attend several sham hearings, and enduring the tragicomic attempts of the BPS to keep all of this under wraps. As pointed out by Aubrey Blumsohn, it sure is strange that the BPS has the time and resources to pursue cases against practicing psychologists based on sham evidence while it takes a stance of silence on the many issues of scientific and financial malfeasance involving the drug industry discussed on this site and others.

As a disclaimer to my last point, let me again mention that I'm not against the drug industry. I am against the drug industry misrepresenting scientific findings in order to meet its marketing needs. Is that such a crazy position? Am I paranoid?

Wednesday, March 12, 2008

The 2007 U.S. drug sales and prescriptions data have been released. According to IMS Health, psych meds did quite well in the United States last year. Highlights:

Of all drugs, Seroquel (quetiapine) generated the fifth most money in sales. Not bad for an antipsychotic, or broad spectrum psychotropic or whatever it is marketed as these days. $3.5 billion in U.S. sales alone in 2007. Maybe AstraZeneca's investments in key opinion leaders were excellent business decisions.

Antipsychotics generated more money in sales than did antidepressants ($13.1 billion vs $11.9 billion). If nothing else, it helps to again nail down the official paradigm shift favoring the antipsychotics as treatments for everything under the sun.

Worry not, because antidepressants are still the most prescribed class of medications. Antipsychotics just cost a lot more, hence their generation of higher sales numbers.

The global numbers will also be interesting. I'm not sure when they are scheduled for release, but you can bet that the figures for antipsychotics will be mind-blowing.

Tuesday, March 11, 2008

Intro. A few months ago, I wrote about key opinion leaders (KOLs) in psychiatry, arguing that we shouldn't be making such a big deal about their payments from drug companies. After all, they were just receiving chump change. At the time, my motivation was spurred by a great piece in the New York Times on the issue of physicians receiving payments from drug companies. Physicians are often paid to become "key opinion leaders," aka salespeople. Often possessing academic positions, these KOLs give speeches to fellow physicians in which they extol the virtues of a drug in exchange for cash. Of course, physicians might be leery if a sales representative were discussing the latest wonder drug, so using an "independent" physician employs a basic marketing trick, the third-party technique, to give the marketing message a veneer of credibility. For the past few years, KOLs have been lighting up the upscale restaurant scene across the nation, discussing the benefits of atypical antipsychotic (er, broad spectrum psychotropic) treatment for a wide variety of ills. From schizophrenia to bipolar disorder to anxiety to, well, pretty much anything you can imagine, antipsychotics are the treatment du jour.

"I don't make much." One KOL in the wonderful world of atypicals has been Melissa DelBello. In particular, her specialty is children with bipolar disorder. I've previously stated my beef with the child bipolar paradigm and I'll discuss a couple of my contentions a bit later in the post. DelBello has been involved in research regarding the treatment of child bipolar disorder (not saying I necessarily agree with the term; just using it because she used it). As a KOL, DelBello has given talks supported by AstraZeneca, manufacturer of Seroquel. As for her reimbursement for such talks, she said "Trust me. I don't make much."

Here is where it gets interesting. After Dr. DelBello released her study, Astra Zeneca began hiring her to give several sponsored talks. Another doctor told The New York Times he was persuaded to start prescribing drugs such as Seroquel after listening to Dr. DelBello. But when the reporter from the New York Times asked Dr. DelBello how much money she got from Astra Zeneca, she told the paper: "Trust me. I don't make much."

Well, I decided to find out how much, and I went directly to the University of Cincinnati who, by the way, has been extremely cooperative, helpful, and responsive. Soon I figured out just how much "not that much" money is. Dr. DelBello's study, which helped put Seroquel on the map, was published in 2002. That next year, she got more money than she has ever received from the pharmaceutical companies--at least that is what the documents that I have say.

In 2003, Astra Zeneca alone paid her a little over $100,000 for lectures, consulting fees, travel expenses, and service on advisory boards. In 2004, Astra Zeneca paid her over $80,000 for the same services.

So, if I have this correct, $180k over two years is "not much money." Hey, this is quite similar to a response from another moonlighting entrepreneur with a license to practice medicine. To quote from the New York Times...

The psychiatrist receiving the most from drug companies was Dr. Annette M. Smick, who lives outside Rochester, Minn., and was paid more than $689,000 by drug makers from 1998 to 2004. At one point Dr. Smick was doing so many sponsored talks that “it was hard for me to find time to see patients in my clinical practice,” she said.

“I was providing an educational benefit, and I like teaching,” Dr. Smick said.

Right. The companies provide you with the slides and the key marketing points, and you call yourself an "educator." Um, doesn't that actually make you a marketer? And the clincher: Who has time for patients in clinical practice when you are off stumping for the hot drug of the week?

The KOL-Pharma Marriage: For all I know, Dr. DelBello is a great human being. I disagree with her a great deal on the child bipolar thing, but there are certainly many very bright and reasonable people who see things differently than myself. Personally, I have difficulty seeing the child bipolar bandwagon as anything other than a massive campaign to re-brand a broad spectrum of unruly behavior under one heading that can be used to call out for antipsychotic treatment. Researchers in the area of child bipolar perceive that scientific progress is being made because they have "discovered" a condition that affects millions of youth. Big Pharma loves it because, conveniently, they can treat this newfound condition with their cash cow atypical antipsychotics. And the marriage between child bipolar researchers and Big Pharma becomes even tighter through the well-paying speaking gigs in which KOLs pimp atypicals as the treatment for child bipolar, a condition that was considered quite rare until KOLs and Pharma "educated" us about this "neglected and undertreated serious medical condition."

Taking large payments then writing them off as "not much" just adds another brick to the wall of conflicted interests that dominates medicine these days. The physician-marketer (aka KOL) can perhaps take great pride in the rate of treatment for child bipolar expanding by perhaps 4000% of late. In fact, the spread of atypical antipsychotics for children, the elderly, and everyone else is such good news that I think KOLs should be up for some sort of marketing award. Go Team Seroquel! Viva Zyprexa! Rock on Geodon Crew! Gimme an I-N-V-E-G-A! Pimp that Abilify!

Believe it or not, I'm not against industry-academic collaboration. But I am against industry-academic corruption. When there are no checks and balances on a system, one should not be surprised when it is subverted by a combination of power and money. When academics turn into industry spokespeople, or become information launderers, or become medal winners in the conflict of interest department, why on Earth should we simply trust them as if they had no skin in the game?

Friday, March 07, 2008

Dr. Daniel Carlat has been busy. He aptly notes that Pristiq is an Effexor copycat that apparently provides no special benefits over soon-to-be-generic venlafaxine. Hey, didn't I just write a piece or two about Effexor? In addition, Carlat continues to hammer the corrupting, er, I mean continuing medical education industry. He also documents the use of deceptive "surveys" to market antipsychotics. Excellent work -- keep it up! Dr. Grohol at PsychCentral pointed out another set of potential problems with the surveys.

Furious Seasons puts forth Ye Olde Pimp Slappe on antidepressant use in bipolar disorder with a side dish of I Told You So. He has indeed questioned the use of antidepressants in bipolar disorder, and the latest data continue to question the utility and safety of such practices. Philip Dawdy notes accurately that he is the only person in the USA to host the infamous Zyprexa documents online. He also broke a number of excellent stories on said documents. All for a salary of zero dollars. So why not send him some money? He's currently running a fundraiser. You can donate here. Hey, I'd like to rake in some donations for myself. I think I provide a somewhat valuable service, and my day job doesn't exactly make me rich. But when Dawdy is doing such work much more productively than I am, and he doesn't even have a day job (file under journalism in crisis), I think he deserves your financial consideration, not me. So if you ever had the kindhearted intention of sending me cash to support my work, send your money to Philip Dawdy.

Health Care Renewal is chronically excellent, as y'all know already. Recent stories include a fat conflict of interest involving the head of the Obesity Society, yet another chapter in the sordid University of Medicine and Dentistry of New Jersey affair, and a take on the baseless lawsuit from HipSaver.

As usual, Pharmalot and PharmaGossip have continued to provide all the news that's fit to print. Of particular interest to my readers (I think) was the marketing of Abilify. Perhaps yet more interesting, the 6th episode of RX -- Sex, Drugs, and Quarterly Goals is up. Everyone should check out all six episodes. I'm hooked.

Pharma Giles has been generating his usual brand of dead-on satire. I was particularly amused by his take on the most recent Kirsch antidepressant meta-analysis.

Wednesday, March 05, 2008

This post will discuss how the latest meta-analysis claiming to show public health benefits for Effexor actually also showed that antidepressants aren't up to snuff. Part 1 detailed how the study authors found a very small advantage for Effexor over SSRIs, which they then suggested meant that Effexor offered significant benefits for public health over SSRIs. Ghostwriters, company statisticians, questions about transparency, etc. Even the journal editor jumped on board. All the usual goodies.

Bad News for SSRIs: But now, on to part deux. Remember that the authors used a Hamilton Depression Rating Scale score of 7 or less as indicative of remission, which was the one and only outcome measure of import in their analysis. In their database of studies analyzed in the meta-analysis, there were nine studies that had an Effexor group, an SSRI group, and a placebo group. In these studies, there was a 5.5% difference in remission rates for SSRIs versus placebo. Read it again: there was a 5.5% difference in remission rates for SSRIs versus placebo. You should be shaking your head, perhaps cursing under your breath or even aloud. Using the number needed to treat statistic that the authors used in their analysis of Effexor versus SSRIs, that means you would have to treat 18 people with an SSRI instead of a placebo to get one additional remission that you would not get if all 18 had received a placebo. Damn -- that is pathetic! In these same nine trials, the difference between Effexor and SSRIs was 13%, for a number needed to treat of 8. One might conclude that Effexor was more than twice as effective as SSRIs based on these figures, but one would be wrong. Please see my prior post for why depression remission should absolutely not be used as the only judgment of a drug's efficacy. Granted, the numbers for SSRIs were based on nine trials, which limits the generalizability of the findings, but the findings sure fit well with the Kirsch series of meta-analyses that found only a small difference for SSRIs over placebo in all but the most severe cases.
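The NNT figures in this post and its companion follow directly from the definition NNT = 1 / absolute risk difference; here is a minimal sketch checking the arithmetic on the remission-rate differences quoted above.

```python
# Number needed to treat, computed from an absolute difference in
# remission rates and rounded to the nearest whole patient.
def nnt(risk_difference):
    """How many patients must receive the first treatment (rather than
    the comparator) to yield one additional remission."""
    return round(1 / risk_difference)

print(nnt(0.055))  # SSRI vs placebo, nine three-arm trials: 5.5% -> 18
print(nnt(0.13))   # Effexor vs SSRI, same nine trials:      13%  -> 8
print(nnt(0.059))  # Effexor vs SSRI, full dataset:          5.9% -> 17
```

The striking comparison is the one the authors gloss over: an NNT of 18 means that seventeen of every eighteen patients given an SSRI instead of a placebo get no additional remission from the drug.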

If you told most people that you would have to treat 18 depressed patients with a SSRI rather than a placebo to get one additional remission in depressive symptoms, you'd get laughed out of the room, but that is exactly what Nemeroff et al found. Do the authors conclude with: "The findings confirm earlier work by Kirsch and colleagues showing that the benefits of SSRIs over placebo are quite modest"? Not exactly. Here is their interpretation:

To achieve one remission more than with placebo, 8 patients would need to be treated with venlafaxine (NNT = 8) compared with 18 patients who would need to be treated with an SSRI (NNT = 18). From this perspective, the magnitude of the advantage of SSRIs versus placebo in the placebo-controlled dataset (NNT=18) is similar to the advantage of venlafaxine relative to SSRIs in the combined data set (NNT = 17).

This is right after the authors wrote about how a NNT of 17 was possibly important to public health (see part 1), which was about the time I fell out of my chair laughing. A more plausible interpretation is that SSRIs yielded very little benefit over placebo and that Effexor, in turn, yielded very little benefit (in fact, a statistically significant benefit over only Prozac) over SSRIs. But that sort of interpretation does not lead to good marketing copy or press releases that tout the benefits of medication well beyond what is reasonable. What if the press releases for this study read: "Nemeroff confirms findings of Kirsch: Antidepressants offer very little benefit over placebo." That would have been refreshing.

Sidebar: Here is my standard statement about antidepressants -- they work. Huh? Yeah, the average person (surely not everyone) on an antidepressant improves by a notable amount. The problem is that the vast majority (about 80%) of such improvement is due to the placebo effect and/or the depression simply getting better over time. Give someone a pill and that person will likely show some improvement, but nearly all of the improvement is due to something other than the drug. If most improvement is due to the placebo effect, couldn't we usually get such improvement using psychotherapy, exercise, or something else, which might avoid some drug-induced side effects? Moving on...

Key Opinion Leaders: But notice how this Wyeth/Advogent authored piece featuring Charles Nemeroff as lead author (as well as Michael Thase as last author) throws down a major spin job regarding the efficacy of antidepressants. As reported previously, their measure of efficacy was quite arbitrary. It could have been supplemented with other measures, as Wyeth is in possession of such relevant data, but such analyses were not conducted. But even using their questionable measure of efficacy, antidepressants put on a poor performance. Similarly, Effexor's advantage over SSRIs was meager. Yet the authors (remember, three medical writers worked on this paper) conclude that venlafaxine offers a public health benefit over SSRIs. Maybe the authors were afraid of being sued for writing anything negative in their paper? Or perhaps they just know who is buttering their bread. It is also possible that the authors truly cannot envision the idea that SSRIs offer such a meager advantage over placebo and that Effexor yields very little (if any) benefit over SSRIs. And that is the problem. The "key opinion leaders" are all stacked on one side of the aisle -- drugs are highly effective and each new generation of medications is better than the last. So plug in the name of the next drug here, and you'll see a key opinion leader along with a team of medical writers rushing out to show physicians that the latest truly is the greatest. Since we don't really train physicians to understand clinical trials or statistics particularly well, you can also expect many physicians targeted by such marketing efforts to simply lap up unsupported claims of "public health benefit."

Monday, March 03, 2008

In an amazing and highly troubling move, HipSaver, a corporation that manufactures hip protection gear, is suing the authors of a study who had the temerity to write in their article: "These results add to the increasing body of evidence that hip protectors, as currently designed, are not effective for preventing hip fracture among nursing home residents."

Though this is not my area of expertise, my loose familiarity with the research indicates that the above statement appears to be true. The study in question did not examine the HipSaver product, and the offending statement was made in the discussion section, where authors offer opinions about their findings.

HipSaver said that such claims are a slander upon the field of hip protectors. If we are going to start suing authors based on the discussion sections of their articles, then we may as well stop doing science immediately. Of course, much of what passes for science these days is iffy, so maybe nobody would notice if we just stopped doing clinical trials.

Read more at the WSJ Health Blog. Thanks to the reader who alerted me to this bizarre development.

A recent study in the journal Biological Psychiatry claimed to show that Effexor's (venlafaxine's) alleged advantages over SSRIs "may be of public health relevance." Unstated in the article, but a more accurate reading of their findings, is that antidepressants yield little benefit over a placebo. I'm breaking this into two parts. The current post deals with the authors' claims regarding venlafaxine's superiority over SSRIs. A second post will examine their understated finding that antidepressants are not particularly impressive compared to placebo.

The study was a meta-analysis, where data from all clinical trials comparing Effexor to an SSRI were pooled together. The authors used remission on the Hamilton Rating Scale for Depression (HAM-D) as their measure of treatment effectiveness. On the HAM-D, a score of less than or equal to 7 was used to define remission. They found that remission rates on Effexor were 5.9% greater than remission rates on SSRIs. Thus, one would need to treat 17 depressed patients with Effexor rather than an SSRI to yield one remission that would not have occurred had all 17 patients received an SSRI. Not a big difference, you say? Here's what the authors said:

...the pooled effect size across all comparisons of venlafaxine versus SSRIs reflected an average difference in remission rates of 5.9%, which reflected a NNT of 17 (1/.059), that is, one would expect to treat approximately 17 patients with venlafaxine to see one more success than if all had been treated with another SSRI. Although this difference was reliable and would be important if applied to populations of depressed patients, it is also true that it is modest and might not be noticed by busy clinicians in everyday practice. Nonetheless, an NNT of 17 may be of public health relevance given the large number of patients treated for depression and the significant burden of illness associated with this disorder. [my emphasis]
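The NNT figure in the quote follows directly from the remission-rate difference; a minimal check of the arithmetic (the 5.9% is the paper's number, everything else here is just the standard NNT definition):

```python
def nnt(risk_difference):
    """Number needed to treat: reciprocal of the absolute risk difference."""
    return 1 / abs(risk_difference)

# Pooled remission advantage for venlafaxine over SSRIs reported in the paper.
print(round(nnt(0.059)))  # 17: treat 17 patients with venlafaxine to gain
                          # one remission over treating all 17 with an SSRI
```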

Public Health Relevance/Remission: The public health claim is pretty far over the top. If one had to treat 17 patients with Effexor to prevent a suicide or homicide that would have occurred had SSRIs been used, then yes, we'd be talking about a significant impact on public health. But that's not what we're dealing with in this study. The outcome variable was remission on the HAM-D, which is a soft, squishy measure of convenience. The authors state that remission rates are "the most rigorous measure of antidepressant efficacy," but to my knowledge there is no evidence supporting their adoption of the magic cutoff score of 7 on the HAM-D as the dividing line between depressed and not depressed. Are people who scored 8 or 9 on the HAM-D really significantly more depressed than people who scored 6 or 7? Take a look at the HAM-D yourself and make your own decision. I know of no empirical data showing that such small differences are meaningful. So I'm not buying the public health benefit -- in fact, I think it is patently ridiculous.

Outcome measures can be either categorical (e.g., remission or no remission) or continuous (e.g., change in HAM-D scores from pretest to posttest). Joanna Moncrieff and Irving Kirsch discuss how using cut-off scores (categorical measures) rather than mean change (continuous measures) can make a treatment appear much more effective than an examination of continuous measures would suggest. Applied to this case, one wonders why the data on mean improvement were not provided. One can make a very weak case that Effexor works better than SSRIs based on an arbitrary categorical measure, but not one shred of data was presented to show superiority on a continuous measure. If the data supported Effexor on both categorical and continuous measures, then I'd bet both would have been discussed in this article, as it was funded by Wyeth (patent holder for Effexor). Thus, the absence of data on continuous measures (e.g., the difference in mean HAM-D improvement between Effexor-treated and SSRI-treated patients) is suspicious.
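Moncrieff and Kirsch's point can be illustrated with a toy calculation. Suppose (hypothetically; these numbers are mine, not the study's) that endpoint HAM-D scores are roughly normal with a standard deviation of 6, and that the SSRI and Effexor groups differ by a single point on average -- a difference few would call clinically meaningful. Dichotomizing at the remission cutoff of 7 turns that one point into a remission gap of about six percentage points, strikingly close to the 5.9% the authors report:

```python
from statistics import NormalDist

CUTOFF = 7  # HAM-D score at or below 7 counts as "remission"
SD = 6      # assumed spread of endpoint HAM-D scores (illustrative)

def remission_rate(mean_score):
    # Fraction of a normal(mean, SD) distribution at or below the cutoff.
    return NormalDist(mu=mean_score, sigma=SD).cdf(CUTOFF)

ssri, effexor = remission_rate(10), remission_rate(9)  # 1-point mean difference
print(f"continuous effect size: {1 / SD:.2f} SD")      # ~0.17 SD (small)
print(f"remission gap: {effexor - ssri:.1%}")          # ~6.1 percentage points
```

The same underlying data, in other words, can look trivial on a continuous measure and "publishable" on a categorical one, which is exactly why withholding the continuous results matters.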

Even if the authors decided to use only categorical measures, it would have been nice had they opted to use multiple measures. They could have used the equally arbitrary 50% improvement criterion (HAM-D scores dropping by 50% during treatment), for example. However, such data were not provided. So the authors used a single, highly arbitrary measure, on which they found a very small benefit for venlafaxine over SSRIs. Whoopee.

I received an email from a respected psychiatrist (who shall remain anonymous) about this study. He/she opined:

...it would have been interesting if the authors had used other cutoffs for the Hamilton scale besides 7 to define remission; i.e., if they had done a sensitivity analysis. Apparently, Wyeth has all the raw data from the studies, so a lot of interesting science could be done with this very large aggregate database. For example, there are robust factor analyses of the Hamilton scale that indicate reasonably independent dimensions of depressed mood, agitation/anxiety, diurnal variation, etc., and it would be of great interest to determine the relative effects of the various drugs on these different illness dimensions

In other words, the authors could have attempted to see if there were meaningful differences between Effexor and SSRIs on important variables, yet they opted not to undertake such analyses. A skeptical view is that they analyzed the data in such a fashion, found nothing, and thus just reported the "good news" about Effexor. I don't know if they conducted additional analyses that were not reported. However, it would seem to me that someone at Wyeth would have run such analyses at some point, perhaps as part of this meta-analysis, because any advantage over SSRIs would make for excellent marketing copy. In fact, Effexor has been running the "better than SSRIs" line for years, based on rather scant data. If there were more impressive data, they would have been reported by now.

Prozac and the Rest: The findings showed that Effexor was superior to a statistically significant degree (i.e., we'd not expect such differences by chance alone) only when compared to Prozac (fluoxetine). The authors, to their credit, pointed this out on multiple occasions. However, their reporting seems a little contradictory when, on one hand, they report that venlafaxine was superior to SSRIs as a class (see quote toward the top of the post), but then note that the differences were only statistically significant when compared to Prozac. The percentage difference in remission favoring Effexor over Zoloft (sertraline) was 3.4%, over Paxil (paroxetine) was 4.6%, over Celexa (citalopram) was 3.9%, and over Luvox (fluvoxamine) was 14.1%. I think just about anyone would concur that the difference versus fluvoxamine seems too large to be credible, and it was based on only one study, making a fluke more plausible. Again, the advantage of Effexor over all SSRIs except Prozac was not statistically significant. Even if these differences were statistically significant, would the authors claim that needing to treat 26 patients with Effexor rather than Celexa to achieve one additional depression remission would improve public health? Small differences on a soft, squishy, arbitrary endpoint, combined with not performing (or not reporting) more meaningful analyses = Not news.
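For reference, the per-drug remission differences above translate into NNTs as follows (my arithmetic; the percentages are the paper's, and only the Prozac comparison was statistically significant):

```python
# Remission-rate advantages for venlafaxine reported in the paper.
differences = {"sertraline": 0.034, "paroxetine": 0.046,
               "citalopram": 0.039, "fluvoxamine": 0.141}

for ssri, diff in differences.items():
    # NNT is the reciprocal of the absolute risk difference.
    print(f"vs {ssri}: NNT = {round(1 / diff)}")
# sertraline: 29, paroxetine: 22, citalopram: 26, fluvoxamine: 7
```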

The Editor Piles On: In a press release, the editor of the journal in which this article appears jumped on board in a big way:

Acknowledging the seemingly small advantage, John H. Krystal, M.D., Editor of Biological Psychiatry and affiliated with both Yale University School of Medicine and the VA Connecticut Healthcare System, comments that this article “highlights an advance that may have more importance for public health than for individual doctors and patients.” He explains this reasoning:

"If the average doctor was actively treating 200 symptomatic depressed patients and switched all of them to venlafaxine from SSRI, only 12 patients would be predicted to benefit from the switch. This signal of benefit might be very hard for that doctor to detect. But imagine that the entire population of depressed patients in the United States, estimated to be 7.1% of the population or over 21 million people, received a treatment that was 5.9% more effective, then it is conceivable that more than 1 million people would respond to venlafaxine who would not have responded to an SSRI. This may be an example of where optimal use of existing medications may improve public health even when it might not make much difference for individual doctors and patients."

Seeing a journal editor swallow the Kool-Aid is not encouraging. Again, the 5.9% difference is based on an endpoint that may well mean nothing.

Ghostwriter Watch: Who wrote the study and who conducted the analyses? The authors are listed as Charles Nemeroff, Richard Entsuah, Isma Benattia, Mark Demitrack, Diane Sloan, and Michael Thase. Their respective contributions are not listed in the text of the article. The contribution of Wilfrido Ortega-Leon for assistance with statistical analysis is acknowledged in the article, as are the contributions of Sherri Jones and Lorraine Sweeney of Advogent for "editorial assistance."

Ortega-Leon appears to be an employee of Wyeth. So did an employee of Wyeth run all of the stats, then pass them along to the authors for writeup? Last time I checked, there were sometimes problems associated with having a company-funded statistician run the stats then pass them along without any independent oversight. I don't know what happened, but my questions could have been easily resolved: Describe each author's contributions in a note at the end of the article.

Sherri Jones and Lorraine Sweeney have served in an "editorial assistant" role for other studies promoting Effexor, such as this one. I suspect that they are familiar with the key marketing messages for the drug. An important question: What does "editorial assistance" mean? Did Jones and Sweeney simply spell-check the paper and make sure the figures looked pretty? Did they consult the authors to get the main points, then fill in a few gaps? Or did they write the whole paper then watch the purported authors rubber-stamp their names on the author byline? Simply listing "editorial assistance" is not transparency. I have no problem with medical writers helping with a manuscript, depending on what "helping" means. Many researchers are not skilled writers and cleaning up their writing is a good idea for all parties. But having a medical writer who is paid by a drug company to make sure that key marketing messages are included in the paper can lead to problems.

Part 2 will examine the unemphasized, but important, finding from this study that antidepressants yield mediocre benefits over placebo.

Update (03-03-08): See comments. A wise reader has pointed out that there are actually three authors from Advogent. Well, um, one author and two editorial assistants. A skeptical person would add that the presence of three medical writers and a Wyeth statistician who appears in a footnote at the end of the study obviates the need for those pesky academic authors, except to lend the study a stamp of approval from "independent scientists." Is that too cynical?

