Sunday, August 12, 2018

She's right to be worried! There are
so many possible cracks that bias can seep through, nudging clinical trial results off course. Some of the biggest come from people knowing which comparison group a participant will be, or has been, in. Allocation concealment and blinding are strategies to reduce this risk.

Before we get to that, let's look at the source of the problems we're aiming at here: people! They bring subjectivity to the mix, even if they are committed to the trial - and not everyone who plays a role will be supportive, anyway. On top of that, randomizing people - leaving their fate to pure chance - can be the rational and absolutely vital thing to do. But it's also
"anathema to the human spirit", so it can be awfully hard to play totally by the rules.

And we're counting on a lot of people here, aren't we? There are the ones who enter an individual into one of the comparison groups in the trial. There are those individual participants themselves, and the ones dealing with them during the trial - healthcare practitioners who treat them, for example. And then there are the people measuring outcomes - like looking at an x-ray and deciding if it's showing improvement or not.

What could possibly go wrong?!

Plenty, it turns out. Trials that don't have good guard rails for concealing group allocation and then blinding it are likely to exaggerate the benefits of health treatments (meta-research on this
here and
here).

Let's start with allocation concealment. It's critical to successfully randomizing would-be trial participants. When it's done properly, the person adding a participant to a trial has no idea which comparison group that particular person will end up in. So they can't tip the scales by, say, skipping patients they think wouldn't do well on a treatment when that treatment is the next slot to allocate.
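To make the idea concrete, here's a minimal Python sketch of central, concealed allocation. The allocation list is generated by computer in randomly permuted blocks (one common approach), and an assignment is revealed only after a participant is irreversibly enrolled. The names, block size, and seed are purely illustrative, not from any particular trial:

```python
import random

def make_allocation_sequence(n_blocks, block_size=4, seed=2018):
    """Computer-generated sequence using randomly permuted blocks:
    each block holds equal numbers of 'treatment' and 'control' slots,
    shuffled so a recruiter can't predict the next assignment."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = (["treatment"] * (block_size // 2) +
                 ["control"] * (block_size // 2))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

class CentralAllocator:
    """Stands in for the trial coordinating centre: the list stays hidden,
    and each assignment is released only once, after enrolment."""
    def __init__(self, sequence):
        self._sequence = iter(sequence)

    def allocate(self, participant_id):
        group = next(self._sequence)
        print(f"Participant {participant_id} -> {group}")
        return group

allocator = CentralAllocator(make_allocation_sequence(n_blocks=5))
allocator.allocate("P001")  # the recruiter learns the group only now
```

The key property is that the recruiter never sees the list, only the next assignment, and only once a participant is already in.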

Some allocation methods make it easy to succumb to the temptation to crack the system. When allocation is done using sealed envelopes, people
have admitted to opening the envelopes till they get the one they want - and even going to the radiology department to use a special lamp to see through an opaque envelope, and breaking into a researcher's office to hunt for info! Others have
kept logs to try to detect patterns and predict what the next allocation is going to be.

This happens more often than you might think. A study in 2017 compared sealed envelopes with a system where you have to ring the trial coordinating center to get the allocation. There were 28 clinicians - all surgeons - allocating their patients in this trial. The result:

With the sealed envelopes, the randomisation process was corrupted for patients recruited from three clinicians.

But there was an overall difference in the ages of people allocated in the whole "sealed envelope" period, too - so some of the others must have peeked now and then as well.

Messing with allocation was one of the problems that led to a famous trial of the Mediterranean diet being retracted recently. (I wrote about this at
Absolutely Maybe and for the
BMJ.) Here's what happened, via a report from Gina Kolata (
New York Times):

A researcher at one of the 11 clinical centers in the trial worked in small villages. Participants there complained that some neighbors were receiving free olive oil, while they got only nuts or inexpensive gifts.

So the investigator decided to give everyone in the same village the same diet. He never told the leaders of the study what he had done.

"He did not think it was important"....

But it was: it was obvious on statistical analysis that the groups couldn't have been properly randomized.

The opportunities to mess up the objectivity of a trial by knowing the allocated group don't end with the randomization. Clinicians could treat people differently, thinking extra care and additional interventions are necessary for people in some groups, or being quicker to encourage people in one group to pull out of the trial. They might be more or less eager to diagnose problems, or judge an outcome measure differently.

Participants can do the equivalent of all this, too, when they know what group they are in - seek additional treatments, be more alert to adverse effects, and so on. Ken Schulz lists potential ways clinicians and participants could change the course of a trial
here, in Panel 1.

There's no way of completely preventing bias in a trial, of course. And you can't always blind people to participants' allocation when there's no good placebo, for example. But here are 3 relevant pillars of bias minimization to always look for when you want to judge the reliability of a trial's outcomes:

Adequate concealment of allocation at the front end;

Blinding of participants and others dealing with them during the trial; and

Blinding of outcome assessors - the people measuring or judging outcomes.

Pro tip: Go past the words people use (like "double blind") to see who was being blinded, and what they actually did to try to achieve it. You need to know
"Who knew what and when?", not just what label the researchers put on it.

Monday, December 4, 2017

This fortune cookie could start a few scuffles. It's offering a cheerful scenario if you are looking for a benefit of a treatment, for example. But it sure would suck if you are measuring a harm! That's not what's contentious about it, though.

It's the p values and their size that can get things very heated. The p value is the result you get from a standard test for statistical significance. It can't tell you if a hypothesis is true or not, or rule out coincidence. What it can do is measure an actual result against a theoretical expectation, and let you know if this is pretty much what you would expect to see if a hypothesis is true. The smaller it is, the better: statistical significance is high when the p value is low. Statistical hypothesis testing is all a bit Alice-in-Wonderland!

As if it wasn't already complicated enough, people have been dividing rapidly into camps on p values lately. The p value has defenders - we shouldn't dump on the test, just because people misuse it, they say (
here). Then there are those who think it should be abandoned or at least very heavily demoted (
here and
here, for example).

Then there is the camp in favor of raising the bar by lowering the threshold for p values. In September 2017, a bunch of heavy-hitters said the time had come to expect p values to be
much tinier, at least when something new is claimed (
here).

How tiny are they saying a p should be? The usual threshold has been p <0.05 (less than 5%). Instead of that counting as a significant finding, they proposed, results between 0.005 and 0.05 should only be called "suggestive". A significant new finding should need a way tinier p: <0.005.

That camp reckons support for this change has reached critical mass. Which is suggestive of the <0.05 threshold going the way of the dodo. I have no idea what the fortune cookie on that says! (If you want to read more on avoiding p value potholes, check out
my 5 tips on Absolutely Maybe.)

Now let's get back to the core message of
our fortune cookie: the size of a p value is a completely separate issue from the size of the effect. That's because the size of a p value is heavily affected by the size of the study. You can have a highly statistically significant p value for a difference of no real consequence.
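You can see this with a quick calculation. Below is a rough Python sketch using a standard two-proportion z-test (all the numbers are invented): the same trivial half-a-percentage-point difference is nowhere near "significant" in a modest study, but "highly significant" in an enormous one.

```python
from math import sqrt, erfc

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p value for a difference between two proportions,
    using the usual pooled z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # = 2 * P(Z > |z|)

# The same trivial difference: 50.0% vs 50.5%...
modest = two_proportion_p(500, 1000, 505, 1000)                # 1,000 per arm
huge = two_proportion_p(500_000, 1_000_000, 505_000, 1_000_000)  # a million per arm

print(f"n = 1,000 per arm:     p = {modest:.3f}")
print(f"n = 1,000,000 per arm: p = {huge:.2e}")
```

Same effect, wildly different p values - which is why the p value alone tells you nothing about whether a difference matters.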

There's another trap: an important effect might be real, but the study was too small to know for sure. Here's
an example. It's a clinical trial of getting people to
watch a video about clinical trials, before going through the standard informed consent process to join a hypothetical clinical trial. The control group went through the same consent process, but without the video.

The researchers looked for possible effects on a particular misconception, and on willingness to sign up for a trial. They concluded this (I added the bold):

An enhanced educational intervention augmenting traditional informed consent led to a meaningful reduction in therapeutic misconception without a statistically significant change in willingness to enroll in hypothetical clinical trials.

You need to look carefully when you see statements like this one. You might not be getting an accurate impression. Later, the researchers report their sample size calculation.

That means they worked out how many people they needed to recruit based only on what was needed to detect a difference of several points in the average misconception scores. Willingness to join a trial dropped by a few percentage points, but the difference wasn't statistically significant. That could mean it doesn't really reduce willingness - or it could mean the study was too small to answer the question. There's just a big question mark: this video reduced misconception, and a reduction in willingness to participate can't be ruled out.

What about the effect size? That is how big (or little) the difference between groups is. There are many different ways to measure it. For example, in this trial, "willingness to participate" was simply the proportion of people who said "yes" or "no".

However, the difference in "misconception" in that trial was measured by comparing mean results people scored on a test of their understanding. You can brush up on means, and how that leads you to standard deviations and standardized mean differences
here at Statistically Funny.

There are other specific techniques used to set levels of what effect size matters - but those are for another day. In the meantime, there's a technical article explaining important clinical differences
here. And another on
Cohen's d, a measure that is often used in psychological studies. It comes with this rule of thumb: 0.2 is a small effect, 0.5 is medium, and 0.8 is a large effect.
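As an illustration (with made-up numbers), Cohen's d is just the difference in means divided by the pooled standard deviation - here's a quick Python sketch applying that rule of thumb:

```python
def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) /
                 (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled_sd

def label(d):
    """Cohen's rule of thumb for the size of an effect."""
    d = abs(d)
    if d < 0.2:
        return "trivial"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Invented example: test scores of 24.0 vs 21.5, both groups SD 5, n 50
d = cohens_d(mean1=24.0, sd1=5.0, n1=50, mean2=21.5, sd2=5.0, n2=50)
print(f"d = {d:.2f} ({label(d)})")  # d = 0.50 (medium)
```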

Study reports should allow you to come to your own judgment about whether an effect matters or not. May the next research report you read be written by people who make that easy!

Sunday, September 11, 2016

Imagine if weather reports only gave the expected average temperature across a whole country. You wouldn't want to be counting on that information when you were packing for a trip to Alaska or Hawaii, would you?

Yet that's what reports about the strength of scientific results typically do. They will give you some indication of how "good" the whole study is, and leave you with the misleading impression that the "goodness" applies to every result.

Of course, there are some quality criteria that apply to the whole of a study, and affect everything in it. Say I send out a survey to 100 people and only 20 people fill it in. That low response rate affects the study as a whole.

You can't just think about the quality of a study, though. You have to think about the quality of each result
within that study. The likelihood is, the reliability of data will vary a lot.

For example, that imaginary survey could find that 25% of people said yes, they ate ice cream every week last month. That's going to be more reliable data than the answer to a question about how many times a week they ate ice cream 10 years ago. And it's likely to be less reliable than their answers to the question, "What year were you born?"

Then there's the question of missing data. Recently
I wrote about bias in studies on the careers of women and men in science. A major data set people often analyze is a survey of people awarded PhDs in the United States. Around 90% of people answer it.

But within that, the rate of missing data for marital status can be around 10%, while questions on children can go unanswered 4 times as often. Conclusions based on what proportion of people with PhDs in physics are professors will be more reliable than conclusions on how many people with both PhDs in physics and school-age children are professors.

One of the most misleading areas of all for this is the abstracts and news reports of meta-analyses and systematic reviews. They will often sound really impressive: they'll tell you how many studies, and maybe how many people are in them, too. You could get the impression, then, that all the results they tell you about have that weight behind them. The
standard-setting group behind systematic review reporting says you shouldn't do that: you should make it clear with each result. (Disclosure: I was part of that group).

This is a really big deal. It's unusual for every single study to ask exactly the same questions, and gather exactly the same data, in exactly the same way. And of course that's what you need to be able to pool their answers into a single result. So the results of meta-analyses very often draw on a subset of the studies. It might be a big subset, but it might be tiny.

"Just two years ago, a meta-analysis crunched the numbers from more than 80 studies involving more than 200,000 women with breast cancer, and reported that women who were obese when diagnosed had a 41 percent greater risk of death, while women who were overweight but whose body mass index was under 30 had a 7 percent greater risk".

There really was not much of a chance that all the studies had data on that - even though you would be forgiven for thinking that when you looked at
the abstract. And sure enough, this is how it works out when you
dig in:

There were 82 studies and the authors ran 31 basic meta-analyses;

The meta-analytic result with the most studies in it included 24 out of the 82;

84% of those results combined 20 or fewer studies - and 58% had 10 or fewer. Sometimes only 1 or 2 studies had data on a question;

The 2 results the New York Times reported came from about 25% of the studies and less than 20% of the women with breast cancer.

The risk data given in the study's abstract and the
New York Times report did not come from "more than 200,000 women with breast cancer". One came from over 42,000 women and the other from over 44,000. In this case, still a lot. Often, it doesn't work out that way, though.
So be very careful when you think, "this is a good study". That's a big trap. It's not just that all studies aren't equally reliable. The strength and quality of evidence almost always varies
within a study.

Want to read more about this?

Here's an overview of the GRADE system for grading the strength of evidence about the effects of health care.

Sunday, August 14, 2016

Cupid's famous arrow causes people to fall blindly in love with each other. That can end happily ever after. Not so with his lesser known "immortal time bias" arrow! That one causes researchers to fall blindly in love with profoundly flawed results - and that never ends well.

This type of time-dependent bias afflicts observational studies. It's a particular curse for those studies relying on the "big data" from medical records instead of randomized trials.
A recent study found close to 40% of susceptible studies in prominent medical journals were "biased upward by 10% or more".
A study in 2011 found that 62% of studies of postoperative radiotherapy didn't safeguard against immortal time bias. That could make treatment look more effective than it really is.

So what is it? It's a stretch of time where an outcome couldn't possibly occur for one group - and that gives them a head start over another group.
Samy Suissa describes a classic case from the early days of heart transplantation in the 1970s. A 1971 study showed 20 people who had heart transplants at Stanford lived an average of 200 days compared to 14 transplant candidates who didn't get them and survived an average of 34 days.

Those researchers had started the clock from the point at which all 34 people had been accepted into the program. Now of course, all the people who got the transplants were alive at the time of surgery. For the stretch of time they were on the waiting list, they were "immortal": they could not die and still get a heart transplant. So when people on the waiting list died early, they were counted in the no-transplant group.

When the data were re-analyzed by others in 1974 to take this into account, the survival advantage of the operation disappeared. (More about the history in Hanley and Foster's article,
Avoiding blunders involving 'immortal time'.)
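You can watch this bias appear out of nothing with a little simulation. In this Python sketch (all numbers invented), the "transplant" has no effect whatsoever - everyone's survival time comes from the same distribution - yet naively grouping people by whether they eventually got the operation makes the transplant group look far better, because only people who survived the waiting list could get one:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate_naive_comparison(n=1000, mean_survival=180, mean_wait=60):
    """Every patient's survival is drawn from the same distribution -
    the operation does nothing. A patient only gets the 'transplant'
    if they live longer than their randomly assigned waiting time.
    Grouping by eventual treatment then creates a spurious advantage."""
    transplant, no_transplant = [], []
    for _ in range(n):
        survival = random.expovariate(1 / mean_survival)
        wait = random.expovariate(1 / mean_wait)
        if survival > wait:          # lived long enough to be operated on
            transplant.append(survival)
        else:                        # died on the waiting list
            no_transplant.append(survival)
    return (sum(transplant) / len(transplant),
            sum(no_transplant) / len(no_transplant))

mean_tx, mean_no_tx = simulate_naive_comparison()
print(f"mean survival, 'transplant' group:    {mean_tx:6.1f} days")
print(f"mean survival, 'no-transplant' group: {mean_no_tx:6.1f} days")
```

The fix, as the re-analysis did, is to count the waiting-list time as time in the no-transplant state - person-time, not person.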

This bias is also called survivor or survival bias, or survivor treatment selection bias. But time-dependent biases can affect any outcome, not just death. So "immortal time" isn't really the best term. Hanley and Foster call it event-free time.

Carl von Walraven and colleagues are among those who call this kind of phenomenon "competing risk bias".

They are the authors of the
2016 study I mentioned above about how common the problem is. They show the impact on data in a study they did themselves on patient discharge summaries.

If you were re-admitted to hospital before you got to a physician visit with your discharge summary, you didn't fare as well as the people who went to the doctor. If you just compare the group who went to the physician for follow-up as the hospital encouraged with the group who didn't, the group who didn't visit their doctor had
way higher re-admission rates. Not much surprise there, eh?

Von Walraven says the risk grew as people started to do more time-to-event studies. They put the problem down partly to the popularity of a method for survival ratios that doesn't recognize these risks in its basic analyses. That's Kaplan-Meier risk estimation. You see Kaplan-Meier curves referred to a lot in medical journals.

Although they're called curves, I think they look more like staircases. Here's an example: number of months survived here starts off the same, but gets better for the blue line after a year, plateauing a couple of years later.

Some common statistical programs don't have a way to deal with time-dependent calculations in Kaplan-Meier analyses, according to von Walraven. You need extensions of the programs to handle some data properly. The Royal Statistical Society points to this problem too, in the description for their 2-day course on Survival Analysis. (One's coming up in London in
September 2016.)
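For the curious, the basic (unadjusted) Kaplan-Meier estimate is simple enough to sketch in a few lines of Python - which also shows why, on its own, it can't handle time-dependent group membership: each person sits in one fixed group for the whole analysis. This is a bare-bones illustration, not production code:

```python
def kaplan_meier(times, events):
    """Bare-bones Kaplan-Meier survival estimate.
    times:  how long each person was followed
    events: 1 if the event (e.g. death) happened then, 0 if censored
    Returns (time, survival probability) pairs - the 'steps' of the curve."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # count events at this time
            leaving += 1           # everyone at this time leaves the risk set
            i += 1
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
    return curve

# Five people followed for 2, 3, 3, 5, and 8 months;
# 1 = died then, 0 = still alive when last seen (censored)
for t, s in kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 1]):
    print(f"month {t}: survival {s:.0%}")
```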

Hanley and Foster have a great guide to recognizing immortal time bias (
Table 1, page 956). The key, they say, is to "Think person-time, not person":

If authors used the term 'group', ask... When and how did persons enter a 'group'? Does being in or moving to a group have a time-related requirement?

Given the problem is so common, we have to be
very careful when we read
observational studies with time-to-event outcomes and survival analyses. If authors talk about cumulative risk analyses and accounting for time-dependent measures, that's reassuring.

But what we really need is for the people who do these studies - and all the information gatekeepers, from peer reviewers to journalists - to learn how to dodge this arrow.

More reading on a somewhat lighter note: my post at Absolutely Maybe on whether winning awards or elections affects longevity.

~~~~

The Kaplan-Meier "curve" image was chosen without consideration of its data or the article in which it appears. I used the National Library of Medicine's Open i images database, and erased explanatory details to focus only on the "curve". The source is an article by Kadera BE et al (2013) in PLOS One.

Sunday, November 29, 2015

She's right: on average, when people talk about "average" for a number, they mean the mean.

The mean is the number we're talking about when we "even out" a bunch of numbers into a single number: 2 + 3 + 4 equals 9. Divide that total by 3 - the number of numbers in that set - and you get the mean: 3.

But then you hear people make that joke about "almost half the people being below average" - and that's not the mean any more. That's a different average. It's the median - the number in the middle. It comes from the Latin word for "in the middle", just like the word medium. That's why we call the line that runs down the middle of a road the median strip, too.

If the numbers in a group are all pretty close to each other - like our example here, or, say, the ages of everyone in a class at school - then there's not much difference between the mean and median.

But if the numbers in a group are wildly far apart - the ages of the people who like Star Wars movies, for example, or whose favorite singer is Frank Sinatra - then it can make a very big difference. Even if
Strangers In The Night had enough of a resurgence to drag the average age of Ol' Blue Eyes listeners down, the big Sinatra fan base would still skew older!
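A tiny Python sketch (with invented ages) shows how a few extreme values drag the mean around while the median stays put:

```python
def mean(xs):
    """Sum divided by the number of numbers."""
    return sum(xs) / len(xs)

def median(xs):
    """The number in the middle (or the mean of the middle two)."""
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

close_together = [9, 10, 10, 11, 12]   # e.g. ages in a class at school
far_apart = [8, 9, 10, 11, 72]         # a few much older fans in the mix

print(mean(close_together), median(close_together))  # 10.4 10 - barely differ
print(mean(far_apart), median(far_apart))            # 22.0 10 - wildly differ
```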

How far apart numbers in a dataset are spread from each other is called
variance: if the numbers bunch up in the middle, the variance is small. And understanding or dealing with variance is where we start to head in the direction of, well, sort of means of means.

A great standard way to measure the spread is the distance of each piece of data from the group's mean. This is called the deviation from the mean. A measure called the standard deviation will be bigger when the numbers are more spread out. Lots of results will cluster within 1 standard deviation (SD) of the mean, and most will be within 2 standard deviations. Roughly like this:

From here, it's a hop, skip to another calculation based on the mean that you often come across in health studies. It's a way to standardize the differences in means (average results) called the standardized mean difference (SMD).

The SMD needs to be used when outcomes have been measured in similar, but different, ways in groups that researchers are comparing.

For example, there are
several scales used to measure fatigue in people with cancer. When researchers wanted to find out whether
exercise reduces or increases fatigue for people with cancer, the clinical trials of exercise they found used different scales to measure fatigue.

To get a perspective on the results of these trials, the SMD gave them the tool they needed to standardize the result from each trial. Having one standard way of seeing whether fatigue went up or down, meant the study results could be combined and compared. (The answer? Exercise reduces fatigue in people with cancer.)
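Here's a rough Python sketch of the idea, with two invented trials measuring fatigue on different scales: the raw differences in means aren't comparable, but dividing each by its pooled standard deviation puts them on the same footing.

```python
def smd(mean_treat, mean_control, sd_treat, sd_control, n_treat, n_control):
    """Standardized mean difference: the raw difference in means,
    expressed in units of the pooled standard deviation."""
    pooled_sd = (((n_treat - 1) * sd_treat**2 +
                  (n_control - 1) * sd_control**2) /
                 (n_treat + n_control - 2)) ** 0.5
    return (mean_treat - mean_control) / pooled_sd

# Two made-up trials, each measuring fatigue on a different scale
# (lower score = less fatigue, so a negative SMD favors exercise):
trial_a = smd(30, 36, 12, 12, 40, 40)      # scale scored 0-52
trial_b = smd(4.1, 5.1, 2.0, 2.0, 60, 60)  # scale scored 0-10

print(f"trial A: SMD = {trial_a:.2f}")  # about -0.5
print(f"trial B: SMD = {trial_b:.2f}")  # also about -0.5 - now comparable
```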

There's a lot you can make sense of when you know what the means mean!

Feel like testing your knowledge of the mean, median, and mode? (The mode is the number in a set that occurs the most often: so if our example had been 2 + 3 + 4 + 4, then the mode would have been 4.) Try the Khan Academy quiz. Interested in the ancient roots of averages? Examples from Herodotus, Thucydides, and Homer here.

Wednesday, September 30, 2015

Clinical trials are complicated enough when everything goes pretty much as expected. When it doesn't, the dilemma of continuing or stopping can be excruciatingly difficult. Some of the greatest dramas in clinical research are going on behind the scenes around this. Even who gets to call the shot can be bitterly disputed.

A trial starts with a plan for how many people have to be recruited to get an answer to the study's questions. This is calculated based on what's known about the chances of benefits and harms, and how to measure them.

Often a lot is known about all of this. Take a trial of antibiotics, for example. How many people will end up with gastrointestinal upsets is fairly predictable. But often the picture is so sketchy it's not much more than a stab in the dark.

It's hard enough to agree if there's uncertainty at any time! But the ground can shift gradually, or even dramatically, while a trial is chugging along.

I think it's helpful to think of this in 2 ways: a shift in knowledge caused by the experience in the trial, and external reasons.

Internal issues that can put the continuation of the trial in question include:

Not being able to recruit enough people to participate (by far the most common reason);

More serious and/or frequent harm than expected tips the balance;

Benefits much greater than expected;

The trial turns out to be futile: the differences in outcome between groups are so small that, even if the trial runs its course, we'll be none the wiser (PDF).

External developments that throw things up in the air or put the cat among the pigeons include:

A new study or other data about benefits or safety - especially if it's from another similar trial;

Pressure from groups who don't believe the trial is justified or ethical;

Commercial reasons - a manufacturer is pulling the plug on developing the product it's trialing, or just can't afford the trial's upkeep;

Opportunity costs for public research sponsors have also been argued as a reason to pull the plug for possible futility.

Sometimes several of those things happen at once. Stories about several examples are in a companion post to this one over at
Absolutely Maybe. They show just how difficult these decisions are - and the mess that stopping a trial can leave behind.

Trials that involve the risk of harm to participants should have a plan for monitoring the progress of the trial without jeopardizing the trial's integrity. Blinding or masking the people assessing outcomes and running the trial is a key part of trial methodology (more about that
here). Messing with that, or
dipping into the data often, could end up leading everyone astray. Establishing stopping rules before the trial begins is the safeguard used against that - along with a committee of people other than the trial's investigators monitoring interim results.

Although they're called stopping "rules", they're actually more guideline than rule. And other than having it done independently of the investigators, there is no one widely agreed way to do it - including the role of the sponsors and their access to interim data.

Some methods focus on choosing a one-size-fits-all threshold for the data in the study, while others are more Bayesian - taking external data into account. There is a detailed look at this in a
2005 systematic review of trial data monitoring processes by Adrian Grant and colleagues for the UK's National Institute for Health Research (NIHR). They concluded there is no strong evidence that the data should stay blinded for the data monitoring committee.

A
2006 analysis of HIV/AIDS trials stopped early because of harm found that only 1 out of 10 had established a rule for this before the trial began - though it's more common these days. A
2010 review of trials stopped early because the benefits were greater than expected found that 70% mentioned a data monitoring committee (DMC). (These can also be called data and safety monitoring boards (DSMBs) or data monitoring and ethics committees (DMECs).)

Despite my cartoon of data monitoring police, DMCs are only advisors to the people running the trial. They're not responsible for the interpretation of a trial's results, and what they do generally remains confidential. Who other than the DMC gets to see interim data, and when, is a debate that can get very heated.

Clinical trials only started to become common
in the 1970s.
Richard Stephens writes that it was only in the 1980s, though, that keeping trial results confidential while the trial is underway became the expected practice. In some circumstances, Stephens and his colleagues argue, publicly releasing interim results while the trial is still going on can be a good idea. They talk about examples where the release of interim results saved trials that would have foundered because of lack of recruitment from clinicians who didn't believe the trial was necessary.

One approach when there's not enough knowledge to make reliable trial design decisions is a type of trial called an
adaptive trial. It's designed to run in steps, based on what's learned. About 1 in 4 might adapt the trial in some way (
PDF). It's relatively early days for those.

We also need to know more about when and how to bring people participating in the trial into the loop - including having community representation on DMCs. Informing participants more at key points would mean some leave. But most might stay, as they did in the Women's Health Initiative hormone therapy trials (
PDF) and one of the
AZT trials in the earlier years of the HIV epidemic.

There is one clearcut issue here. And that's the need to release the results of any trial when it's over, regardless of how or why it ended. That's a clear ethical obligation to the people who participated in the trial - the
desire to advance knowledge and help others is one of the reasons many people agree to participate. (More on this at the
All Trials campaign.)

Sunday, July 19, 2015

I used to think numbers are completely objective. Words, on the other hand, can clearly stretch out, or squeeze, people's perceptions of size. "OMG that spider is
HUGE!" "Where? What -
that little thing?"

Yes, numbers can be more objective than words. Take adverse effects of health care: if you use the word "common" or "rare", people won't get
as accurate an impression as if you use numbers.

But that doesn't mean numbers are completely objective. Or even that numbers are always better than words. Numbers get a bit elastic in our minds, too.

We're mostly good at sizing up the kinds of quantities that we encounter in real life. For example, it's pretty easy to imagine a group of 20 people going to the movies. We can conceive pretty clearly what it means if 18 say they were on the edge of the seats the whole time.

There's an evolutionary theory about this, called ecological rationality. The idea is, our ability to reason with quantities developed in response to the quantities around us that we frequently need to mentally process. (More on this in Brase [
PDF] and Gigerenzer and Hoffman [
PDF].)
Whatever the reason, we're just not as good at calibrating risks that are lower frequency (Yamagishi [
PDF]). We're going to get our heads around 18 out of 20 well. But 18,000 out of 200,000? Not so much. We'll do pretty well at 1 out of 10, or 1 out of 100, though.

And big time trouble starts if we're reading something where the denominators are jumping around - either toggling from percent to per thousand and back, or saying "7 out of 13 thought the movie was great, while 4 out of 19 thought it was too scary, and 9 out of 17 wished they had gone to another movie". We'll come back to this in a minute. But first, let's talk about some key statistics used to communicate the effects of health care.
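Putting jumping denominators on a common footing is usually just a matter of converting everything to percentages - for example, using those made-up movie numbers:

```python
# The three mismatched denominators from the movie example,
# converted to one common footing (percent):
views = [("thought it was great", 7, 13),
         ("thought it was too scary", 4, 19),
         ("wished they'd gone to another movie", 9, 17)]

for view, count, total in views:
    print(f"{count} out of {total} {view}: {count / total:.0%}")
```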

Statistics - where words and numbers combine to create a fresh sort of hell!

First there's the problem of the elasticity in the way our minds process the statistics. That means that whether they realize it or not, communicators' choice of statistic can be manipulative. Then there's the confusion created when people communicate statistics with words that get the statistics wrong.

Let's look at some common measures of effect sizes: absolute risk (AR), relative risk (RR), odds ratio (OR), and number needed to treat (NNT). (The evidence I draw on is summarized
in my long post here.)

Natural frequencies are the easiest thing for people generally to understand. And getting more practice with natural frequencies might help us to get better at reasoning with numbers, too (Gigerenzer again [
PDF]).

Take our movie-goers again. Say that 6 of the 20 were hyped-up before the movie even started. And 18 were hyped-up afterwards. Those are natural frequencies. If I give you those "before and after" numbers in percentages, that's "absolute risk" (AR). Lots of people (but not everybody) can manage the standardization of percentages well.

But if I use relative risks (RR) - people were 3 times as likely to be hyped-up after seeing that movie - then the all-important context of proportion is lost. That's going to sound like a lot, whether it's a tiny difference or a huge difference. People will often react to that without stopping to check, "yes, but from what to what?" From 6 to 18 out of 20 is a big difference. But going from 1 out of a gazillion to 3 out of a gazillion just ain't much worth crowing or worrying about.

RRs are critically important: they're needed for calculating a personalized risk if you're not at the same risk as the people in a study, for example. But if it's the only number you look at, you can get an exaggerated idea.

So sticking with absolute risks or natural frequencies, and making sure the baseline is clear (the "before" number), is better at helping people understand an effect. Then they can put their own values on it.

The number needed to treat takes the absolute change and turns it upside down. (Instead of calculating the difference out of 100, it's 100 divided by the difference.) So instead of the constant denominator of 100, you now have denominators that change: instead of 60% of people being hyped-up because of the movie, it becomes NNT 1.7 (1.7 people have to see the movie for 1 person to get hyped-up).
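The arithmetic behind all of these measures is simple enough to sketch. Here's the movie example worked through in Python (using the post's invented numbers, not real data):

```python
# The movie example: 6 of 20 hyped-up before, 18 of 20 after.
# (Invented numbers from the post, for illustration only.)

before, after, n = 6, 18, 20

ar_before = before / n      # absolute risk before: 0.30 (30%)
ar_after = after / n        # absolute risk after:  0.90 (90%)

arr = ar_after - ar_before  # absolute difference: 60 percentage points
rr = after / before         # relative risk: 3.0 ("3 times as likely"; the /n cancels)
nnt = 1 / arr               # number needed to treat: about 1.7

print(f"AR: {ar_before:.0%} -> {ar_after:.0%}")
print(f"Difference: {arr:.0%}, RR: {rr:.0f}x, NNT: {nnt:.1f}")
```

Notice that the RR of 3 would be exactly the same if the numbers were 1 and 3 out of a gazillion: only the absolute numbers preserve the sense of proportion.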

NNT is the anti-RR if you like: RRs exaggerate, NNTs minimize. Both can mislead - and that can be unintentional or deliberate.

When it comes to communicating with people who need to use results, I think using only statistics that will frequently mislead, just because the communicator prefers them, is paternalistic: it denies people the right to form an impression based on their own values. Like all forms of paternalism, that's sometimes justified. But there's a problem when it becomes the norm.

The NNT was developed in the 1990s [
PDF]. It was meant to do a few things - including counteracting the exaggeration of the RR. Turns out it overshot the mark there! It was also intended to be easier to understand than the odds ratio (OR).

The OR brings us to the crux of the language problems. People use words like odds, risks, and chances interchangeably. Aaarrrggghhh!

A risk in statistics is what we think of as our chances of being in the group: a 60% absolute risk means a 60 in 100 (or 6 in 10) "chance".

An odds ratio in statistics is like odds in horse-racing and other gambling. It factors in both the odds of "winning" versus the odds of "losing". (If you want to really get your head around this, check out
Know Your Chances by Woloshin, Schwartz, and Welch. It's a book that's been
shown in trials to work!)

The odds ratio is a complicated thing to understand, especially if it's embedded in confusing language. It's a very sound way to deal with data from some types of studies, though. So you see odds ratios a lot in
meta-analyses. (If you're stumped about getting a sense of proportion in a meta-analysis, look at the number of events and the number of participants - they are the natural frequencies.)
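To see how risks and odds diverge, here's a hedged sketch sticking with the movie numbers (invented for illustration, not from any real study):

```python
# Contrasting risk, odds, and their ratios with the same invented numbers:
# 18 of 20 hyped-up after the movie, versus 6 of 20 in a comparison group.

def risk(events, total):
    """Events out of everyone: what we usually mean by 'chance'."""
    return events / total

def odds(events, total):
    """Events versus non-events, as in gambling."""
    return events / (total - events)

rr = risk(18, 20) / risk(6, 20)   # relative risk: 3
or_ = odds(18, 20) / odds(6, 20)  # odds ratio: (18/2) / (6/14) = 21

print(f"RR = {rr:.0f}, OR = {or_:.0f}")
# When events are this common, the OR (21) is far larger than the RR (3):
# reading an odds ratio as if it were a risk ratio badly exaggerates the effect.
```

When events are rare, the OR and RR are close; the rarer the event, the less the two diverge. That's one reason the confusion between them so often goes unnoticed.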

There's one problem that all of these ways of portraying risks/chances have in common: when people start putting them in sentences, they frequently get the language wrong. So they can end up communicating something entirely other than what was intended. You really need to double-check exactly what the number is, if you want to protect yourself from getting the wrong impression.

OK, then, so what about "pictures" to portray numbers? Can that get us past the problems of words and numbers? Graphs, smile-y versus frown-y faces, and the like? Many think this is "the" answer.
But...

This is going to be useful in some circumstances, misleading in others.
As Gerd Gigerenzer and Adrian Edwards put it: "Pictorial representations of risk are not immune to manipulation either". (A topic for another time, although I deal with it a little in the "5 shortcuts" post listed below.)

Where does all this leave us? Few researchers reporting data have the time to invest in keeping up with the literature on communicating numbers - so while we can plug away at improving the quality of reporting of statistics, there's no overnight solution there.

Getting the hang of the common statistics yourself is one way. But the two most useful all-purpose strategies could involve detecting bias.

One is to sharpen your skills at detecting people's ideological biases and use of spin. Be on full alert when you can see someone is utterly convinced and trying to persuade you with all their chips on a particular way of looking at data - especially if it's data on a single outcome. If the question matters to you,
beware of the too-simple answer.

The second? Be on full alert when you see something you really want, or don't want, to believe. The biggest bias we have to deal with is our own.

Sunday, February 8, 2015

Deciphering trial outcomes can be a tricky business. As if many measures aren't hard enough to make sense of on their own, they are often combined in a complex maneuver called a composite endpoint (CEP) or composite outcome. The composite is treated as a single outcome. And journalists often phrase these outcomes in ways that give the impression that each of the separate components has improved.

Here's an example from the
New York Times, reporting on the results of a major trial from the last American Heart Association conference:

That individual statement sounds like the drug reduced deaths, bypasses, stents, and hospitalization for unstable angina, doesn't it? But it didn't. The modest effect was on non-fatal heart attacks and stroke only.*

CEPs are increasingly common: by 2007,
well over a third of cardiovascular trials were using them. CEPs are a clinical trial shortcut because you need fewer people and less time to hit a jackpot. A trial's main pile of chips is riding on its pre-specified
primary outcome: the one that answers the trial's central, most important question.

The primary outcome determines the size and length of the trial, too. For example, if the most important outcome for a chronic disease treatment is to increase the length of people's lives, you would need a lot of people to get enough events to count (the event in this case would be death). And it would take years to get enough of those events to see if there's anything other than a dramatic, sudden difference.

But if you combine it with one or more other outcomes - like non-fatal heart attacks and strokes - you'll get enough events much more quickly. Put in lots, and you're really hedging your bets.

It's a very valuable statistical technique - but it can go haywire. Say you have 3 very serious outcomes that happen about as often as each other - but then you add another component that is less serious and much more common. The number of less serious events can swamp the others. Everything could even be riding on only one less serious component. But the CEP has a very impressive name - like "serious cardiac events." Appearances can be deceptive.
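Here's a sketch of that swamping effect with invented event counts (none of these numbers are from a real trial):

```python
# Invented event counts showing how one common, less serious component
# can dominate a composite endpoint.

components = {
    "death": 12,
    "non-fatal heart attack": 10,
    "stroke": 11,
    "hospitalization": 140,  # least serious, but much more common
}

total = sum(components.values())
print(f'Composite "serious cardiac events": {total} events')
for name, count in components.items():
    print(f"  {name}: {count} ({count / total:.0%} of the composite)")
```

Here about 81% of the composite's events are hospitalizations, so a treatment that only reduced hospitalizations could drive the whole "serious cardiac events" result.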

Enough data on the nature of the events in a CEP should be clearly reported so that this is obvious,
but it often isn't. And even if the component events are reported deep in the study's detail, don't be surprised if it's not pointed out in the abstract, press release, and publicity!

There are several different ways a composite can be constructed, including use of techniques like weighting that need to be transparent. Because it's combining events, there has to be a way of dealing with what happens when more than one event happens to one person - and that's not always done the same way. The definitions might make it obvious, the most serious event might count first according to a hierarchy, or the one that happened to a person first might be counted. But exactly what's happening often won't be clear - maybe even
most of the time.
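To make one of those counting rules concrete, here's a hypothetical sketch of hierarchy-based counting, where each person counts once, for their most serious event. The hierarchy and per-person event lists are invented for illustration:

```python
# Hierarchy-based counting: each participant contributes at most one event
# to the composite - their most serious one. (Invented example data.)

HIERARCHY = ["death", "heart attack", "stroke", "hospitalization"]  # most serious first

participants = [
    ["hospitalization", "heart attack"],  # counts as heart attack only
    ["stroke"],
    ["hospitalization"],
    [],                                   # no event
]

def most_serious(events):
    for outcome in HIERARCHY:
        if outcome in events:
            return outcome
    return None

events = [most_serious(p) for p in participants]
counted = [e for e in events if e is not None]
print(counted)  # ['heart attack', 'stroke', 'hospitalization']
```

A trial counting every event, or counting whichever event came first in time, would tally this same data differently - which is why it matters that the rule is reported.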

The biggest worry, though, is when researchers play the slot machine in my cartoon (what we call
the pokies, "Downunder"). I've stressed the dangers of hunting over and over for a statistical association (
here and
here). The
analysis by Lim and colleagues found some suggestion that component outcomes are sometimes selected to rig the outcome. If it wasn't the pre-specified primary outcome, and it wasn't specified in the original entry for it in a
trials register, that's a worry. Then it wasn't really a tested hypothesis - it's a new hypothesis.

Composite endpoints, properly constructed, reported, and interpreted are essential to getting us decent answers to many questions about treatments. Combining death with serious non-fatal events makes it clear when there's a drop in an outcome largely because people died before that could happen, for example. But you have to be very careful once so much is compacted into one little data blob.

Sunday, November 30, 2014

Replacing outcomes that can take years, or even decades, to emerge with ones you can measure much earlier makes clinical research much simpler. This kind of substitute outcome is called a surrogate (or intermediate) endpoint or outcome.

Surrogates are often biomarkers - biological signs of disease or a risk factor of disease, like cholesterol in the blood. They are used in clinical care to test for, or keep track of, signs of emerging or progressing disease. Sometimes, like cholesterol, they're the target of treatment.

The problem is, these kinds of substitute measures aren't always reliable. And sometimes we find that out in the hardest possible way.

The risk was recognized as soon as the current methodology of clinical trials was being developed in the 1950s.
Austin Bradford Hill, who played a leading role, put it bluntly: if the "rate falls, the pulse is steady, and the blood pressure impeccable, we are still not much better off if unfortunately the patient dies."

That famously happened with some drugs that controlled cardiac arrhythmia - irregular heartbeat that increases the chances of having a heart attack. On the basis of ECG tests that showed the heartbeat was regular, these drugs were prescribed for years before a trial showed that they were causing tens of thousands of premature deaths, not preventing them. That kind of problem has
happened too often for comfort.

But one phase III trial,
RILOMET-1, quickly showed an increase in the number of deaths in people using the drug. We don't know how many yet - but it was enough for the company to decide to end all trials of the substance.

This drug targets a biomarker associated with worse disease outcomes, an area seen by some as
transforming gastric cancer research and treatment. Others see
considerable challenges, though - and what happened to the participants in the RILOMET-1 trial underscores why.

There is a lot of controversy about surrogate outcomes - and debates about what's needed to show that an outcome or measure is a
valid surrogate we can rely on. They can lead us to think that a treatment is
more effective than it really is.

Yet
a recent investigative report found that cancer drugs are being increasingly approved based only on surrogate outcomes, like "progression-free survival." That measures biomarker activity rather than overall survival (when people died).

It can be hard to recognize at first what's a surrogate and what's an actual health outcome. One rule of thumb: if you need a laboratory test of some kind, it's more likely to be a surrogate. Symptoms of the disease you're concerned about, or harm caused by the disease, are the direct outcomes of interest. Sometimes those are specified as "patient-relevant outcomes."

Many surrogate outcomes are incredibly important, of course -
viral load for HIV treatment and trials for example. But in general, when clinical research results are based only on surrogates, the
evidence just isn't as strong and reliable as it is for the outcomes we are really concerned about.

Sunday, October 12, 2014

I can neither confirm nor deny that Cecil is now a participant in one of the there-is-no-limit-to-the-human-lifespan resveratrol studies at
Harvard's "strictly guarded mouse lab"! If he is, I'm sure he's even more baffled by the humans' hype over there.

Resveratrol is the antioxidant in grapes that many believe makes drinking red wine healthy. And it's a good example of how research on animals is often terribly misleading and misinterpreted. I've written about it over at
Absolutely Maybe if you're interested in a classic example of the rise and fall of animal-research-based hype (or more detail about resveratrol).

But this week, it's media hype about a study using human stem cells in mice in another lab at Harvard that's made me ratty. You could get the idea that a human trial of a "cure" for type 1 diabetes is just a matter of time now - and not a lot of time at that. According to the
leader of the team, Doug Melton, "We are now just one preclinical step away from the finish line."

An effective treatment that ends the need for insulin injections would be incredibly exciting. But we see this kind of claim from laboratory research all the time, don't we? How often does it work out - even for the studies that
are at "the finishing line" for animal studies?

Bart van der Worp and colleagues wrote an excellent paper explaining why it so often doesn't. It's not just that other animals are so different from humans. We're far less likely to hear of the failed animal results than we are of human trials that don't work out as hoped. That bias towards positive published results draws an over-optimistic picture.

As well as fundamental differences between species, van der Worp points to other common issues that reduce the applicability for humans of typical studies in other animals:

The animals tend to be younger and healthier than the humans who have the health problem;

They tend to be a small group of animals that are very similar to each other, while the humans with the problem are a large very varied group;

So how does the Harvard study fare on that score? They used stem cells to develop insulin-producing cells that appeared to function normally when transplanted into mice. But this was the very early stages. When it came to the test they reported on the ones with diabetes, there were only 6 (young) mice who got the transplants (and 1 died) (plus a comparison group). Gender was not reported - and as is common in laboratory animal studies, there wasn't lengthy follow-up. This was an important milestone, but there's a very long way to go here. Transplants in humans face
a lot of obstacles.

Van der Worp points to another set of problems: inadequacies in research methods - ones we've learned about over time from human research - that bias the proceedings too much, including problems with statistical analyses.
Jennifer Hirst and colleagues have studied this too. They concluded that so many studies were bedeviled by issues such as lack of randomization and blinding by those assessing outcomes, that they should never have been regarded as being "the finishing line" before human experimentation at all.

There's good news though!
CAMARADES is working to improve this - with the same approach for chipping away at these problems as in human trials: by slogging away at biased methodologies and publication bias. And pushing for good quality systematic reviews of animal studies before human trials are undertaken. It's well worth half an hour to watch
the wonderful talk by Emily Sena at Evidence Live 2015.

Laboratory animal research may be called "preclinical," but even that jargon is a bit of over-optimistic marketing. Most of what's tried in the lab will never get near human trials. And when it does, it will mostly be disappointing. Laboratory research is needed, and encouraging progress is great. But people should definitely not be getting their hopes up too much about it.

~~~~

The National Institutes of Health (NIH) addressed the issue of gender in animal experiments earlier in 2014. After I wrote this post, the NIH also released proposed guidelines for reporting preclinical research.

Thanks to Jonathan Eisen for adding a link for the full text of the paper to PubMed Commons, as well as to a blog post by Paul Knoepfler discussing the context of the stem cell work by Felicia Pagliuca, Doug Melton and colleagues. NHS Behind the Headlines have also analyzed and explained this study.

Thanks to Jim Johnson for pointing out an oversight: that animal studies - this one included - can also suffer from having too little follow-up.

Interest declaration: I'm an academic editor at one of the journals whose papers on animal research I commended (PLOS Medicine) and on the human ethics advisory group of another (PLOS One), but I had no involvement in either paper.

Update: Checked, post and cartoon refreshed, and link to Sena's talk at Evidence Live on 5 December 2015.