Ronald Fisher, a UK statistician, introduced the P value in the 1920s. He did not mean it to be a definitive test; he intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: that it was worth a second look. The idea was to run an experiment, then see whether the results were consistent with what random chance might produce. Researchers would first set up a 'null hypothesis' that they wanted to disprove, such as there being no correlation or no difference between two groups. Next they would play devil's advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as those actually observed. This probability is the P value.

Most look at a P value of 0.01 and say that there is a 1% chance of the findings being wrong. But this is incorrect. The P value cannot say this; you also need to know the odds that the real effect was there in the first place. A P value of 0.01 actually means that there is a 1% chance that results at least as extreme as these would occur when there is really no difference in the experiment, e.g. that a drug has no effect.
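That definition can be illustrated with a small simulation (a sketch, not from the talk; the function name, group size, and effect size are illustrative assumptions). We repeatedly generate two groups under a true null hypothesis of no difference and count how often the gap between group means is at least as extreme as the one observed:

```python
import random
import statistics

random.seed(42)

def simulated_p_value(observed_diff, n_per_group=50, trials=10_000):
    """Estimate the chance of a mean difference at least as extreme as
    observed_diff when the null hypothesis (no real effect) is true."""
    extreme = 0
    for _ in range(trials):
        # Both groups drawn from the same distribution: the null is true.
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        if abs(statistics.mean(a) - statistics.mean(b)) >= observed_diff:
            extreme += 1
    return extreme / trials

p = simulated_p_value(observed_diff=0.5)
print(p)  # small: a 0.5 SD mean difference is rare when the null is true
```

Note what the simulation does and does not tell you: it gives the probability of the data given no effect, not the probability that the finding itself is wrong.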


Consider 1000 hypotheses, of which only 10% are true.

Random error can make a hypothesis that is really false look true. These are false positives (FPs). Medicine accepts that this happens in the order of 1 in 20 times, so among 1000 hypotheses (where 100 are true positives and 900 are not) there are 45 FPs.

If there are 100 true positives and 45 false positives, then almost a third of the results that look positive would be wrong.

But it is worse than that. There is another type of error: false negatives, where there is a true effect but it is missed. Say 20% of the true findings fail to be detected (a figure that is difficult to quantify); in this case, that is 20 cases. Now researchers see 125 hypotheses as true, of which 45 are not.
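The arithmetic above can be checked in a few lines (a minimal sketch using the talk's own figures: 1000 hypotheses, 10% truly real, a 5% false-positive rate, a 20% false-negative rate):

```python
# Worked numbers from the talk.
hypotheses = 1000
truly_real = int(hypotheses * 0.10)        # 100 hypotheses with a real effect
truly_null = hypotheses - truly_real       # 900 with no real effect

false_positives = int(truly_null * 0.05)   # 45 nulls that look true (1 in 20)
false_negatives = int(truly_real * 0.20)   # 20 real effects that are missed
true_positives = truly_real - false_negatives   # 80 real effects detected

look_positive = true_positives + false_positives  # 125 results that look true
wrong_fraction = false_positives / look_positive  # 45/125 = 36% are wrong
print(look_positive, false_positives, round(wrong_fraction, 2))
```

So more than a third of the "positive" findings are false, even though every individual study used the conventional 1-in-20 threshold.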

A growing number of findings cannot be replicated, because many studies may not have found a real result in the first place. Perhaps we should only be looking at P values <0.005.


Since publication in 1990, results from the National Acute Spinal Cord Injury Study II (NASCIS II) trial have changed the way patients suffering an acute spinal cord injury (SCI) are treated. Though well-designed and well-executed, both NASCIS II and III failed to demonstrate improvement in primary outcome measures as a result of the administration of methylprednisolone. Post-hoc comparisons, although interesting, did not provide compelling data to establish a new standard of care in the treatment of patients with acute SCI. Evidence of the drug's efficacy and impact is weak and may only represent random events.

Renamed

In the late 1940s, before there was a polio vaccine, health authorities noted that polio cases increased with ice cream and soft drink consumption. Eliminating such treats was part of the advice given to combat the spread of the disease. In fact, polio was simply more common in summer, when people eat more ice cream. Hence: correlation vs causation.

UQ RORT: One of Australia's leading universities is investigating new concerns of possible academic misconduct by two former academics. The University of Queensland's Dr Caroline Barwood and Professor Bruce Murdoch published a peer-reviewed paper in the prestigious European Journal of Neurology, heralding a major breakthrough in the treatment of Parkinson's disease. The university made the unusual admission that it could find no data or evidence that the research was ever conducted. Before the article was retracted, the study's apparent success led to a number of grants. Ten months after the allegations of academic misconduct were first raised, and one month after the investigation was referred to the CMC, the university accepted part of a $300,000, five-year research fellowship on behalf of Dr Barwood.

Where we end up is this: what are we doing now that is harmful to our patients?

Don't just skim the article, and not just the abstract and conclusions.
Learn a little about stats.
Don't be fooled by a high-quality journal.
Know/review the literature on the topic, not just the article.

I don't recommend that you go home tonight and try a "booty call" with your partner. Perhaps read the article and find out the details.

Consistent and transparent

Be aware of your own biases, especially confirmation bias. Stop searching for information that confirms your own views. Read broadly. Don't believe a single article's findings; look for bodies of work around a topic.

Who do you believe, and what do you believe? Do you believe whoever speaks loudest?

Sit and contemplate your position… Put the patient (not the patient's leg) at the forefront of your focus. Be sceptical!

Transcript of "Cullen: Why most published research is wrong"

1.
“Why most published
research is wrong.”
Louise Cullen
(Clinician researcher)

3.
“It is everyone’s responsibility to
find out how to
ask questions systematically,
find answers from searching the
literature,
critically appraise the literature
and apply the results to
practice.”
Rinaldo Bellomo


5.
40 ingredients
associated with
cancer
Most single studies showed implausibly
large effects.