Are trial reporting standards improving?

To ensure that randomized controlled trials live up to their gold standard status, their reporting must be accurate and detailed. With new guidance such as the CONSORT extensions and TIDieR being introduced, we would expect to see improvements in reporting standards, but is this actually the case? Research published today in Trials explores this by looking at the quality of reporting in trials testing drug adherence strategies before and after the introduction of the CONSORT extension.

Randomized controlled trials are the gold standard method for establishing whether a healthcare intervention, such as a new treatment or drug, works, with most researchers agreeing that trials can provide the most robust answers.

If we place such trust in trials then it is essential that their conduct and results are reported accurately and in enough detail for both scientists and the general public to understand the meaning of what is learned and to use the results wisely.

What the intervention is and what it involves must be clear, so that someone else can understand what is being offered, the work can be repeated, and the new intervention can be transferred to real-life practice and be of benefit.

An intervention could be a drug, a diet or other lifestyle changes; an exercise or an education program; or perhaps a combination of these. An intervention could also be a service provided by specialists where the content of the intervention is adjusted to meet the needs of individual people, both those receiving and delivering the intervention. Sometimes the ‘active or key ingredients’ are difficult to identify, and it is the overall effect of the intervention that is reflected in the results of a trial.

Our study

Our paper published today in Trials helps us understand how good researchers are at providing details when reporting the conduct and results of trials. We wanted to know whether reporting standards are improving following new guidance (the CONSORT extension) that has become available to help researchers and scientific journals raise their standards.

To test this we decided to compare trials published in the scientific literature over two five-year periods: 2002–2007, before the introduction of the CONSORT extension, and 2010–2015, after its introduction.

We chose to consider a specific field that has importance for healthcare worldwide. We focused on trials that tested a range of strategies to help people take their drugs as prescribed (adherence).

People in general are not very good at taking medications as prescribed, particularly if lots of drugs are involved over many months such as in treatments for HIV and tuberculosis. Often such conditions carry negative social connotations and people may need help to stick to their treatment programs to maximize their chances of staying well.

Focusing on this field we examined 42 trials published between 2002 and 2007 and 134 trials published between 2010 and 2015, from 112 peer-reviewed journals. Reading the published papers in detail, we noted exactly how each one described their drug adherence intervention.

We looked for 19 particular items, each a measure of reporting quality recommended in the CONSORT guidance or in the more recently available TIDieR suggestions. These included, for example, the rationale for the intervention, where it took place, who provided it, how it was provided, how long it was provided for, and details of what it was compared with.

What did we discover?

Overall, there was a lack of detail in the description of interventions, which makes it more difficult for the interventions to be repeated in research or used in healthcare practice.

Although over 80% of the trials in both time periods included 9 of the 19 items in their descriptions of their interventions, there was no significant difference in the quality of reporting over the 13-year period, and no evidence of improvement when comparing 2010–2015 with 2002–2007.

Perhaps we shouldn’t be surprised by what we found. Fewer than half of the 112 journals currently insist that authors follow the CONSORT or TIDieR guidance. It may help if more journals endorsed the guidance and ensured that all their papers adhered to it. Interventions may be difficult to describe when the key active ingredients are unclear, so increasing online access to supporting files at trial publication would provide opportunities to include more detail.

The bottom line is that if the intervention is poorly described, it cannot be replicated or used for the benefit of people who need it. It is possible that our example exposes one reason why people worldwide still find it difficult to stick to their treatment plans and take medications regularly.

Louise Jones has a background in clinical pathology and medicine. She recently retired from her academic post at University College London where for 10 years she led the Palliative Care Research team funded by the UK charity Marie Curie. She is interested in the clinical, social and existential aspects of life threatening and life limiting illnesses and has worked collaboratively across many specialties to increase understanding in these areas. She has particular interest in the developing and testing of complex healthcare interventions and in the conduct of trials.

Bridget Candy is a principal research fellow at the Marie Curie Palliative Care Research Department, within the Division of Psychiatry at University College London. Her main research interests are in the use of evidence synthesis in the development and evaluation of complex palliative care interventions. In her PhD (2015) she explored ways to understand the heterogeneity of complex interventions by testing various approaches to aligning qualitative evidence with trial-reported intervention content.