Discovering and connecting data to build new algorithms, AI and machine learning approaches

Leveraging the UK’s renowned leadership in drug discovery – a modern approach to R&D

Virtual R&D is the delivery of medicines R&D using multiple best-in-class partners, ensuring that the critical value-based experiments are done by the right people, data are captured, and the IP estate is secured.

Brokering easier access to consented patient data and samples

The UK has millions of samples and billions of data points collected from UK patients who have agreed that their samples and data can be used for research. But small UK medical research companies struggle to access them.

New methods to improve the translation of biomedical science towards clinical trial success

It is now over ten years since John Ioannidis published his essay 'Why Most Published Research Findings Are False'. The paper presented statistical arguments to remind the research community that a finding is less likely to be true if: 1) the study size is small, 2) the effect size is small, 3) the protocol is not pre-determined or not adhered to, and 4) the finding comes from a single research team. Suggested remedies included establishing the correct study size, reproducing results across multiple teams and improving the reporting standards of scientific journals.
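The statistical core of that argument can be made concrete with the positive predictive value (PPV) formula from the essay: for pre-study odds R, power 1−β and significance level α, the probability that a claimed finding is true is PPV = (1−β)R / (R − βR + α). A minimal sketch of the no-bias case (the numbers below are illustrative, not taken from the essay):

```python
def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Positive predictive value of a claimed research finding.

    prior_odds -- R, the pre-study odds that the tested relationship is true
    power      -- 1 - beta, the probability of detecting a true effect
    alpha      -- the significance threshold (type I error rate)

    Implements PPV = (1-beta)R / (R - beta*R + alpha), which simplifies to
    power*R / (power*R + alpha). Ioannidis's full treatment adds a bias
    term, omitted here for clarity.
    """
    return (power * prior_odds) / (power * prior_odds + alpha)


# A well-powered study of a plausible hypothesis:
print(ppv(prior_odds=1.0, power=0.8))   # ~0.94: most such findings are true

# A small, underpowered study of a long-shot hypothesis:
print(ppv(prior_odds=0.1, power=0.2))   # ~0.29: most such findings are false
```

The sketch shows why small studies (low power) of unlikely hypotheses (low prior odds) produce literatures in which most positive findings are false, even when every individual test is performed correctly.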

So, have any of these suggestions been taken forward? Open science, a movement supporting increased rigour, accountability and reproducibility of research, is increasingly promoted, and some scientific journals now impose stricter reporting requirements. Whether these recommendations are actually followed remains to be established. Evidence from a systematic review we conducted on the asymmetric inheritance of cellular organelles highlights the problems in basic-science reporting and study design.

Of 31 studies (published between 1991 and 2015), not one performed a calculation to determine sample size, 16% did not report technical repeats and 81% did not report intra-assay repeats. We need to educate our future scientists in study design and impose stricter publication criteria.
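The a priori sample-size calculation that all 31 studies omitted is not onerous. For a two-sample, two-sided comparison of means with standardised effect size d (Cohen's d), significance level α and power 1−β, a standard normal approximation gives n per group ≈ 2((z₁₋α/₂ + z₁₋β)/d)². A sketch using only the Python standard library (the effect size and thresholds below are illustrative choices, not values from the review):

```python
import math
from statistics import NormalDist


def sample_size_per_group(effect_size: float,
                          alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Approximate n per group for a two-sample, two-sided test of means.

    Uses the normal approximation n = 2 * ((z_{1-alpha/2} + z_power) / d)^2,
    which slightly underestimates the exact t-based answer for small n.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = NormalDist().inv_cdf(power)          # quantile for 1 - beta
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)


# Detecting a medium effect (d = 0.5) at alpha = 0.05 with 80% power:
print(sample_size_per_group(0.5))  # 63 per group (a t-based calculation gives 64)

# Smaller effects demand far larger samples:
print(sample_size_per_group(0.2))  # 393 per group
```

The steep growth in required n as the effect shrinks is exactly why underpowered studies of small effects are so unreliable.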

Impartial and robust evaluations of biomedical science are required to determine which new biomedical discoveries will be clinically predictive. We should be concerned by the lack of reproducibility in biomedical science because it is a major contributing factor in the low rate of clinical trial success (only 9.6% of phase 1 studies reach the approval stage). Our ability to judge which discoveries will have real-life effects is crucial.

What can be learned from clinical research?

A great deal. Published clinical research must follow strict reporting guidelines to ensure transparency, and decision making is based on unbiased evidence gathered by systematic review. A systematic review provides a methodology for identifying all the research evidence relating to a given research question.

Importantly, systematic reviews are unbiased (not opinion-based) and are carried out according to strict guidelines to ensure the clarity and reproducibility of their results.

They form the basis of all decision making in clinical research. Importantly, the evidence gathered by a review is assessed for quality, allowing the review to present results graded according to the 'best evidence'. The quality of clinical evidence is judged primarily by study design (randomisation, blinding of participants and research personnel, and so on). Many risk-of-bias tools are available, each appropriate to a different study design. The GRADE approach incorporates the findings of a systematic review alongside study size, imprecision, consistency of evidence and indirectness, providing a clear summary of the strength of evidence to guide recommendations for decision making.

It makes sense for decision making in biomedical research to be judged in a similarly open and unbiased manner. Astoundingly, the choice of which biomedical discovery merits further investment is usually an opinion-based decision. Big steps have been made to introduce systematic review into preclinical research: there are now reporting guidelines for animal studies and risk-of-bias tools, with study quality again judged on design.

At the level of basic bioscience, one can argue that judging the quality of evidence should focus on more fundamental aspects of the research: 1) how valid is the chosen model (how well does it recapitulate the human disease of interest)? 2) how valid is the chosen marker (how well does it identify the target)? 3) how reliable is the result? Pioneering work by Collins and Lang has introduced tools for making such judgements. These tools aim to directly address the issues raised by John Ioannidis and to highlight the strengths and gaps in a given research base.

Prior to this, she was a Wellcome Fellow in cancer research and tissue development, focusing on alternatives to animal testing (human tissue culture, organoids, 3D models and stem cells). Drawing on her expertise in both fields, she is researching the value of evidence-based decision making in biomedical science and clinical translation, and hopes to provide solutions to de-risk investment in this field.
