To make a call on whether the first phase of a clinical trial should go ahead, a board of experts weighs up the risks and benefits based on sound research. At least, that's the idea.

Now a recently published study of the evidence that informs these decisions suggests it might not be as rigorous as we'd like it to be.

Ethicists from Hannover Medical School and McGill University in Canada have taken a closer look at the materials used by review boards to greenlight phase I and phase II clinical trials.

Specifically, they analysed 109 applications gathered by three institutional review board chairpersons; these documents were used to approve trials between 2010 and 2016.

The applications – called investigator brochures – are meant to provide the review board with all of the evidence it needs to make a sound judgment on whether a clinical trial can safely proceed.

Such pre-clinical data tends to include the results of animal trials that demonstrate the efficacy and risks of a candidate medical treatment.

Among the 109 investigator brochures there were 708 efficacy studies that used animal models.

Surprisingly, nearly 9 out of 10 of those studies included no reference to a published report, making it hard to know whether the work would even pass peer review.

What was particularly alarming was the relative absence of methodological detail – 95 percent of the studies made no mention of the processes used to guard against bias.

The researchers also noted that 82 percent of the brochures reported only positive findings from their animal testing, raising questions about what was being left out.

An optimistic reading is that any negative results would simply mean abandoning efforts to proceed to a clinical phase.

“If the studies aren’t positive, then you wouldn’t go to an IRB to do a clinical trial,” Shai Silberberg, director of research quality at the US National Institute of Neurological Disorders and Stroke, told Science.

Silberberg wasn’t involved in the research, but thinks the results are worth paying attention to. “This is incredibly alarming.”

None of this should be taken to mean that the research itself was necessarily of poor quality, or that the pre-clinical results are unreliable.

It does, however, raise serious questions about what information institutional review boards are relying on when the data they're presented with is either unverified or flimsy.

It's possible that this kind of information is considered too commercially sensitive to publish, but even then some form of critical appraisal could be implemented, either among board members or within a separate review body.

Pre-clinical animal trials are standard fare for researchers wanting to check if their potential treatments behave as predicted.

Phase I clinical trials are the first stage to involve human testing, carried out on healthy volunteers to determine in detail how most bodies would respond in the absence of any illness.

It's rare for subjects to experience serious problems as a result of testing, but damaging side effects – and even deaths – aren't unheard of.

Keeping these risks at an absolute minimum is the goal of review boards, making it a concern that they have potentially inadequate information to predict whether a treatment is medically and ethically acceptable.

Requiring peer-reviewed publication of all pre-clinical animal trials would no doubt add layers of time and cost to an already long and expensive process.

Yet it's possible that demanding this level of evidence would pay off in the end: with barely one in eight treatments making it through clinical trials, knocking a few hopefuls out earlier could save considerable resources.

It should be noted that the investigator brochures all came from Germany-based boards. Still, the researchers have no doubt the problem is far more widespread.