Abstract

Perhaps the most fundamental dimension along which trial quality is, or should be, judged is the one that made it into the name, RANDOMIZATION. How well do we, as a society, do in ensuring that only the best randomization procedures are used in the pinnacle of evidence-based medicine, the randomized trial? Sadly, the answer is not very well, and this is uniform across all disease areas, all journals and all research groups. The emperor’s new clothes have yet to be exposed, and so the charade continues unabated, with the near ubiquitous choice of blocked randomization, despite its offering only weak encryption, over the vastly superior maximal procedure, offering strong encryption. Nor is this choice even justified. Nowhere in the literature is there an argument to suggest that blocked randomization is superior to, or even equivalent to, the maximal procedure. But by avoiding the issue altogether, researchers are able to implicitly justify the use of an unjustifiable procedure, one that could never be justified explicitly. The best we can do is point out the folly.

Editorial

Perhaps the most fundamental dimension along which trial quality is, or should be, judged is the one that made it into the very name of the randomized clinical trial, RANDOMIZATION. So it is not only fair, but also imperative, to ask how well we, as a society, do in ensuring that only the best randomization procedures are used in the pinnacle of evidence-based medicine, the randomized trial. Sadly, the answer is not very well, and this is uniform across all disease areas, all journals and all research groups.

The usual methods for evaluating trial quality, such as the Jadad score, represent nothing more than a coarse Eddington fishing net, perhaps suitable for catching outright fraud and those major biases and flaws that have made it into prime time, but not the equally important ones whose 15 minutes of fame are yet to come. This latter class of smaller fish, the ones that slip right through the net, includes flawed randomization procedures that offer only weak encryption, despite the fact that strong encryption is both necessary and readily available. This particular version of the emperor’s new clothes has yet to be exposed as such, and so the charade continues unabated, with the near ubiquitous choice of blocked randomization, despite its offering only weak encryption, over the vastly superior maximal procedure [1,2], which offers strong encryption.
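The weakness is easy to demonstrate. In a two-arm trial with permuted blocks, once one arm has filled its quota within the current block, every remaining slot in that block is known with certainty to anyone who can track the running counts. The following sketch is illustrative only (the function names, block size of 4, and seeds are my own choices, not drawn from the cited references); it counts how many allocations are fully determined before they are made.

```python
import random

def permuted_block_sequence(n_blocks, block_size=4, seed=0):
    """Generate a two-arm allocation list using permuted blocks."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq

def count_predictable(seq, block_size=4):
    """Count allocations that are deterministic in advance: once one
    arm holds its full within-block quota, every remaining slot in
    that block is forced to the other arm."""
    predictable = 0
    for start in range(0, len(seq), block_size):
        block = seq[start:start + block_size]
        counts = {"A": 0, "B": 0}
        for arm in block:
            # If either arm has already filled its quota, this slot is forced.
            if max(counts.values()) == block_size // 2:
                predictable += 1
            counts[arm] += 1
    return predictable

seq = permuted_block_sequence(n_blocks=250, block_size=4, seed=42)
print(count_predictable(seq), "of", len(seq), "allocations were deterministic")
```

With blocks of size 4, the final slot of every block is always forced, and two slots are forced whenever the block opens with a same-arm pair, so at least a quarter of all allocations are perfectly predictable to an observer who has deduced the block size.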

It is worth noting, even emphasizing, that this choice is never justified in practice, nor could it be. Nowhere in the literature is there an argument to suggest that blocked randomization is superior to, or even equivalent to, the maximal procedure. This is a false controversy. But by avoiding the issue altogether, researchers can implicitly justify the use of an unjustifiable procedure that could never be justified explicitly. Each time they do, they not only invalidate their own trial and preclude the possibility of allocation concealment; they also set in motion a ripple effect that empowers other researchers to do the same. Each trial conducted with permuted block randomization lends perverse credibility to the method, which then becomes a standard, that much more likely to be used in future trials and that much harder to dislodge. And so the practice perpetuates itself in a vicious cycle that is a major contributor to the reproducibility crisis discussed so frequently in recent times.

The solution is rather obvious. We need to make statistics boring again [3] so that true quality is more influential than either novelty or, in the case of blocked randomization, frequency of use. If researchers were diligent in weighing their options, and serious about the public trust invested in them, then they would arrive at the only conclusion possible: use the maximal procedure instead of permuted blocks.
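For readers who want to see what the alternative looks like in practice, the core idea of the maximal procedure can be sketched in a few lines: sample uniformly from all allocation sequences whose running imbalance between arms never exceeds a maximal tolerated imbalance (MTI), by weighting each next assignment by the number of valid ways the sequence could be completed. This is my own illustrative sketch under that description, not code from the cited references; the function name, the `mti` parameter, and the path-counting approach are assumptions of the sketch.

```python
from functools import lru_cache
import random

def maximal_procedure(n, mti=2, seed=None):
    """Sample one two-arm allocation sequence, uniform over all
    sequences of length n (n even, equal arm totals) whose running
    imbalance |#A - #B| never exceeds mti at any point."""
    assert n % 2 == 0
    rng = random.Random(seed)

    @lru_cache(maxsize=None)
    def completions(i, d):
        # Number of valid ways to finish from step i with imbalance d.
        if abs(d) > mti:
            return 0
        if i == n:
            return 1 if d == 0 else 0
        return completions(i + 1, d + 1) + completions(i + 1, d - 1)

    seq, d = [], 0
    for i in range(n):
        # Weight each arm by how many valid sequences pass through it,
        # so every admissible full sequence is equally likely.
        a = completions(i + 1, d + 1)
        b = completions(i + 1, d - 1)
        arm = "A" if rng.random() < a / (a + b) else "B"
        seq.append(arm)
        d += 1 if arm == "A" else -1
    return seq

seq = maximal_procedure(20, mti=2, seed=7)
print("".join(seq))
```

Unlike permuted blocks, no allocation here is ever forced by a block boundary; an assignment becomes deterministic only when the imbalance sits at the MTI limit, which the investigator cannot reliably detect, and this is precisely the sense in which the procedure offers stronger encryption.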