Is simple best? Randomization methods and selection bias

New research published this month in Trials explores which methods are used to prevent selection bias during randomization in unblinded randomized controlled trials. Here, Shaun Treweek, founder of Trial Forge, comments on what this research tells us about current randomization methods and discusses a possible way forward for preventing selection bias in future trials.

Donald Rumsfeld famously spoke about known knowns, known unknowns and unknown unknowns. He was talking about Iraq, but his categorization, though unwieldy at first, captures rather nicely the problem we face when we want to compare treatment A with treatment B. There are things we know will affect our outcomes and things we think might affect them. But what if there’s something we neither know about nor could know about, an unknown unknown?

The tool of choice to crack this nut is the randomized trial. In its simplest form it neither knows nor cares about Rumsfeld’s categorization and blindly distributes participants and their myriad characteristics between the trial groups, leaving only the thing being evaluated as the difference between them. Genius.
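To make this concrete, here is a minimal sketch of simple randomization in Python (illustrative code of my own, not from the Trials article; the function name and seed are arbitrary choices):

```python
import random

def simple_randomization(n_participants, seed=None):
    """Allocate each participant to arm A or B by an independent fair coin flip.

    Because every allocation is independent of all the others, knowing the
    previous assignments tells a recruiter nothing about the next one.
    """
    rng = random.Random(seed)
    return [rng.choice(["A", "B"]) for _ in range(n_participants)]

# Group sizes may end up unequal, but the sequence is unpredictable.
print(simple_randomization(20, seed=42))
```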

But what if we get cleverer and tweak our randomization to deal with some of those known knowns and known unknowns? What then?

Then we should be concerned, which is what Brennan Kahan, Sunita Rehal and Suzie Cro conclude in their recent study published in Trials on selection bias in unblinded trials. They were interested in the use of restricted randomization (e.g. stratification by site, minimization) and in whether recruiters at sites could guess, with better than 50% accuracy, the group to which the next participant at their site would be randomized.

This is important. If we can guess allocation we are at risk of selection bias because some of the unknown unknowns in the heads of recruiters might start to influence who gets in the trial and when. In other words, randomization starts to lose some of its shine.
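To see why guessing can beat 50%, consider one common form of restricted randomization: permuted blocks within a site. The sketch below is my own illustration, not code from the study; the block size and guessing strategy are assumptions. It shows that a recruiter who has inferred the block size and can see earlier allocations at their site can do well simply by guessing the arm that is behind in the current block:

```python
import random

def permuted_blocks(n_blocks, block_size=4, seed=None):
    """Restricted randomization: within each block, exactly half of the
    participants go to each arm, keeping group sizes balanced at the site."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

def guess_next(history, block_size=4):
    """A recruiter's strategy: within the current block, guess the arm that has
    appeared less often so far (guess 'A' on a tie). Assumes the recruiter has
    worked out the block size and can see earlier allocations at the site."""
    pos = len(history) % block_size            # position within the current block
    current_block = history[len(history) - pos:]
    return "A" if current_block.count("A") <= current_block.count("B") else "B"

# Simulate the strategy's accuracy; blind guessing would score 50%.
seq = permuted_blocks(n_blocks=2500, block_size=4, seed=1)
hits = sum(guess_next(seq[:i]) == seq[i] for i in range(len(seq)))
print(f"Correct guesses: {hits / len(seq):.1%}")  # roughly 70% for block size 4
```

The last allocation in every block is fully determined by the ones before it, which is why small blocks at unblinded sites are particularly risky.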

Kahan and colleagues looked at 152 trials published in four journals in 2010; 149 gave no information on whether recruiters were blinded to previous allocations. Indeed, most said nothing about who was involved in recruitment, or whether those individuals had any other role in the trial.

Reporting of recruitment information is stubbornly poor, as Hewitt and Torgerson reported in 2008 and Kirsty Loudon and I did in 2011. Only four of those 152 trials used simple randomization while 95 (63%) used some form of restricted randomization. The rest didn’t say what they did.

Kahan and colleagues conclude that their findings suggest that a substantial proportion of unblinded trials are at risk of selection bias, a conclusion I would agree with having read their article. Indeed, their article makes me reflect on what happened in my own trials, or more importantly perhaps, what I will do in my future trials.

Trials are important. They are right at the heart of evidence-informed healthcare and around 25,000 are published every year. Thousands of people around the world are beavering away right now on trials that will be the feedstock of systematic reviews, which will in turn be the foundation of guidelines used by health professionals, policymakers and patients. The results of trials influence the health care received by millions of people.

Trials occupy this position because when done well they reduce bias, especially selection bias. Their pivotal role in healthcare means we always need them to be as good as they can be.

Kahan and colleagues suggest that this is not always the case even for something as fundamental as the choice and reporting of the randomization method.

At the other end of the trial pathway, one recent study found that 35% of published reanalyses of trial data led to changes in conclusions about which patients should be treated. It’s time to point the spotlight at how we design, conduct, analyze and report trials, as well as medical research more generally.

Trial Forge is a new initiative that aims to improve the efficiency of trials. We want to question why we do things the way we do, disseminate what we do know, and generate evidence to support our design, conduct, analysis and reporting decisions, rather than relying on what we did last time. We work with, among many others, the Medical Research Council Hubs for Trials Methodology Research (MRC-HTMR), the Cochrane Collaboration and the Health Research Board Trials Methodology Research Network (HRB-TMRN). Collaboration and coordination are key.

So what should we take from the article by Kahan and colleagues? A couple of years ago a colleague showed me the coin he flipped to do the randomization for one of his trials. We laughed at this low-tech approach. The recent Trials article suggests that we should be less quick to go high-tech and start our discussion of treatment allocation by asking ourselves what the justification is for moving away from simple randomization. And for Trial Forge, we’ll add this to the list of messages we need to spread. Sometimes simple really is best.

One Comment

Gregory A. Anderson

The empirical method is vital for the sciences to maintain their validity, when applicable. Making the most of available knowledge on how to improve our art for accuracy must be part of our motive to continue our efforts. We know, industry-wide, that all the sciences struggle to hold themselves accountable. We must strive for excellence. Thanks for the heads-up here.
