Abstract
“…The customary test for an observed difference… is based on an enumeration of the probabilities, on the initial hypothesis that two treatments do not differ in their effects,… of all the various results which would occur if the trial were repeated indefinitely with different random samples of the same size as those actually used.”
--Peter Armitage, 1954
Randomization has been the hallmark of the clinical trial since Sir Bradford Hill introduced it in the 1948 streptomycin trial. An exploration of the early literature yields three rationales: (1) the incorporation of randomization provides unpredictability in treatment assignments, thereby mitigating selection bias; (2) randomization tends to ensure comparability in the treatment groups on known and unknown confounders (at least asymptotically); and (3) the act of randomization itself provides a basis for inference when random sampling is not conducted from a population model. Of these three, rationale (3) is often forgotten, ignored, or left untaught.
Today, randomization is a rote exercise, scarcely considered in protocols or medical journal articles. “Randomization was done by Excel” is a standard sentence that serves only to check the box requiring investigators to specify how the randomization was conducted. Yet the literature of the last century is rich with statistical articles on randomization methods and their consequences, authored by some of the greats of the biostatistics and statistics world. In this talk, we review some of this literature and describe very simple methods to rectify some of this oversight. We describe how randomization-based inference can be used for virtually any outcome of interest in a clinical trial.
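As a minimal sketch of the re-randomization idea captured in the Armitage quotation above (not the speaker's own procedure), the following Python snippet illustrates a two-sample randomization test under complete randomization; the outcome values and group sizes are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical outcomes from a two-arm trial (illustrative numbers only).
treatment = np.array([5.1, 4.8, 6.2, 5.9, 5.5, 6.1])
control   = np.array([4.2, 4.9, 4.4, 5.0, 4.6, 4.3])

outcomes = np.concatenate([treatment, control])
n_treat = len(treatment)
observed_diff = treatment.mean() - control.mean()

# Re-randomization test: under the null hypothesis that the treatments do not
# differ, every re-assignment of labels is equally likely, so we re-randomize
# many times and ask how often a difference at least as extreme arises.
n_rerandomizations = 10_000
count_extreme = 0
for _ in range(n_rerandomizations):
    perm = rng.permutation(outcomes)
    diff = perm[:n_treat].mean() - perm[n_treat:].mean()
    if abs(diff) >= abs(observed_diff):
        count_extreme += 1

p_value = (count_extreme + 1) / (n_rerandomizations + 1)
print(f"observed difference = {observed_diff:.3f}, randomization p-value = {p_value:.4f}")
```

The same scheme applies to essentially any test statistic (difference in means, ranks, proportions), with the reference distribution generated by the randomization procedure actually used in the trial rather than by an assumed population model.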
We conclude that randomization matters!
Organizer
Maroussa Zagoraiou