In Medical Decision Making: An International Journal of the Society for Medical Decision Making
Much evidence in comparative effectiveness research is based on observational studies. Researchers who conduct observational studies typically assume that there are no unobservable differences between the treatment groups under comparison. Treatment effectiveness is estimated after adjusting for observed differences between comparison groups. However, estimates of treatment effectiveness may be biased because of misspecification of the statistical model. That is, if the method of treatment effect estimation imposes unduly strong functional form assumptions, treatment effect estimates may be inaccurate, leading to inappropriate recommendations about treatment decisions. We compare the performance of a wide variety of estimation methods for the average treatment effect. We do so within the context of the REFLUX study from the United Kingdom. In REFLUX, participants were enrolled in either a randomized controlled trial (RCT) arm or an observational (patient preference) arm. In the RCT, patients were randomly assigned to either surgery or medical management. In the patient preference arm, participants chose either surgery or medical management. We attempt to recover the treatment effect estimate from the RCT using the data from the patient preference arm of the study. We vary the method of treatment effect estimation and record which methods are successful and which are not. We apply more than 20 different methods, including standard regression models as well as advanced machine learning methods. We find that simple propensity score matching methods provide the least accurate estimates relative to the RCT benchmark. We find variation in performance across the other methods, with some, but not all, recovering the experimental benchmark. We conclude that future studies should use multiple methods of estimation to fully represent the uncertainty arising from the choice of estimation approach.
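To make the propensity score matching idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) on simulated data: a covariate confounds treatment choice and outcome, a propensity score is estimated by logistic regression (fit here by plain gradient descent to avoid dependencies), and treated units are matched to their nearest-propensity controls. All variable names and simulation parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: X confounds treatment choice and outcome.
n = 2000
X = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * X))  # treatment more likely when X is high
T = rng.binomial(1, p_treat)
Y = 1.0 + 2.0 * T + 1.5 * X + rng.normal(size=n)  # true treatment effect = 2

# Step 1: estimate propensity scores P(T=1 | X) with logistic regression,
# fit by gradient descent on the log-loss.
Xd = np.column_stack([np.ones(n), X])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w -= 0.1 * Xd.T @ (p - T) / n
pscore = 1 / (1 + np.exp(-Xd @ w))

# Step 2: 1-nearest-neighbour matching (with replacement) on the score.
treated = np.flatnonzero(T == 1)
control = np.flatnonzero(T == 0)
dist = np.abs(pscore[treated][:, None] - pscore[control][None, :])
matches = control[dist.argmin(axis=1)]

# Step 3: average treatment effect on the treated from matched pairs.
att = (Y[treated] - Y[matches]).mean()
print(f"matched estimate: {att:.2f} (true effect: 2.00)")
```

Because the naive difference in treated and control means is biased by the confounder, while matching on the estimated propensity score largely removes that bias, the matched estimate lands near the true effect of 2; the abstract's point is that such simple matching can still be the least accurate option among the methods compared.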
Luke Keele, Stephen O'Neill, Richard Grieve
causal inference, observational study, research design