Wednesday March 6, 12:00 PM - 4:00 PM
Workshop A
A Survey of Methods for Assessing Treatment Effect Variation in Education Impact Evaluations

Luke Miratrix, Harvard Graduate School of Education
Avi Feller, Goldman School, UC Berkeley

Education research is increasingly asking not just "what works?" but "for whom?" and "when?". An important tool for answering these questions is moderation analysis, which allows researchers to understand how impacts vary across students or schools. This is particularly critical as researchers and policymakers expect greater insight per research dollar.

This workshop will give a tour of methods for approaching these seemingly simple questions in the context of randomized trials in education, and will show why these questions are so hard to answer in practice. We will start with several classical, regression-based approaches that have attractive theoretical properties and are robust in practice. We will then extend these ideas to hierarchical linear models, especially in the context of multi-site trials. We will also survey recent machine learning and related methods for treatment effect variation, and discuss assessing variation explained by post-randomization variables. Throughout, we will consider recent variants of the more classical approaches as well as other extensions, including applications of these methods to non-randomized studies. Finally, we will touch on how issues of statistical power play out in these settings.
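To make the classical, regression-based approach concrete, here is a minimal sketch of a moderation analysis via an interaction term. All variable names and the simulated data are illustrative assumptions, not the workshop's materials (which are in R): the idea is simply to regress the outcome on treatment, a pre-treatment moderator, and their product, so the interaction coefficient estimates how the treatment effect varies with the moderator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: x is a standardized baseline moderator (e.g., prior
# achievement); z is a randomized treatment indicator.
x = rng.standard_normal(n)
z = rng.integers(0, 2, size=n)

# Simulated outcome: average treatment effect of 0.3 that grows by 0.2
# per unit of the moderator x.
y = 0.5 + 0.3 * z + 0.1 * x + 0.2 * z * x + rng.standard_normal(n)

# Classical moderation analysis: OLS of y on [1, z, x, z*x].
X = np.column_stack([np.ones(n), z, x, z * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[1] estimates the treatment effect at x = 0;
# beta[3] estimates how that effect changes per unit of x.
print("treatment effect at x=0:", beta[1])
print("interaction (effect variation):", beta[3])
```

In practice one would add robust standard errors and pre-specify the moderators of interest; this sketch only shows the estimand's structure.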

Overall, we highlight simple, interpretable methods that are robust and easily justified. We will demonstrate these approaches with several education data examples and will provide an easy-to-use R package (along with demo scripts and documentation) so practitioners can apply these methods to their own data.
