Wednesday, March 6, 12:00 PM - 4:00 PM
Workshop A
A Survey of Methods for Assessing Treatment Effect Variation in Education Impact Evaluations

Luke Miratrix, Harvard Graduate School of Education
Avi Feller, Goldman School of Public Policy, UC Berkeley

Education research increasingly asks not just "what works?" but "for whom?" and "when?". An important tool for answering these questions is moderation analysis, which examines how impacts vary across students or schools. This is particularly critical as researchers and policymakers expect greater insight per research dollar.

This workshop will give a tour of methods for approaching these seemingly simple questions in the context of randomized trials in education, and will show why these questions are so hard to answer in practice. We will start with several classical, regression-based approaches that have attractive theoretical properties and are robust in practice. We will then extend these ideas to hierarchical linear models, especially in the context of multi-site trials. We will also survey recent machine learning methods for estimating treatment effect variation, and discuss how to assess variation explained by post-randomization variables. Throughout, we consider recent variants of the more classical approaches as well as other extensions, including the application of these methods to non-randomized studies. Finally, we will touch on how issues of statistical power play out in these settings.

Overall, we highlight simple, interpretable methods that are robust and easily justified. We will demonstrate these approaches with several education data examples and will provide an easy-to-use R package (along with demo scripts and documentation) so practitioners can apply these methods to their own data.
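To give a flavor of the classical regression-based approach, the sketch below fits a treatment-by-moderator interaction in base R on simulated data. All variable names are invented for illustration; this is not the workshop's package.

    # Moderation analysis via a treatment-by-moderator interaction:
    # the coefficient on treat:baseline estimates how the treatment
    # effect changes with the baseline covariate.
    set.seed(42)
    n <- 500
    baseline <- rnorm(n)                # e.g., a prior achievement score
    treat <- rbinom(n, 1, 0.5)          # randomized treatment indicator
    outcome <- 0.2 * treat + 0.5 * baseline +
      0.3 * treat * baseline + rnorm(n)

    fit <- lm(outcome ~ treat * baseline)
    summary(fit)$coefficients["treat:baseline", ]  # moderation estimate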


Workshop B
Integrating the Examination of Costs into Efficacy Proposals and Field Trials

A. Brooks Bowden, North Carolina State University
Viviana Rodriguez, Columbia University

Come join us to review the concept of costs, prepare your IES grant proposal, and work on your data collection tools and plan for integrating cost analysis into your effectiveness evaluation. This workshop will be an applied working session offering hands-on support for designing and conducting a cost study. Following a review of cost concepts, the session will focus on developing a plan to collect the data needed to identify costs and estimate the cost of producing impacts for educational programs.

Before the session, please review the newest edition of the primary text on the estimation of costs: Economic Evaluation in Education, 3rd Edition. There are videos available online to support the text. In addition, cbcse.org offers a listing of papers on the Ingredients Method and applications of the method to educational interventions. Feel free to contact us if we can provide additional support prior to the session.
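As a toy illustration of the arithmetic at the heart of the Ingredients Method, the sketch below tallies total cost as the sum over ingredients of quantity times unit price. The ingredients, quantities, and prices are invented for illustration.

    # Ingredients Method, in miniature: cost = sum(quantity * price).
    ingredients <- data.frame(
      item     = c("teacher time (hrs)", "coach time (hrs)", "materials"),
      quantity = c(120, 40, 250),
      price    = c(45, 60, 12)          # unit prices in dollars
    )
    ingredients$cost <- ingredients$quantity * ingredients$price
    sum(ingredients$cost)               # total program cost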

Please bring information on the program you are planning to evaluate or are currently evaluating. This includes a detailed description of the program, the sample being served, the context for implementation, and the theory of change. Information about the evaluation is also needed to understand the logic model, the control condition, outcome measurement, the examination of implementation, moderators, and expected heterogeneity in impacts.

Attendees will also need a computer with Excel. It would also be beneficial to register for the CostOut tool, available at cbcse.org, prior to the session.


Workshop C
Stanford School-Level Data
Sean Reardon, Stanford University
Andrew D. Ho, Harvard University
Benjamin R. Shear, University of Colorado, Boulder
Erin M. Fahle, St. John’s University

The Stanford Education Data Archive (SEDA) is a publicly available dataset based on roughly 330 million standardized test scores from students in U.S. public schools. SEDA contains estimates of average test scores for school districts, counties, and metropolitan statistical areas by grade (grades 3-8), year (2009-2016), subject (math and reading/language arts), and subgroup (gender, race/ethnicity, and economic disadvantage). Scores from different states, grades, and years are linked to a common national scale, allowing comparisons of student performance across states and over time.

In March 2019, SEDA will release average test scores for U.S. schools. This workshop will provide a description of SEDA’s contents and construction, focusing on the newly added school-level test score estimates and covariates. It will then describe how to use the school-level data in descriptive and causal research, and discuss the strengths and limitations of the data. The workshop will include code, activities, and examples using Stata and R. Participants should bring a laptop with R or Stata installed, or be prepared to work from raw data using their preferred statistical program.

More information about SEDA is available at http://seda.stanford.edu
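To give a sense of working with the data, the sketch below loads a district-level SEDA file in R and summarizes mean math scores by grade. The file name and column names are placeholders; match them to the codebook of the SEDA release you download.

    # Explore district-level SEDA estimates (placeholder file/column names).
    library(dplyr)
    seda <- read.csv("seda_district_means.csv")   # hypothetical file name

    seda %>%
      filter(subject == "math") %>%               # math vs. reading/ELA
      group_by(grade) %>%                         # grades 3-8
      summarize(mean_score = mean(mean_est, na.rm = TRUE))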


Workshop D
Advanced Methods in Meta-Analysis

Terri Pigott, Loyola University Chicago
Josh Polanin, American Institutes for Research
Elizabeth Tipton, Northwestern University
Ryan Williams, American Institutes for Research

More details coming soon.

