Wednesday March 1, 9:00 AM - Noon
Workshop A
New Weighting Methods for Causal Mediation Analysis

Guanglei Hong, University of Chicago
Jonah Deutsch, Mathematica Policy Research
Xu Qin, University of Chicago

Understanding how an education intervention works, or why it does not work, is crucial to program evaluation. A theoretical construct characterizing the hypothesized intermediate process (i.e., the causal mechanism) is called a mediator. This workshop introduces the ratio-of-mediator-probability weighting (RMPW) method for decomposing total treatment effects into causal pathways. In a nutshell, an indirect effect and a direct effect are each identified and estimated simply as a mean contrast between two potential outcomes. RMPW can adjust for a large number of confounding covariates while requiring relatively few assumptions about the distribution of the outcome, the distribution of the mediator, and the functional form of the outcome model. Hence, the new method overcomes some important constraints of existing strategies. An important extension of RMPW supports investigation of heterogeneity in program mechanisms across different contexts.
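
For orientation, the decomposition at the heart of the method can be sketched as follows (generic potential-outcomes notation; the workshop may use different symbols). With treatment T, mediator M, covariates X, and potential outcomes Y(t, m), the total effect splits into a natural indirect effect and a natural direct effect:

    \[
    \underbrace{E[Y(1, M(1))] - E[Y(0, M(0))]}_{\text{total effect}}
    = \underbrace{E[Y(1, M(1))] - E[Y(1, M(0))]}_{\text{natural indirect effect}}
    + \underbrace{E[Y(1, M(0))] - E[Y(0, M(0))]}_{\text{natural direct effect}}
    \]

The counterfactual mean E[Y(1, M(0))] is estimated as a weighted average of treated units' outcomes, with each treated unit weighted by the ratio of its mediator probability under control to its mediator probability under treatment, given covariates:

    \[
    W = \frac{\Pr(M = m \mid T = 0, X = x)}{\Pr(M = m \mid T = 1, X = x)}
    \]

evaluated at the unit's observed mediator value m and covariate values x. Each effect is then a simple contrast between two (weighted) outcome means, which is why relatively few distributional assumptions are needed.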

This workshop will introduce the concepts of causal mediation, explain the intuitive rationale of the RMPW strategy, delineate the parametric and nonparametric analytic procedures, and discuss extensions. Participants will gain hands-on experience with a free stand-alone RMPW software program, a Stata ado file, and an R package. These user-friendly computational tools are designed to facilitate implementation by applied researchers and to support their analytic decision-making.

The target audience includes graduate students, early career scholars, and advanced researchers who are familiar with multiple regression and have had prior exposure to binary and multinomial logistic regression. Each participant should bring a laptop for hands-on exercises.

Additional information about earlier versions of this workshop, related readings, and software may be found online at http://voices.uchicago.edu/ghong/


Wednesday March 1, 9:00 AM - Noon
Workshop B
Endogenous Subgroup Analysis Using ASPES

Laura Peck, Abt Associates
Eleanor Harvill, Abt Associates
Shawn Moulton, Abt Associates

Recent scholarship has presented new methods and applications for analyzing “endogenous subgroups” in experimental evaluation data. Endogenous subgroups are those defined by events that occur after the point of random assignment: they may be defined by a program mediator or by individuals' post-randomization experiences. We will briefly introduce the range of methods (instrumental variables, propensity score matching, principal stratification) used to estimate impacts on endogenous subgroups in order to situate the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES) method among the options.
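
To fix ideas, here is a minimal sketch of the symmetric-prediction logic behind ASPES, written in Python with hypothetical file, variable, and column names; the full ASPES procedure involves additional steps (for example, out-of-sample prediction and adjustments to recover impacts for actual rather than predicted subgroup members) that the workshop will cover.

    # Minimal sketch of the symmetric-prediction idea; names and data are
    # hypothetical, and this is not the presenters' implementation.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("evaluation_data.csv")            # hypothetical analysis file
    baseline_cols = ["age", "prior_score", "female"]   # hypothetical baseline covariates

    # 1. Fit a membership model where the post-randomization event is observed
    #    (here, receipt of the mediating service in the treatment group).
    fit_sample = df[df["treatment"] == 1]
    model = LogisticRegression().fit(fit_sample[baseline_cols],
                                     fit_sample["got_service"])

    # 2. Predict membership symmetrically for ALL sample members, treatment and
    #    control alike, using baseline information only.
    df["predicted_member"] = model.predict(df[baseline_cols])

    # 3. Estimate the experimental impact within each predicted subgroup
    #    (simple difference in means shown here).
    for member, grp in df.groupby("predicted_member"):
        impact = (grp.loc[grp["treatment"] == 1, "outcome"].mean()
                  - grp.loc[grp["treatment"] == 0, "outcome"].mean())
        print(f"predicted member = {member}: impact = {impact:.3f}")

Because membership is predicted from baseline data alone and applied identically to both experimental arms, the predicted subgroups remain balanced by randomization, so the within-subgroup contrasts retain an experimental interpretation.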

The workshop will demonstrate how to use ASPES in practice, offer an applied example, and focus on what research directors and analysts need to know in order to apply the method. We will examine practical considerations researchers may face when using the ASPES method, including data requirements, sample size requirements, and how to define the mediator of interest.


Wednesday March 1, 9:00 AM - Noon
Workshop C
The Stanford Education Data Archive: Using Big Data to Study Academic Performance
Sean F. Reardon, Stanford University
Andrew D. Ho, Harvard University
Benjamin Shear, University of Colorado - Boulder
Erin Fahle, Stanford University

The Stanford Education Data Archive (SEDA) is a new publicly available dataset based on roughly 300 million standardized test scores generated by students in U.S. public schools from 2009 to 2015. SEDA currently contains average test scores by grade (grades 3-8), year (2009-2015), test subject (math and ELA), and subgroup (gender, race/ethnicity, and free lunch eligibility) for all school districts in the U.S. The test scores from different states, grades, and years are aligned to a common national scale, allowing comparisons of student performance across place and time. SEDA was constructed by Sean Reardon and Andrew Ho.
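
As a rough illustration of how data with this district-by-grade-by-year-by-subject structure might be explored (the file and column names below are hypothetical placeholders, not SEDA's actual variable names):

    # Hypothetical sketch of subsetting district-level mean scores;
    # file and column names are placeholders, not SEDA's actual variables.
    import pandas as pd

    seda = pd.read_csv("seda_district_means.csv")

    # Grade 8 math means for one state across the available years.
    grade8_math = seda[(seda["subject"] == "math")
                       & (seda["grade"] == 8)
                       & (seda["state"] == "CO")]

    trend = grade8_math.groupby("year")["mean_score"].mean()
    print(trend)

Because all of the means are expressed on the common national scale, subsets like this can be compared directly across districts, states, and years.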

This workshop will provide a detailed description of how the dataset was assembled and what SEDA contains. It will include a description of how the test scores are linked to a common scale, a discussion of the sources and magnitude of uncertainty in the estimates, and a guide for interpretation of the estimates. Through examples, participants will learn how SEDA may be used appropriately for both descriptive and causal research.


Wednesday March 1, 1:00 - 4:00 PM
Workshop D
Implementation Research in RCT Evaluations

Rekha Balu, MDRC
Carolyn Hill, MDRC

This workshop will outline measurement and analytic approaches researchers may use to assess unexpected implementation challenges and the implications of such challenges for both implementation and impact analyses.
Drawing on published evaluations, researchers from education and related policy domains will present a series of case studies related to three implementation challenges that arise in RCT designs:
(1) Incomplete or varied implementation related to the launch of a new intervention.
To what extent may technical assistance and monitoring data help document the quality of service delivery or otherwise-invisible implementation challenges?
(2) Inconsistent program fidelity within and between sites in multi-site RCTs.
To what extent do program adaptations and modifications reveal capacity or other structural constraints to implementation versus intentional responses to improve program “fit”?
(3) Measuring and documenting sufficient contrast between the program and control group services when policies may change during the course of a multi-year study.


Wednesday March 1, 1:00 - 4:00 PM
Workshop E
Cost Analysis for Evaluation in Education

A. Brooks Bowden, North Carolina State University

This workshop is designed to provide an introduction to cost analysis in educational evaluation and will focus on proposal development and the conduct of cost studies.

Participants will explore:
(a) the importance of cost analyses for decision-making
(b) the ingredients method (see the sketch following this list)
(c) how to design and conduct cost studies for evaluations
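
As a generic sketch of the arithmetic behind the ingredients method (the notation is illustrative, not necessarily the presenter's): the cost of each condition is the sum, over all ingredients (personnel, facilities, materials, and so on), of the quantity used times its price, and the quantity of interest for an evaluation is the incremental cost of treatment over control, which can then be related to the estimated impact:

    \[
    C = \sum_i q_i \, p_i, \qquad
    \Delta C = C_{\text{treatment}} - C_{\text{control}}, \qquad
    \text{cost-effectiveness ratio} = \frac{\Delta C}{\Delta E}
    \]

where \(q_i\) and \(p_i\) are the quantity and price of ingredient \(i\), and \(\Delta E\) is the estimated program impact.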

The format will include examples of cost analyses with focused discussion on key areas in conducting economic evaluations of educational programs, including:
(1) Treatment contrast and measurement of costs for both treatment and control conditions. Evidence will be presented from a recent randomized controlled trial evaluation of Reading Partners, a supplemental reading instruction program that utilizes volunteers.
(2) Alignment between costs and program fidelity. By examining costs, it is possible to understand implementation from a resource perspective. This will be highlighted with an example from a cost-effectiveness and benefit-cost analysis of Talent Search.
(3) The class of interventions referred to as service mediation interventions. 

Many interventions in education induce changes in the other services participants receive, resulting in changes in resources as well. Impact evaluations may be uninformative or even misleading if they fail to consider both the intervention and any change in services the program induces. Challenges of evaluating service mediation interventions will be explored with evidence from a benefit-cost analysis of City Connects, a service that aims to better assign students to after-school community programs.

Participants will be encouraged to attend the IES Methods Training on cost-effectiveness and cost-benefit analysis presented by the Center for Benefit-Cost Studies in Education at Teachers College, Columbia University (http://www.cbcsemethodstraining.org).


Wednesday March 1, 1:00 - 4:00 PM
Workshop F
Principal Stratification: Introduction and Tools for Analysis

Lindsay Page, University of Pittsburgh
Avi Feller, University of California - Berkeley

With the proliferation of randomized trials in education, researchers are asking ever more sophisticated questions about program impacts. Collectively, the field is evolving from first-order questions about “what works overall” to more nuanced questions about what works, for whom, when, and under what circumstances. Researchers and policy makers are interested in better understanding the many ways that impacts may vary across contexts and subpopulations. When relevant groups are defined by observed, pre-randomization characteristics, the process for generating causal estimates within subgroups is typically straightforward. Yet, key questions often pertain to subgroups defined by behaviors, actions or decisions that occur after randomization.

Principal stratification provides a framework to specify subgroups of interest (generically referred to as principal strata) defined by experimental subjects’ observed and counterfactual post-randomization actions, behaviors, or responses, and to articulate the estimands associated with each stratum. In this workshop, we will introduce the principal stratification framework, consider multiple substantive applications, and present tools for constructing and sharpening bounds on principal causal effects even when they are not point identified.
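
As a concrete illustration (a generic sketch, not necessarily the notation the presenters will use): with a binary treatment assignment Z and a binary post-randomization behavior S, each subject belongs to one of four principal strata defined by the pair of potential behaviors

    \[
    (S(0), S(1)) \in \{(0,0),\ (0,1),\ (1,0),\ (1,1)\}.
    \]

Observed groups are mixtures of these strata. For example, subjects observed with Z = 1 and S = 1 are a mix of the (0,1) and (1,1) strata, so

    \[
    E[Y \mid Z = 1, S = 1]
    = \pi \, E[Y(1) \mid S(0) = 0, S(1) = 1]
    + (1 - \pi) \, E[Y(1) \mid S(0) = 1, S(1) = 1],
    \]

where \(\pi\) is the mixing proportion. Stratum-specific means are therefore generally not point identified, but trimming the observed outcome distribution according to \(\pi\) yields bounds on them; this is the style of reasoning behind the bounding tools presented in the workshop.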

We aim to underscore bounding as an approach that is accessible and computationally straightforward, and that may provide general insight into principal causal effects. The analytic strategies we present may be adapted and used by most quantitative researchers in education with standard software. As workshop participants will learn, the insights produced from such analyses may still be of use in guiding policy and programmatic decisions.

