Wednesday March 1, 9:00 AM - Noon
Understanding how an education intervention works, or why it does not work, is crucial to program evaluation. A theoretical construct characterizing the hypothesized intermediate process (i.e., the causal mechanism) is called a mediator. This workshop introduces the ratio-of-mediator-probability weighting (RMPW) method for decomposing total treatment effects into causal pathways. In a nutshell, the indirect effect and the direct effect are each identified and estimated simply as a mean contrast between two potential outcomes. RMPW adjusts for a large number of confounding covariates yet requires relatively few assumptions about the distribution of the outcome, the distribution of the mediator, or the functional form of the outcome model. Hence, RMPW overcomes some important constraints of existing strategies. One important extension of RMPW investigates heterogeneity in program mechanisms across different contexts.
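The decomposition just described can be illustrated in a few lines of code. The simulation below is only a hedged sketch of the RMPW idea for a binary mediator, not the workshop's RMPW software; all variable names and parameter values are invented for illustration, and in practice the mediator probabilities would be estimated (e.g., by logistic regression on pretreatment covariates) rather than known.

```python
# Illustrative sketch of ratio-of-mediator-probability weighting (RMPW)
# for a binary mediator M. Hypothetical data-generating model; not the
# workshop's software.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
T = rng.integers(0, 2, n)              # randomized treatment
X = rng.normal(size=n)                 # a pretreatment covariate

def p_m(t, x):                         # Pr(M = 1 | T = t, X = x), assumed known here
    return 1 / (1 + np.exp(-(-0.5 + 1.0 * t + 0.5 * x)))

M = rng.binomial(1, p_m(T, X))         # mediator
Y = 1.0 * T + 2.0 * M + 0.5 * X + rng.normal(size=n)   # outcome

# RMPW weight for treated units: Pr(M | T=0, X) / Pr(M | T=1, X).
# Reweighting the treated group makes its mediator distribution mimic
# the control group's, which identifies E[Y(1, M(0))].
p1 = np.where(M == 1, p_m(1, X), 1 - p_m(1, X))
p0 = np.where(M == 1, p_m(0, X), 1 - p_m(0, X))
w = p0 / p1

y1_m1 = Y[T == 1].mean()                          # E[Y(1, M(1))]
y0_m0 = Y[T == 0].mean()                          # E[Y(0, M(0))]
y1_m0 = np.average(Y[T == 1], weights=w[T == 1])  # E[Y(1, M(0))]

total = y1_m1 - y0_m0        # total treatment effect
direct = y1_m0 - y0_m0       # natural direct effect (pathway not via M)
indirect = y1_m1 - y1_m0     # natural indirect effect (pathway via M)
print(total, direct, indirect)
```

Note that the direct and indirect effects are each a simple mean contrast between two potential outcomes, and by construction they sum to the total effect.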
This workshop will introduce the concepts of causal mediation, explain the intuitive rationale of the RMPW strategy, delineate the parametric and nonparametric analytic procedures, and discuss extensions. Participants will gain hands-on experience with a free stand-alone RMPW software program, a Stata ado file, and an R package. These user-friendly computational tools are designed to facilitate implementation by applied researchers and to support their analytic decision-making.
The target audience includes graduate students, early career scholars, and advanced researchers who are familiar with multiple regression and have had prior exposure to binary and multinomial logistic regression. Each participant should bring a laptop for hands-on exercises.
Additional information about earlier versions of this workshop, related readings, and software may be found online at http://voices.uchicago.edu/ghong/
Recent scholarship has presented new methods and applications for analyzing “endogenous subgroups” in experimental evaluation data. Endogenous subgroups are those defined by events that occur after the point of random assignment: they may be defined by a program mediator or by individuals’ post-randomization experiences. We will briefly introduce the range of methods (instrumental variables, propensity score matching, principal stratification) used to estimate impacts on endogenous subgroups in order to situate the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES) method among the options.
The workshop will demonstrate how to use ASPES in practice, offer an applied example, and focus on what research directors and analysts need to know in order to use ASPES. We will examine practical considerations researchers may face when using the method, including data requirements, sample size requirements, and how to define the mediator of interest.
Wednesday March 1, 9:00 AM - Noon
The Stanford Education Data Archive (SEDA) is a new publicly available dataset based on roughly 300 million standardized test scores generated by students in U.S. public schools from 2009 to 2015. SEDA currently contains average test scores by grade (grades 3-8), year (2009-2015), test subject (math and ELA), and subgroup (gender, race/ethnicity, and free lunch eligibility) for all school districts in the U.S. The test scores from different states, grades, and years are aligned to a common national scale, allowing comparisons of student performance across place and time. SEDA was constructed by Sean Reardon and Andrew Ho.
This workshop will provide a detailed description of how the dataset was assembled and what SEDA contains. It will include a description of how the test scores are linked to a common scale, a discussion of the sources and magnitude of uncertainty in the estimates, and a guide for interpretation of the estimates. Through examples, participants will learn how SEDA may be used appropriately for both descriptive and causal research.
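Because SEDA places districts from different states on a common national scale, simple cross-district comparisons become meaningful. The sketch below uses a toy table with made-up column names (not SEDA's actual variable names) to illustrate one descriptive use: comparing a cohort's growth from grade 3 to grade 8 across two hypothetical districts.

```python
# Toy illustration of a SEDA-style descriptive comparison.
# Column names and values are hypothetical, not SEDA's actual variables.
import pandas as pd

toy = pd.DataFrame({
    "district":   ["A", "A", "B", "B"],
    "grade":      [3, 8, 3, 8],
    "year":       [2009, 2014, 2009, 2014],
    "subject":    ["math"] * 4,
    "mean_score": [0.10, 0.60, -0.20, 0.55],   # in national SD units
})

# Grade 3 in 2009 and grade 8 in 2014 follow the same cohort, so the
# difference is a crude measure of that cohort's learning growth.
wide = toy.pivot_table(index="district", columns="grade", values="mean_score")
growth = wide[8] - wide[3]
print(growth)   # district B grows faster despite its lower starting level
```

A comparison like this is possible only because the scores are linked to a common scale; the workshop's discussion of uncertainty in the estimates matters before drawing substantive conclusions from such contrasts.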
This workshop will outline measurement and analytic approaches researchers may use to assess unexpected implementation challenges and the implications of such challenges for both implementation and impact analyses.
This workshop is designed to provide an introduction to cost analysis in educational evaluation and will focus on proposal development and conducting cost analysis.
Participants will explore the challenges of evaluating interventions that work through changes in services. Many interventions in education induce changes in the other services participants receive, resulting in changes in resource use as well. Impact evaluations may be uninformative, or even misleading, if they fail to consider both the intervention and any change in services it induces. These challenges will be explored with evidence from a benefit-cost analysis of City Connects, a program that aims to better match students to after-school community programs.
Attendees will be encouraged to attend the IES Methods Training on cost-effectiveness and cost-benefit analysis presented by the Center for Benefit-Cost Studies in Education of Teachers College, Columbia University (http://www.cbcsemethodstraining.org).
With the proliferation of randomized trials in education, researchers are asking ever more sophisticated questions about program impacts. Collectively, the field is evolving from first-order questions about “what works overall” to more nuanced questions about what works, for whom, when, and under what circumstances. Researchers and policy makers are interested in better understanding the many ways that impacts may vary across contexts and subpopulations. When relevant groups are defined by observed, pre-randomization characteristics, the process for generating causal estimates within subgroups is typically straightforward. Yet, key questions often pertain to subgroups defined by behaviors, actions or decisions that occur after randomization.
Principal stratification provides a framework to specify subgroups of interest (generically referred to as principal strata) defined by experimental subjects’ observed and counterfactual post-randomization actions, behaviors or responses, and to articulate estimands associated with each stratum. In this workshop, we will introduce the principal stratification framework, consider multiple substantive applications, and present tools for bounding principal causal effects, and for sharpening those bounds, even when the effects are not point identified.
We aim to underscore bounding as an approach that is accessible and computationally straightforward, and that may provide general insight into principal causal effects.
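As one concrete illustration of the bounding idea, the sketch below computes trimming bounds (in the spirit of Lee-style bounds) for the stratum of "always-responders" under a monotonicity assumption. It is a hedged, simulated example, not the workshop's materials; the stratum labels, proportions, and outcome model are invented for illustration.

```python
# Illustrative trimming bounds for one principal stratum: units who
# would respond (S = 1) under either treatment assignment. Under
# monotonicity (treatment never prevents response), control responders
# all belong to this stratum, while treated responders are a mixture;
# trimming the treated outcome distribution bounds the stratum's
# treated mean. Hypothetical simulation only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
T = rng.integers(0, 2, n)
stratum = rng.choice(["always", "complier"], n, p=[0.7, 0.3])
S = np.where(stratum == "always", 1, T)          # monotone response
Y = rng.normal(loc=(stratum == "always") * 1.0 + 0.5 * T, size=n)

# Share of treated responders who are always-responders.
p = S[T == 0].mean() / S[T == 1].mean()

y1 = np.sort(Y[(T == 1) & (S == 1)])
k = int(np.floor(p * len(y1)))
lower = y1[:k].mean()        # keep only the lowest p-fraction
upper = y1[-k:].mean()       # keep only the highest p-fraction
y0 = Y[(T == 0) & (S == 1)].mean()

print(lower - y0, upper - y0)   # bounds on the stratum's treatment effect
```

The stratum effect is not point identified because we cannot tell which treated responders are always-responders, yet the trimmed means bracket it; covariates and other restrictions, as discussed in the workshop, can sharpen such bounds.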