The Higher Education Enrollment Decision: Feedback on Expected Study Success and Updating Behavior

Chris van Klaveren, Karen Kooiman, Ilja Cornelisz & Martijn Meeter

PDF Version

Secondary school students tend to be overly optimistic about how well they will perform in college. This overconfidence leads to suboptimal decision-making. But what if secondary school students were told, before deciding to enroll, how likely they are to succeed in the college program they applied to? Would this influence their decision to enroll?

This study presents the results of a field experiment in which a random half of 313 secondary-school students applying to higher education received personalized predictions of study success (the other half did not receive such predictions). A comparison of the enrollment rates of the two groups of students helps us understand the effect of receiving these personalized predictions. We find that:

Read More

Quality Preschool for Ghana Program Improves Teacher and Student Outcomes

Sharon Wolf, J. Lawrence Aber, Jere Behrman & Edward Tsinigo

PDF Version

Preschool teacher training program improves classroom quality and child outcomes in Ghana

Children around the world are attending preschool more than ever before. But many preschools are of poor quality and children are not learning. Ghana, a lower-middle-income country in West Africa, has been at the forefront of expanding access to preschool and adopting a progressive, child-centered curriculum.

Yet, preschool quality remains poor and most teachers have not been trained in the national curriculum. 

Read More

Bounding, an accessible method for estimating principal causal effects, examined and explained

Luke Miratrix, Jane Furey, Avi Feller, Todd Grindal, and Lindsay Page

PDF Version

Estimating program effects for subgroups is hard. Estimating effects for types of people who exist in theory, but whom we can’t always identify in practice (i.e., latent subgroups) is harder. These challenges arise often, with noncompliance being a primary example. Another is estimating effects on groups defined by “counterfactual experience,” i.e., by what opportunities would have been available absent treatment access. This paper tackles this difficult problem. We find that if one can predict, with some accuracy, latent subgroup membership, then bounding is a nice evaluation approach, relying on weak assumptions. This is in contrast to many alternatives that are tricky, often unstable, and/or rely on heroic assumptions.
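The core trimming idea behind such bounds can be sketched in a few lines. This is a toy illustration of worst-case bounds, not the paper's actual estimator, and every number and name here is hypothetical: we know what fraction of a group belongs to the latent subgroup, outcomes are bounded, and we bound the subgroup mean by assigning it the lowest versus the highest observed outcomes.

```python
# Hypothetical sketch of worst-case bounds for a latent subgroup.
# We observe outcomes in [0, 1] and know that a fraction p of the group
# belongs to the subgroup, but not which individuals. The subgroup mean
# then lies between the mean of the k smallest and the mean of the k
# largest outcomes. All values below are illustrative.

def trimming_bounds(outcomes, p):
    """Bounds on the latent-subgroup mean when a fraction p of `outcomes`
    belongs to the subgroup but individual membership is unknown."""
    k = max(1, round(p * len(outcomes)))
    ordered = sorted(outcomes)
    lower = sum(ordered[:k]) / k   # subgroup gets the k smallest outcomes
    upper = sum(ordered[-k:]) / k  # subgroup gets the k largest outcomes
    return lower, upper

scores = [0.2, 0.4, 0.5, 0.7, 0.9, 1.0]
lo, hi = trimming_bounds(scores, 0.5)  # half the group is in the subgroup
print(lo, hi)
```

Being able to predict membership with some accuracy, as the paper discusses, tightens these bounds: trimming can then be done within strata where the subgroup share is closer to zero or one.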

What are latent subgroups again?

Read More

Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects

Howard Bloom, Steve Raudenbush, Michael Weiss, & Kristin Porter

PDF Version

Multisite randomized trials are experiments where individuals are randomly assigned to alternative experimental arms within each of a collection of sites (e.g., schools).  They are used to estimate impacts of educational interventions. However, little attention has been paid to using them to quantify and report cross-site impact variation. The present paper, which received the 2017 JREE Outstanding Article Award, provides a methodology that can help to fill this gap.
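A toy simulation makes the object of interest concrete. This sketch is not the paper's model-based methodology, and all numbers (20 sites, impact mean 0.3, impact SD 0.2) are made up for illustration: it simply computes a per-site impact estimate and shows why the naive spread of those estimates is not the cross-site impact variation itself.

```python
import random

random.seed(7)

# Hypothetical multisite trial: within each of 20 "schools", students are
# randomized to treatment or control, and the true impact varies across
# sites (mean 0.3, SD 0.2). All parameters are illustrative.
site_impacts = []
for _ in range(20):
    true_site_effect = random.gauss(0.3, 0.2)
    treat = [true_site_effect + random.gauss(0, 1) for _ in range(100)]
    control = [random.gauss(0, 1) for _ in range(100)]
    site_impacts.append(sum(treat) / 100 - sum(control) / 100)

grand_mean = sum(site_impacts) / len(site_impacts)
# The naive SD of site estimates mixes true cross-site impact variation
# with within-site sampling error; separating the two is exactly what a
# model-based approach to cross-site variation has to do.
naive_sd = (sum((x - grand_mean) ** 2 for x in site_impacts) / 19) ** 0.5
print(round(grand_mean, 2), round(naive_sd, 2))
```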

Why and how is knowledge about cross-site impact variation important?

Read More

The Implications of Teacher Selection and the Teacher Effect in Individually Randomized Group Treatment Trials

Michael Weiss

PDF Version

Beware! Teacher effects could mess up your individually randomized trial! Or such is the message of this paper focusing on what happens if you have individual randomization, but teachers are not randomly assigned to experimental groups.

The key idea is that if your experimental groups are systematically different in teacher quality, you will be estimating a combined impact of getting a good/bad teacher on top of the impact of your intervention.
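The confounding described above can be seen in a small simulation. This is a hypothetical sketch, not an analysis from the paper: we assume a true intervention effect of 0.2 and give the treatment arm's teachers a 0.3 quality advantage, so the naive treatment-control difference recovers the sum of the two.

```python
import random

random.seed(1)

# Hypothetical setup: students are individually randomized, but teachers
# are not, and treatment-arm teachers happen to be "better". The naive
# estimate then combines the intervention effect with the teacher gap.
TRUE_EFFECT = 0.2   # assumed impact of the intervention
TEACHER_GAP = 0.3   # assumed quality advantage of treatment-arm teachers

def outcome(treated):
    teacher_quality = TEACHER_GAP if treated else 0.0
    return TRUE_EFFECT * treated + teacher_quality + random.gauss(0, 1)

treat = [outcome(1) for _ in range(50_000)]
control = [outcome(0) for _ in range(50_000)]
naive_estimate = sum(treat) / len(treat) - sum(control) / len(control)
print(round(naive_estimate, 2))  # close to 0.5 = 0.2 + 0.3, not 0.2
```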

Read More

Effect Sizes Larger in Developer-Commissioned Studies than in Independent Studies

Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin, and Kelsey Risman

PDF Version

Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been addressed is whether findings from program evaluations carried out or commissioned by developers are as trustworthy as those identified in studies by independent third parties. Using study data from the What Works Clearinghouse, we found evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties.

Why is it important to accurately determine the effect sizes of an educational program?

Read More