Filtered by tag: RCT

Early College High Schools Increase Students’ Early Postsecondary Degree Attainment

Julie Edmunds, Fatih Unlu, Elizabeth Glennie, Lawrence Bernstein, Lily Fesler, Jane Furey, & Nina Arshavsky

PDF Version

Are early college high schools effective?

Yes, they are, according to a rigorous study conducted in North Carolina. Students who attended Early College High Schools enrolled in and completed college at higher rates than comparable students who did not (see bar chart below). The increase in degree completion is one of the largest ever observed in a randomized trial! Early college students also earned 8 times as many college credits in high school as their peers in the control group.

Read More

Partially Identified Treatment Effects for Generalizability

Wendy Chan

PDF Version

Will this intervention work for me?

This is one of the questions that make up the core of generalization research. Generalizations focus on the extent to which the findings of a study apply to people in a different context, in a different time period, or in a different study altogether. In education, one common type of generalization involves examining whether the results of an experiment (e.g., the estimated effect of an intervention) apply to a larger group of people, or a population.

Read More

Immediate and Long-Term Efficacy of a Kindergarten Mathematics Intervention

Ben Clarke, Christian Doabler, Keith Smolkowski, Evangeline Kurtz Nelson, Hank Fien, Scott K. Baker, Derek Kosty

PDF Version

Early intervention can reduce the achievement gap in mathematics

More than half of elementary school students in the United States score below proficient in mathematics in fourth grade. To address this problem, educators can provide early intervention on whole number skills (e.g., counting by ones; adding two numbers to make 10; decomposing numbers). Early intervention may be integral to children’s long-term success with mathematical thinking because difficulty at school entry typically persists into later elementary grades. Persistent frustration and hardship in learning mathematics are associated with a mathematics learning disability (MLD). Students with MLD are most vulnerable to lifelong difficulty managing daily tasks that involve numbers (e.g., money management). Students with or at risk for MLD will likely benefit from intervention as early as possible to reduce adverse long-term impacts.

Read More

Between-School Variation in Students’ Achievement, Motivation, Affect, and Learning Strategies: Results from 81 Countries for Planning Cluster-Randomized Trials in Education

Martin Brunner, Uli Keller, Marina Wenger, Antoine Fischbach & Oliver Lüdtke

PDF Version

Does an educational intervention work?

When planning an evaluation, researchers should ensure that it has enough statistical power to detect the expected intervention effect. The minimum detectable effect size, or MDES, is the smallest true effect size a study is well positioned to detect. If the MDES is too large, researchers may erroneously conclude that their intervention does not work even when it does. If the MDES is too small, that is not a problem per se, but it may mean increased cost to conduct the study. The sample size, along with several other factors known as design parameters, goes into calculating the MDES. Researchers must estimate these design parameters in advance. This paper provides an empirical basis for estimating design parameters in 81 countries across various outcomes.
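To make the calculation concrete, here is a minimal sketch of how an MDES is computed for a two-level cluster-randomized trial (e.g., students nested in schools). The formula, function name, and inputs are illustrative assumptions on my part: it uses the standard normal approximation for the multiplier and assumes no covariate adjustment, which is not necessarily the setup the paper analyzes.

```python
from statistics import NormalDist

def mdes_cluster_rct(n_clusters, cluster_size, icc,
                     p_treat=0.5, alpha=0.05, power=0.80):
    """MDES (in standard-deviation units) for a two-level
    cluster-randomized trial, using the normal approximation
    multiplier M = z_{1-alpha/2} + z_{power}, no covariates."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)
    denom = p_treat * (1 - p_treat) * n_clusters
    # Standardized variance of the impact estimate:
    # between-cluster part (icc) plus within-cluster part (1 - icc).
    variance = icc / denom + (1 - icc) / (denom * cluster_size)
    return multiplier * variance ** 0.5

# 40 schools of 25 students each, intraclass correlation 0.20
print(round(mdes_cluster_rct(40, 25, 0.20), 3))  # → 0.427
```

Note how strongly the intraclass correlation (one of the design parameters the paper reports) drives the result: with more clustering, adding students per school helps far less than adding schools.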

Read More

Improving the general language skills of second-language learners in kindergarten: a randomized controlled trial

Kristin Rogde, Monica Melby-Lervåg, & Arne Lervåg

PDF Version

There are increasing numbers of children whose first language differs from the predominant language of instruction in their school. Entering school where the language of instruction is a student’s second language is associated with undesirable social, educational, and economic outcomes. This study investigates the efficacy of an intervention aimed at improving second-language skills of kindergarteners.

How did we test the intervention?

Read More

The Higher Education Enrollment Decision: Feedback on Expected Study Success and Updating Behavior

Chris van Klaveren, Karen Kooiman, Ilja Cornelisz & Martijn Meeter

PDF Version

Secondary school students tend to be overly optimistic about how well they will perform in college. This overconfidence leads to suboptimal decision making. But what if, before deciding whether to enroll, secondary school students were told their likelihood of succeeding in the college program they applied to? Would this information influence their enrollment decision?

This study presents the results of a field experiment in which a random half of 313 secondary-school students applying to higher education received personalized predictions of study success (the other half did not). A comparison of the enrollment rates of the two groups reveals the effect of receiving these personalized predictions. We find that:

Read More
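The comparison described in this entry boils down to a difference in enrollment proportions between the two randomized halves. A minimal sketch of such a comparison, using a standard two-proportion z-test; the function name and the counts in the example are made up for illustration and are not the study's actual data or analysis.

```python
from statistics import NormalDist

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, e.g.
    enrollment rates in treatment (received predictions) vs. control."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

# Hypothetical counts, NOT the paper's results:
diff, z, p = two_prop_ztest(95, 157, 110, 156)
```

With roughly 150 students per arm, only fairly large differences in enrollment rates would reach conventional significance, which is worth keeping in mind when reading the findings.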

Quality Preschool for Ghana Program Improves Teacher and Student Outcomes

Sharon Wolf, J. Lawrence Aber, Jere Behrman & Edward Tsinigo

PDF Version

Preschool teacher training program improves classroom quality and child outcomes in Ghana

Children around the world are attending preschool more than ever before. But many preschools are of poor quality, and children are not learning. Ghana, a lower-middle income country in West Africa, has been at the forefront of expanding access to preschool and adopting a progressive, child-centered curriculum.

Yet, preschool quality remains poor and most teachers have not been trained in the national curriculum. 

Read More

Bounding, an accessible method for estimating principal causal effects, examined and explained

Luke Miratrix, Jane Furey, Avi Feller, Todd Grindal, and Lindsay Page

PDF Version

Estimating program effects for subgroups is hard. Estimating effects for types of people who exist in theory, but whom we can’t always identify in practice (i.e., latent subgroups) is harder. These challenges arise often, with noncompliance being a primary example. Another is estimating effects on groups defined by “counterfactual experience,” i.e., by what opportunities would have been available absent treatment access. This paper tackles this difficult problem. We find that if one can predict, with some accuracy, latent subgroup membership, then bounding is a nice evaluation approach, relying on weak assumptions. This is in contrast to many alternatives that are tricky, often unstable, and/or rely on heroic assumptions.
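To give a flavor of what "bounding" means here, below is a minimal, hypothetical sketch of the core idea: Manski-style worst-case bounds on the mean outcome of a latent subgroup (e.g., a counterfactual-experience group) when only the mixture mean is observed and the outcome is bounded. The function and numbers are my illustration; the paper's actual estimator, subgroup definitions, and assumptions differ.

```python
def latent_mean_bounds(overall_mean, share, lo=0.0, hi=1.0):
    """Worst-case bounds on the mean outcome of a latent subgroup
    that makes up `share` of a group with observed mean `overall_mean`,
    when each outcome lies in [lo, hi]. The bounds come from letting
    the *other* subgroup take its most extreme possible mean."""
    other = 1 - share
    lower = max(lo, (overall_mean - other * hi) / share)
    upper = min(hi, (overall_mean - other * lo) / share)
    return lower, upper

# Hypothetical: the latent subgroup is 60% of the treated group,
# and the observed pass rate in that group is 0.70.
lo_b, hi_b = latent_mean_bounds(0.70, 0.60)
print(round(lo_b, 3), round(hi_b, 3))  # → 0.5 1.0
```

The better one can predict latent subgroup membership (so that `share` is closer to 1 within predicted strata), the tighter these bounds become, which is exactly why predictive accuracy matters for this approach.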

What are latent subgroups again?

Read More

Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects

Howard Bloom, Steve Raudenbush, Michael Weiss, & Kristin Porter

PDF Version

Multisite randomized trials are experiments where individuals are randomly assigned to alternative experimental arms within each of a collection of sites (e.g., schools).  They are used to estimate impacts of educational interventions. However, little attention has been paid to using them to quantify and report cross-site impact variation. The present paper, which received the 2017 JREE Outstanding Article Award, provides a methodology that can help to fill this gap.

Why and how is knowledge about cross-site impact variation important?

Read More

The Implications of Teacher Selection and the Teacher Effect in Individually Randomized Group Treatment Trials

Michael Weiss

PDF Version

Beware! Teacher effects could mess up your individually randomized trial! Or such is the message of this paper focusing on what happens if you have individual randomization, but teachers are not randomly assigned to experimental groups.

The key idea is that if your experimental groups differ systematically in teacher quality, you will be estimating the combined impact of having a better (or worse) teacher on top of the impact of your intervention.

Read More

Effect Sizes Larger in Developer-Commissioned Studies than in Independent Studies

Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin, and Kelsey Risman

PDF Version

Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been addressed is whether findings from program evaluations carried out or commissioned by developers are as trustworthy as those identified in studies by independent third parties. Using study data from the What Works Clearinghouse, we found evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties.

Why is it important to accurately determine the effect sizes of an educational program?

Read More