Software

Study Design Tools

Impact Analysis Tools

Cost, Cost-Effectiveness & Benefit-Cost Analysis Tools

Study Design Tools

1. The Generalizer 
Developers: Elizabeth Tipton, Katie Miller & Larry V. Hedges
Purpose: This tool helps users select sites for experiments and assess the generalizability of findings from completed experiments.

  • For users planning a study, The Generalizer creates a stratified recruitment plan based on school features. By recruiting some schools from each stratum, the final sample is kept compositionally similar to the inference population (a sketch of this stratification step appears after this list). Users are provided with lists of schools and contact information, as well as recruitment goals that can be used to guide the recruitment process.
  • For users evaluating the findings from a study, The Generalizer compares the final sample of schools that took part in the evaluation with the relevant population of schools in the United States as well as each of the 50 states. Users are provided with summary information indicating the similarity between the sample and population, as well as guidelines regarding how this similarity affects the generalizability of findings.
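
Below is a minimal sketch of the stratified selection idea described above, assuming k-means clustering on standardized school covariates as in Tipton (2014). The covariates, stratum count, and recruitment target are invented for illustration; this is not The Generalizer's actual implementation.

```python
# A minimal sketch of stratified site selection via cluster analysis, in the
# spirit of Tipton (2014). All inputs below are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical population of 2,000 schools with three planning covariates
# (e.g., enrollment, % free/reduced-price lunch, % English learners).
population = rng.normal(size=(2000, 3))

# 1. Standardize the covariates and partition the population into strata.
X = StandardScaler().fit_transform(population)
k = 5  # illustrative stratum count
strata = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# 2. Allocate the recruitment target proportionally to stratum size, so the
#    recruited sample mirrors the population's composition (rounding may
#    leave the total a school or two off the target).
target_n = 60
sizes = np.bincount(strata, minlength=k)
quotas = np.round(target_n * sizes / sizes.sum()).astype(int)

# 3. Draw a recruitment list within each stratum.
for s in range(k):
    members = np.flatnonzero(strata == s)
    recruits = rng.choice(members, size=quotas[s], replace=False)
    print(f"Stratum {s}: {sizes[s]} schools, recruit {quotas[s]} (e.g., ids {recruits[:3]})")
```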

Suggested citations:

  • Software: Tipton, E., & Miller, K. (2016). The Generalizer. Accessed at www.thegeneralizer.org.
  • Background: Tipton, E. (2014). Stratified sampling using cluster analysis: A sample selection strategy for improved generalizations from experiments. Evaluation Review, 37(2), 109-139.
  • Background: Tipton, E. (2014). How generalizable is your experiment? Comparing a sample and population through a generalizability index. Journal of Educational and Behavioral Statistics, 39(6), 478-501.

Funding:  The Spencer Foundation 
Platform/Software: Web-based (Google Chrome preferred)

2. PowerUp! 
Developers:  Nianbo Dong, Benjamin Kelcey, Rebecca Maynard & Jessaca Spybrook
Purpose: This tool allows users to determine the optimal sample size required to achieve specified minimum detectable effect sizes. It also supports computation of the minimum detectable effect size for a specified level of statistical power and precision, given user inputs about the sample design and size. In both applications, the user is prompted to select the sample design (e.g., randomized controlled trial, interrupted time series, or regression discontinuity), the nature of clustering and blocking, and assumptions about the outcomes to be analyzed, the magnitude of intraclass correlations, and the number and explanatory power of covariates. The tool produces tables that summarize the sample design assumptions supplied by the user, as well as the tool-generated estimate of the minimum required sample size or the minimum detectable impact.
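
As an illustration of the kind of calculation PowerUp! performs, here is a minimal sketch of the minimum detectable effect size for a two-level cluster randomized trial with treatment at the cluster level, following the formulas in Dong & Maynard (2013). The parameter values and the helper name mdes_cra2 are illustrative assumptions, not part of the tool.

```python
# A minimal sketch of the minimum detectable effect size (MDES) for a
# two-level cluster randomized trial, following Dong & Maynard (2013).
from scipy.stats import t

def mdes_cra2(J, n, rho, P=0.5, R2_1=0.0, R2_2=0.0, g2=0, alpha=0.05, power=0.80):
    """MDES for a 2-level design with treatment assigned at level 2.

    J: number of clusters (schools); n: students per cluster
    rho: intraclass correlation; P: proportion of clusters treated
    R2_1 / R2_2: variance explained by covariates at levels 1 / 2
    g2: number of cluster-level covariates
    """
    df = J - g2 - 2
    M = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)  # two-tailed multiplier
    var_term = (rho * (1 - R2_2) / (P * (1 - P) * J)
                + (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n))
    return M * var_term ** 0.5

# e.g., 40 schools of 60 students, ICC = 0.20, covariates with R-squared = 0.5
print(round(mdes_cra2(J=40, n=60, rho=0.20, R2_1=0.5, R2_2=0.5, g2=1), 3))
```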
Suggested citations: 

  • Software: Dong, N., Kelcey, B., Maynard, R., & Spybrook, J. (2015). PowerUp! Tool for power analysis. www.causalevaluation.org.
  • User's Manual: Dong, N., & Maynard, R. A. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6(1), 24-67.

Funding:  National Science Foundation [DGE-1437679, DGE-1437692, DGE-1437745], and Institute of Education Sciences [R305B090015]
Platform/Software: Windows & Mac, requires Microsoft Excel
Note: The Dong & Maynard (2013) article won the 2013 AERA Division H (Research, Evaluation, and Assessment in Schools) Outstanding Publication Award, Advances in Methodology.

3. Optimal Design (Software for Multi-Level and Longitudinal Research) 
Developers:  Stephen Raudenbush, Jessaca Spybrook, Howard Bloom, Richard Congdon, Carolyn Hill & Andres Martínez
Purpose: Optimal Design allows users to conduct a power analysis and compute minimum detectable effect sizes for studies of individual and group-level interventions. The accompanying manual describes how to conduct a power analysis for individual and group randomized trials, including an overview of each design, the appropriate statistical model for each design, and the calculation of statistical power and minimum detectable effect sizes. It also provides empirical estimates of design parameters for planning group randomized trials, as well as power for meta-analysis and optimal sample allocation for two-level cluster randomized trials.
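
For illustration, here is a minimal sketch of the power calculation for a two-level cluster randomized trial, using the noncentral t distribution. The design parameters below are assumptions made for the example, not defaults of the software.

```python
# A minimal sketch of two-tailed statistical power for a two-level cluster
# randomized trial, based on the noncentral t distribution.
from scipy.stats import t, nct

def power_cra2(delta, J, n, rho, P=0.5, alpha=0.05):
    """Power to detect a standardized effect size delta.

    delta: effect size in standard deviation units
    J: clusters; n: individuals per cluster; rho: intraclass correlation
    P: proportion of clusters assigned to treatment
    """
    df = J - 2
    se = ((rho + (1 - rho) / n) / (P * (1 - P) * J)) ** 0.5
    lam = delta / se                      # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    return (1 - nct.cdf(t_crit, df, lam)) + nct.cdf(-t_crit, df, lam)

# e.g., power to detect delta = 0.25 with 40 schools of 60 students, ICC = 0.15
print(round(power_cra2(delta=0.25, J=40, n=60, rho=0.15), 3))
```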
Suggested citations:

  • Software: Raudenbush, S. W., et al. (2011). Optimal Design Software for Multi-level and Longitudinal Research (Version 3.01).
  • User's Manual: Spybrook, J., Raudenbush, S. W., Liu, X. F., Congdon, R., & Martínez, A. (2006). Optimal design for longitudinal and multilevel research: Documentation for the “Optimal Design” software. Survey Research Center of the Institute for Social Research, University of Michigan.

Funding:  William T. Grant Foundation 
Platform/Software: Windows

4. Online Intraclass Correlation Database 
Developers: Eric C. Hedberg and Larry V. Hedges
Purpose: Educational experiments often assign aggregate units such as schools or school districts (statistical clusters) to treatments; experiments that do so are called cluster randomized experiments. The sensitivity (statistical power, precision of treatment effect estimates, and minimum detectable effect size) of cluster randomized experiments depends not only on the statistical significance level, sample size, and effect size, but also on the variance decomposition across levels of aggregation (as indicated by the intraclass correlation, or ICC, at each level) and the effectiveness of any covariates used to explain variation at each level (as indicated by the R² value at each level). We call the ICC and R² values design parameters because they are necessary to design a cluster randomized experiment with adequate sensitivity.

This database provides empirical estimates of design parameters for two- and three-level cluster randomized trials that use academic achievement as an outcome variable. These estimates are available for the nation as a whole (based on surveys with national probability samples) and for selected states (based on those states' longitudinal data systems, which are essentially exhaustive samples).
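
To make the role of these design parameters concrete, here is a minimal sketch of estimating a two-level ICC from balanced school-by-student data with the one-way ANOVA (method-of-moments) estimator. The simulated data and "true" ICC are invented for illustration, not drawn from the database.

```python
# A minimal sketch of estimating a two-level intraclass correlation (ICC)
# from balanced school-by-student achievement data.
import numpy as np

rng = np.random.default_rng(1)
J, n, true_icc = 200, 25, 0.20   # schools, students per school, target ICC

# Simulate achievement: between-school variance rho, within-school 1 - rho.
school_means = rng.normal(0, true_icc ** 0.5, size=(J, 1))
scores = school_means + rng.normal(0, (1 - true_icc) ** 0.5, size=(J, n))

# One-way ANOVA variance decomposition (balanced design).
grand = scores.mean()
msb = n * ((scores.mean(axis=1) - grand) ** 2).sum() / (J - 1)            # between
msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (J * (n - 1))  # within

icc_hat = (msb - msw) / (msb + (n - 1) * msw)
print(f"Estimated ICC: {icc_hat:.3f} (true value {true_icc})")
```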

Funding: National Science Foundation [0129365, 0815295] and Institute of Education Sciences [R305D110032]
Platform/Software: Web-based

5. MOSAIC Evidence Synthesis Tools

Developers: Martyna Citkowicz, Charlie Ebersole, Karthik Jallawaram, Megha Joshi, Laura Michaelson, David Miller, Joshua Polanin, Joe Taylor, & Ryan Williams from the Methods of Synthesis and Integration Center (MOSAIC) at the American Institutes for Research (contributors listed alphabetically by last name).

Purpose: The Methods of Synthesis and Integration Center (MOSAIC) hosts a number of tools. These range from support for collecting and coding study information in a collaborative software program to exploration of data from completed meta-analyses, using evidence gap maps, box plots, and traditional forest plots to interpret and translate meta-analytic findings. Explore these interactive data tools on MOSAIC's Tools page, and stay tuned: the team is working on uploading more datasets and tools in the near future!
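
As a taste of what these meta-analytic displays summarize, here is a minimal sketch of the inverse-variance pooling behind a fixed-effect forest plot; the study effect sizes and standard errors are invented for illustration and have no connection to MOSAIC's datasets.

```python
# A minimal sketch of fixed-effect inverse-variance pooling, the quantity a
# forest plot typically displays as the summary diamond.
import math

# (effect size, standard error) pairs from a hypothetical meta-analysis
studies = [(0.31, 0.10), (0.18, 0.07), (0.45, 0.15), (0.22, 0.09)]

weights = [1 / se ** 2 for _, se in studies]           # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f}), "
      f"95% CI [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```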

Suggested Citations: See individual tools.

Funding: Institute of Education Sciences [R305A170146], National Science Foundation [EHR-2000672], and American Institutes for Research

Platform/Software: Web-based

Impact Analysis Tools

RCT-YES 
Developers: Peter Schochet, Carol Razafindrakoto, Carlo Caci, Mason DeCamillis & Matthew Jacobus
Purpose:  The Institute of Education Sciences (IES) has launched a new tool that can make it easier and more cost-effective for states and school districts to evaluate the impact of their programs. RCT-YES is free, user-friendly software that assists those with a basic understanding of statistics and research design in analyzing data and reporting results from randomized controlled trials (RCTs) and other types of evaluation designs.

RCT-YES was developed by Mathematica Policy Research, Inc. under a contract from IES' National Center for Education Evaluation and Regional Assistance. While the software has a simple interface and requires no knowledge of programming, it does not sacrifice rigor: RCT-YES uses state-of-the-art statistical methods to analyze data.
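
As a rough illustration of the design-based approach described in Schochet (2015), here is a minimal sketch of the impact estimate for a simple non-clustered RCT: a difference in group means with the Neyman variance estimator. The simulated data are hypothetical, and the actual software supports far more designs than this.

```python
# A minimal sketch of a design-based impact estimate for a simple RCT:
# difference in means with the Neyman variance estimator.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
N, p_treat, true_impact = 500, 0.5, 0.25

assign = rng.random(N) < p_treat                 # random assignment
y = rng.normal(0, 1, N) + true_impact * assign   # observed outcomes

y1, y0 = y[assign], y[~assign]
impact = y1.mean() - y0.mean()

# Neyman (design-based) variance: sum of the two group-mean variances.
var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
se = var ** 0.5
df = len(y) - 2
p_value = 2 * t.sf(abs(impact / se), df)

print(f"impact = {impact:.3f}, se = {se:.3f}, p = {p_value:.3f}")
```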

For more information on RCT-YES, visit www.rct-yes.com.

Suggested citations:  

  • Software: Schochet, P. Z., Razafindrakoto, C., Caci, C., DeCamillis, M., & Jacobus, M. RCT-YES software. Available at www.rct-yes.com.
  • User's Manual: Schochet, P. Z. (2016). RCT-YES software: User's Manual.
  • Background: Schochet, P. Z. (2015). Statistical theory for the RCT-YES software: Design-based causal inference for RCTs (NCEE 2015–4011). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Analytic Technical Assistance and Development. Accessed 1-22-16 from https://ies.ed.gov/ncee/pubs/20154011/pdf/20154011.pdf.

Funding:  Institute of Education Sciences 
Platform/Software: Windows; requires R or Stata to be installed on the user's computer.

Cost, Cost-Effectiveness & Benefit-Cost Analysis Tools

CostOut Toolkit
Developers:  Fiona Hollands, Barbara Hanisch-Cerda, Henry Levin, Clive Belfield, Amritha Menon, Robert Shand, Yilin Pan, Ipek Bakir, & Henan Cheng.
Purpose:  CostOut facilitates the estimation of costs and cost-effectiveness of educational or other social programs. It is primarily designed for researchers, analysts, educational administrators, and policymakers. CostOut is based on the “ingredients method” and includes a database of around 700 national average prices of educational resources. Users may also build their own databases of local prices or foreign currency prices and can customize the inflation and geographical indices. CostOut allows for multiple programs to be compared in one analysis.
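
To show the arithmetic the ingredients method automates, here is a minimal sketch that itemizes hypothetical program resources, prices them, and computes a cost-effectiveness ratio. Every ingredient, price, and effect below is invented; CostOut's own price database and indices do this work at scale.

```python
# A minimal sketch of the "ingredients method": itemize every resource a
# program uses, price each one, then divide cost by effectiveness.
ingredients = [
    # (name, quantity, unit price in dollars)
    ("Teacher time (hours)",     400, 45.0),
    ("Coach salary (FTE-years)", 0.5, 82000.0),
    ("Curriculum licenses",      120, 35.0),
    ("Training workshops",       4,   1500.0),
]

students_served = 120
effect_size = 0.15   # hypothetical program impact in standard deviation units

total_cost = sum(qty * price for _, qty, price in ingredients)
cost_per_student = total_cost / students_served
ce_ratio = cost_per_student / effect_size   # cost per SD of improvement

print(f"Total cost:         ${total_cost:,.0f}")
print(f"Cost per student:   ${cost_per_student:,.0f}")
print(f"Cost-effectiveness: ${ce_ratio:,.0f} per 1 SD gain per student")
```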
Suggested citations: 

  • Software: Hollands, F. M., Hanisch-Cerda, B., Levin, H. M., Belfield, C. R., Menon, A., Shand, R., Pan, Y., Bakir, I., & Cheng, H. (2015). CostOut - the CBCSE Cost Tool Kit. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. Accessed 1-22-16 from http://www.cbcsecosttoolkit.org/.
  • User's Manual: Hollands, F. M., Hanisch-Cerda, B., Menon, A., Levin, H. M., & Belfield, C. R. (2015). User Manual for CostOut - the CBCSE Cost Tool Kit. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. Accessed 1-22-16 from http://www.cbcsecosttoolkit.org/.

Funding:  Institute of Education Sciences 
Platform/Software: Web-based