
Mitigating Illusory Results through Preregistration in Education

Summary by: Claire Chuter


Good researchers thoroughly analyze their data, right? Practices like testing the right covariates, running analyses in multiple ways to find the best-fitting model, screening for outliers, and testing for mediation or moderation effects are indeed important… but with a massive caveat. The aggregation of many of these rigorous research practices (as well as some more dubious ones) can lead to what the authors call “illusory results” – results that seem real but are unlikely to be reproduced. In other words, implementing these common practices (see Figure 1 in the article) often leads researchers to run multiple analytic tests, which may unwittingly inflate their chances of stumbling upon a significant finding by chance.
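To see why running many tests inflates the chance of a spurious finding, consider the familywise error rate: if each test is held to α = 0.05, the probability of at least one false positive across k independent tests is 1 − (1 − 0.05)^k. A minimal sketch of that arithmetic (the function name and the choice of k = 20 are ours, for illustration, not the authors'):

```python
def familywise_error_rate(n_tests, alpha=0.05):
    # Probability of at least one false positive across n_tests
    # independent tests when every null hypothesis is true:
    # 1 - (1 - alpha)^n_tests
    return 1 - (1 - alpha) ** n_tests

print(round(familywise_error_rate(1), 2))   # a single test: 0.05
print(round(familywise_error_rate(20), 2))  # twenty tests: 0.64
```

Twenty independent looks at purely null data give roughly a 64% chance of at least one “significant” result – exactly the inflation the summary describes.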

Potential Solutions


The Methodological Challenges of Measuring Institutional Value-added in Higher Education

Tatiana Melguizo, Gema Zamarro, Tatiana Velasco, and Fabio J. Sanchez


Assessing the quality of higher education is hard, but there is growing pressure for governments to create a ranking system for institutions that can be used for assessment and funding allocations. Such a system, however, would require a reliable methodology to fairly assess colleges using a wide variety of indicators. This pressure has motivated researchers in countries with centralized governance structures to develop “value-added” metrics of colleges’ contributions to student outcomes that can be used for summative assessment (Coates, 2009; Melguizo & Wainer, 2016; Shavelson et al., 2016). Estimating the “value-added” of colleges and programs, however, is methodologically challenging. First, high- and low-achieving students tend to self-select into different colleges – a behavior that, if not accounted for, may yield estimates that capture students’ prior achievement rather than colleges’ effectiveness at raising it. Second, measures that use gains in student learning outcomes (SLOs) as indicators at the higher-education level are scant. In our paper, we study these challenges and compare methods for obtaining value-added metrics in the context of higher education in Colombia.
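The usual answer to self-selection is to condition on prior achievement and read colleges' contributions off the remaining variation. A NumPy-only sketch on synthetic data (the number of colleges, effect sizes, and all variable names below are illustrative assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 3 colleges; students self-select partly on prior achievement
# (an entry-exam-like score), and the outcome is an exit-exam-like score.
n = 300
college = rng.integers(0, 3, size=n)
true_va = np.array([0.0, 0.5, 1.0])                   # colleges' true contributions
prior = rng.normal(size=n) + 0.8 * true_va[college]   # selection on prior ability
outcome = 2.0 * prior + true_va[college] + rng.normal(scale=0.5, size=n)

# Value-added model: outcome ~ prior + college dummies.
# Controlling for prior achievement separates what students brought with
# them from what the college added; the dummy coefficients are the
# colleges' estimated (relative) value-added.
X = np.column_stack([prior, (college[:, None] == np.arange(3)).astype(float)])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
slope, va_hat = coef[0], coef[1:]
print(np.round(va_hat - va_hat[0], 2))  # near the true gaps [0, 0.5, 1]
```

Dropping the `prior` column from `X` would instead credit colleges for the achievement their students arrived with – the bias the summary warns about.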

How best to estimate value-added models in higher education?


Latent Profiles of Reading and Language and Their Association with Standardized Reading Outcomes in K-10th Grade

Barbara R. Foorman, Yaacov Petscher, Christopher Stanley, & Adrea Truckenmiller


Differentiated instruction involves tailoring instruction to individual students’ learning needs. While critical to effective teaching, an understudied first step in differentiated instruction is understanding students’ learning profiles – that is, their strengths and weaknesses in knowledge and skills. Only after a student’s learning profile is understood can a teacher individualize instruction. But how can educators best measure learning profiles to facilitate differentiated instruction?

Descriptive approaches such as informal reading inventories lack the psychometric rigor required for classification, placement, and monitoring growth. However, quantitative approaches – classifying and clustering (i.e., grouping) students by skill classes and validating the clusters by relating them to standardized tests – are a reliable tool for creating profiles. The objective of this study was twofold: first, to determine the profiles of reading and language skills that characterized 7,752 students in kindergarten through 10th grade; second, to relate the profiles to standardized reading outcomes.
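As a rough stand-in for the latent profile analysis used in such studies, the clustering step can be sketched with plain k-means on synthetic skill scores (the profile names, skills, and all numbers below are invented for illustration; the study's actual method and measures differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy skill scores (decoding, vocabulary, comprehension) for two
# hypothetical profiles: "weak decoders" and "strong all-around".
weak = rng.normal([-1.0, 0.0, -0.5], 0.3, size=(40, 3))
strong = rng.normal([1.0, 1.0, 1.0], 0.3, size=(40, 3))
scores = np.vstack([weak, strong])

# Plain k-means: alternate assigning each student to the nearest
# profile centroid and re-estimating the centroids until stable.
centroids = np.array([scores[0], scores[-1]])  # one seed point per region
for _ in range(20):
    d = np.linalg.norm(scores[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([scores[labels == k].mean(axis=0) for k in range(2)])

# The validation step would then relate each cluster to an external
# standardized reading outcome, as the study does.
```

The recovered `labels` separate the two planted profiles; in practice the number of profiles is itself chosen by comparing model fit across candidate solutions.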
