A Smarter, More Reliable Way to Level Reading Materials for Students
Matching students with books at just the right reading level is critical—but not always easy. Traditional leveling systems often rely on surface content features alone, which can produce unreliable results and mismatches between students and texts.
In their recent article, "Validation Analysis During the Design Stage of Text Leveling," researchers Jerome D'Agostino and Connie Briggs introduce a more robust, research-driven approach called integrated leveling. Rather than relying on expert judgment or surface features alone, this method blends those with proven test-development practices, such as field testing and item analysis, to validate the complexity of each text.
Interestingly, the initial book levels assigned by review committees actually aligned quite well with how students performed during testing. Field testing helped fine-tune the levels and confirm where each book truly belongs on the difficulty scale, but the expert judgments were largely on target from the start. That’s good news—it suggests that with the right validation processes, we can trust the professional insights of educators and still strengthen them with data.
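To picture the kind of check that field testing makes possible, here is a minimal sketch, with entirely hypothetical books and data, not the study's actual method or dataset. It computes a classical item difficulty for each book (the proportion of field-test students who read it successfully) and then rank-correlates those difficulties with the committee-assigned levels; strong agreement in ordering would support the experts' initial judgments.

```python
# Hypothetical field-test data for illustration only.
# Each book: (committee-assigned level, list of student outcomes, 1 = success)
field_test = {
    "Book A": (1, [1, 1, 1, 1, 0, 1]),
    "Book B": (2, [1, 1, 0, 1, 0, 1]),
    "Book C": (3, [1, 0, 0, 1, 0, 0]),
    "Book D": (4, [0, 0, 1, 0, 0, 0]),
}

def difficulty(responses):
    """Classical item difficulty: proportion of students who succeeded.
    Lower values indicate a harder text."""
    return sum(responses) / len(responses)

def ranks(values):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation applied to ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

levels = [lvl for lvl, _ in field_test.values()]
diffs = [difficulty(resp) for _, resp in field_test.values()]
rho = spearman(levels, diffs)
# A strongly negative rho means higher assigned levels go with lower success
# rates, i.e., the committee's ordering agrees with the field-test evidence.
```

In this toy data the ordering agrees perfectly, so the correlation is -1.0; real field-test data would be noisier, and books whose empirical difficulty departs from their assigned level are the ones a validation process would flag for re-leveling.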
So, what does this mean for teachers? Integrated leveling offers a more accurate, evidence-backed way to identify student reading needs, inform instruction, and track growth over time. It’s a thoughtful balance of educator expertise and empirical validation—one that leads to better reading support for every student.
References:
D’Agostino, J. V., & Briggs, C. (2025). Validation Analysis During the Design Stage of Text Leveling. Education Sciences, 15(5), 607. https://doi.org/10.3390/educsci15050607