The Every Student Succeeds Act requires states, for the first time, to measure and report on the academic performance of homeless and foster children, as well as those from military families.
Providing student-growth measures for these vulnerable subgroups will give states and districts a clearer picture of how, or whether, the needs of these students are being met. As states and districts plan how to incorporate these data into their accountability systems, they must also understand how to mitigate the unique challenges of measuring the academic growth of these students.
Compared with many other subgroups, homeless, foster, and military-connected student subgroups include a higher proportion of highly mobile students, have more missing test scores, and are smaller in size. All three factors can hinder the ability to measure their academic growth.
Students connected to the active-duty military, for instance, move three times more frequently than their civilian counterparts, according to the Military Child Education Coalition. In addition, high percentages of homeless and foster students experience frequent school changes, often moving from one district to another.
These disruptive transitions can lead to lost testing data. Many states, including Arkansas, Delaware, and Kentucky, have expanded their statewide student-information systems over the past decade and now have the ability to share data on students who move across district lines. (Privacy laws, however, still stymie efforts to track student data across state lines.) While sharing data between districts should mitigate the loss of existing testing data, students in these subgroups are also more likely to miss tests in the first place.
Many state student-growth models can't incorporate students who are missing recent test scores, because those models focus on a change in student achievement, in a single subject, only from one year to the next. States and districts attempting to use these simplistic growth models will struggle to generate information on highly mobile subgroups. How do we make sure the data shine a light on how these potentially at-risk students are being served?
Sophisticated growth models, such as those used in Tennessee and Pennsylvania, can include more of these students, even those missing test scores from the previous year. Both states have used Education Value-Added Assessment System (EVAAS) models for many years and have a rich history of using data for both reflecting on instructional practices and improving student outcomes.
By including additional prior testing data across different subjects, grades, and assessments, advanced growth models provide a more accurate understanding of students' knowledge and skills when they enter the classroom. This approach gives teachers better information on how to work with those students and provides a clearer baseline from which to measure growth in the current year.
Another challenge in collecting good data is that homeless, foster, and military-connected student subgroups represent a small percentage of the overall school population. For instance, 15 states have fewer than 5,000 homeless students. With a smaller subgroup of students, it is more difficult to produce meaningful growth measurements, given the inherent statistical limitations of small samples.
The American Statistical Association recommends that estimates from student-growth models be presented alongside information on the precision and limitations of the model used. This is an especially important reminder when faced with small subgroups, as smaller samples have more built-in error. Adopting a model that includes the standard error around a group's growth measure can mitigate that problem, by essentially telling users how confident they should be in the measure.
In its notice of proposed rulemaking under ESSA, the U.S. Department of Education allows states to set their own minimum student-subgroup sizes, but requires states to get federal approval for a minimum sample size greater than 30 to make sure they are still capturing the performance of small groups. As states consider different growth measures for their accountability systems and school report cards, they must also take the limitations of small-group measurement into account. Incorporating standard error adds critical context and protects schools against incorrect classification.
Some states use student-growth measures to classify schools into different categories, such as letter grades, star ratings, and schools "in need of improvement." The standard error indicates how confident we can be in concluding whether the growth measure meets, exceeds, or falls short of the growth expectation. Only when there is enough evidence is a growth measure categorized into something other than "meeting expectations."
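The classification logic described here follows a simple statistical rule: a group moves out of "meeting expectations" only when its growth estimate differs from the expectation by more than a chosen number of standard errors. The sketch below illustrates that idea; the function name, the two-standard-error threshold, and the sample numbers are illustrative assumptions, not any state's actual rule.

```python
def classify_growth(estimate, std_error, expectation=0.0, z=2.0):
    """Classify a subgroup's growth measure against a growth expectation.

    Illustrative sketch: a group is labeled above or below expectations
    only when its estimate differs from the expectation by more than
    z standard errors; otherwise it defaults to "meeting expectations."
    """
    if estimate - z * std_error > expectation:
        return "exceeds expectations"
    if estimate + z * std_error < expectation:
        return "falls short of expectations"
    return "meeting expectations"

# A smaller subgroup carries a larger standard error, so the same point
# estimate may not offer enough evidence to move it out of the default.
print(classify_growth(3.0, 1.0))  # larger group: clears the expectation
print(classify_growth(3.0, 2.5))  # smaller group: interval spans it
```

Under this rule, small subgroups are less likely to be misclassified on noise alone, which is exactly the protection against incorrect classification described above.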
The data challenges of small, mobile subgroups are not insurmountable; if we conquer them, we can do more than just meet new ESSA requirements. ESSA prompts states to design accountability systems that look back on how they served students the previous year. More advanced models also look to the future, toward how to better serve these often-overlooked subgroups in the coming years. Advanced models incorporate predictive analytics, which allow for student projections to future state assessments and Advanced Placement and college-readiness tests.
With projections and early-warning indicators, teachers and schools can see a student鈥檚 trajectory and more proactively implement remediation, intervention, and enrichment strategies that foster academic improvement. Better still, they can accomplish this with the same underlying standardized-test data required by ESSA.
As states and districts redesign school accountability systems, student-growth measures remain a valuable indicator of school quality. But let's use all the data we have to meet the distinct needs of homeless, foster, and military-connected students. Where possible, let's examine these vulnerable groups individually. And let's not remove a child from an analysis because he or she is missing a test score. All kids count, so let's count all kids.