In 2013, a national panel of education experts called for U.S. states and districts to move away from a focus on testing primarily for accountability, and toward building tests that would help teachers provide more individualized instruction and support for students.
More than a decade after the Gordon Commission on the Future of Assessment in Education issued that call, experts at the American Educational Research Association conference in Philadelphia pointed out that the technology for more nuanced measurement now exists, but states and districts have not yet developed the training and infrastructure to help teachers use the new tools effectively.
"We don't really set our educators and our students up for success right now," said LaVerne Evans Srinivasan, of the Carnegie Corp. of New York, a philanthropic group that supports education programs (distinct from the Carnegie Foundation for the Advancement of Teaching), during an AERA symposium. "Our educators are at a disadvantage in terms of having the professional learning support that they need to have digital literacy and competency to work with new [artificial intelligence] technology to feel confident that they can use these tools and technologies ... to both reduce the pain of their workload, but also optimize their ability to differentiate learning and personalize learning for young people."
Testing tools built on artificial intelligence systems have expanded rapidly in K-12.
James Moore III, head of the National Science Foundation's education directorate, said NSF's STEM education grants alone have invested $75 million in the past year in AI-based assessment tools, including an open-access, online assessment offered in several languages and a program to support students in science.
But E. Wyatt Gordon, vice president and head of evaluation systems at the computer-adaptive testing company Pearson VUE (for Virtual University Enterprises), said, so far, most AI testing tools "essentially amount to learners asking fact-based questions, and getting fact-based answers."
"We know that's not a good teaching environment. So the challenge lies in transforming those interactions into effective learning experiences," Gordon said.
That means, for example, using programs that collect data about the strategies students use to solve problems, not just whether their answers are correct, and then relaying information to teachers about whether a student has particular misconceptions about a concept or relies on less efficient learning strategies.
'The education sector is very slow to evolve and change'
Some high-profile assessments are already trying to leverage AI to provide more nuanced information. The 2025 Program for International Student Assessment, for example, will include performance tasks in which students may work with an AI-driven chatbot, both to ensure they have basic background knowledge on a subject and to track how they make decisions as they complete the tasks.
"The challenge is, the education sector is very slow to evolve and change," Srinivasan said. "Already, we have young people educated at an enormous disadvantage by the limited progress that we've made in having measurement and assessment keep up with the progress that we're making in innovations of how learning happens in classrooms."
For example, she noted that the Carnegie Corp. has supported efforts to redesign high schools for the last two decades. "Those new high school designs were based on assessing mastery and competency," Srinivasan said, "but when we implemented those designs, we didn't have the tools to make it easy to adjust how we measure and assess progress toward mastery of rigorous material."
In a newly released report, the National Academy of Education recommends that states and districts create both formal training systems to help teachers understand how to use different kinds of assessments and informal networks of school leaders and mentors to share best practices.