The rush to adopt new technology during coronavirus-driven remote learning could lead educators to use more tools powered by advanced artificial intelligence. But that more optimistic vision for AI could be tempered by budget shortfalls resulting from the virus outbreak that "may seriously delay" school districts from making those types of investments in the near future.
That's how Robert F. Murphy sees it. He is an independent education consultant with more than two decades of research experience, including positions as a senior policy researcher for the international think tank RAND Corp. and as the director for evaluation research at SRI International, a scientific research center.
In a paper authored last year for RAND on AI applications in K-12 classrooms, and as the featured speaker during a webinar hosted by a company developing AI-based language-learning tools, Murphy has cautioned that artificial intelligence is not likely to transform education the way it already has other high-profile industries such as transportation, drug discovery, and health care.
Instead, he has argued that AI will continue to play a supporting role that enhances the classroom experience, assisting teachers with second-language instruction, feedback on writing drafts, early diagnosis of reading problems, and adaptive instruction for remediation.
"The recent phase of remote learning does not change my feelings about AI's prospects for disrupting education relative to its potential to disrupt other fields," Murphy said in an email interview with EdWeek Market Brief. "However, I do believe the current COVID-19 distance learning situation will bring renewed attention to the need for online instructional systems that support adaptive instruction and provide automated feedback and support to students when teachers and parents cannot be present."
Murphy recently had an email conversation with EdWeek Market Brief Contributing Writer David Saleh Rauf about the evolution of advanced AI tools for education. This interview has been edited for brevity and clarity.
How can teachers, parents, and students trust that AI is going to be making the best recommendations to improve an educational outcome, or that it's even making good recommendations? Has there been sufficient research to determine what value teachers and students could derive from an AI solution in the classroom?
Currently, there is a lack of research and vendor reporting in both areas: in our understanding of the accuracy of statistical AI-based applications in education (with the exception of some automated writing-scoring applications) and of the extent to which they provide value beyond similar applications that don't include advanced AI techniques. As more of these applications are introduced in the market, there will likely be discussions about establishing industry standards for vendors' disclosure of certain information about their products, including product performance. That information might include a description or ranking of the "knowability" of system decisions; the limitations and accuracy of model predictions; the consequences of inaccurate decisions for students and teachers; and how the models were trained, including details of the data sets used and how the learned models were evaluated for potential bias.
The accuracy of statistical AI systems is highly dependent on access to large sets of data. In some cases, the data on which an AI tool is trained can be biased and reinforce racial, gender, or other stereotypes. What are the implications of biased data creeping into AI systems used for education?
Concerns over algorithmic bias will depend on the application, its role in the school and classroom, and the consequences of system decisions for students and teachers. For example, the consequences of bias creeping into the decisions of a curriculum-recommendation engine for teachers are likely to be fairly minor compared with the possible consequences for students of a biased AI-based early-warning system, which might disproportionately and incorrectly flag one group of students for remediation based on gender or race while completely missing students with real needs for the same reason. This is why, for AI applications such as early-warning systems where the consequences are significant, I advocate that system output be used as only one data point in the decision-making process, alongside the professional judgments of teachers and administrators based on their personal experience and knowledge of individual students.
What's the biggest obstacle preventing some of these advanced statistical AI systems from being used in classrooms around the country on a large and meaningful scale? Is it funding and product development, the lack of acquired data sets needed to build and train the machine-learning algorithms, trust issues regarding privacy, or a combination of all those factors?
If I had to rank these obstacles by their strength of influence on the development of advanced AI solutions for education, it would be (1) the lack of appropriate data sets to train algorithms, (2) funding for development, and (3) data-privacy issues. The vast amounts of data needed to train sophisticated and unbiased AI learning applications across a host of subject areas and grade levels are simply not readily available in education. The only groups with easy access to the kind of fine-grained data that might be useful for training advanced AI algorithms are the current publishers and developers of online learning platforms and applications being used at scale (for example, some online learning platforms currently in use in the U.S. and English-language learning apps serving the Chinese market). That is a very small group. Without such data, advanced AI capabilities are not possible. But even if the required training data were widely available, the funding for development of AI-based solutions for the K-12 education market will only ever be a small fraction of what is being invested in other markets, such as health care, autonomous vehicles, and the military.
There's a ton of funding, ranging from venture capital for startups to big investments by public corporations, going into the AI that is shaping technology all around us. Why isn't investment in AI for the ed-tech sector as robust? Could that change following the COVID-19 school closures and the uncertainty over when brick-and-mortar classes will resume?
The K-12 education market is a notoriously difficult and costly market for product vendors due to a number of factors: small discretionary budgets, compliance requirements, long sales cycles involving committee approvals, etc. Unfortunately, at this time, it's difficult to envision large new investments by venture-capital firms and public corporations in the development of new products and services for the U.S. K-12 market.
In a webinar you participated in last year, you said that systems outside of education are ultimately going to influence the public's trust in AI in education. What did you mean?
The general public, including the parents of school-age children, will have their most significant experiences with AI applications outside of education, such as in health care or in their automobiles. The quality of those experiences will likely shape public attitudes toward AI's use in education. Media coverage, good and bad, will also play a major role in public attitudes. The next news story involving an AI application that garners wide media attention (a major personal-data breach, the discovery of a COVID-19 vaccine, a fatal autonomous-driving accident, a breakthrough in agricultural yields) will likely have a lasting impact on the public's general perception of the safety and reliability of all AI applications, including those developed for the education market.