Educators and Artificial Intelligence: "Err on the Side of Caution," Says RAND Researcher

By Benjamin Herold | January 23, 2019 | 6 min read

Artificial intelligence will likely have a relatively modest impact in K-12 classrooms, focused primarily on helping students develop "narrow procedural knowledge and skills," according to a new report from the RAND Corporation.

"The work of teachers and act of teaching, unlike repetitive tasks on the manufacturing floor, cannot be completely automated," wrote Robert F. Murphy, a senior policy researcher with the group.

The RAND report, titled "Artificial Intelligence Applications to Support K-12 Teachers and Teaching," does not include original research into AI in education. Rather, it's a "perspective," in which Murphy, a veteran evaluator of educational technology, provides "expert insight on a timely policy issue."

Without question, artificial intelligence is a topic of mounting interest. In the K-12 world, the discussion has numerous components, including how schools can best prepare students for a labor market already being disrupted by AI-powered automation, as well as the ways in which AI-powered technologies are already making their way into classrooms.

The RAND report focuses on the latter. Murphy looks specifically at instructional tools, such as adaptive software and automated essay-scoring systems, already in wide use in the classroom. He also considers administrative tools such as "early warning systems," which thousands of school districts now use to help identify students at risk of dropping out.

The potential benefits of such technologies are real, but likely limited in scope, Murphy said in an interview. AI is still light years away from being able to replicate the kind of creativity, empathy, and improvisation that is core to good teaching. There is also currently little evidence to show that such tools are effective at improving educational outcomes, and the technologies must overcome significant hurdles around privacy, potential bias, and public trust.

Schools would be wise to consider those realities before embracing AI whole hog, Murphy said.

"I would err on the side of caution," he said. "If publishers and developers aren't willing to provide information about how decisions are made within their systems, I think that raises red flags."

Augmenting, Not Replacing, Teachers

The RAND paper broadly defines artificial intelligence as "applications of software algorithms and techniques that allow computers and machines to simulate human perception and decision-making processes to successfully complete tasks."

Within the K-12 world, the most visible uses of such technologies are in adaptive instructional systems and intelligent tutoring systems that aim to provide customized content based on each student's strengths and weaknesses.

Many such tools are what Murphy described as "rule-based" applications that operate primarily on if-then logic statements programmed in advance. Perceived advantages of such technologies include the ability to let students work at their own pace, advance only when they master requisite skills and concepts, and receive continuous feedback on their performance.
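
To make "rule-based" concrete, here is a minimal sketch, in Python, of the sort of pre-programmed if-then logic such a system might run behind the scenes. The skill names, thresholds, and actions are all hypothetical, invented for illustration rather than drawn from any actual product.

    # A toy "rule-based" adaptive step: every branch below is an if-then rule
    # fixed in advance by developers; nothing here is learned from data.
    def next_activity(skill: str, mastery: float, attempts: int) -> str:
        """Pick the next activity for one skill, given a 0-1 mastery estimate."""
        if mastery >= 0.9:
            return f"advance: unlock the next skill after '{skill}'"
        if mastery >= 0.6:
            return f"practice: more '{skill}' problems, hints enabled"
        if attempts >= 3:
            return f"remediate: replay the '{skill}' instruction video"
        return f"retry: repeat the current '{skill}' problem set"

    print(next_activity("fraction addition", mastery=0.72, attempts=2))
    # -> practice: more 'fraction addition' problems, hints enabled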

There's a fair bit of research evidence to show that such systems can be effective in the classroom, but only when it comes to topics and skills that revolve around facts, methods, operations, and "procedural skills," the RAND report says.

"The systems are less able to support the learning of complex, difficult-to-assess, higher-order skills," such as critical thinking, effective communication, argumentation, and collaboration, according to the report.

Other AI tools, which now use machine-learning techniques to discover patterns and identify relationships that are not part of their original programming, face similar limitations, the report contends.

Automated essay scoring systems, for example, now provide valuable feedback to students, the RAND report says. But such feedback is mostly focused on things like grammatical errors and the use of passive voice, rather than the depth of the ideas being communicated.
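
To illustrate how surface-level that feedback is, the sketch below flags likely passive voice and repeated words with simple pattern matching. The patterns are crude stand-ins invented for this example; nothing in them evaluates the depth of a student's ideas.

    import re

    # Crude surface checks of the sort automated feedback tends to emphasize:
    # mechanics and passive constructions, not the substance of the argument.
    PASSIVE = re.compile(r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b", re.I)
    REPEAT = re.compile(r"\b(\w+)\s+\1\b", re.I)  # e.g., "the the"

    def surface_feedback(text: str) -> list[str]:
        notes = [f"possible passive voice: '{m.group(0)}'" for m in PASSIVE.finditer(text)]
        notes += [f"repeated word: '{m.group(0)}'" for m in REPEAT.finditer(text)]
        return notes

    print(surface_feedback("The experiment was conducted by the the students."))
    # -> ["possible passive voice: 'was conducted'", "repeated word: 'the the'"]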

The lesson for K-12 educators and administrators?

Such tools can't replace teachers, Murphy said. But they can help teachers do more, by making it easier to backfill missing foundational knowledge and skills for students who are behind, for example, or by making it more feasible to assign student writing with a realistic expectation that all students will receive at least some immediate feedback on their work.

"In the broader field of AI, there's a lot of focus on augmenting workers' capacity," Murphy said. "It's a way to help teachers solve some important challenges."

Concerns Over Privacy, Bias, Transparency

When it comes to artificially intelligent administrative tools, meanwhile, there are both bigger opportunities and bigger questions.

AI-powered tools are already helping schools identify students who are at risk for dropping out, hire teachers and other staff, improve logistics, and offer college and career counseling.

There does not appear to be much evidence behind such tools one way or another, Murphy said.

The best-case scenario, he said, is that AI can help improve organizational decision-making to the point where enough money and staff time is freed up so that more resources can be directed into the classroom.

But a series of "huge" hurdles must be overcome before that's possible, Murphy maintained.

For one, there's reason to believe that most developers of AI-powered tools don't have access to enough "high-quality" data to really drive effective decision-making. (AI tools "learn" by being trained on large data sets. In K-12, though, such training sets are typically limited to information on things like test results, demographics, and discipline, not the kinds of granular data on what's happening between students and teachers that truly shapes learning.)
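
The sketch below shows the shape of that limitation with synthetic data: an early-warning model like this one can only weigh the coarse columns a district actually collects. The feature names, numbers, and scikit-learn setup are assumptions made for illustration, not a description of any real system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # The model's entire view of each student is whatever columns exist:
    X = np.column_stack([
        rng.normal(70, 10, n),   # test score
        rng.poisson(5, n),       # absences per term
        rng.poisson(0.5, n),     # discipline incidents
    ])
    # Invented ground truth: dropout risk driven mostly by absences.
    y = (X[:, 1] + rng.normal(0, 1, n) > 8).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    # Score one (hypothetical) student:
    print("dropout risk:", model.predict_proba([[62.0, 11, 1]])[0, 1].round(2))

Everything such a model can say about a student is a function of those few columns; the granular classroom data Murphy describes simply isn't in the matrix.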

At the same time, though, the push to collect more educational data is raising significant concerns. Some parents and activists worry about privacy, arguing that efforts to gather, say, every click recorded by educational software are pushing far beyond the capacity of current laws to protect students' personal information. The RAND report also flags issues of bias, noting that the statistical models that power AI often reinforce racial, gender, and other biases that are already embedded in the data upon which such tools are trained.
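
The bias concern can be shown in miniature: if the historical labels a model is trained on were skewed against one group, the fitted model absorbs that skew and carries it forward. The data below is synthetic and deliberately exaggerated to make the effect visible.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    ability = rng.normal(0, 1, n)      # what we would like to measure
    group = rng.integers(0, 2, n)      # a demographic attribute

    # Historical labels penalize group 1 regardless of ability; the bias
    # lives in the training data, not in the learning algorithm itself.
    labels = (ability - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

    model = LogisticRegression().fit(np.column_stack([ability, group]), labels)
    print("learned weight on group membership:", model.coef_[0][1].round(2))
    # A large negative weight: the model has reproduced the historical skew.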

And heightening both sets of concerns is a lack of transparency, with many software developers proving unable or unwilling to communicate publicly in a clear, understandable way about how their systems make decisions.

That erodes public trust and creates a big problem for K-12 educators and policymakers, Murphy said, especially as the stakes attached to AI-powered decisions continue to outpace most school districts' capacity to fully consider the pros, cons, and potential unintended consequences of such technologies.

"I think we really need to be cautious about the blind acceptance of AI-powered decision tools when there are serious consequences for the individuals who are the object of their decision-making," Murphy said.

Image: Getty


A version of this news article first appeared in the Digital Education blog.