Artificial intelligence can write a résumé, answer medical questions, and even have philosophical conversations. But one task it’s not always so good at is solving math problems—and that poses a challenge for teachers and students who might try to use the technology in class.
AI bots, such as ChatGPT, regularly answer math questions incorrectly, botching calculations or using faulty logic. That’s because these models aren’t built to perform calculations, experts have said. Instead, chatbots analyze large amounts of text to make predictions—which can be wrong. The tool can also give contradictory answers to the same question asked at different times.
It’s a problem that AI developers have begun to address.
In February, a report found that Khanmigo, an AI tutor created by the online education nonprofit Khan Academy, regularly struggled with basic computation. Khan Academy has since made changes to the tool, including directing numerical problems to a calculator.
And earlier this month, OpenAI, the organization that created ChatGPT, released a new model designed to better reason through complex math tasks.
But still, said Lane Walker, a high school math teacher in Wake County schools in North Carolina, “Every once in a while I get an answer that’s clearly wrong. … You can’t take everything it says without question.”
Surveys have shown that teachers are hesitant about bringing AI into the classroom, in part due to concerns about chatbots presenting them or their students with incorrect information.
In math, specifically, educators are split on the role that AI should play, according to an EdWeek Research Center survey. When asked how math instruction should change to address the existence of AI platforms that can solve math problems for students, 43 percent of teachers, principals, and district leaders said that students should be completing their work in class with paper and pencil to make sure they’re not accessing these tools.
But about a third of educators said that students should be taught how to incorporate AI into their assignments, and 1 in 5 said that teachers should use AI to create math assignments.
Education Week spoke to two teachers who are using AI in these ways to find out how they handle the technology’s shortcomings—and how to turn these flaws to their advantage.
Lane Walker
Math teacher
Fuquay-Varina High School, Wake County schools, N.C.
Walker has always tried to create what she calls “learning adventures” for her students—math problems that reference their interests, like sports, and involve some open-ended thinking.
But figuring out how to write a problem that checks all those boxes while evaluating students’ understanding of a math concept is time consuming. Creating a whole worksheet full of them? “I was like, ‘Oh man, who’s got time for that?’” Walker said.
For the past year, Walker has sometimes outsourced this “grunt work,” as she described it, to ChatGPT.
She’s asked it to make flashcards, create two-step word problems for Algebra 1 classes, and write an inquiry-based lesson to help students move from working with one-variable equations to equations with several variables.
Walker has also used ChatGPT to deepen her own math knowledge. She has asked the chatbot, for example, to give her a summary of research on an ongoing debate among mathematicians: Is 0 a natural number? In response, ChatGPT broke down the arguments and cited sources on either side.
No matter what Walker uses AI for, she always does a quick check of the problems. In the past, she has had to correct errors in a procedural problem she had planned to give students.
She’s also stumped ChatGPT on occasion. When she asked it whether students should expect to see point-slope form on the ACT college-entrance test, and where to find examples of practice problems, the AI directed her to a resource that didn’t exist.
“When I saw some [responses], it was so obvious that AI wasn’t interpreting my question accurately,” Walker said.
David Dai
8th grade accelerated math and geometry teacher
Barton Academy For Advanced World Studies, Mobile, Ala.
Like Walker, Dai uses ChatGPT to create lessons. But instead of asking the tool to write problems, Dai employs it as a brainstorming partner.
“I’ll type a prompt into ChatGPT and say, ‘Hey, we’re about to start a unit on looking at polygons and angle relationships and side length measures. … How can I contextualize it, and maybe think about it from an arts perspective or from an architecture perspective?’” he said.
In response, Dai said, the chatbot suggested building a lesson around stained glass windows in cathedrals. He used that idea as a way to hook students into the lesson by giving them a real-world example of why knowing how to measure angles matters.
“There is still this aspect of verification,” he added. “I take what it shares and I go and do my own research and dig a little bit deeper.”
Dai knows that his students have access to AI, too—tools like Photomath and Symbolab that can spit out answers to math problems. He doesn’t discourage his classes from using these, though.
“If [AI] can give you a step-by-step answer, then it’s very low cognitive demand in terms of what I’m having my students do,” Dai said. He tries to write questions that require too much critical thinking and open-ended problem-solving for AI to handle. Sometimes, he has students test whether he’s outfoxed the technology.
“I’ll have my students take pictures of the problems that I create, and they’re like, ‘Yeah, no, [Photomath isn’t] giving me an answer.’ And I’m like, fantastic, because you need to now think through: How are we going to set up this problem?” he said.
For Dai, the possibility that AI might get a math problem wrong poses another learning opportunity. One way that he assesses student understanding is through error analysis: asking them to explain how and why a problem was solved incorrectly. He encourages students to use that process to double-check any answers AI provides them.
“My students get into a habit of … being able to read other people’s work and make sense of others’ thinking, and say, ‘Oh, yes, that makes sense,’ or, ‘No, mathematically, that doesn’t work.’”