Persona bots, which can be found on Character.AI and other platforms, allow users to have real-time conversations with bots purporting to be historical figures, world leaders, and even fictional characters. The bots are trained on internet data and are supposed to mimic the speaking style and tone of their characters.
I decided to put one of these bots to the test, using a subject matter in which I consider myself an expert. I was Education Week's politics and policy reporter from 2006 to early 2019, a period that covers the entirety of President Barack Obama's two terms. In that role, I did in-depth reporting on Obama's K-12 policies. I never interviewed the president directly, but I spoke extensively to both of his education secretaries, Arne Duncan and John King, multiple times.
Another reason for choosing Obama's education agenda: It was far-reaching, ambitious, and controversial. There were complexities, nuances, subtle shifts in position. Would a chatbot be able to capture them? The answer: No, or at least not very well. (Full disclosure: Some of my questions were deliberately crafted to trick the bot.)
As you鈥檒l see, the bot got a few facts right. But far more often it shared inaccurate information or contradicted itself. Maybe most surprisingly, it was more apt to parrot Obama鈥檚 critics than the former president himself.
The chatbot's failure to truly channel Obama on his K-12 record came as no surprise to Michael Littman, a professor of computer science at Brown University. Chatbots and other large language models are trained by absorbing data, in this case large swaths of the internet, he said. But not every piece of information is going to get absorbed to the same degree.
"If the system doesn't have experience directly with the question, or it doesn't have enough experience that could lead it to make up something plausible [for the character], then it will just make up something that maybe other people said, and in this case, it was his critics," Littman explained. "[The bot] didn't have anything else to draw on."
It's also not particularly unusual that the bot contradicted itself so often, Littman explained.
"One of the hardest things to do when making things up is remain self-consistent, because you can easily make a statement, then later, you want to say something that doesn't really agree with that statement," he explained. "And then you're stuck. The bots are always making stuff up, so they get into that situation a lot. Sometimes they don't even bother noticing. They just continue forward."
The bottom line: The conversation is a good example of why character bots are useful for teaching about AI, but not very good, at least not yet, at helping students find accurate information. Also, if you're teaching a course on education policy history, definitely steer clear of referring students to this chat!