
AI tools like ChatGPT make learning easier—and more persuasive, study finds

Tech Xplore

Googling isn't quite what it used to be. Now, when typing something into Google's search engine, the first response flashing to life on your screen is not the top-ranked search result but an "AI Overview." When asked why (using Google's search engine), the AI Overview replied: "… to provide users with quick, synthesized answers to complex or multi-step questions, enhancing efficiency and user experience. The goal is to save users time by presenting relevant information from multiple sources in one place, reducing the need to click through multiple links."

Daniel Karell, an assistant professor of sociology and faculty fellow at Yale's Institution for Social and Policy Studies, wondered how the increasing reliance on tools like AI Overview, along with chatbots like Google's Gemini or OpenAI's ChatGPT, might affect people's understanding of historical events.

"Back in the day, if you wanted to know what the Seattle General Strike was, you'd grab an encyclopedia—or later, check Wikipedia," Karell said. "Now, you just ask ChatGPT. Or you Google it and get an AI-generated summary. Increasingly, the information we rely on is being packaged by tools built by companies. We're really interested in what that means for the future."

Karell and his team set out to determine whether reading AI-written summaries of history helps people learn better than reading human-written ones. They tested this by showing people short summaries of historical events—some written by humans, others generated by AI (such as ChatGPT)—and then quizzing them to see how much they remembered.

In a paper published in December in Social Science Computer Review, the researchers found that people who read AI-written summaries answered more questions correctly than those who read human-written ones. Notably, it didn't matter whether people knew the summary was AI-generated; either way, they learned more from it.

"People recalled facts better after reading the AI version than the version written by experts," Karell said. "It's like the model took Wikipedia and made it more readable."

Even more striking: AI-generated content shifted political opinions.

In a related paper published recently in PNAS Nexus, the researchers reported how reading the AI summaries affected participants' opinions about issues related to the historical events.

"If the AI summary had a liberal slant, people responded with more liberal opinions compared to those who read the Wikipedia summary," Karell said. "If it had a conservative slant, the stated opinions shifted that way."

Karell attributes this to the model's ability to construct arguments more clearly, not just present facts.

"We can imagine the large language model starting with something like a Wikipedia article and transforming it—making the text smoother, more engaging, and easier to retain," he said. "A similar pattern could explain the persuasive aspect: the model might present facts in a way that feels more compelling and so possibly more convincing."

Karell's co-authors on the paper include Matthew Shu, a former Yale researcher and currently a machine learning engineer at Brain Co.; Keitaro Okura, a Ph.D. candidate in sociology at Yale; and Thomas Davidson, an assistant professor of sociology at Rutgers University.

"AI tools like ChatGPT are becoming common ways to learn about history and other topics," Karell said. "This study shows that AI-written content can actually help people learn better—especially if the writing is clear and easy to understand. Yet, it also suggests that relying on AI to learn about things like history may end up influencing how we think about the world."

Publication details
Daniel Karell et al, Generating the Past: How Artificial Intelligence Summaries of Historical Events Affect Knowledge, Social Science Computer Review (2025). DOI: 10.1177/08944393251409744
Matthew Shu et al, How latent and prompting biases in AI-generated historical narratives influence opinions, PNAS Nexus (2026). DOI: 10.1093/pnasnexus/pgag022
Journal information: PNAS Nexus
Key concepts: AI alignment, generative AI ethics

Provided by Yale University