Study points to opportunity for governments to work with public on use of AI
by King's College London
A major new study suggests people's direct experience with artificial intelligence has little impact on their views about its role in government decision-making—while factual information about the technology can significantly shift public opinion. Professor Yotam Margalit (King's College London) and Dr. Shir Raviv (Tel Aviv University) tracked the attitudes of more than 1,500 workers in a controlled experiment designed to mimic real-world interactions with AI systems. The work is published in the British Journal of Political Science.
Participants were randomly assigned tasks by either a human manager or an algorithmic "AI boss" and, weeks later, were surveyed about their attitudes toward using AI in public policy. The researchers found that taking orders from an algorithm significantly affected the workers' job satisfaction and performance. However, it did not alter their views on using AI in public policy decisions (e.g., in policing, welfare, or education). Whether the workers had a positive or negative experience with their algorithmic boss, their political attitudes toward government decision-making remained unchanged.
Instead, the study found that exposure to new, objective information was a major catalyst for changing minds. When participants were presented with expert commentary on the potential societal impacts of AI, their opinions shifted significantly days later.
The finding held even when the new information contradicted their pre-existing beliefs. Workers who were initially skeptical of AI grew more supportive of its use in government after reading about its potential benefits, such as increased accuracy and consistency. Conversely, learning about risks, like racial bias, actively decreased support.
Ultimately, the research suggests that public attitudes toward AI governance are neither fixed nor politically aligned. Rather, citizens are open to learning about the new technology and revising their views, underscoring the potential value of public education in the emerging AI era.
"Our findings point to an opportunity—perhaps only a temporary one—to create a broad coalition that spans across the political spectrum and promotes AI governance that is centered on safeguarding the public interest rather than the interests of partisan special interest groups," said Margalit.
More information
Yotam Margalit et al, The Politics of Using AI in Policy Implementation: Evidence from a Field Experiment, British Journal of Political Science (2026). DOI: 10.1017/s0007123425101282
Provided by King's College London