
Human bias reminders can make AI decisions seem more acceptable, study finds

by Tech Xplore

Reminding people that human decision-making can be biased can make the use of artificial intelligence seem less problematic, a new study finds. Drawing attention to the limitations of human decision-making may also make AI seem more consistent or impartial. This could increase pressure from voters on governments to rely more on algorithmic systems, rather than less.

The study shows that when people first think about the limitations of human decision-making, AI tends to appear more favorable by comparison. Conversely, when they first consider AI decision-making, they become more critical of human decision-makers.

The research was carried out by Florian Stoeckel of the University of Exeter, Ben Lyons of the University of Utah, and Adrienn Ujhelyi and Monika Kovacs of ELTE Eötvös Loránd University. They examined how people evaluate the risk of discrimination in public-sector hiring decisions, asking respondents for their views on a selection process conducted either by AI or by human recruiters.

They also asked people how likely they thought they would face discrimination in hiring decisions made either by AI systems or by human recruiters. Half of the respondents evaluated AI first, while the other half evaluated human recruiters first. This allowed some respondents to think about potential human bias before evaluating AI decision-making.

When respondents answered the question about humans first, the potential for human bias became more prominent in their thinking.

Professor Stoeckel said, "Evaluations of AI do not only depend on the properties of algorithms, but also on whether people compare AI to human decision-making. Once that comparison is made, AI-based decision-making can look better, not just worse. This matters for how citizens respond to the use of AI in hiring, and in the public sector more generally.

"These findings suggest that public concerns about AI bias are not fixed. Instead, they depend on the context in which people evaluate algorithmic systems. When public debates highlight the limitations of human decision making, AI systems may appear more favorable by comparison. The potential problem is that this shift in perception can occur even if an AI system itself still contains biases.

"People seem to rely on general assumptions about algorithms or computers when judging AI. Debates may shift toward the weaknesses of human decision making, which can make AI appear more acceptable even if the fairness of the AI system itself has not been demonstrated.

"There is a risk that those who want to increase public acceptance of AI may therefore emphasize the shortcomings of human decision-makers rather than demonstrate that a specific AI system actually operates fairly. If public administrations integrate AI, trust in these systems should be based on actual advantages and performance, rather than on comparisons with human weaknesses.

"The reverse dynamic is also possible. When people first think about AI decision-making, they may begin to evaluate human decision-makers more critically. As AI becomes a visible alternative, attention can shift to the limitations of human decisions. In that situation, AI may not only appear faster or cheaper, but potentially more consistent or impartial. If this dynamic extends more broadly, it could also increase pressure on governments from citizens to rely more on AI in decision-making, rather than less."

The YouGov survey was carried out with 11,000 participants in Austria, Germany, Hungary, Italy, the Netherlands, Poland, Spain and the United Kingdom. Respondents were randomly assigned to evaluate concerns about discrimination by AI-based decision-making before expressing views on human decision-makers (control condition) or to evaluate human decision-makers before AI (treatment condition).


Provided by University of Exeter