Study warns cost-cutting use of generative AI could increase cyber-attack risks

by Lisa Lock, scientific editor, and Andrew Zinin, lead editor
Editors' notes

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, peer-reviewed publication, trusted source, proofread.



Credit: Pixabay/CC0 Public Domain

Newly published research from a leading computer scientist warns that using generative AI to design, train, or perform steps within a machine learning system could increase serious risks. Michael Lones, a professor at Heriot-Watt University's School of Mathematical and Computer Sciences, argues in a new paper that, despite potential cost and efficiency benefits, generative AI could expose organizations and the public to unintended harms, including cyber-attacks, data breaches, and bias against underrepresented groups.

Professor Lones' study, published in the journal Patterns, explores how generative AI is increasingly being used to design, build, and operate machine learning systems across a wide range of sectors.

Professor Lones said, "Machine learning developers need to be aware of the risks of using Gen AI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that.

"Given the current limitations of generative AI, I'd say this is a clear example of just because you can do something doesn't mean you should."

Machine learning systems are algorithms that learn to recognize patterns in data, which they can then use to make predictions and decisions regarding new data.

Machine learning has been around for decades, and most people encounter it in their daily lives in the form of spam filters, product recommendations on e-commerce websites, and social media newsfeeds. But it's also used in high-stakes situations, such as assigning patients to drug trials and processing insurance claims.

In the last two or so years, there has been a push to incorporate generative AI (mainly in the form of LLMs) into machine learning systems, but doing so carries risks and limitations that developers and the general public should be aware of.

Professor Lones adds, "If you have Gen AI working in a number of different ways within your machine learning workflows or system, then those uses can interact in unpredictable and hard-to-understand ways.

"My advice at the moment is to avoid adding too much complexity in terms of how we use Gen AI in machine learning, particularly if you're in a sector that has high stakes that impact people's lives and livelihood."

Professor Lones' work explores four ways in which generative AI is currently being applied in machine learning: as a component within a machine learning pipeline, to design and code machine learning pipelines, to synthesize training data, and to analyze machine learning outputs.

All of these applications carry risks, and these risks are compounded if LLMs are used for multiple tasks within a machine learning system, or if LLMs are "agentic," meaning they can autonomously use external tools to solve problems.

One of the biggest risks is simply that LLMs sometimes make mistakes, take bad decisions, and fabricate or "hallucinate" information.


These errors aren't necessarily predictable and may be difficult to evaluate because LLMs operate in a non-transparent way, which presents an additional issue for legal compliance.

Professor Lones added, "In areas like medicine or finance, there are laws about being able to show that the machine learning system is reliable, and that you can explain how it reaches decisions.

"As soon as you start using LLMs, that gets really hard, because they're so opaque. It's important for people in the general public to be aware of the limitations of GenAI systems.

"Companies will deploy these systems to do things like cut costs, and this may improve the experience that end users get, but it may also have negative consequences, such as bias and unfairness."

Publication details

Michael A. Lones, Pitfalls and risks of generative AI in machine learning, Patterns (2026). DOI: 10.1016/j.patter.2026.101534

Journal information: Patterns

Provided by Heriot-Watt University