Shadow AI 'double agents' are outpacing security visibility – and that's a serious concern for UK businesses

Businesses are struggling to keep an eye on AI agents

TechRadar

News By Benedict Collins published 20 March 2026


  • AI agent adoption is outpacing visibility
  • AI agents are working autonomously across environments
  • Business leaders recognize the risk and believe they can prevent unauthorized access

UK businesses are increasingly deploying AI agents to help automate mundane tasks and improve productivity, but some are behaving as ‘double agents’ and putting business security at risk.

Microsoft's latest Cyber Pulse report has found that while most business leaders believe they can prevent unauthorized use by AI double agents, visibility is struggling to keep pace with adoption.

Unmanaged AI agents create blind spots for security teams, especially when autonomous AI agents are given permission to work across networks, devices, and software.


AI double agents risk sabotaging businesses

In 2026, adoption has risen rapidly: 62% of UK businesses have already deployed AI agents, a rise of 22% year over year. Additionally, 68% of businesses expect an enterprise-wide AI agent rollout within the next 12 months.

But business leaders also recognize the risk of this rising rate of adoption, with 84% noting that unauthorized or poorly governed AI agents are a serious security concern.

This problem is only likely to worsen as AI agents become more capable and accessible, especially when they can act autonomously with permissions stretching across different environments.

Microsoft's findings also note that security teams have three clear priorities: maintaining visibility into where AI agents are operating (50%), safely introducing AI agents into existing systems and processes (50%), and verifying that autonomous AI agents meet compliance, risk, and audit requirements (49%).
