Copyright and AI in the UK: the balancing act
Will the UK diverge on AI and copyright law?
TechRadar News · By Rajvinder Jagdev, published 29 November 2024
It is no secret that copyright-protected creative works (including newspaper articles, novels, music and images) are being used to train generative AI models. The issues are complex, but the battle lines are clearly drawn. Creatives are lobbying governments to protect their rights against what many see as an existential threat to the future of creativity itself. The widely publicized Statement on AI Training, with over 30,000 signatories including high-profile writers, actors and academics, has brought growing public attention to creators’ perspective on the topic.
On the other side, AI companies are pushing for maximum freedom to let their algorithms train on existing material to ‘turbo-charge’ innovation. Microsoft CEO Satya Nadella likened AI training to learning a topic from a textbook, arguing that companies should be given free rights over data to train their models. As in many countries, the UK government is in the spotlight as it works out how to reconcile the conflicting interests of the groups seeking to shape the legislation governing this rapidly developing area.
Rajvinder Jagdev
Partner at Powell Gilbert.
What is the current UK position?
To date, the UK has taken a light-touch approach to the intellectual property issues surrounding artificial intelligence. For instance, the Copyright, Designs and Patents Act 1988 (CDPA), drafted well over thirty years ago, remains the primary piece of legislation in this area. The CDPA gives copyright owners the right to prevent original creative works from being copied, distributed or performed without permission. Although the CDPA has been amended over the years, it has not yet been updated to account for the AI age. As it stands, this means that unauthorized copying of protected works to train AI models for commercial purposes is not allowed. This contrasts with the EU position, where copying for commercial purposes is permitted unless the rights holder has opted out, and with the US, where AI developers can seek to rely on the “fair use” exemption.
In practice, enforcing this restriction in an AI context is challenging. For a start, it is hard to know whether any particular work has been used without access to the training data set behind each system. Even if it is established that copyright-protected works were used to train an AI model, a rights holder must still establish that copying of the work occurred within the jurisdiction. To have a chance of success in such proceedings, it is essential for legal practitioners to properly understand the technology underlying the allegedly infringing AI model. Although training data is necessarily copied initially (e.g. into RAM), in most cases the AI model does not store a copy of the raw data once it has been fed in. Instead, the AI’s neural network evolves in response to the training data. Without access to records, it is difficult to establish which data set (and which protected works) was used in training, although in some cases tell-tale features in the output may provide clues.
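To illustrate the point, here is a minimal, hypothetical sketch of a training step (it does not represent any system discussed in this article, and the model, data and file names are invented for illustration). The raw data is held in memory only while gradients are computed; what is ultimately saved is the updated set of model weights, not the works themselves.

```python
# Hypothetical sketch: training updates weights; the saved model contains
# learned parameters, not copies of the training works.
import torch
from torch import nn

model = nn.Linear(512, 512)          # stand-in for a far larger generative model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    batch = torch.randn(32, 512)     # stand-in for a batch of copied-in training data
    target = torch.randn(32, 512)
    loss = loss_fn(model(batch), target)
    optimizer.zero_grad()
    loss.backward()                  # gradients nudge the weights in response to the data
    optimizer.step()
    # `batch` is discarded here; nothing in `model` stores the raw data itself.

torch.save(model.state_dict(), "model.pt")  # only the learned parameters persist
```

This transience of the training data is precisely why, absent records or tell-tale outputs, proving which works were used can be so difficult.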
Both of these issues, copying and jurisdiction, are in dispute in Getty v Stability AI, where Getty asserts that Stability AI has infringed its IP rights, both through the alleged use of its images as training data and through generated image outputs that bear the Getty watermark. The trial is due to take place in June 2025, and it will be interesting to see how the UK court addresses these issues.
Across the pond, similar cases are pending, including the parallel US proceedings in the Getty case. The New York Times has brought a claim against OpenAI in federal court in New York, including a demand for the destruction of AI models trained on its content. The outcome of these cases could drastically reshape the relationship between AI companies and news outlets with respect to copyright.
How might UK policy change?
The UK government is expected to address some of the contentious topics surrounding AI in its soon-to-be-published Artificial Intelligence Opportunities Action Plan. This is likely to propose changes to the CDPA to address the use of copyright-protected works to train AI models. The UK Prime Minister, Sir Keir Starmer, has indicated in a recent statement that the Action Plan will include rights for publishers to maintain control over, and be paid for, content that is used for training. These changes are long awaited: the issue of content creators’ rights was debated in Parliament in 2021 following a private member’s bill introduced by Labour MP Kevin Brennan. That bill proposed rights to remuneration for creators and a transparency obligation that would give authors the right to be informed about how their works are being used.