As we begin 2025, the emergence and growth of Artificial Intelligence (“AI”) show no signs of slowing down.
Many believe that AI has already outpaced the current legal and regulatory frameworks in the UK. This has led to businesses lacking the certainty and confidence they need to embrace AI and maximise the benefits it has to offer, whilst minimising the risks of becoming entangled in AI copyright litigation.
This blog explores the relationship between AI and UK copyright law, the key challenges posed by AI, what to expect in the future, and, most importantly, how your business can be prepared.
What is AI?
‘AI’ has become something of a buzzword over the past few years. Before diving into its relationship with copyright, it is helpful to remind ourselves of what AI actually is.
Who better to summarise the position than OpenAI’s well-known AI model, ChatGPT?
ChatGPT describes AI as follows:
“AI is the simulation of human intelligence in machines that are programmed to think, learn, and make decisions. AI systems can perform tasks that typically require human cognition, such as problem-solving, pattern recognition, language understanding, decision-making, and even creativity”
Copyright – the basics
In the UK, copyright protection is largely governed by the Copyright, Designs and Patents Act 1988 (“CDPA”). Copyright protects an author’s own intellectual creation, and this includes original literary, dramatic, musical or artistic works.
Under UK copyright law, the general rule is that the author is the first owner of the copyright which subsists in the work. However, this legislation was enacted with humans in mind as the creators.
In the age of AI, the position is less clear cut in relation to whether works created by AI qualify for copyright protection, and if they do, who is the owner of the copyright.
Is AI-generated work protected by copyright?
The CDPA has specific provisions which allow for the protection of what are known as “computer-generated works”. Computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”.
Whilst the CDPA could not have anticipated the recent developments in AI at the time it was introduced, the general position in the UK, in the absence of reforms to this area of law, is that AI-generated works fall within the definition of computer-generated works.
The second hurdle in relation to copyright protection is the requirement for the work to be original. This originality hurdle is often considered to be a relatively low bar.
Therefore, where a human is exercising their own creative ability to make free and creative choices and AI is merely acting as a facilitative tool, the work has the potential to qualify for copyright protection.
However, in situations where work is generated by highly sophisticated AI systems such as ChatGPT, it is possible that the human input has become so distant from the output generated that the requirements for copyright protection are not met.
A further issue is whether AI-generated outputs can ever truly be ‘original’ given that the output is generated by the AI’s learning and scraping which derives from pre-existing data. This particular issue also gives rise to AI copyright infringement concerns, which are discussed further in this blog.
Who owns the copyright?
Assuming the AI-generated work qualifies for protection, the question then becomes who owns the copyright?
UK copyright law makes a unique distinction between ‘Authorship’ and ‘Ownership’. Under the CDPA, the author of a work is the person who created it. The author of the work is then usually the first owner of any copyright in it, subject to exceptions where works are created in the course of employment or where there is a contractual agreement with terms as to copyright ownership which alter the default position at law.
With regard to AI-generated works, which as we have explained likely fall within the category of computer-generated works, the author for copyright ownership purposes is (i) the person [emphasis added] by whom (ii) the arrangements necessary for the creation of the work are undertaken.
Looking at the first point, it is therefore currently impossible for an AI system itself to own the copyright in any work it creates. Instead, only the human responsible for the input is capable of this. This is something the UK courts have considered in the context of whether an AI system can be listed as the inventor of a patent in Thaler v Comptroller-General of Patents, Designs and Trade Marks.
Turning to the second point, before the rapid development of AI, relatively little consideration was given to the requirement of “arrangements necessary” under the CDPA.
However, now that AI has developed to such a point where it can create sophisticated works with very little input required from the human using the tool, there is an argument that the user cannot be said to have made the “arrangements necessary”.
If this is the case, and an original work has nonetheless been created, then there is a secondary argument as to whether it is the developers of the AI tool themselves who have made such arrangements and are therefore the copyright authors.
Until the law is clarified, be it through legislation or decisions in the courts, particular attention must be given to the contractual arrangements in place with AI service providers and the terms which deal with copyright ownership. Some AI developers, for example OpenAI, state in their terms of use that the copyright subsisting in outputs is owned by the user responsible for the input, whilst others do not assign such rights.
AI copyright infringement – considerations from both user and AI developer perspectives
Going further than assigning ownership, in 2023, Microsoft launched ‘The Copilot Copyright Commitment’. As part of the Copilot Copyright Commitment, Microsoft undertakes to defend customer organisations against intellectual property claims and to cover all related legal expenses incurred in AI copyright litigation.
The commitment is intended to address customer concerns relating to potential IP infringement liability that could result from the use of the output of Microsoft’s Copilots and Azure OpenAI Service.
Such policies should not, however, be taken as meaning that users have free rein when it comes to AI-generated works. Caution needs to be exercised to navigate the various guardrails and mitigations that must be in place in order to be eligible for the benefits under these sorts of policies.
Users of AI need to be aware that they may be liable for copyright infringement where they have input pre-existing copyright work without the owner’s consent and cannot rely on a fair dealing exception to copyright infringement.
Fair dealing exceptions are available in the UK, but they are significantly narrower in scope and not nearly as permissive as the approach in the US with regard to “fair use”, particularly where copying has taken place in a commercial context.
In order to train AI, developers must feed it, and provide it with access to, massive amounts of data. This process is known as text and data mining: a process by which the AI model reviews data, identifies patterns and trends, and essentially educates itself on any given topic.
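By way of a purely illustrative sketch (not drawn from any particular AI provider’s tooling), the short Python snippet below mimics, in miniature, what text and data mining involves: a program scans a corpus of text and extracts simple patterns, here word frequencies and common two-word phrases. The corpus and the analysis are invented for illustration only; real generative AI models are trained on vastly larger datasets using far more sophisticated techniques.

```python
# Toy illustration of "text and data mining": scan a small corpus of text
# and surface simple patterns. The corpus below is invented for illustration;
# real AI training uses enormous datasets and complex statistical models.
from collections import Counter
import re

corpus = [
    "Copyright protects original literary, dramatic, musical and artistic works.",
    "AI models learn patterns from large volumes of text and data.",
    "Text and data mining extracts patterns and trends from existing works.",
]

def tokenise(text: str) -> list[str]:
    # Lower-case the text and split it into simple word tokens.
    return re.findall(r"[a-z']+", text.lower())

word_counts: Counter[str] = Counter()
bigram_counts: Counter[tuple[str, str]] = Counter()

for document in corpus:
    tokens = tokenise(document)
    word_counts.update(tokens)                      # individual word frequencies
    bigram_counts.update(zip(tokens, tokens[1:]))   # common two-word phrases

# The "patterns" this toy mining process has identified in the corpus.
print(word_counts.most_common(5))
print(bigram_counts.most_common(3))
```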
The issue for AI developers is that the datasets they wish to use are likely to contain works protected by third-party copyright. Conversely, the issue for rights holders is that it can be difficult to prove that their works are being used to train the AI model and that their works have been copied in the AI-generated output.
The current legal framework in the UK does not allow copying of copyright-protected material for training generative AI models, except where it is carried out with permission of the copyright owner or done in a research or study context and for purely non-commercial purposes.
As a result, under current UK law, when an AI company wishes to scrape data from third-party materials publicly available on the internet to train its models, it may need to obtain the consent of the relevant rights holder in order to do so. This is something we have previously looked at in the context of the ongoing Mumsnet v OpenAI dispute.
With no established licensing framework in place for this purpose, and with generative AI models requiring vast amounts of data to learn, the process can be difficult to navigate.
What to expect in the future?
The difficulties created by the absence of an established licensing framework have been recognised by the UK Government. Earlier this month, on 13 January 2025, the UK Government published its AI Opportunities Action Plan.
The plan includes a total of 50 recommendations for AI growth, including the following suggestions for the training of AI models:
This recommendation gives a clear indication of what the UK Government considers to be a potential solution to the current issues with training AI models. However, the difficulty will be to satisfy the AI developers, who will likely want access to far bigger and more diverse sets of data, that this is a comprehensive solution.
When faced with what is a global challenge, the UK may look for inspiration from other parts of the world. In the EU, there is an exemption that permits copying for the purpose of training generative AI, except where the content owner has opted out. This exemption is found in Article 4 of the Directive on Copyright in the Digital Single Market (the “DSM Directive”).
The UK has decided not to implement the DSM Directive and will instead develop its own regime to regulate tech firms in the UK. If the requirements in the UK differ from those under the DSM Directive, this may pose a challenge for the larger AI firms providing services across Europe, for example where an AI firm provides services in the EU but trains its generative AI model outside of the EU. We have seen similar territorial challenges in relation to the GDPR, which has an “extra-territorial effect”, applying to organisations that handle the data of EU citizens regardless of whether they are based in the EU.
Again, the potential tension between the UK and EU position is something which has been acknowledged by the UK Government in its recent AI Opportunities Action Plan. In this respect, the plan makes the following recommendation:
It is therefore highly likely that within the next few years, the UK will see a shift towards a copyright exception for the purposes of text and data mining to train AI.
If this is the case, a further possibility is for the UK to turn towards a system which compensates copyright owners for their role in AI training and incentivises them to allow the copying to take place. An ‘AI levy’ would attempt to ensure the payment of fair remuneration to creators.
However, the exact mechanisms of such a system are still unknown. The biggest challenge for the UK Government will be to balance the interests of creators, who will want fair compensation for the use of their work and transparency to enable clear monitoring, against those of AI stakeholders, who will be interested in cheap and continuous access to vast amounts of data.
Practical tips for businesses
With so much uncertainty surrounding both the current and future legal position in relation to AI and copyright, it can be difficult for businesses to understand and address the risks.
If you are looking to integrate AI into your business, here are our top tips for minimising the risk of a claim for AI copyright infringement:
Our London-based intellectual property litigation specialists have the expertise to deal with the legal issues that surround AI and copyright.
If you have concerns about AI copyright infringement that are affecting your business, or would like advice on how to utilise AI in a copyright compliant manner, please contact Waterfront here and a member of our IP & Disputes team will be in touch.