In the blink of an eye, artificial intelligence (AI) evolved from the new kid on the block to a mainstay that can’t be ignored, no matter how much some agencies might like to…
From brainstorming campaign ideas and writing copy to producing visual content and even analysing consumer behaviour, many clients now expect you to use AI in some way.
As the many faces of AI become rooted in the agency toolbox, the unsettling question remains: what happens when the algorithm gets it wrong?
And who’s responsible when it does: you, the client, or the tool? The answer isn’t yet clear. But for agencies, that uncertainty has real insurance implications.
Is AI a risk multiplier?
Most, if not all, UK agencies are using generative AI tools to some degree, whether for internal purposes or to fulfil deliverables for their clients – or both!
ChatGPT, Midjourney, Runway, and Adobe Firefly all support creative production, and the benefits are clear: better ideation, faster delivery and reduced costs.
How you balance these benefits with original creativity is totally in your hands, but the new exposures might not be…
Copyright and IP ownership
Who owns the AI-generated output? If it’s the agency or their client, how do they prevent the AI tool from using what it has learned on that project to create something similar for another user?
AI generates output from pre-existing information, meaning that its outputs are, by their nature, not completely original. So what happens when those outputs are recognisably close to existing intellectual property?
Defamation and reputational harm
What if an AI tool produces or suggests content that damages a third party’s reputation, and that’s missed before being published?
Embarrassment and a disgruntled client might be the least of your worries. It could result in the end of the working relationship, or even expensive lawsuits – all in the race for increased efficiency.
Data protection breaches
AI tools are trained on existing data, and they learn from the information inputted by users. Some platforms allow information to be ringfenced, giving a degree of data protection, but even those aren’t always 100% watertight.
Any information you feed into an AI tool carries risk. And when that data is sensitive, confidential or personal, you could be exposed to litigation and regulatory action.
Professional negligence
If your agency relies on AI for strategic, creative or analytical decisions that later prove flawed, clients may still hold you accountable.
Think of it like outsourcing to a poor contractor – the end client will still hold the agency responsible.
The blurred line between tool and team
Traditionally, agencies have been liable for the advice, content or design they deliver – not for the performance of their tools. But when AI becomes a decision-maker rather than a mere instrument, the line becomes blurred.
Imagine an AI copywriting tool inadvertently reproduces another brand’s tagline. The client runs the campaign, the competitor sues, and the claim lands on your desk…
Or say that your media-buying algorithm over-targets certain demographics, breaching advertising regulations or equality laws. Was that your negligence, or the tool’s?
From a legal standpoint, clients will likely still see you as responsible. AI sits within your delivery ecosystem: you chose it, configured it, and used it as part of your professional service. That makes it your risk.
What UK insurance does (and doesn’t) say
At present, most UK Professional Indemnity (PI) wordings cover your negligent acts, errors or omissions in the course of professional services, but they don’t mention AI specifically… Bar a few recent exceptions, insurers have not gone much further than acknowledging it exists!
So what does that mean for you? If a claim arises from your AI output, you’d most likely still be covered, provided you exercised reasonable skill and care. The problem therefore lies in whether insurers consider your reliance on unverified AI output to be “reasonable”.
If you’re thinking “well, that’s totally subjective”, you’d be right! And whilst there are some protections for companies in such a situation, it isn’t out of the realm of possibility for insurers to take a default stance to avoid paying AI-based claims en masse. Remember how shamefully some household insurers dealt with COVID-19 Business Interruption claims, despite it being abundantly clear from the outset that they were liable to pay?
What could future insurance policies look like?
Just like AI, these will evolve over time – especially as insurers begin to experience losses.
Usually, insurers look to tighten areas where claims are being paid beyond what they intended to cover. Grey areas will likely be filled with more specific definitions, and greater obligations will be imposed on the agency buying the insurance in the first place.
There are countless ways these changes could manifest, but expect a handful of emerging AI-related risks and exclusions to start appearing in policy wordings.
Best practice for UK agencies using AI
While the legal and insurance frameworks are still evolving, agencies can take practical steps today to manage risk so that they’re ready when things change.
Audit your AI use
List every tool and process you use where AI supports or replaces human judgement. Identify which outputs are delivered to clients.
Check licence terms
Many free or public AI tools disclaim ownership of outputs or retain training rights over your data. Paid, enterprise-grade licences often have stronger IP guarantees. Know your rights and responsibilities.
Retain human oversight
Ensure a trusted member of your team reviews and signs off AI-generated content before client delivery. This not only protects client relationships, but it also demonstrates “reasonable care” if a claim arises.
Avoid uploading client data to public tools
Consider whether the data you’re inputting into AI is sensitive, personal or client-owned – like a contract or brand strategy deck. If in doubt, don’t upload it, because the fallout could include contract breaches, GDPR transgressions and rejected insurance claims.
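As an illustration, here’s a minimal Python sketch of the kind of pre-flight check a team could run before pasting text into a public tool. The patterns and the `flag_sensitive` helper are our own hypothetical examples – real personal-data detection needs far more than a few regexes, so treat this as a habit-builder rather than a safeguard.

```python
import re

# Hypothetical patterns for obvious personal data. Real detection needs
# far more than regexes - treat this as a prompt for thought, not a guarantee.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK phone number": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "National Insurance number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the types of personal data spotted in `text`."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Draft a reply to jane.doe@client.co.uk about the Q3 brand strategy deck."
hits = flag_sensitive(prompt)
if hits:
    print("Hold on - possible personal data found:", ", ".join(hits))
else:
    print("No obvious personal data, but a human should still check.")
```

Even a rough check like this builds the habit of pausing before client material leaves your systems.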
Document your processes
Create an AI policy for your agency. Then, on a project level, keep a record of prompts, edits, and human review steps. This audit trail may help defend a PI claim.
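To show how lightweight that record can be, here’s a hypothetical Python sketch of a per-project audit log. The field names and the JSONL file are illustrative assumptions, not a prescribed format – the point is simply that each AI-assisted step gets a timestamped entry with a named human reviewer.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail entry: one JSON line per AI-assisted step.
# Field names and the file name are illustrative - adapt to your AI policy.
def log_ai_step(tool: str, prompt: str, output_summary: str,
                reviewed_by: str, approved: bool,
                logfile: str = "ai_audit_log.jsonl") -> None:
    """Append a timestamped record of an AI-assisted step, with human sign-off."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output_summary": output_summary,
        "reviewed_by": reviewed_by,
        "approved": approved,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a copywriting step and who approved it.
log_ai_step(
    tool="ChatGPT",
    prompt="Three tagline options for a sustainable trainers launch",
    output_summary="Shortlisted option 2; options 1 and 3 rejected as too generic",
    reviewed_by="creative.director@agency.example",
    approved=True,
)
```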
Review contracts
Update client terms to clarify how AI is used, what warranties apply, and where responsibility sits.
Talk to your broker
Ask whether your PI and Cyber policies explicitly cover AI-assisted work, and whether any exclusions or endorsements apply.
The future: insurers catching up with innovation
Although insurers might not react to AI’s evolution as quickly as other sectors, they will adapt.
They’re already starting to recognise that AI isn’t a niche exposure, but a mainstream operational tool – and policy wordings will gradually evolve to reflect that.
Our advice? AI is no longer just part of your creative toolkit. Bring it onto your risk register now, to avoid getting stung later.
AI moves quickly – get a head start on the risks today
We all know the enormous potential that AI has to boost efficiency and creativity within agencies. However, it also shifts where professional risk sits.
The courts and insurers may take years to catch up, but your clients won’t wait that long to assign blame if something goes wrong. In the meantime, the best approach is to be proactive. Understand your AI tools, keep human control, and make sure your insurance reflects your evolving business model. Because any time you outsource creative decisions, you’re still the one who’s accountable – whether that’s to a third-party contractor or an AI tool.
Unsure whether your current policies cover AI-assisted work? Don’t leave it to chance. Get in touch with us today for clear advice on how you can stay protected while reaping the many rewards of a rapidly shifting technological landscape.