Agencies, AI, Claims, General - January 2, 2026
AI Use in the Agency World: When Algorithms Make Creative Decisions, Who’s Liable?

In the blink of an eye, artificial intelligence (AI) evolved from the new kid on the block to a mainstay that can’t be ignored, no matter how much some agencies might like to…

From brainstorming campaign ideas and writing copy to producing visual content and even analysing consumer behaviour, many clients now expect you to use AI in some way.

As the many faces of AI become rooted in the agency toolbox, the unsettling question remains: what happens when the algorithm gets it wrong?

 

Who’s responsible if:

  • An AI-generated image infringes copyright?
  • Written copy includes false or inaccurate information?
  • The insights gleaned are wrong and lead to heavy financial loss for a key client?

You, the client, or the tool? The answer isn’t yet clear. But for agencies, that uncertainty has real insurance implications.

 

Is AI a risk multiplier?

Most, if not all, UK agencies are using generative AI tools to some degree, whether for internal purposes or to fulfil deliverables for their clients – or both!

ChatGPT, Midjourney, Runway, and Adobe Firefly all support creative production, and the benefits are clear: better ideation, faster delivery and reduced costs.

How you balance these benefits with original creativity is totally in your hands, but the new exposures might not be…

 

Copyright and IP ownership

Who owns the AI-generated output? If it’s the agency or their client, how do they prevent the AI tool from using what they’ve learned in that project to create something similar for another user?

AI generates content from pre-existing information, meaning its outputs are, by their nature, never completely original. What happens when those outputs are recognisably close to existing intellectual property?

 

Defamation and reputational harm

What if an AI tool produces or suggests content that damages a third party’s reputation, and that’s missed before being published?

Embarrassment and a disgruntled client might be the least of your worries. It could result in the end of the working relationship, or even expensive lawsuits – all in the race for increased efficiency.

 

Data protection breaches

AI tools are trained on existing data, and they learn from the information inputted by users. Some platforms allow information to be ringfenced, giving a degree of data protection, but even those aren’t always 100% watertight.

Any information fed into AI generation comes with risk. And when that data is sensitive, confidential or personal, you could be exposed to potential litigation and regulatory action.

 

Professional negligence

If your agency relies on AI for strategic, creative or analytical decisions that later prove flawed, clients may still hold you accountable.

Think of it like outsourcing to a poor contractor – the end client will still hold the agency responsible.

 

The blurred line between tool and team

Traditionally, agencies have been liable for the advice, content or design they deliver – not for the performance of their tools. But when AI becomes a decision-maker rather than a mere instrument, the line becomes blurred.

 

Imagine an AI copywriting tool accidentally plagiarises another brand’s tagline. The client runs the campaign, the competitor sues, and the claim lands on your desk…

 

Or say that your media-buying algorithm over-targets certain demographics, breaching advertising regulations or equality laws. Was that your negligence, or the tool’s?

 

From a legal standpoint, clients will likely still see you as responsible. AI sits within your delivery ecosystem: you chose it, configured it, and used it as part of your professional service. That makes it your risk.

 

What UK insurance does (and doesn’t) say

At present, most UK Professional Indemnity (PI) wordings cover your negligent acts, errors or omissions in the course of professional services, but they don’t mention AI specifically. Bar a few recent exceptions, insurers haven’t gone much further than acknowledging it exists!

 

So what does that mean for you? If a claim arises from your AI output, you’d most likely still be covered, provided you exercised reasonable skill and care. The problem therefore lies in whether insurers consider your reliance on unverified AI as “reasonable” or not.

 

If you’re thinking “well, that’s totally subjective”, you’d be right! And whilst there are some protections for companies in such a situation, it isn’t out of the realm of possibility that insurers will take a default stance to avoid paying AI-based claims en masse. Remember how shamefully some household insurers dealt with COVID-19 Business Interruption claims, despite it being abundantly clear from the outset that they were liable to pay?

 

What could future insurance policies look like?

Just like AI, these will evolve over time – especially as insurers begin to experience losses.

Usually insurers will look to tighten areas where claims are paid beyond what they intended to cover. Grey areas will likely be filled with more specific definitions, and greater obligations will be imposed onto the agency buying the insurance in the first place.

 

There are countless ways these changes could manifest, but here’s a handful of emerging risks and exclusions we think could be included:

 

  • IP infringement exclusions – With AI relying on existing data, future outputs could lead to copyright infringement claims. We’d expect insurers to include conditions obliging agencies to show they took steps to verify the content’s originality and check it against what’s already in the public domain.

 

  • Data breach exclusions – The types of data and how it’s uploaded into AI may be affected. Insurers might incorporate conditions stating that agencies will only be insured against alleged breaches of confidentiality if they haven’t uploaded client data into public AI tools. Certain types of information might have more restrictions imposed, or insurers might reduce the limits for these situations.

 

  • Intentional act exclusions – If agencies knowingly use unlicensed AI assets, even in simple blog posts, insurers might reasonably be able to decline the claim. Right now, an insurer will most likely look at whether the usage was reasonable or reckless, but they may well crystallise this in their policy wordings, leading to more claims being declined.

 

Best practice for UK agencies using AI

While the legal and insurance frameworks are still evolving, agencies can take practical steps today to manage risk so that they’re ready when things change.

 

Audit your AI use

List every tool and process you use where AI supports or replaces human judgement. Identify which outputs are delivered to clients.

 

Check licence terms

Many free or public AI tools disclaim ownership of outputs or retain training rights over your data. Paid, enterprise-grade licences often have stronger IP guarantees. Know your rights and responsibilities.

 

Retain human oversight

Ensure a trusted member of your team reviews and signs off AI-generated content before client delivery. This not only protects client relationships, but it also demonstrates “reasonable care” if a claim arises.

 

Avoid uploading client data to public tools

Consider whether the data you’re inputting into AI is sensitive, personal or client owned – like a contract or brand strategy deck. If in doubt, don’t upload it, because the fallout could include contract breaches, GDPR transgressions and rejected insurance claims.
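As a purely illustrative sketch of this principle (not a compliance tool – the patterns and the `redact_for_ai` helper below are our own hypothetical examples), a simple pre-upload filter can mask obvious personal identifiers before any text reaches a public AI tool:

```python
import re

# Hypothetical patterns for two common personal identifiers.
# A real deployment would need far broader coverage (names,
# addresses, account numbers) plus a human review step.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{3,4}\)?)\s?\d{3}\s?\d{3,4}\b")

def redact_for_ai(text: str) -> str:
    """Mask obvious personal identifiers before text is sent to a public AI tool."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = UK_PHONE.sub("[PHONE REDACTED]", text)
    return text

if __name__ == "__main__":
    brief = "Contact Jane on 0161 533 0411 or jane@example.com for the brand deck."
    print(redact_for_ai(brief))
```

Even a filter like this only catches the obvious cases – the safer default remains not uploading sensitive material at all.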

 

Document your processes

Create an AI policy for your agency. Then, on a project level, keep a record of prompts, edits, and human review steps. This audit trail may help defend a PI claim.
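As a rough illustration of what that audit trail could look like in practice – the field names and the `log_ai_use` helper are our own invention, not any industry standard – each AI-assisted task could append one record (tool, prompt, human reviewer) to a project log:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_use(logfile: Path, tool: str, prompt: str,
               reviewer: str, notes: str = "") -> dict:
    """Append one AI-usage record to a JSON Lines audit file and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "reviewed_by": reviewer,   # human sign-off, per the oversight step above
        "notes": notes,
    }
    with logfile.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    entry = log_ai_use(Path("ai_audit.jsonl"), "ChatGPT",
                       "Draft three tagline options for a cycling brand",
                       reviewer="creative director")
    print(entry["tool"], entry["reviewed_by"])
```

A plain-text log like this is easy to keep per project and gives you dated evidence of prompts and human review if a PI claim is ever contested.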

 

Review contracts

Update client terms to clarify how AI is used, what warranties apply, and where responsibility sits.

 

Talk to your broker

Ask whether your PI and Cyber policies explicitly cover AI-assisted work, and whether any exclusions or endorsements apply.

 

The future: insurers catching up with innovation

Although insurers might not react to AI’s evolution as quickly as other sectors, they will adapt.

 

They’re already starting to recognise that AI isn’t a niche exposure, but a mainstream operational tool. Here’s how we see this transition unfolding:

 

  • Insurance policies could be adapted to include dedicated AI conditions and exclusions, particularly on PI and Cyber policies
  • Requirements could be brought in for agencies to have formal AI usage policies – and, for larger entities, AI governance frameworks
  • Insurance cover could be removed for claims arising from unvetted or open-source AI use
  • There could be a heightened underwriting focus on how data is used and what human oversight is involved

 

Our advice? AI is no longer just part of your creative toolkit. Bring it onto your risk register now, to avoid getting stung later.

 

AI moves quickly – get a head start on the risks today

We all know the enormous potential that AI has to boost efficiency and creativity within agencies. However, it also shifts where professional risk sits.

 

The courts and insurers may take years to catch up, but your clients won’t wait that long to assign blame if something goes wrong. In the meantime, the best approach is to be proactive. Understand your AI tools, keep human control, and make sure your insurance reflects your evolving business model. Because any time you outsource creative decisions, you’re still the one who’s accountable – whether the work goes to a third-party contractor or an AI tool.

 

Unsure whether your current policies cover AI-assisted work? Don’t leave it to chance. Get in touch with us today for clear advice on how you can stay protected while reaping the many rewards of a rapidly shifting technological landscape.

 

Photo by Steve Johnson on Unsplash
