AI Compliance for UK Businesses: A 2026 Guide to GDPR, FCA, EU AI Act and the UK AI Principles

By Rod Doyle & Lisa O’Reilly, Directors, TESS Group  |  4 May 2026  |  15 min read
TL;DR: UK businesses using AI in 2026 sit in an overlap of five regulatory regimes: UK GDPR, FCA Consumer Duty + AI guidance, the EU AI Act (which has extraterritorial scope), the UK’s cross-sector AI principles, and sector-specific rules from MHRA, SRA, EHRC and others. None of them is “the AI law” on its own — together they form your compliance surface area. This guide covers what each one actually requires, sector overlays for healthcare/finance/recruitment/legal, and the governance framework most UK SMEs end up with: an AI register, a named AI risk owner, a quarterly review forum, vendor due diligence, staff training, and incident response. The skills come from the AI & Automation Level 4 apprenticeship.

If you run a UK business in 2026 and your team uses AI — for marketing copy, customer support, hiring, accounts, anything — you have a compliance problem you may not have fully scoped yet. There are at least five overlapping regulatory regimes that touch what your AI does, and the question your board, your insurers and your customers want answered is the same: are you OK?

This guide is the long answer. We’ll cover the five regulatory regimes that matter for UK businesses in 2026, what each one actually requires of you, sector-specific overlays for healthcare, financial services and recruitment, and the governance framework most UK SMEs end up adopting.

Important: this is a starting framework, not legal advice. For specific situations, especially in regulated sectors, talk to your DPO, your legal team, or qualified counsel.

The Five Regimes That Matter for UK AI in 2026

UK businesses using AI sit in an overlap of five regulatory regimes. None of them is “the AI law” on its own — together they form your compliance surface area.

| Regime | What it covers | Who enforces |
|---|---|---|
| UK GDPR + DPA 2018 | Personal data inputs and outputs of AI; automated decision-making | ICO |
| FCA Consumer Duty + AI guidance | AI in financial services products and customer outcomes | FCA |
| EU AI Act (extraterritorial) | UK businesses serving EU customers or placing AI products in the EU | EU member-state regulators |
| UK AI regulatory principles | Cross-sector principles overlaid on existing regulators | FCA, ICO, MHRA, CMA, Ofcom, etc. |
| Sector-specific | MHRA (medical devices/AI), CQC (health/care), SRA (legal), employment law (recruitment) | Sector regulator |
“The mistake most UK businesses make is asking ‘is this AI use legal?’ The right question is which of the five regimes apply to OUR specific use of AI, what each requires, and where the gaps are. The answer is almost never ‘none.’”
Rod Doyle, Director, TESS Group

1. UK GDPR and the ICO — The Default Floor

UK GDPR (the post-Brexit retained version) and the Data Protection Act 2018 are the floor for every UK business using AI. If your AI touches personal data — customer names, employee data, support tickets, sales pipeline notes, anything — you have UK GDPR obligations regardless of sector.

The ICO issued AI-specific guidance in 2023 and 2024, with an updated version in 2026. The points that matter most:

  • Lawful basis — you need one for the personal data your AI processes. “Legitimate interests” is usually the working answer for internal use, but it requires a documented LIA (Legitimate Interests Assessment).
  • Data minimisation — train and prompt with the minimum personal data necessary. ChatGPT does not need your full customer list to draft an email.
  • Transparency — your privacy notice has to explain that AI is being used to process personal data and how.
  • Article 22 — solely automated decisions with legal or similarly significant effects (loan approvals, hiring, performance reviews) require explicit safeguards including human review.
  • DPIA — almost any new AI use case touching personal data triggers a Data Protection Impact Assessment requirement.
  • International transfers — if your AI vendor processes data outside the UK or EU adequacy areas, you need an appropriate transfer mechanism (IDTA, SCCs).
Practical signal: if your team is pasting customer data into ChatGPT (the consumer product, not the business one), you almost certainly have a UK GDPR issue. This is the single most common AI compliance gap we see in UK SMEs.
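Data minimisation can be partly enforced at the prompt boundary. A minimal sketch, assuming you control the point where text leaves your systems; the patterns below are illustrative, will miss plenty of identifiers, and are a floor rather than a control:

```python
# Illustrative sketch: strip obvious identifiers from text before it
# is sent to an external AI vendor. Regex redaction is a crude floor,
# not a substitute for a DPIA or proper data-minimisation design.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def redact(text: str) -> str:
    """Replace recognised identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com on 07700900123"))
# Contact [EMAIL] on [UK_PHONE]
```

In practice most firms do this with a vendor-side DLP tool rather than hand-rolled regexes, but the principle is the same: the AI does not need the identifier to do the job.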

2. FCA Consumer Duty and AI Guidance — Financial Services

If you sell financial products to UK retail consumers — banks, insurers, brokers, lenders, wealth managers, pension providers, debt collectors — the FCA Consumer Duty applies, and the FCA has explicit AI guidance overlaid on top.

The Consumer Duty (in force since July 2023) requires you to deliver good outcomes for retail customers. AI use that affects customer outcomes — pricing, eligibility, advice, support — falls inside the Duty. You must be able to evidence that AI-driven decisions produce good customer outcomes, that vulnerable customers get appropriate care, and that any harms can be detected and remediated.

The FCA’s AI-specific guidance (2024 discussion paper plus 2026 update) sets expectations on:

  • AI risk governance — board-level ownership, AI risk appetite, and lines of defence (covered in our AI for Finance & Operations short course for finance teams)
  • Model risk management — particularly for credit, pricing and underwriting AI
  • Explainability — you must be able to explain AI-driven decisions in customer outcomes terms, not just statistically
  • Bias testing — documented testing for protected characteristics
  • Operational resilience — AI components included in your important business services map
If you’re an FCA-regulated firm: your AI register and your operational resilience self-assessment should already reference every AI system in customer-facing or risk-relevant workflows. If they don’t, that’s a 2026 priority.

3. The EU AI Act — Yes, It Affects UK Businesses

UK businesses commonly assume the EU AI Act doesn’t apply to them. It often does. The Act has extraterritorial scope where:

  • You place an AI system on the EU market, or put it into service in the EU
  • The output of your AI system is used in the EU
  • You’re a UK distributor, importer or deployer of an EU-bound AI product

If your UK business has EU customers using an AI-powered product, or you supply AI-enabled software to EU clients, the AI Act probably applies to that activity. The Act categorises AI by risk:

| Risk class | What’s in it (illustrative) | Required compliance |
|---|---|---|
| Prohibited | Social scoring, exploitative manipulation, real-time biometric ID in public (with exceptions) | Cannot deploy |
| High-risk | Recruitment, credit, education, critical infrastructure, law enforcement | Conformity assessment, technical documentation, post-market monitoring |
| Limited-risk | Chatbots, generated content (deepfakes), emotion recognition | Transparency obligations |
| Minimal-risk | Spam filters, AI-enabled video games | Voluntary codes of practice |
| General-purpose AI (GPAI) | Foundation models like GPT, Claude, Gemini | Model documentation, copyright disclosure (provider obligations) |

The phase-in dates: prohibited practices have been in force since February 2025, GPAI obligations since August 2025, and high-risk systems and most remaining obligations apply from August 2026, when full enforcement begins.
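For a first-pass triage of your own use cases against the risk classes above, some teams keep a simple lookup alongside their AI register. A minimal sketch, with keywords drawn from the table; real classification requires legal analysis of the Act’s annexes, not keyword matching:

```python
# Illustrative first-pass triage of AI use cases against EU AI Act
# risk classes. The keyword lists are simplifications of the table
# above; this flags cases for review, it does not classify them.
RISK_CLASSES = {
    "prohibited": ["social scoring", "exploitative manipulation"],
    "high": ["recruitment", "credit", "education",
             "critical infrastructure", "law enforcement"],
    "limited": ["chatbot", "deepfake", "emotion recognition"],
}

def triage(use_case: str) -> str:
    """Return an illustrative risk class for a described use case."""
    described = use_case.lower()
    for risk_class, keywords in RISK_CLASSES.items():
        if any(keyword in described for keyword in keywords):
            return risk_class
    return "minimal"  # default: voluntary codes of practice

print(triage("AI-assisted recruitment screening"))  # high
print(triage("customer support chatbot"))           # limited
```

Anything that triages as “high” or “prohibited” goes straight to legal review; “minimal” still gets logged in the register.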

4. UK AI Regulatory Principles — Cross-Sector Overlay

The UK’s 2023 white paper set out five cross-sector AI regulatory principles, applied via existing regulators rather than a single new AI law:

  1. Safety, security, robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Each UK regulator (FCA, ICO, MHRA, CMA, Ofcom and others) has been issuing its own application of these principles into its sector. The 2026 update is more prescriptive than 2023 — in practice you should expect specific evidential expectations from your sector regulator, not just principles to interpret.

The proposed UK AI Bill (still in legislative passage as of mid-2026) would add a more centralised regulatory function and explicit requirements for foundation model developers. The trajectory is clear even if the exact statute lands later: more specific obligations, more enforcement, more documentation.

5. Sector-Specific Overlays

Healthcare and care

The MHRA regulates AI as a medical device when it’s used to diagnose, treat or monitor conditions. The CQC inspects providers using AI in care delivery and expects evidence of safe deployment. UK NHS trusts and care homes deploying AI tools have specific procurement and clinical-safety requirements (DCB0129, DCB0160) that pre-date and extend beyond the cross-sector principles.

Recruitment and employment

UK employment law (Equality Act 2010) applies to AI-driven hiring decisions just as it does to human ones. Indirect discrimination via AI bias is unlawful. AI used in performance management, promotion or termination decisions triggers Article 22 of UK GDPR. The EHRC has issued AI-specific guidance for employers; expect specific case law within the next 24 months.

Legal services

The SRA has issued guidance on AI use by solicitors. Confidentiality obligations under Code of Conduct Rule 6 still apply when client data is processed by AI vendors. Many SRA-regulated firms are revising their IT policies in 2026 to control which AI tools are permitted for which work types.

Education

JCQ has issued guidance on AI use in qualifications, and the DfE is monitoring AI in schools and FE. Apprenticeship providers (like TESS Group) face specific Ofsted/IfATE expectations around AI tooling used in delivery, and ESFA funding rules apply where AI affects evidence of compliance.

The Governance Framework Most UK SMEs End Up With

Across the UK SMEs we work with at TESS, the governance framework that emerges is broadly the same shape:

  1. AI register — a single list of every AI tool the business uses, who owns it, what data it touches, what risk class it falls in, what compliance is required and what evidence is held
  2. AI policy — written rules for staff: which tools are permitted, what data can and can’t go into them, when human review is required, how to report concerns
  3. AI risk owner — a named accountable executive (often the COO or Head of Risk; in regulated firms, the SMF holder)
  4. Quarterly AI risk review — a forum that reviews new use cases, incidents, and external developments
  5. Vendor due diligence — a process for evaluating AI vendors before signing contracts: data protection, security, model documentation, audit rights
  6. Staff training — everyone using AI gets baseline training; people building or governing it get deeper training
  7. Incident response — how AI errors get detected, escalated, and remediated with regulators where required
“You don’t need a 200-page AI policy. You need an AI register that’s actually maintained, an executive who owns the risk, and a quarterly forum where new use cases get reviewed before they ship. Get those three right and you’re ahead of 80% of UK businesses your size.”
Lisa O’Reilly, Director, TESS Group
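The AI register at the centre of that framework can start as structured data rather than a policy document. A minimal sketch, with field names that are illustrative rather than any regulatory standard; a spreadsheet with the same columns works just as well:

```python
# Illustrative AI register as structured data. Field names are
# assumptions for the sketch, not a regulatory standard.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    tool: str
    owner: str         # named accountable person
    data_touched: str  # e.g. "customer PII", "none"
    risk_class: str    # e.g. "high", "limited", "minimal"
    dpia_done: bool    # has a DPIA been completed?
    evidence: str      # where the compliance evidence lives

register = [
    RegisterEntry("ChatGPT Enterprise", "Head of Ops", "customer PII",
                  "limited", True, "governance drive"),
    RegisterEntry("CV screening tool", "HR Director", "applicant PII",
                  "high", False, ""),
]

# Surface gaps for the quarterly review: personal data, no DPIA.
gaps = [e.tool for e in register
        if e.data_touched != "none" and not e.dpia_done]
print(gaps)  # ['CV screening tool']
```

The point is not the code: it is that every entry has an owner, a risk class and an evidence location, and that the quarterly forum has a mechanical way to find the gaps.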

Where the Skills Come From

The hardest part of all of this isn’t the policy; it’s having people inside the business who can actually operate the framework. AI risk management is now its own discipline, distinct from generic risk management or generic IT governance. For organisations that want to embed governance specifically (rather than the full apprenticeship), the AI Adoption & Governance apprenticeship unit covers this in more depth as a standalone qualification.

Most UK SMEs we work with grow these skills via the AI & Automation Specialist Level 4 apprenticeship — the modules cover AI governance, ethics, risk assessment, model evaluation, and integration with existing risk frameworks. It’s funded through the Apprenticeship Levy and designed for existing staff in operations, risk, compliance and finance roles, not new technical hires.

For senior managers and executives, the AI Leadership Pathway (AU0009/AU0010/AU0011) at Level 5 covers strategic AI risk, board-level AI governance, and where AI fits in your organisational risk appetite.

What to Do This Quarter

If you’ve read this far and you’re thinking “we don’t have most of this in place,” you’re in the same position as the majority of UK SMEs. Three things to do this quarter:

  1. Build the AI register. List every AI tool used anywhere in the business. Include consumer ChatGPT use that staff haven’t formally declared.
  2. Name the AI risk owner. One executive. Their name on a slide.
  3. Run one DPIA on the most data-sensitive AI use case you currently have running. The exercise will surface 80% of the gaps you need to close.

From there, the rest of the framework is a 6–12 month build. You don’t need to be perfect by the end of the quarter. You need to know where you stand.

How TESS Group Trains AI Compliance Skills

The skills covered in this guide map across several of our programmes and short courses. Pick the route that matches the depth and timeframe you need.

| If you want… | Best fit | Length |
|---|---|---|
| A full AI builder + governance apprentice on your team | AI & Automation Specialist Level 4 | 15 months, levy-funded |
| Just the governance & risk module (standalone unit) | AI Adoption & Governance Unit | Single unit, levy-funded |
| Senior leaders making AI risk decisions | AI Leadership Pathway (AU0009/AU0010/AU0011) | Level 5, levy-funded |
| An immediate workshop for your risk & compliance team | AI Ethics & Governance short course | 1–2 days |
| An AI-ready uplift for your operations team | Building AI-Ready Teams short course | 1 day |
| An exec-level AI primer for the board | AI for Leaders short course | Half day |

Not sure which is right? Our apprenticeships vs short courses guide explains how to choose, or use the programme finder for a tailored recommendation. The levy calculator shows what your existing levy will cover.

Frequently Asked Questions

Does the EU AI Act apply to UK businesses?

Often yes — when your AI is used by EU customers, when its output is used in the EU, or when you place AI products on the EU market. UK businesses with EU clients should assume the AI Act applies to those activities and assess accordingly. The full obligations come into force across 2025–2026 depending on risk class.

Is using ChatGPT or Microsoft Copilot a UK GDPR issue?

It depends on configuration. Consumer ChatGPT processing personal data is almost always a GDPR risk because of the lawful basis, transfer and data minimisation issues. Enterprise versions (ChatGPT Enterprise, Microsoft Copilot for Microsoft 365 with proper licensing) include data protection terms that make compliant use possible — but you still need a DPIA, a vendor due-diligence record and an AI policy.

What is an AI register and do I need one?

An AI register is a single list of every AI tool used in the business, who owns it, what data it touches, what risk class it falls in, and what compliance is required. Yes, every UK business using AI should have one. The ICO, FCA and EU AI Act all expect, in different forms, that you can produce this list on request.

What’s a DPIA and when do I need one?

A Data Protection Impact Assessment. Required under UK GDPR when processing is ‘likely to result in high risk to rights and freedoms’ — which the ICO has explicitly said includes most new AI use cases involving personal data. If you’re rolling out a new AI tool that processes customer or employee data, plan to do a DPIA before launch.

Do I need a Data Protection Officer for AI?

You need one if you already needed one under UK GDPR (public authority, large-scale processing of special category data, or systematic monitoring at scale). You don’t need a separate AI compliance officer in most UK SMEs — but you do need a named executive accountable for AI risk, who may or may not be the DPO.

What FCA expectations apply to AI in financial services?

Consumer Duty applies to AI use that affects retail customer outcomes — pricing, eligibility, advice, support. The FCA’s AI-specific guidance adds expectations on AI risk governance, model risk management, explainability, bias testing and operational resilience. SMF holders are personally accountable for AI risk in their area.

How do I train my team on AI compliance?

Two layers. Everyone using AI needs basic literacy: what’s permitted, what data can go in, when to flag concerns. People building, governing or operating AI need much deeper training in data protection, model evaluation, AI ethics and risk assessment. The AI & Automation Level 4 apprenticeship covers the deeper layer; baseline training can be a half-day workshop.

Is this legal advice?

No. This is a starting framework for UK businesses to understand the AI compliance landscape in 2026. For specific situations, particularly in regulated sectors, you should talk to your DPO, your legal team, or qualified counsel. TESS Group provides AI training and apprenticeships, not legal advice.

Build AI Capability In-House

The AI & Automation Specialist Level 4 apprenticeship trains your team to build, ship and govern AI tooling. Fully funded through the Apprenticeship Levy.

Book a Free Discovery Call