If you run a UK business in 2026 and your team uses AI — for marketing copy, customer support, hiring, accounts, anything — you have a compliance problem you may not have fully scoped yet. There are at least five overlapping regulatory regimes that touch what your AI does, and the question your board, your insurers and your customers all want answered is the same: are you compliant?
This guide is the long answer. We’ll cover the five regulatory regimes that matter for UK businesses in 2026, what each one actually requires of you, sector-specific overlays for healthcare, financial services and recruitment, and the governance framework most UK SMEs end up adopting.
Important: this is a starting framework, not legal advice. For specific situations, especially in regulated sectors, talk to your DPO, your legal team, or qualified counsel.
The Five Regimes That Matter for UK AI in 2026
UK businesses using AI sit in an overlap of five regulatory regimes. None of them is “the AI law” on its own — together they form your compliance surface area.
| Regime | What it covers | Who enforces |
|---|---|---|
| UK GDPR + DPA 2018 | Personal data inputs and outputs of AI; automated decision-making | ICO |
| FCA Consumer Duty + AI guidance | AI in financial services products and customer outcomes | FCA |
| EU AI Act (extraterritorial) | UK businesses serving EU customers or placing AI products in EU | EU member-state regulators |
| UK AI regulatory principles | Cross-sector principles overlaid on existing regulators | FCA, ICO, MHRA, CMA, Ofcom, etc. |
| Sector-specific | MHRA (medical devices/AI), CQC (health/care), SRA (legal), employment law (recruitment) | Sector regulator |
“The mistake most UK businesses make is asking ‘is this AI use legal?’ The right question is which of the five regimes apply to OUR specific use of AI, what each requires, and where the gaps are. The answer is almost never ‘none.’”
1. UK GDPR and the ICO — The Default Floor
UK GDPR (the post-Brexit retained version) and the Data Protection Act 2018 are the floor for every UK business using AI. If your AI touches personal data — customer names, employee data, support tickets, sales pipeline notes, anything — you have UK GDPR obligations regardless of sector.
The ICO issued AI-specific guidance in 2023 and 2024, with an updated version in 2026. The points that matter most:
- Lawful basis — you need one for the personal data your AI processes. “Legitimate interests” is usually the working answer for internal use, but it requires a documented LIA (Legitimate Interests Assessment).
- Data minimisation — train and prompt with the minimum personal data necessary. ChatGPT does not need your full customer list to draft an email (see the redaction sketch after this list).
- Transparency — your privacy notice has to explain that AI is being used to process personal data and how.
- Article 22 — solely automated decisions with legal or similarly significant effects (loan approvals, hiring, performance reviews) require explicit safeguards including human review.
- DPIA — almost any new AI use case touching personal data triggers a Data Protection Impact Assessment requirement.
- International transfers — if your AI vendor processes data outside the UK and countries covered by UK adequacy regulations, you need an appropriate transfer mechanism (an IDTA, or EU SCCs with the UK Addendum).
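To make the minimisation point concrete, here is a minimal Python sketch of stripping obvious identifiers from text before it reaches a third-party model. This is an illustration of the principle, not a compliant solution: the patterns and the `redact` function are our own invention, and a real deployment needs purpose-built PII-detection tooling backed by a DPIA, not a handful of regexes.

```python
import re

# Illustrative patterns only -- a few regexes is not real PII detection;
# production use needs purpose-built redaction tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b0\d{3}\s?\d{3}\s?\d{4}\b"),
    "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before prompting an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ticket = "Customer jane.doe@example.com (0161 496 0000, M1 2AB) wants a refund."
print(redact(ticket))
# Customer [EMAIL] ([UK_PHONE], [UK_POSTCODE]) wants a refund.
```

The support agent, or the AI drafting the reply, rarely needs the raw identifiers; a placeholder preserves the meaning of the ticket while keeping the personal data out of the vendor's hands.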
2. FCA Consumer Duty and AI Guidance — Financial Services
If you sell financial products to UK retail consumers — banks, insurers, brokers, lenders, wealth managers, pension providers, debt collectors — the FCA Consumer Duty applies, and the FCA has explicit AI guidance overlaid on top.
The Consumer Duty (in force since July 2023) requires you to deliver good outcomes for retail customers. AI use that affects customer outcomes — pricing, eligibility, advice, support — falls inside the Duty. You must be able to evidence that AI-driven decisions produce good customer outcomes, that vulnerable customers get appropriate care, and that any harms can be detected and remediated.
The FCA’s AI-specific guidance (2024 discussion paper plus 2026 update) sets expectations on:
- AI risk governance — board-level ownership, AI risk appetite, and lines of defence (covered in our AI for Finance & Operations short course for finance teams)
- Model risk management — particularly for credit, pricing and underwriting AI
- Explainability — you must be able to explain AI-driven decisions in customer-outcome terms, not just statistically
- Bias testing — documented testing for protected characteristics
- Operational resilience — AI components included in your map of important business services
3. The EU AI Act — Yes, It Affects UK Businesses
UK businesses commonly assume the EU AI Act doesn’t apply to them. It often does. The Act has extraterritorial scope where:
- You place an AI system on the EU market, or put it into service in the EU
- The output of your AI system is used in the EU
- You’re a UK distributor, importer or deployer of an EU-bound AI product
If your UK business has EU customers using an AI-powered product, or you supply AI-enabled software to EU clients, the AI Act probably applies to that activity. The Act categorises AI by risk:
| Risk class | What’s in it (illustrative) | Required compliance |
|---|---|---|
| Prohibited | Social scoring, exploitative manipulation, real-time biometric ID in public (with exceptions) | Cannot deploy |
| High-risk | Recruitment, credit, education, critical infrastructure, law enforcement | Conformity assessment, technical documentation, post-market monitoring |
| Limited-risk | Chatbots, generated content (deepfakes), emotion recognition | Transparency obligations |
| Minimal-risk | Spam filters, AI-enabled video games | Voluntary codes of practice |
| General-purpose AI (GPAI) | Foundation models like GPT, Claude, Gemini | Model documentation, copyright disclosure (provider obligations) |
The phase-in dates: prohibited practices and GPAI obligations are already in force; high-risk systems and most other obligations apply from August 2026, with some requirements for AI embedded in regulated products extending into 2027.
4. UK AI Regulatory Principles — Cross-Sector Overlay
The UK’s 2023 white paper set out five cross-sector AI regulatory principles, applied via existing regulators rather than a single new AI law:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Each UK regulator (FCA, ICO, MHRA, CMA, Ofcom and others) has been setting out how these principles apply within its own sector. The 2026 update is more prescriptive than the 2023 white paper: in practice, expect specific evidential expectations from your sector regulator, not just principles to interpret.
The proposed UK AI Bill (still in legislative passage as of mid-2026) would add a more centralised regulatory function and explicit requirements for foundation model developers. The trajectory is clear even if the exact statute lands later: more specific obligations, more enforcement, more documentation.
5. Sector-Specific Overlays
Healthcare and care
The MHRA regulates AI as a medical device when it’s used to diagnose, treat or monitor conditions. The CQC inspects providers using AI in care delivery and expects evidence of safe deployment. UK NHS trusts and care homes deploying AI tools have specific procurement and clinical-safety requirements (DCB0129, DCB0160) that pre-date and extend beyond the cross-sector principles.
Recruitment and employment
UK employment law (Equality Act 2010) applies to AI-driven hiring decisions just as it does to human ones. Indirect discrimination via AI bias is unlawful. AI used in performance management, promotion or termination decisions triggers Article 22 of UK GDPR. The EHRC has issued AI-specific guidance for employers; expect specific case law within the next 24 months.
Legal services
The SRA has issued guidance on AI use by solicitors. Confidentiality obligations under Code of Conduct Rule 6 still apply when client data is processed by AI vendors. Many SRA-regulated firms are revising their IT policies in 2026 to control which AI tools are permitted for which work types.
Education
JCQ has issued guidance on AI use in assessments and qualifications, and the DfE is monitoring AI in schools and FE. Apprenticeship providers (like TESS Group) face specific Ofsted and IfATE expectations around AI tooling used in delivery, and ESFA funding rules apply where AI affects evidence of compliance.
The Governance Framework Most UK SMEs End Up With
Across the UK SMEs we work with at TESS, the governance framework that emerges is broadly the same shape:
- AI register — a single list of every AI tool the business uses, who owns it, what data it touches, what risk class it falls in, what compliance is required and what evidence is held (a minimal sketch follows this list)
- AI policy — written rules for staff: which tools are permitted, what data can and can’t go into them, when human review is required, how to report concerns
- AI risk owner — a named accountable executive (often the COO or Head of Risk; in regulated firms, the SMF holder)
- Quarterly AI risk review — a forum that reviews new use cases, incidents, and external developments
- Vendor due diligence — a process for evaluating AI vendors before signing contracts: data protection, security, model documentation, audit rights
- Staff training — everyone using AI gets baseline training; people building or governing it get deeper training
- Incident response — how AI errors get detected, escalated and remediated, and when regulators need to be notified
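To make the register concrete, here is a minimal sketch of what a single entry can carry, written as a Python record. The schema and field names are our illustration — no regulator prescribes this format, and a well-maintained spreadsheet does the same job. The point is that every entry names an owner, the data touched, the risk class and the evidence held.

```python
from dataclasses import dataclass, field

@dataclass
class AIRegisterEntry:
    # Field names are illustrative -- no regulator prescribes this schema.
    tool: str                 # e.g. "ChatGPT Enterprise"
    owner: str                # a named accountable person, not a team
    use_case: str             # what the tool is actually used for
    personal_data: list[str]  # categories of personal data it touches
    risk_class: str           # e.g. EU AI Act class, or an internal rating
    lawful_basis: str         # UK GDPR basis, e.g. "legitimate interests"
    dpia_done: bool           # DPIA completed before launch?
    evidence: list[str] = field(default_factory=list)  # links to LIA, DPIA, vendor DD

entry = AIRegisterEntry(
    tool="ChatGPT Enterprise",
    owner="Head of Operations",
    use_case="Drafting customer support replies (human-reviewed)",
    personal_data=["customer name", "support ticket text"],
    risk_class="Limited-risk (transparency obligations)",
    lawful_basis="Legitimate interests (LIA on file)",
    dpia_done=True,
    evidence=["DPIA-2026-004", "LIA-2026-011", "vendor-dd/openai.pdf"],  # illustrative IDs
)
```

Whatever the format, the test is the same: can you produce the list, with owners and evidence, on request?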
“You don’t need a 200-page AI policy. You need an AI register that’s actually maintained, an executive who owns the risk, and a quarterly forum where new use cases get reviewed before they ship. Get those three right and you’re ahead of 80% of UK businesses your size.”
Where the Skills Come From
The hardest part of all of this isn't the policy; it's having people inside the business who can actually operate the framework. AI risk management is now its own discipline, distinct from generic risk management or generic IT governance.
Most UK SMEs we work with grow these skills via the AI & Automation Specialist Level 4 apprenticeship: the modules cover AI governance, ethics, risk assessment, model evaluation, and integration with existing risk frameworks. It's funded through the Apprenticeship Levy and designed for existing staff in operations, risk, compliance and finance roles, not new technical hires. For organisations that want to embed governance specifically, rather than the full apprenticeship, the AI Adoption & Governance apprenticeship unit covers this in more depth as a standalone qualification.
For senior managers and executives, the AI Leadership Pathway (AU0009/AU0010/AU0011) at Level 5 covers strategic AI risk, board-level AI governance, and where AI fits in your organisational risk appetite.
What to Do This Quarter
If you’ve read this far and you’re thinking “we don’t have most of this in place,” you’re in the same position as the majority of UK SMEs. Three things to do this quarter:
- Build the AI register. List every AI tool used anywhere in the business. Include consumer ChatGPT use that staff haven’t formally declared.
- Name the AI risk owner. One executive. Their name on a slide.
- Run one DPIA on the most data-sensitive AI use case you currently have running. The exercise will surface 80% of the gaps you need to close.
From there, the rest of the framework is a 6–12 month build. You don’t need to be perfect by the end of the quarter. You need to know where you stand.
How TESS Group Trains AI Compliance Skills
The skills covered in this guide map across several of our programmes and short courses. Pick the route that matches the depth and timeframe you need.
| If you want… | Best fit | Length |
|---|---|---|
| A full AI builder + governance apprentice on your team | AI & Automation Specialist Level 4 | 15 months, levy-funded |
| Just the governance & risk module (standalone unit) | AI Adoption & Governance Unit | Single unit, levy-funded |
| Senior leaders making AI risk decisions | AI Leadership Pathway (AU0009/AU0010/AU0011) | Level 5, levy-funded |
| An immediate workshop for your risk & compliance team | AI Ethics & Governance short course | 1–2 days |
| An AI-ready uplift for your operations team | Building AI-Ready Teams short course | 1 day |
| An exec-level AI primer for the board | AI for Leaders short course | Half day |
Not sure which is right? Our apprenticeships vs short courses guide explains how to choose, or use the programme finder for a tailored recommendation. The levy calculator shows what your existing levy will cover.
Frequently Asked Questions
Does the EU AI Act apply to UK businesses?
Often yes — when your AI is used by EU customers, when its output is used in the EU, or when you place AI products on the EU market. UK businesses with EU clients should assume the AI Act applies to those activities and assess accordingly. The obligations phase in by risk class: most apply from August 2026, with some product-embedded requirements extending into 2027.
Is using ChatGPT or Microsoft Copilot a UK GDPR issue?
It depends on configuration. Consumer ChatGPT processing personal data is almost always a GDPR risk because of the lawful basis, transfer and data minimisation issues. Enterprise versions (ChatGPT Enterprise, Microsoft Copilot for Microsoft 365 with proper licensing) include data protection terms that make compliant use possible — but you still need a DPIA, a vendor due-diligence record and an AI policy.
What is an AI register and do I need one?
An AI register is a single list of every AI tool used in the business, who owns it, what data it touches, what risk class it falls in, and what compliance is required. Yes, every UK business using AI should have one. The ICO, FCA and EU AI Act all expect, in different forms, that you can produce this list on request.
What's a DPIA and when do I need one?
A Data Protection Impact Assessment. Required under UK GDPR when processing is ‘likely to result in high risk to rights and freedoms’ — which the ICO has explicitly said includes most new AI use cases involving personal data. If you’re rolling out a new AI tool that processes customer or employee data, plan to do a DPIA before launch.
Do I need a Data Protection Officer for AI?
You need one if you already needed one under UK GDPR (public authority, large-scale processing of special category data, or systematic monitoring at scale). You don’t need a separate AI compliance officer in most UK SMEs — but you do need a named executive accountable for AI risk, who may or may not be the DPO.
What FCA expectations apply to AI in financial services?
Consumer Duty applies to AI use that affects retail customer outcomes — pricing, eligibility, advice, support. The FCA’s AI-specific guidance adds expectations on AI risk governance, model risk management, explainability, bias testing and operational resilience. SMF holders are personally accountable for AI risk in their area.
How do I train my team on AI compliance?
Two layers. Everyone using AI needs basic literacy: what’s permitted, what data can go in, when to flag concerns. People building, governing or operating AI need much deeper training in data protection, model evaluation, AI ethics and risk assessment. The AI & Automation Level 4 apprenticeship covers the deeper layer; baseline training can be a half-day workshop.
Is this legal advice?
No. This is a starting framework for UK businesses to understand the AI compliance landscape in 2026. For specific situations, particularly in regulated sectors, you should talk to your DPO, your legal team, or qualified counsel. TESS Group provides AI training and apprenticeships, not legal advice.
Build AI Capability In-House
The AI & Automation Specialist Level 4 apprenticeship trains your team to build, ship and govern AI tooling. Fully funded through the Apprenticeship Levy.
Book a Free Discovery Call