The EU AI Act and Custom Software: What UK Businesses Commissioning AI Need to Know
The EU AI Act applies to UK businesses with EU exposure, regardless of Brexit. When you commission custom AI-powered software, the Act determines who carries which compliance obligations. That depends on whether you are the provider, the deployer, or a downstream party that has become the provider by rebranding or substantially modifying the system. This guide covers the provider vs deployer distinction, risk classification, the August 2026 high-risk systems milestone, and what to ask your development partner before you scope an AI-powered build.
Does the EU AI Act apply to my business?
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is rolling out in phases through 2027. The most significant upcoming compliance milestone for high-risk systems is 2 August 2026, when full obligations for many high-risk AI systems become enforceable under the current legal timetable.
The Act has extraterritorial scope under Article 2. Three scenarios bring UK businesses into scope:
- You place an AI system on the EU market. If you sell AI-powered software to EU customers, you are in scope regardless of where your company is incorporated.
- Your AI system’s outputs affect EU individuals. A recruitment platform that screens candidates in the EU, a credit scoring system used by EU lenders, or an assessment tool used in EU schools all trigger the Act.
- You have an EU subsidiary, channel partner, importer, or distributor through which AI reaches the EU.
If none of these apply, a purely domestic UK business with no EU customers or data subjects has no EU AI Act obligations. But most businesses with any European exposure are caught.
The UK’s own approach
The UK does not have a standalone AI law. Instead, existing regulators (the ICO, FCA, CMA, Ofcom, MHRA) apply their own frameworks to AI within their sector. The UK relies on five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
For UK businesses with EU exposure, this creates a dual-compliance burden: EU AI Act for anything touching the EU market, plus UK sector-specific regulation for domestic activities. Documented risk classification, evidence of human oversight, clear supplier records, and transparent technical documentation help across both regimes. The EU Act’s conformity assessment and CE marking requirements have no UK equivalent.
Provider vs deployer: who is liable when you commission custom software?
This is the question most existing EU AI Act guides skip, and it is the one that matters most when you commission a bespoke build.
The Act assigns obligations based on roles in the AI value chain. Two roles matter most when you commission custom software:
- Provider (Article 3(3)): The entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
- Deployer (Article 3(4)): The entity that uses an AI system under its own authority in a professional context.
Providers carry the heavy obligations: technical documentation, risk management, conformity assessment, quality management systems, post-market monitoring, and EU database registration. Deployers have lighter duties. These include AI literacy, transparency to end users, using the system according to the provider’s instructions, and, in some high-risk contexts, a fundamental rights impact assessment. For a broader look at responsible AI practices beyond the regulatory minimum, see our guide to responsible AI for business leaders.
Other roles may also matter. Importers and distributors have their own obligations when an AI system is brought into, or supplied within, the EU market. A non-EU provider may also need an EU-based authorised representative before placing a high-risk AI system on the EU market.
When you commission custom software, the allocation is not always obvious. In practice, many organisations are both provider and deployer across different systems.
Scenario 1: You commission a bespoke AI system and release it under your brand
You contract a development partner to build a recruitment screening tool. It goes to market under your company name. You are the provider. The development partner is a subcontractor. The full suite of provider obligations falls on you, including conformity assessment and technical documentation.
Scenario 2: You deploy a third-party AI product your partner integrates
Your development partner integrates a commercial AI API (Azure OpenAI, Anthropic Claude) into your CRM. The model vendor may be the provider of the general-purpose AI model. That does not automatically make them the provider of the AI system you place in front of users.
You are typically the deployer if you are using the system under the provider’s instructions. But you may become the provider if you productise the integration under your own name, substantially modify the system, or change its intended purpose so that it becomes high-risk.
That distinction matters. A chatbot that answers customer service questions is very different from an AI workflow that affects recruitment, credit, education, or access to essential services.
Scenario 3: Your development partner builds and sells the product
Your partner develops an AI product and sells it to multiple clients, including you. The partner places it on the market under their name. The partner is the provider. You are the deployer. Provider obligations sit with them.
The grey area: contracts that say nothing
If your contract with a development partner does not state who is the provider and who is the deployer, both parties face regulatory uncertainty. In practice, the entity whose name or trademark appears on the product when it reaches the market is likely treated as the provider. But “likely” is not the same as “clear.”
Article 25 adds another trap: a deployer, distributor, importer, or other third party can become the provider of a high-risk AI system if they rebrand it, make a substantial modification, or change the intended purpose so that the system becomes high-risk. Substantial modification means more than routine configuration. It includes changes that affect compliance, risk profile, or intended purpose.
Your contract should explicitly:
- State which party is the provider under the EU AI Act
- State which party is the deployer
- Allocate responsibility for technical documentation, conformity assessment, and post-market monitoring
- Specify who registers the system in the EU database (if high-risk)
- Define how provider obligations transfer if you white-label or rebrand the system
- Define what counts as a substantial modification, and who reassesses the system if the intended purpose changes
The decision flow below is a starting point. It does not replace legal advice, but it shows the questions that should be answered before an AI system reaches users.
<!-- Start -->
<div class="ea-df-start-node">
<div class="ea-df-start-title">You are commissioning AI-powered software</div>
<div class="ea-df-start-sub">Determine your role under the EU AI Act</div>
</div>
<div class="ea-df-vline" aria-hidden="true"></div>
<!-- Decision 1 -->
<div class="ea-df-decision">
<div class="ea-df-decision-icon"></div>
<div class="ea-df-decision-text">Under your name, substantially modified, or changed purpose?</div>
</div>
<!-- Fork into 2 branches -->
<div class="ea-df-fork" aria-hidden="true">
<div class="ea-df-fork-stem"></div>
<div class="ea-df-fork-bar"></div>
<div class="ea-df-fork-drops">
<div class="ea-df-fork-drop"></div>
<div class="ea-df-fork-drop"></div>
</div>
</div>
<div class="ea-df-branches">
<!-- Left branch: Yes -->
<div class="ea-df-branch">
<div class="ea-df-branch-label">YES</div>
<div class="ea-df-vline" aria-hidden="true"></div>
<!-- Decision 2 -->
<div class="ea-df-decision">
<div class="ea-df-decision-icon"></div>
<div class="ea-df-decision-text">Does it fall under Annex III high-risk categories?</div>
</div>
<!-- Nested fork -->
<div class="ea-df-fork" aria-hidden="true">
<div class="ea-df-fork-stem"></div>
<div class="ea-df-fork-bar"></div>
<div class="ea-df-fork-drops">
<div class="ea-df-fork-drop"></div>
<div class="ea-df-fork-drop"></div>
</div>
</div>
<div class="ea-df-branches">
<!-- Yes: Full provider obligations -->
<div class="ea-df-branch">
<div class="ea-df-branch-label">YES</div>
<div class="ea-df-vline" aria-hidden="true"></div>
<div class="ea-df-outcome magenta">
<div class="ea-df-outcome-dot magenta"></div>
<div class="ea-df-outcome-text">
<div class="ea-df-outcome-title">You are the Provider</div>
<div class="ea-df-outcome-desc">Full high-risk obligations: QMS, conformity assessment, EU database</div>
</div>
</div>
</div>
<!-- No: Provider, lighter load -->
<div class="ea-df-branch">
<div class="ea-df-branch-label">NO</div>
<div class="ea-df-vline" aria-hidden="true"></div>
<div class="ea-df-outcome teal">
<div class="ea-df-outcome-dot teal"></div>
<div class="ea-df-outcome-text">
<div class="ea-df-outcome-title">You are the Provider</div>
<div class="ea-df-outcome-desc">Lower AI Act load, but product safety and AI literacy still apply</div>
</div>
</div>
</div>
</div>
</div>
<!-- Right branch: No -->
<div class="ea-df-branch">
<div class="ea-df-branch-label">NO</div>
<div class="ea-df-vline" aria-hidden="true"></div>
<div class="ea-df-outcome">
<div class="ea-df-outcome-dot"></div>
<div class="ea-df-outcome-text">
<div class="ea-df-outcome-title">You are the Deployer</div>
<div class="ea-df-outcome-desc">AI literacy, transparency, use per provider instructions</div>
</div>
</div>
</div>
</div>
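The decision flow above can be sketched as a small function. This is an illustrative simplification, not legal logic: the domain list and role labels are assumptions for the example, and edge cases the Act covers (importers, distributors, Article 25 reclassification) are deliberately omitted.

```python
# Illustrative sketch of the decision flow above. Not legal advice: a real
# classification requires assessing intended purpose against Article 6 and
# Annex III in full. The domain names here are a hypothetical subset.
ANNEX_III_DOMAINS = {"employment", "credit", "education", "essential_services"}

def classify_role(own_name_or_modified: bool, domain: str) -> tuple[str, str]:
    """Return (role, obligation summary) for a commissioned AI system."""
    if own_name_or_modified:
        # You branded, substantially modified, or repurposed the system
        if domain in ANNEX_III_DOMAINS:
            return ("provider",
                    "full high-risk obligations: QMS, conformity assessment, EU database")
        return ("provider",
                "lower AI Act load; product safety and AI literacy still apply")
    # Using a third party's system under the provider's instructions
    return ("deployer", "AI literacy, transparency, use per provider instructions")

# Example: a recruitment screening tool released under your own brand
role, duties = classify_role(own_name_or_modified=True, domain="employment")
```

Even as a sketch, this makes the key point visible: the branding/modification question comes first, and the Annex III question only changes the weight of the obligations, not who carries them.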
Risk classification: what makes an AI system high-risk?
The EU AI Act classifies AI systems into four tiers. Your compliance obligations depend entirely on where your system falls.
Prohibited (Article 5)
AI practices banned outright since February 2025. These include social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), manipulation techniques that cause harm, and exploitation of vulnerabilities. If your software does any of these, it cannot be placed on the EU market at all.
High-risk (Annex III)
AI systems used in specific domains carry the full compliance burden. The categories most relevant to custom software projects are:
- Employment and recruitment: CV screening, interview assessment, performance monitoring, promotion decisions
- Creditworthiness: Loan decisions, insurance pricing, credit scoring
- Education: Student scoring, examination assessment, adaptive learning that determines access
- Access to essential services: Benefits eligibility, healthcare triage, utility access
- Law enforcement and migration: Not typical for commercial software, but relevant for government contracts
Limited risk (Article 50)
AI systems that interact with people, generate synthetic content, or perform emotion recognition. The obligations here are primarily about transparency: tell users they are interacting with AI, or that content was AI-generated. Chatbots, AI-generated images, and deepfake generators often fall here, but context still matters.
Minimal risk
Everything else. Email categorisation, inventory forecasting, marketing automation, business intelligence, and AI-assisted code generation often fall here. Minimal risk does not mean no governance at all. General product safety law, UK GDPR, AI literacy, and any downstream obligations from model providers can still matter.
Where common custom software projects fall
| Project type | Likely classification | Key obligation |
|---|---|---|
| CRM with AI-powered lead scoring | Usually minimal, context-dependent | Check it is not used for employment, credit, or essential-service access |
| Customer service chatbot | Limited | Transparency: disclose AI interaction |
| Recruitment screening tool | High-risk | Full provider/deployer obligations |
| Educational assessment with AI marking | Potentially high-risk | Depends on whether AI determines access or grading |
| Internal productivity tools (Copilot, ChatGPT) | Minimal | AI literacy (already enforceable) |
| Insurance pricing model | High-risk | Full provider/deployer obligations |
| Content recommendation engine | Usually minimal or limited, context-dependent | Transparency if generating synthetic content or affecting sensitive access decisions |
If you are unsure whether a system qualifies as high-risk, do not rely on instinct alone. Document the system’s intended purpose, assess it against Article 6 and Annex III, and record the reasoning for the classification. Being conservative with edge cases is sensible, but the defensible position is a documented one. For more on the governance challenges of integrating AI into existing systems, see our guide to common AI integration challenges and how to navigate them.
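A documented classification decision can be a simple structured record. The field names below are illustrative, not a prescribed format under the Act; the point is that the intended purpose, the tier, and the reasoning are written down and dated.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """One documented risk-classification decision (field names are illustrative)."""
    system_name: str
    intended_purpose: str      # what the system is designed and marketed to do
    risk_tier: str             # prohibited | high | limited | minimal
    annex_iii_reasoning: str   # why it does or does not fall within Annex III
    assessed_on: date
    reviewed_by: str

# Hypothetical example: the CRM lead-scoring row from the table above
record = ClassificationRecord(
    system_name="Lead-scoring module",
    intended_purpose="Rank inbound sales leads for follow-up priority",
    risk_tier="minimal",
    annex_iii_reasoning="Not used for employment, credit, education, "
                        "or essential-service access decisions",
    assessed_on=date(2026, 1, 15),
    reviewed_by="Compliance lead",
)
```

Keeping records in this shape also makes reassessment cheap: if the intended purpose later changes, you have a baseline to compare against.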
What must be in place by August 2026?
The EU AI Act is not arriving all at once. Several provisions are already enforceable.
Already in force
- Prohibited AI practices banned since 2 February 2025
- AI literacy (Article 4) required since 2 February 2025: every organisation deploying AI must ensure staff have sufficient understanding of the systems they work with
- General-purpose AI model obligations since 2 August 2025: providers of GPAI models must maintain technical documentation, provide information to downstream providers, and put a policy in place to comply with EU copyright law
2 August 2026: the major high-risk systems milestone
Full obligations for high-risk AI systems become enforceable. If you provide or deploy a high-risk AI system in the EU market, you must have:
- Quality management system (QMS): documented procedures covering the AI lifecycle from design through decommissioning
- Technical documentation: detailed records of system architecture, training data, validation methodology, performance metrics, and known limitations
- Conformity assessment: self-assessment for most high-risk systems, or third-party assessment by a notified body for biometric identification systems
- EU database registration: high-risk systems must be registered before being placed on the market
- Post-market monitoring: active, documented processes for monitoring system performance after deployment
- Incident reporting: mechanisms for reporting serious incidents to national authorities
Fines can reach up to EUR 35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches. Authorities can also require non-compliant AI systems to be withdrawn from the EU market.
The Digital Omnibus caveat
Proposals have been discussed to delay Annex III high-risk obligations to December 2027. Product-embedded AI obligations may move to August 2028. Those proposals are not in force unless an amending regulation is adopted and published in the Official Journal. Until then, August 2026 remains the binding date for Annex III high-risk systems under the current timetable. Do not pause compliance planning on the assumption that a delay will arrive in time.
SME proportionality
The Act mandates proportionate treatment for SMEs: reduced fees for conformity assessments, access to regulatory sandboxes, and simplified documentation requirements. Proportionality is not exemption. If your system is in scope, the obligations still apply.
AI literacy: the obligation you may already be behind on
Article 4 has been in force since February 2025. It requires every organisation that deploys AI to ensure its staff have “sufficient AI literacy” for the AI systems they use. The Act does not prescribe a minimum standard.
In practice, sufficient AI literacy means:
- Staff understand what the AI tools they use can and cannot do
- Staff know the risks specific to the tools in their context (data leakage, hallucination, bias)
- The organisation has a documented acceptable use policy for AI tools
- Training records exist showing who completed what training and when
This applies even if you only use off-the-shelf tools like ChatGPT, Copilot, or AI features embedded in your CRM. If your team uses AI in any professional capacity, the literacy obligation is already live. If you have not documented it, you are technically behind.
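A training record does not need special tooling; a structured log that auditors can read is enough. This sketch uses a plain CSV shape with illustrative column names; keep whatever fields your own auditors or certifiers expect.

```python
import csv
from io import StringIO

# Minimal AI-literacy training log: who completed which training, for which
# tool, and when. Column names are illustrative, not a prescribed format.
log = StringIO()
writer = csv.DictWriter(log, fieldnames=["staff", "tool", "training", "completed_on"])
writer.writeheader()
writer.writerow({
    "staff": "J. Smith",
    "tool": "Copilot",
    "training": "Acceptable use and data-leakage risks",
    "completed_on": "2025-03-10",
})

# Reading it back gives the evidence trail Article 4 implies: rows of
# (person, tool, training, date) you can produce on request.
rows = list(csv.DictReader(StringIO(log.getvalue())))
```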
For organisations that already hold ISO 27001 certification, much of this overlaps with existing acceptable use policies and supplier risk assessments. The gap is usually the AI-specific documentation rather than the underlying governance.
What to ask your development partner
If you are commissioning custom AI-powered software, these questions should be part of your scoping and procurement process.
Risk classification
Have they assessed the AI components of the proposed system against the Annex III high-risk categories? Can they explain why each component falls into the classification they have assigned?
Ask for the intended purpose in writing. Risk classification depends on what the system is designed and marketed to do, not only on which model or API sits underneath it.
Provider and deployer allocation
Does the contract explicitly state who is the provider and who is the deployer under the EU AI Act? If you are the provider, what documentation will the development partner deliver to support your compliance?
The contract should also address Article 25. If you rebrand the system, substantially modify it, or change its intended purpose, who reassesses the compliance position?
Technical documentation
Will the deliverables include the technical documentation the Act requires for high-risk systems? This means system architecture, training data documentation, validation methodology, performance metrics, and known limitations. This is not the same as standard project documentation.
Post-market monitoring
Who is responsible for ongoing compliance after handover? If you are the provider, how will you monitor the system’s performance in production? If the development partner provides managed support, does the support agreement cover AI Act monitoring obligations?
AI literacy
Has the development team documented their own AI literacy compliance under Article 4? This is a reasonable procurement question. A supplier that has not addressed its own obligations may not be equipped to help you address yours.
AI code attribution
If AI tools were used in the development process, how is that tracked and documented? This matters for audit trails and supply chain transparency. For a practical framework, see our guide on AI code attribution for enterprise procurement.
A development partner that can answer these questions clearly is demonstrating the kind of AI governance maturity that reduces your compliance risk. One that cannot is a warning sign.
How the EU AI Act connects to compliance you already have
The EU AI Act does not exist in isolation. If your organisation already holds certifications or operates in a regulated sector, you have a head start.
ISO 27001
ISO 27001’s information security management system overlaps significantly with EU AI Act requirements. Supplier risk assessments (A.5.19, A.5.23), acceptable use policies (A.5.10), secure development guidelines (A.8.28), and data classification all map to AI Act obligations. The gap is usually AI-specific documentation: risk classification rationale, conformity assessment records, and post-market monitoring plans.
UK GDPR and DPIA
Data protection impact assessments (DPIAs) under UK GDPR already require you to assess automated decision-making that significantly affects individuals. The EU AI Act’s fundamental rights impact assessment (FRIA) applies to certain deployers of high-risk systems. This mainly covers public bodies, private entities providing public services, and specific sensitive use cases. If you already conduct DPIAs for AI systems, you are partway there, but the FRIA is not always identical.
Sector regulators
UK sector regulators are applying their existing frameworks to AI:
- ICO: automated decision-making, profiling, and AI-related data protection
- FCA: AI in financial services, algorithmic trading, credit decisions
- Ofcom: AI in content moderation and online safety
- CMA: competition implications of AI market concentration
For UK businesses with EU exposure, sector-specific UK regulation and the EU AI Act apply in parallel. The EU Act provides the comprehensive framework. UK sector guidance layers on top for domestic activities.
Compliance as a commercial differentiator
EU AI Act readiness is becoming a procurement filter. Large enterprise buyers, public sector bodies, and EU organisations are starting to ask suppliers about AI Act compliance posture. Being able to demonstrate that you have classified your AI systems, allocated provider and deployer responsibilities, and documented your governance is a genuine differentiator, particularly when your competitors have not started.
If you want to assess your organisation’s broader AI readiness beyond regulation, our AI readiness checklist covers the five dimensions that matter: data, people, process, infrastructure, and governance. For public sector teams navigating the UK’s own AI governance requirements, our guide on using AI to meet the GDS Service Standard covers the domestic side.
Where to start
The EU AI Act is not something to address in late July. If your business commissions or builds AI-powered software with any EU exposure, start with these steps:
- Inventory your AI systems. List every AI tool you use internally and every AI component in the software you commission or operate. Include embedded AI in SaaS platforms.
- Classify each system by risk tier. Use the four-tier framework above, but document the intended purpose and Annex III reasoning for each decision.
- Identify your role for each system. Are you the provider, the deployer, an importer, a distributor, or more than one of these? If you commission custom software, check what your contract says.
- Document AI literacy compliance. Training records, acceptable use policy, and evidence that your team understands the AI tools they use. This obligation is already live.
- Review contracts with development partners. Ensure provider and deployer responsibilities are explicitly allocated.
- For high-risk systems, begin the compliance workload now. Technical documentation, quality management systems, and conformity assessments take months, not weeks.
If you are commissioning a new AI-powered build and want to scope it with EU AI Act obligations in mind from the start, book a consultation to discuss how the Act affects your project.
Frequently asked questions
Does the EU AI Act apply to UK companies?
What is the difference between a provider and deployer under the EU AI Act?
Is using ChatGPT or Copilot in my business high-risk under the EU AI Act?
Do UK SMEs get any concessions under the EU AI Act?
What happens if you don't comply with the EU AI Act?
Does the EU AI Act apply to custom software I commission from a development partner?
Who is liable under the EU AI Act, me or my software supplier?
Related guides
Using AI to Meet the GDS Service Standard
How AI tools, agent rules and skills (in tools like Claude Code and Cursor), and the GOV.UK Design System help delivery teams meet the GDS Service Standard across research, design, build, and operation, and where human effort still concentrates.
AI Code Attribution for Enterprise Procurement Teams
A practical framework for tracking and documenting AI-generated code. Repo-level model logs, PR attribution notes, CI licence gates, SBOM integration, and what procurement teams should require from suppliers.
Is Your Organisation Ready for AI? A Practical Readiness Checklist
Most AI projects stall before they deliver value. This guide provides a structured readiness assessment across five dimensions: data, people, process, infrastructure, and governance.