
The EU AI Act and Custom Software: What UK Businesses Commissioning AI Need to Know

By Matt Hammond

The EU AI Act applies to UK businesses with EU exposure, regardless of Brexit. When you commission custom AI-powered software, the Act determines who carries which compliance obligations. That depends on whether you are the provider, the deployer, or a downstream party that has become the provider by rebranding or substantially modifying the system. This guide covers the provider vs deployer distinction, risk classification, the August 2026 high-risk systems milestone, and what to ask your development partner before you scope an AI-powered build.

Does the EU AI Act apply to my business?

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is rolling out in phases through 2027. The most significant upcoming milestone is 2 August 2026, when full obligations for Annex III high-risk AI systems become enforceable under the current legal timetable.

The Act has extraterritorial scope under Article 2. Three scenarios bring UK businesses into scope:

  • You place an AI system on the EU market. If you sell AI-powered software to EU customers, you are in scope regardless of where your company is incorporated.
  • Your AI system’s outputs affect EU individuals. A recruitment platform that screens candidates in the EU, a credit scoring system used by EU lenders, or an assessment tool used in EU schools all trigger the Act.
  • You have an EU subsidiary, channel partner, importer, or distributor through which AI reaches the EU.

If none of these apply, a purely domestic UK business with no EU customers or data subjects has no EU AI Act obligations. But most businesses with any European exposure are caught.

The UK’s own approach

The UK does not have a standalone AI law. Instead, existing regulators (the ICO, FCA, CMA, Ofcom, MHRA) apply their own frameworks to AI within their sectors. The UK's approach rests on five cross-sectoral principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

For UK businesses with EU exposure, this creates a dual-compliance burden: EU AI Act for anything touching the EU market, plus UK sector-specific regulation for domestic activities. Documented risk classification, evidence of human oversight, clear supplier records, and transparent technical documentation help across both regimes. The EU Act’s conformity assessment and CE marking requirements have no UK equivalent.

Provider vs deployer: who is liable when you commission custom software?

This is the question most existing EU AI Act guides skip, and it is the one that matters most when you commission a bespoke build.

The Act assigns obligations based on roles in the AI value chain. Two roles matter most when you commission custom software:

  • Provider (Article 3(3)): The entity that develops an AI system or has one developed, and places it on the market or puts it into service under its own name or trademark.
  • Deployer (Article 3(4)): The entity that uses an AI system under its own authority in a professional context.

Providers carry the heavy obligations: technical documentation, risk management, conformity assessment, quality management systems, post-market monitoring, and EU database registration. Deployers have lighter duties. These include AI literacy, transparency to end users, using the system according to the provider’s instructions, and, in some high-risk contexts, a fundamental rights impact assessment. For a broader look at responsible AI practices beyond the regulatory minimum, see our guide to responsible AI for business leaders.

Other roles may also matter. Importers and distributors have their own obligations when an AI system is brought into, or supplied within, the EU market. A non-EU provider may also need an EU-based authorised representative before placing a high-risk AI system on the EU market.

When you commission custom software, the allocation is not always obvious. In practice, many organisations are both provider and deployer across different systems.

Scenario 1: You commission a bespoke AI system and release it under your brand

You contract a development partner to build a recruitment screening tool. It goes to market under your company name. You are the provider. The development partner is a subcontractor. The full suite of provider obligations falls on you, including conformity assessment and technical documentation.

Scenario 2: You deploy a third-party AI product your partner integrates

Your development partner integrates a commercial AI API (Azure OpenAI, Anthropic Claude) into your CRM. The model vendor may be the provider of the general-purpose AI model. That does not automatically make them the provider of the AI system you place in front of users.

You are typically the deployer if you are using the system under the provider’s instructions. But you may become the provider if you productise the integration under your own name, substantially modify the system, or change its intended purpose so that it becomes high-risk.

That distinction matters. A chatbot that answers customer service questions is very different from an AI workflow that affects recruitment, credit, education, or access to essential services.

Scenario 3: Your development partner builds and sells the product

Your partner develops an AI product and sells it to multiple clients, including you. The partner places it on the market under their name. The partner is the provider. You are the deployer. Provider obligations sit with them.

The grey area: contracts that say nothing

If your contract with a development partner does not state who is the provider and who is the deployer, both parties face regulatory uncertainty. In practice, the entity whose name or trademark appears on the product when it reaches the market is likely treated as the provider. But “likely” is not the same as “clear.”

Article 25 adds another trap: a deployer, distributor, importer, or other third party can become the provider of a high-risk AI system if they rebrand it, make a substantial modification, or change the intended purpose so that the system becomes high-risk. Substantial modification means more than routine configuration. It includes changes that affect compliance, risk profile, or intended purpose.

Your contract should explicitly:

  • State which party is the provider under the EU AI Act
  • State which party is the deployer
  • Allocate responsibility for technical documentation, conformity assessment, and post-market monitoring
  • Specify who registers the system in the EU database (if high-risk)
  • Define how provider obligations transfer if you white-label or rebrand the system
  • Define what counts as a substantial modification, and who reassesses the system if the intended purpose changes

The decision flow below is a starting point. It does not replace legal advice, but it shows the questions that should be answered before an AI system reaches users.
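
As a rough illustration, here is that decision flow sketched in Python. The function name and inputs are assumptions for orientation only; the logic compresses Articles 3(3) and 25 as summarised above and is a starting point, not legal advice.

```python
# A sketch of the provider/deployer decision flow, for orientation only.
# It compresses Articles 3(3) and 25 as described above; not legal advice.

def classify_role(
    marketed_under_your_name: bool,   # your name or trademark on the system?
    substantially_modified: bool,     # more than routine configuration?
    repurposed_to_high_risk: bool,    # intended purpose changed so it becomes high-risk?
) -> str:
    """Return a first-pass EU AI Act role for one AI system."""
    # Article 3(3): whoever places the system on the market or puts it into
    # service under their own name or trademark is the provider, even if a
    # development partner built it (Scenario 1).
    if marketed_under_your_name:
        return "provider"
    # Article 25: rebranding, substantial modification, or changing the
    # intended purpose so the system becomes high-risk can shift provider
    # obligations onto a deployer, distributor, or importer.
    if substantially_modified or repurposed_to_high_risk:
        return "provider (via Article 25)"
    # Otherwise: you are using the system under your own authority (Scenario 3).
    return "deployer"

print(classify_role(True, False, False))   # Scenario 1 -> provider
print(classify_role(False, False, False))  # Scenario 3 -> deployer
```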

Risk classification: what makes an AI system high-risk?

The EU AI Act classifies AI systems into four tiers. Your compliance obligations depend entirely on where your system falls.

Prohibited (Article 5)

AI practices banned outright since 2 February 2025. These include social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), manipulative techniques that cause significant harm, and exploitation of vulnerabilities such as age or disability. If your software does any of these, it cannot be placed on the EU market at all.

High-risk (Annex III)

AI systems used in specific domains carry the full compliance burden. The categories most relevant to custom software projects are:

  • Employment and recruitment: CV screening, interview assessment, performance monitoring, promotion decisions
  • Creditworthiness: Loan decisions, insurance pricing, credit scoring
  • Education: Student scoring, examination assessment, adaptive learning that determines access
  • Access to essential services: Benefits eligibility, healthcare triage, utility access
  • Law enforcement and migration: Not typical for commercial software, but relevant for government contracts

Limited risk (Article 50)

AI systems that interact with people, generate synthetic content, or perform emotion recognition. The obligations are primarily transparency duties: tell users they are interacting with AI, or that content was AI-generated. Chatbots, AI image generators, and systems that produce deepfakes often fall here, but context still matters.

Minimal risk

Everything else. Email categorisation, inventory forecasting, marketing automation, business intelligence, and AI-assisted code generation often fall here. Minimal risk does not mean no governance at all. General product safety law, UK GDPR, AI literacy, and any downstream obligations from model providers can still matter.

Where common custom software projects fall

Project type | Likely classification | Key obligation
---|---|---
CRM with AI-powered lead scoring | Usually minimal, context-dependent | Check it is not used for employment, credit, or essential-service access
Customer service chatbot | Limited | Transparency: disclose AI interaction
Recruitment screening tool | High-risk | Full provider/deployer obligations
Educational assessment with AI marking | Potentially high-risk | Depends on whether AI determines access or grading
Internal productivity tools (Copilot, ChatGPT) | Minimal | AI literacy (already enforceable)
Insurance pricing model | High-risk | Full provider/deployer obligations
Content recommendation engine | Usually minimal or limited, context-dependent | Transparency if generating synthetic content or affecting sensitive access decisions

If you are unsure whether a system qualifies as high-risk, do not rely on instinct alone. Document the system’s intended purpose, assess it against Article 6 and Annex III, and record the reasoning for the classification. Being conservative with edge cases is sensible, but the defensible position is a documented one. For more on the governance challenges of integrating AI into existing systems, see our guide to common AI integration challenges and how to navigate them.
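
One way to make that documented position concrete is to keep the classification as a structured record alongside the project. A minimal sketch, with illustrative field names rather than anything the Act prescribes:

```python
# Minimal sketch of a documented risk classification -- the field names are
# illustrative assumptions, not a format prescribed by the Act.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskClassification:
    system_name: str
    intended_purpose: str            # what the system is designed and marketed to do
    annex_iii_categories: list[str]  # matched Annex III areas, empty if none
    tier: str                        # "prohibited" | "high-risk" | "limited" | "minimal"
    reasoning: str                   # why the tier was assigned
    assessed_on: date
    assessed_by: str

record = RiskClassification(
    system_name="CRM lead scoring",
    intended_purpose="Rank inbound sales leads for follow-up priority",
    annex_iii_categories=[],  # not employment, credit, education, or essential services
    tier="minimal",
    reasoning="Outputs influence sales prioritisation only; no Annex III domain is touched.",
    assessed_on=date(2026, 1, 15),
    assessed_by="Head of Engineering",
)
```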

What must be in place by August 2026?

The EU AI Act is not arriving all at once. Several provisions are already enforceable.

Already in force

  • Prohibited AI practices banned since 2 February 2025
  • AI literacy (Article 4) required since 2 February 2025: every organisation deploying AI must ensure staff have sufficient understanding of the systems they work with
  • General-purpose AI model obligations since 2 August 2025: providers of GPAI models must maintain technical documentation, put a policy in place to comply with EU copyright law, and publish a summary of the content used for training

2 August 2026: the major high-risk systems milestone

Full obligations for high-risk AI systems become enforceable. If you provide or deploy a high-risk AI system in the EU market, you must have:

  • Quality management system (QMS): documented procedures covering the AI lifecycle from design through decommissioning
  • Technical documentation: detailed records of system architecture, training data, validation methodology, performance metrics, and known limitations
  • Conformity assessment: self-assessment for most high-risk systems, or third-party assessment by a notified body for biometric identification systems
  • EU database registration: high-risk systems must be registered before being placed on the market
  • Post-market monitoring: active, documented processes for monitoring system performance after deployment
  • Incident reporting: mechanisms for reporting serious incidents to national authorities

Fines can reach up to 35 million EUR or 7% of global annual turnover for the most serious breaches. Authorities can also require non-compliant AI systems to be withdrawn from the EU market.

The Digital Omnibus caveat

Proposals have been discussed to delay Annex III high-risk obligations to December 2027. Product-embedded AI obligations may move to August 2028. Those proposals are not in force unless an amending regulation is adopted and published in the Official Journal. Until then, August 2026 remains the binding date for Annex III high-risk systems under the current timetable. Do not pause compliance planning on the assumption that a delay will arrive in time.

SME proportionality

The Act mandates proportionate treatment for SMEs: reduced fees for conformity assessments, access to regulatory sandboxes, and simplified documentation requirements. Proportionality is not exemption. If your system is in scope, the obligations still apply.

AI literacy: the obligation you may already be behind on

Article 4 has been in force since 2 February 2025. It requires providers and deployers of AI systems to ensure their staff have “sufficient AI literacy” for the AI systems they work with. The Act does not prescribe a minimum standard.

In practice, sufficient AI literacy means:

  • Staff understand what the AI tools they use can and cannot do
  • Staff know the risks specific to the tools in their context (data leakage, hallucination, bias)
  • The organisation has a documented acceptable use policy for AI tools
  • Training records exist showing who completed what training and when

This applies even if you only use off-the-shelf tools like ChatGPT, Copilot, or AI features embedded in your CRM. If your team uses AI in any professional capacity, the literacy obligation is already live. If you have not documented it, you are technically behind.
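
For illustration, a training record can be as simple as the sketch below. The structure and field names are assumptions; Article 4 prescribes no format, only that the evidence exists.

```python
# Illustrative AI literacy training record -- the Act prescribes no format,
# only that you can evidence who completed what training and when.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    staff_member: str
    tool: str           # e.g. "Copilot", "ChatGPT", "CRM AI features"
    module: str         # training module or policy version acknowledged
    completed_on: date

records = [
    TrainingRecord("A. Example", "Copilot", "Acceptable use policy v2", date(2025, 3, 1)),
    TrainingRecord("A. Example", "ChatGPT", "Hallucination and data-leakage risks", date(2025, 3, 8)),
]
```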

For organisations that already hold ISO 27001 certification, much of this overlaps with existing acceptable use policies and supplier risk assessments. The gap is usually the AI-specific documentation rather than the underlying governance.

What to ask your development partner

If you are commissioning custom AI-powered software, these questions should be part of your scoping and procurement process.

Risk classification

Have they assessed the AI components of the proposed system against the Annex III high-risk categories? Can they explain why each component falls into the classification they have assigned?

Ask for the intended purpose in writing. Risk classification depends on what the system is designed and marketed to do, not only on which model or API sits underneath it.

Provider and deployer allocation

Does the contract explicitly state who is the provider and who is the deployer under the EU AI Act? If you are the provider, what documentation will the development partner deliver to support your compliance?

The contract should also address Article 25. If you rebrand the system, substantially modify it, or change its intended purpose, who reassesses the compliance position?

Technical documentation

Will the deliverables include the technical documentation the Act requires for high-risk systems? This means system architecture, training data documentation, validation methodology, performance metrics, and known limitations. This is not the same as standard project documentation.

Post-market monitoring

Who is responsible for ongoing compliance after handover? If you are the provider, how will you monitor the system’s performance in production? If the development partner provides managed support, does the support agreement cover AI Act monitoring obligations?

AI literacy

Has the development team documented their own AI literacy compliance under Article 4? This is a reasonable procurement question. A supplier that has not addressed its own obligations may not be equipped to help you address yours.

AI code attribution

If AI tools were used in the development process, how is that tracked and documented? This matters for audit trails and supply chain transparency. For a practical framework, see our guide on AI code attribution for enterprise procurement.

A development partner that can answer these questions clearly is demonstrating the kind of AI governance maturity that reduces your compliance risk. One that cannot is a red flag.

How the EU AI Act connects to compliance you already have

The EU AI Act does not exist in isolation. If your organisation already holds certifications or operates in a regulated sector, you have a head start.

ISO 27001

ISO 27001’s information security management system overlaps significantly with EU AI Act requirements. Supplier risk assessments (A.5.19, A.5.23), acceptable use policies (A.5.10), secure development guidelines (A.8.28), and data classification all map to AI Act obligations. The gap is usually AI-specific documentation: risk classification rationale, conformity assessment records, and post-market monitoring plans.

UK GDPR and DPIA

Data protection impact assessments (DPIAs) under UK GDPR already require you to assess automated decision-making that significantly affects individuals. The EU AI Act’s fundamental rights impact assessment (FRIA) applies to certain deployers of high-risk systems. This mainly covers public bodies, private entities providing public services, and specific sensitive use cases. If you already conduct DPIAs for AI systems, you are partway there, but the FRIA is not always identical.

Sector regulators

UK sector regulators are applying their existing frameworks to AI:

  • ICO: automated decision-making, profiling, and AI-related data protection
  • FCA: AI in financial services, algorithmic trading, credit decisions
  • Ofcom: AI in content moderation and online safety
  • CMA: competition implications of AI market concentration

For UK businesses with EU exposure, sector-specific UK regulation and the EU AI Act apply in parallel. The EU Act provides the comprehensive framework. UK sector guidance layers on top for domestic activities.

Compliance as a commercial differentiator

EU AI Act readiness is becoming a procurement filter. Large enterprise buyers, public sector bodies, and EU organisations are starting to ask suppliers about AI Act compliance posture. Being able to demonstrate that you have classified your AI systems, allocated provider and deployer responsibilities, and documented your governance is a genuine differentiator, particularly when your competitors have not started.

If you want to assess your organisation’s broader AI readiness beyond regulation, our AI readiness checklist covers the five dimensions that matter: data, people, process, infrastructure, and governance. For public sector teams navigating the UK’s own AI governance requirements, our guide on using AI to meet the GDS Service Standard covers the domestic side.

Where to start

The EU AI Act is not something to address in July 2026, weeks before the high-risk deadline. If your business commissions or builds AI-powered software with any EU exposure, start with these steps:

  1. Inventory your AI systems. List every AI tool you use internally and every AI component in the software you commission or operate. Include embedded AI in SaaS platforms. A minimal sketch of an inventory entry follows this list.
  2. Classify each system by risk tier. Use the four-tier framework above, but document the intended purpose and Annex III reasoning for each decision.
  3. Identify your role for each system. Are you the provider, the deployer, an importer, a distributor, or more than one of these? If you commission custom software, check what your contract says.
  4. Document AI literacy compliance. Training records, acceptable use policy, and evidence that your team understands the AI tools they use. This obligation is already live.
  5. Review contracts with development partners. Ensure provider and deployer responsibilities are explicitly allocated.
  6. For high-risk systems, begin the compliance workload now. Technical documentation, quality management systems, and conformity assessments take months, not weeks.
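
As referenced in step 1, here is what a minimal inventory entry might look like. The keys are illustrative assumptions, not a prescribed schema; steps 2 and 3 supply the risk tier and role for each entry.

```python
# Illustrative AI system inventory entries for step 1 -- keys are
# assumptions, not a prescribed schema.
inventory = [
    {
        "system": "Customer service chatbot",
        "where_used": "Public website",
        "ai_component": "Hosted LLM API",
        "role": "deployer",       # step 3: provider / deployer / importer / distributor
        "risk_tier": "limited",   # step 2: documented against Article 6 and Annex III
        "contract_ref": "MSA-2024-017",  # hypothetical reference
    },
    {
        "system": "Recruitment screening tool",
        "where_used": "Internal HR platform",
        "ai_component": "Bespoke ranking model",
        "role": "provider",
        "risk_tier": "high-risk",
        "contract_ref": "SOW-2025-003",  # hypothetical reference
    },
]
```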

If you are commissioning a new AI-powered build and want to scope it with EU AI Act obligations in mind from the start, book a consultation to discuss how the Act affects your project.

Frequently asked questions

Does the EU AI Act apply to UK companies?
Yes. The EU AI Act has extraterritorial scope under Article 2. It applies to any organisation that places an AI system on the EU market, puts an AI system into service within the EU, or deploys an AI system whose outputs are used by people in the EU. Brexit does not create an exemption. If your software serves EU customers, processes EU personal data, or produces outputs that affect EU individuals, the Act applies to you.
What is the difference between a provider and deployer under the EU AI Act?
A provider develops an AI system or has one developed, and places it on the market or puts it into service under their own name or trademark. A deployer uses an AI system under their own authority in a professional context. Providers carry the heavier obligations: technical documentation, conformity assessment, quality management systems, and post-market monitoring. Deployers have lighter duties. They still need AI literacy, transparency where relevant, use of the system according to the provider's instructions, and, in some high-risk contexts, a fundamental rights impact assessment.
Is using ChatGPT or Copilot in my business high-risk under the EU AI Act?
Almost certainly not. General-purpose AI tools used for content drafting, code assistance, or internal productivity fall into the minimal or limited risk category. High-risk classification under Annex III is reserved for AI systems used in specific domains: employment and recruitment decisions, creditworthiness assessment, educational scoring, access to essential services, law enforcement, and migration. Using ChatGPT to draft emails is not high-risk. Building a system that uses AI to screen job applicants is.
Do UK SMEs get any concessions under the EU AI Act?
Yes. The Act mentions SMEs 38 times and mandates proportionate treatment. Concessions include reduced fees for conformity assessments, access to regulatory sandboxes with free or reduced-cost participation, and simplified technical documentation requirements (proposed in the Digital Omnibus). Proportionality is built in, but it is not an exemption. If your AI system is high-risk and serves the EU market, the obligations still apply.
What happens if you don't comply with the EU AI Act?
Fines for prohibited AI practices reach up to 35 million EUR or 7% of global annual turnover, whichever is higher. Other breaches carry lower caps, such as 15 million EUR or 3% for most high-risk obligations. Beyond fines, national authorities can order the withdrawal of non-compliant AI systems from the EU market. For UK businesses, this means losing access to EU customers. SMEs face reduced fine caps, but even these can be material.
Does the EU AI Act apply to custom software I commission from a development partner?
Yes, if the software contains AI components and serves the EU market. The critical question is whether you or your development partner is the 'provider' under the Act. You may become the provider if the system is placed on the market under your name or brand. The same can happen if you substantially modify an existing high-risk system, or change the intended purpose so that a system becomes high-risk. Your contract with the development partner should explicitly allocate provider and deployer responsibilities.
Who is liable under the EU AI Act, me or my software supplier?
It depends on the contractual and commercial arrangement. The Act assigns obligations based on role: the provider (whoever places the AI system on the market under their name) carries the heaviest duties. If you commission bespoke software and release it under your brand, you are likely the provider. If your supplier sells a product and you deploy it, you are the deployer. If the contract is silent on this, both parties face regulatory uncertainty. The contract must explicitly state who is the provider and who is the deployer.
