
AI Code Ownership in the UK: A Legal and Practical Guide for CTOs

Louise Clayton · 10 min read

The UK is unusual: section 9(3) of the Copyright, Designs and Patents Act 1988 recognises computer-generated works and names the person who made the necessary arrangements as the author. That provision gives UK businesses a potential foothold for protecting AI-written code, but courts have never applied it to large language models. Until they do, practical policies, strong contracts, and automated licence scanning are your most reliable protection.

Most guidance on AI code ownership focuses on US copyright law, where the Copyright Office has repeatedly refused to register AI-generated works. The UK position is different, and for CTOs running development teams in Britain it matters. The Copyright, Designs and Patents Act 1988 (CDPA) contains a provision that no other major jurisdiction has matched. Whether it holds up for modern AI is an open question, but understanding it changes how you structure your contracts and policies.

This article covers the legal landscape, what it means for code written with tools like Claude Code, GitHub Copilot, and Cursor, and the practical steps you can take now without waiting for a court ruling.

For a broader look at AI code IP across all major jurisdictions, see our guide on who owns AI-written code.

How does UK law treat AI-generated code?

The CDPA grants copyright in computer-generated works to “the author,” defined in section 9(3) as “the person by whom the arrangements necessary for the creation of the work are undertaken.” This is a narrow but significant carve-out. In the US and EU, copyright requires a human author with creative expression; the UK provision, drafted in 1988 with algorithmic generators and spreadsheet macros in mind, does not require one.

The practical implications are:

  • A developer who writes detailed prompts, iterates on the output, and reviews the result has a reasonable argument for being the person who made the arrangements.
  • A developer who hits Accept on every suggestion with no review has a much weaker claim.
  • The provision has never been tested against a generative AI system in litigation, so its scope for LLM output remains genuinely uncertain.

In contrast, the US Copyright Office confirmed in 2023 guidance that AI-generated material lacking human authorship is not registrable. The EU AI Act addresses transparency and liability but does not resolve copyright ownership of outputs. The UK’s position is therefore more favourable to rightsholders, even if it is not settled.

| Jurisdiction | Copyright provision for AI output | Current status |
| --- | --- | --- |
| United Kingdom | CDPA 1988 s.9(3): the person who makes the necessary arrangements is named as the author of computer-generated works | Untested for LLMs; provides a potential foothold but no court ruling yet |
| United States | Copyright Office: AI-generated material without sufficient human authorship is not registrable | Confirmed policy since 2023; registration refused for AI-only outputs |
| European Union | EU AI Act: addresses transparency and liability; no equivalent computer-generated works provision | Ownership of outputs follows general copyright rules; human authorship required |
| Australia | No computer-generated works provision; follows human authorship requirement | Copyright Office review ongoing; no settled position |

What does this mean for code written with Claude, Copilot, or Cursor?

The answer depends on how much human creative involvement went into the output, and what the tool’s own terms say.

Anthropic (Claude and Claude Code)

Anthropic’s terms of service assign ownership of outputs to the user. You have the contractual right to use, modify, and distribute code generated by Claude Code. For copyright purposes, the degree to which you directed the generation, reviewed the output, and made edits determines whether section 9(3) applies.

GitHub Copilot

Microsoft’s terms similarly grant users rights to Copilot suggestions. Paid Business and Enterprise subscribers receive a limited IP indemnity that covers third-party claims arising from Copilot output, provided the duplicate-detection filter is enabled. The indemnity has not been tested in court and should be treated as one layer of protection, not a complete shield.

Cursor and other IDE-integrated tools

Cursor’s terms grant the user full ownership of generated code. The same copyright uncertainty applies. Where Cursor uses an underlying model (Claude, GPT-4, etc.), the relevant provider’s terms also apply.

The common thread: contractual ownership is broadly assigned to users, but copyright protection of AI output depends on human involvement and is legally uncertain everywhere.

| Tool | Contractual ownership assigned to user | IP indemnity available | Key condition |
| --- | --- | --- | --- |
| Claude Code (Anthropic) | Yes | No | Degree of human direction and review strengthens copyright claim |
| GitHub Copilot (Microsoft) | Yes | Yes, for Business and Enterprise tiers | Duplicate-detection filter must be enabled; indemnity untested in court |
| Cursor | Yes | No | Underlying model provider terms (Anthropic, OpenAI) also apply |
| Codeium / Windsurf | Yes | No | Enterprise plans include data processing agreements |

What should your employment contracts say about AI-generated IP?

Most standard employment contracts pre-date AI coding tools and do not address them. Without specific clauses, ownership of computer-generated works in an employment context is ambiguous, particularly because section 9(3)’s arrangements test does not say whether the relevant person is the prompting employee or the employer who directed the work.

Contracts for developers using AI tools should include:

1. An explicit AI work product assignment clause

Add a clause that all work product created with AI assistance, including code suggestions, refactors, and generated tests, is assigned to the employer with the same effect as conventionally authored code. Do not rely on a generic “all intellectual property created in the course of employment” clause to cover AI output automatically.

2. A disclosure obligation

Require developers to log which AI tools they use on a per-project or per-repository basis. This creates the audit trail you need if ownership is ever challenged and gives the employer visibility into tool adoption.

3. A licence scan gate

Add a contractual obligation that no AI-generated code may be committed to a repository until it has passed an automated open-source licence scan. This protects the employer from inadvertent copyleft contamination.

4. An IP indemnity clause in supplier contracts

When outsourcing development to agencies or contractors who use AI tools, require: a warranty that all deliverables are free from third-party IP claims, an obligation to disclose AI tool usage, and an indemnity covering any IP infringement arising from AI-generated material. The party closest to the tooling should carry the risk.

A practical IP protection framework for enterprise teams

Waiting for legal certainty before acting is not a viable strategy. The following framework gives you meaningful protection now.

Step 1: Classify your AI tool usage

Categorise how your team uses AI tools across three risk tiers:

  • Low risk: AI for documentation, commit messages, test generation with human review on every output.
  • Medium risk: AI for feature code with peer review but no licence scan.
  • High risk: AI-generated code merged without review or scan, particularly in modules that will be licensed to third parties.
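In practice the tier assignment above reduces to two yes/no questions per workflow. As an illustrative sketch (the tier names and inputs are this article's framing, not a standard):

```python
def classify_usage(human_review: bool, licence_scan: bool) -> str:
    """Map review/scan practices to the three risk tiers described above."""
    if human_review and licence_scan:
        return "low"       # every output reviewed and scanned
    if human_review:
        return "medium"    # peer review but no licence scan
    return "high"          # merged without review or scan

print(classify_usage(human_review=True, licence_scan=False))  # → medium
```

A spreadsheet per repository answering the same two questions works just as well; the point is that the classification is mechanical once you know the workflow.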

Step 2: Implement automated licence scanning

Integrate a licence scanner (FOSSA, Snyk, Black Duck, or GitHub Advanced Security) into your CI/CD pipeline as a blocking gate on every pull request. Configure it to flag any snippet matching a known copyleft licence. This step alone addresses the most commercially dangerous risk: copyleft-licensed snippets reaching proprietary code.
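To make the gate concrete, here is a deliberately minimal stand-in for what the scanner does, assuming a diff on stdin-style input: the commercial tools do snippet-level matching against licence databases, while this toy version only catches explicit SPDX copyleft identifiers in added lines. The regex and CI wiring are illustrative assumptions, not any vendor's behaviour.

```python
import re

# Copyleft SPDX identifiers to block; extend to match your licence policy.
COPYLEFT = re.compile(r"\b(GPL-[23]\.0|AGPL-3\.0|LGPL-[23]\.[01]|SSPL-1\.0)\b")

def scan_diff(diff_text: str) -> list[str]:
    """Return added lines ('+' prefix) that mention a copyleft identifier."""
    return [
        line
        for line in diff_text.splitlines()
        if line.startswith("+") and COPYLEFT.search(line)
    ]
```

In CI, a wrapper script would exit non-zero whenever `scan_diff` returns findings, which is what turns the check into a blocking gate on the pull request.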

Step 3: Require human review for all AI output

Establish a policy that every AI-generated code block must be reviewed and signed off by a named developer before merging. The review does not need to rewrite the code, but it must be documented. That documented human involvement strengthens your section 9(3) argument and creates the audit trail suppliers and enterprise customers may request.

Step 4: Maintain a model and version log

Log which AI tools and model versions are in use across each project. As terms of service change (they have changed several times in the past two years), you need to know which version’s terms governed which codebase.
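The log itself can be as simple as an append-only JSON Lines file, one entry per tool-and-repository pairing. The file name, field names, and placeholder values below are illustrative assumptions:

```python
import datetime
import json
from pathlib import Path

def record_usage(repo: str, tool: str, model_version: str,
                 log_path: Path = Path("ai-tool-log.jsonl")) -> dict:
    """Append one dated entry recording which model version touched which repo."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "repo": repo,
        "tool": tool,
        "model_version": model_version,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

A scheduled CI job calling this for each active repository is enough: when a provider's terms change, the dated entries show which version's terms governed which period of development.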

Step 5: Review your customer-facing licence terms

If you licence software to third parties, add a representation that the product does not contain AI-generated material that infringes third-party IP. This is now a standard expectation in enterprise procurement. Failure to include it is increasingly a red flag during legal due diligence.

How to audit AI tool usage across your development team

Policy documents mean nothing without enforcement. The following mechanisms let you verify compliance without micromanaging.

CI/CD gates are the most reliable control. A licence scan that blocks the pull request is enforced every time, regardless of whether developers remember the policy. Pair it with a secret-scanning gate (GitHub Advanced Security, TruffleHog) to catch API keys that AI tools occasionally reproduce from training data.

Telemetry dashboards from enterprise AI tool subscriptions (GitHub Copilot for Business, Cursor Business) show per-developer acceptance rates, suggestion volumes, and tool version. Export this data monthly and review it as part of your security posture reporting.

Periodic spot checks on merged code in high-risk modules (licensing, payment, authentication) add a human layer above automated gates. Allocate 30 minutes per sprint to a lead developer reviewing AI-heavy pull requests for completeness of review.

Code provenance metadata in your Git commit convention can record AI involvement. A simple convention such as an [AI-assisted] tag in commit messages, enforced by a pre-commit hook, makes it easy to filter and audit AI contributions during due diligence.
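The enforcement a hook provides is mostly about keeping the tag machine-filterable: loose spellings like "AI assisted" or "(ai-assisted)" would slip past a later `git log --grep` audit. A minimal sketch of the check a commit-msg hook could run, where the tag and the variant pattern are conventions assumed here rather than a standard:

```python
import re

CANONICAL = "[AI-assisted]"
# Loose spellings of the tag that would escape a grep for the canonical form.
VARIANT = re.compile(r"\(?\bAI[ -]assisted\b\)?", re.IGNORECASE)

def check_message(message: str) -> bool:
    """Accept untagged messages and canonical tags; reject loose variants."""
    if CANONICAL in message:
        return True
    return not VARIANT.search(message)
```

Wired into `.git/hooks/commit-msg` (a small script that reads the message file Git passes as its first argument and exits non-zero when `check_message` fails), the convention stays consistent enough that `git log --grep="\[AI-assisted\]"` reliably surfaces every AI-assisted commit during due diligence.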

Where this leaves you

The UK’s CDPA section 9(3) provision is an advantage compared to other jurisdictions, but it is not a guarantee. The prudent position is to:

  1. Treat section 9(3) as a potential layer of protection rather than a certainty.
  2. Use contractual mechanisms (assignment clauses, disclosure obligations, licence scans, and indemnities) as your primary legal protection.
  3. Implement automated CI/CD gates so that compliance is structural, not policy-dependent.
  4. Keep a model version log so that you can demonstrate due diligence if ownership is ever challenged.

For services that involve Claude Code specifically, our Claude Code development team builds these controls into every engagement from day one.
