Who Owns AI-Written Code? What CTOs, Developers, and Procurement Teams Need to Know
AI-generated code may not be legally owned by anyone, since most jurisdictions require human authorship for copyright. Teams using tools like GitHub Copilot, Cursor, or Claude Code should treat AI output as a third-party contribution: review it, scan for open-source licence contamination, and address IP ownership explicitly in contracts.
Generative AI is transforming how software is written. Tools like GitHub Copilot, Claude Code, Cursor, and OpenAI Codex are now capable of suggesting full functions, refactoring legacy modules, and scaffolding new features in seconds.
But as this machine-authored code finds its way into production, a critical question arises: who owns it, and who’s responsible if something goes wrong?
In this post, we’ll unpack the legal grey areas, highlight risks around licensing and attribution, and offer practical guidance on how teams can safely adopt AI-assisted development tools.
Four Leading AI Coding Tools And How They Differ
Let’s start with a quick overview of the most popular options on the market:
🧠 GitHub Copilot (powered by OpenAI by default)
- Suggests code directly in the IDE (VS Code, JetBrains)
- Trained on a large corpus of public GitHub repositories
- Copilot Enterprise allows integration with your private repos (e.g., on GitHub Enterprise)
- Offers limited indemnity for paid users, plus admin controls
💬 Claude and Claude Code (Anthropic)
- Claude is available via web UI and API; excels at long-context reasoning and is widely used for code review, explanation, and refactoring
- Claude Code is Anthropic’s dedicated agentic coding tool. It operates directly in your terminal, can read your entire codebase, create and edit files, run tests, and execute multi-step engineering tasks autonomously
- Claude Code is increasingly used in professional development workflows alongside editors like Cursor and VS Code
- Anthropic’s terms of service state that you own the outputs Claude generates, though copyright protection depends on the level of human involvement (more on this below)
🖥 Cursor
- A developer-focused IDE based on VS Code with built-in AI chat
- Connects to your own codebase, enabling “context-aware” suggestions
- Supports multiple models (Claude, GPT-4, etc.)
- Keeps local context private by default (depending on model provider)
⚙️ OpenAI Codex
- Powers tools like Copilot and the OpenAI API’s /v1/completions endpoint for code
- Developers can build their own apps or plugins using Codex directly
- Highly customisable, but offers no built-in safeguards or context management
Each tool differs in model transparency, context privacy, licensing protections, and enterprise readiness. These factors are key when choosing tools for production environments.
Who Owns AI-Generated Code?
In most jurisdictions:
- Only humans can own copyright, meaning code produced purely by an AI may not be legally owned at all.
- If a developer prompts the model and modifies the result, human authorship can be claimed.
- However, legal precedent is evolving, and different countries may interpret this differently over time.
This matters because unowned or ambiguous code could be:
- Freely copied or reused by others
- Unprotectable under IP law
- A risk in M&A, due diligence, or IP disputes
Claude Code: Copyright and Ownership
Claude Code has become one of the most widely adopted AI coding tools in professional software development. Understanding the copyright and ownership position is essential for any team using it in production.
What Anthropic’s terms say
Anthropic’s terms of service assign ownership of Claude Code outputs to the user. You retain the rights to code that Claude Code generates on your behalf. This is a clear, explicit contractual right between you and Anthropic.
However, contractual ownership is not the same as copyright protection. The distinction matters.
The copyright question
In the UK, US, and most other jurisdictions, copyright requires human authorship. Code produced entirely by an AI, with no meaningful human creative input, may not qualify for copyright protection at all.
This does not mean the code is worthless or unusable. It means a competitor could, in theory, reproduce identical output without infringing your copyright, because there may be no copyright to infringe.
What “substantial human involvement” means in practice
The good news for teams using Claude Code is that most real-world usage involves significant human direction. You are defining the requirements, reviewing the output, making architectural decisions, selecting what to keep, and modifying what needs changing. This level of involvement strengthens the case for human authorship considerably.
Where the position becomes weaker is when Claude Code generates large volumes of code with minimal review or modification. If a developer accepts multi-file output wholesale without meaningful editing, the human authorship argument is harder to sustain.
Practical steps to protect your IP when using Claude Code
- Review and modify all AI-generated code before committing. Even small modifications strengthen the human authorship position.
- Document your process. Record that a human developer directed the work, reviewed the output, and made editorial decisions. This creates an audit trail that supports an authorship claim.
- Treat Claude Code as a contributor, not the author. Your internal policies should make clear that the developer, not the tool, is the author of record.
- Address it in contracts. If you are building software for clients, include clear terms about AI tool usage, IP ownership, and indemnity. See our guidance on software source code ownership and intellectual property in software development for more.
- Scan for open-source contamination. Claude Code, like all LLM-based tools, may produce output that resembles existing open-source code. Run licence scans as part of your CI/CD pipeline.
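The "document your process" step can be as lightweight as a git commit trailer, which gives you a machine-readable audit trail for free. A minimal sketch, assuming a trailer named `AI-Assisted` (our own convention, not an industry standard):

```shell
#!/bin/sh
# Sketch: record AI involvement as a git commit trailer so a later
# audit can find every AI-assisted commit. The "AI-Assisted" trailer
# name is a convention we are assuming here, not a standard.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Embed the trailer at the end of the commit message when committing
# AI-assisted work (the demo identity is for this sandbox repo only).
git -c user.name=demo -c user.email=demo@example.com commit -q \
  --allow-empty -m "Refactor billing module

AI-Assisted: Claude Code (reviewed and edited by a named developer)"

# An auditor can then list every AI-assisted commit:
git log --format='%h %s' --grep='AI-Assisted:'
```

Because trailers live in commit history, they survive refactors and team changes, and `git log --grep` can pull a complete usage report at any time.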
For teams that want the productivity benefits of Claude Code with the confidence that their IP is protected, working with an experienced Claude Code development team ensures the right governance is in place from the start.
The Real Risk: Training Data and Open Source Contamination
Most LLMs used for code generation were trained on public datasets, often including open source code. That creates two primary legal risks:
1. Inadvertent Inclusion of Copied Snippets
- Some AI tools have reproduced exact or near-exact copies of open source code
- This may expose you to GPL or SSPL licence obligations
2. Lack of Attribution
- Licences like MIT or Apache require giving credit
- AI tools don’t include attribution headers unless prompted
GitHub Copilot, for example, has faced criticism for potentially emitting code identical to snippets from public repos. While rare, it’s possible, and puts the onus on developers to check.
Enterprise Features That Reduce Risk
If you’re planning to use AI tools across a team or organisation, prioritise features that mitigate compliance and legal exposure:
✅ Private Codebase Integration
- Copilot Enterprise and Cursor can restrict model access to your own repos only
- This improves relevance while avoiding training-data surprises
✅ Prompt Isolation and Data Privacy
- Claude and OpenAI’s API offer controls to disable data logging or sharing
- Some platforms allow you to run models in a fully private environment
✅ Reference Transparency
- Cursor and some LLM wrappers can show source URLs for completions
- This allows developers to manually validate licences
✅ Indemnity and Commercial Terms
- GitHub Copilot for Business offers limited indemnity against claims
- For critical IP, look for signed contracts with your vendor, not just terms of service
How to Use AI Code Generators Safely
Whether you’re using Copilot, Claude, or any other tool, these principles apply:
🧾 1. Treat AI as a third-party contributor
- Review, test, and document AI-generated code just as you would with open source
- Avoid direct copy-paste of long completions without editing
🔍 2. Scan for licensing risk
- Run code scans (e.g. FOSSA, Snyk, or GitHub Advanced Security) to identify similarities to existing OSS
- Watch for suspicious patterns or licence headers
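Between full scanner runs, even a grep for verbatim copyleft markers catches the most obvious reproductions. A minimal sketch (the pattern list is illustrative, and this is no substitute for FOSSA, Snyk, or GitHub Advanced Security, which match against snippet databases rather than headers):

```shell
#!/bin/sh
# Minimal sketch: flag verbatim copyleft licence markers in the files
# passed as arguments. This only catches obvious headers that an AI
# tool may have reproduced from its training data.
PATTERNS='GNU General Public License|Server Side Public License|SPDX-License-Identifier: *(GPL|AGPL|SSPL)'

check_licences() {
  found=0
  for file in "$@"; do
    if grep -Eq "$PATTERNS" "$file" 2>/dev/null; then
      echo "licence marker in: $file -- review before merging" >&2
      found=1
    fi
  done
  return $found
}

# Example CI gate: fail the build if any staged file carries a marker.
# check_licences $(git diff --cached --name-only --diff-filter=ACM)
```

Wired into a pre-commit hook or a pipeline step, this turns "watch for suspicious licence headers" from a manual habit into an automatic gate.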
📚 3. Maintain usage policies
- Define when AI tools can and cannot be used (e.g. not in core IP or patented code)
- Track model usage and train teams to review AI output critically
🛡 4. Address it in contracts
For client work or commercial products, ensure deliverables include:
- Warranties of originality
- Indemnity for IP issues
- Disclosure of AI involvement if relevant
When to Avoid AI-Generated Code
Use extra caution when:
- Developing core IP, patents, or client-critical software
- Operating in regulated industries like finance or healthcare
- Building software where open source obligations would be unacceptable
In those cases, consider disabling AI suggestions or using them only for exploratory work, not production code.
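Disabling suggestions can be enforced per repository rather than left to individual discipline. A sketch for GitHub Copilot in VS Code, using Copilot's documented `github.copilot.enable` setting (verify the key against your Copilot version):

```shell
# Turn Copilot completions off for everyone who opens this repository
# in VS Code by committing a workspace setting. "github.copilot.enable"
# is Copilot's documented per-language toggle; "*" sets the default.
mkdir -p .vscode
cat > .vscode/settings.json <<'EOF'
{
  "github.copilot.enable": {
    "*": false
  }
}
EOF
```

Committing the setting makes the policy visible in code review and applies automatically to every clone, rather than relying on each developer's local configuration.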
Legal Landscape: Still Evolving
The law is trying to catch up:
- The EU AI Act includes provisions on traceability and transparency
- In the US and UK, copyright regulators are investigating AI authorship
- Lawsuits (e.g. against GitHub and OpenAI) could shape how AI training and output are regulated
For now, you carry the risk, so policies and process matter more than ever.
The Future: Trusted AI Tools with Built-In Governance
As adoption grows, the winners in this space will be those who offer:
- Clear audit trails
- Transparent model training disclosures
- Enterprise licensing and indemnity
- Private deployment options
- Fine-tuned models trained only on your codebase
This is where tools like GitHub Copilot Enterprise, Cursor, and self-hosted models (e.g. using Azure OpenAI with your own vector database) are gaining traction.
In Summary: Build with AI, But Build Smart
AI-powered coding tools are here to stay. They boost velocity, improve quality, and reduce boilerplate, but they don’t remove your responsibilities.
- Own your process
- Validate your output
- Secure your rights
- Protect your clients
With the right governance, AI can be a powerful co-pilot, not a legal landmine. For a look at how this plays out in practice, read about how we ship AI in real-world delivery. For a broader view of what goes wrong when AI-generated code lacks human oversight, read why we don’t let AI ship code unsupervised.
Frequently Asked Questions
What contract clauses should I include when outsourcing work that uses AI coding tools?
Require a warranty of originality, an IP indemnity clause, and mandatory disclosure of AI tool usage. Add a provision that all AI-generated code must pass an open-source licence scan before delivery. These clauses shift liability to the party best placed to control the risk and give you a clear audit trail.
Can I get insurance to cover IP claims from AI-generated code?
Some technology errors and omissions (E&O) policies now include coverage for IP infringement arising from AI-assisted development. Coverage varies widely between insurers, so ask your broker specifically about AI code indemnity. GitHub Copilot for Business also offers a limited IP indemnity for paid subscribers.
Which tools can detect open-source licence contamination in AI output?
FOSSA, Snyk, Black Duck, and GitHub Advanced Security all scan codebases for snippets matching known open-source projects. Run these scans as part of your CI/CD pipeline so every pull request is checked automatically. Pair automated scanning with periodic manual reviews for highest confidence.
Does GitHub Copilot's indemnity actually hold up in a real dispute?
Copilot’s indemnity covers paid Business and Enterprise users against third-party IP claims, provided you have the duplicate-detection filter enabled. It has not yet been tested in court. Treat it as one layer of protection, not a complete shield, and combine it with your own licence scanning and contractual safeguards.
How do I audit whether my development team is following our AI code policy?
Use a combination of tooling telemetry, code provenance metadata, and regular spot checks. Most AI coding tools log usage data that admins can review. Integrate licence-scanning gates into your CI/CD pipeline so non-compliant code is flagged before it reaches production.
Who owns code written by Claude Code?
Anthropic’s terms of service assign ownership of Claude Code outputs to the user. You retain the contractual right to use, modify, and distribute the code. However, copyright protection is a separate question. In most jurisdictions, copyright requires human authorship, so the level of human involvement in directing and reviewing the output determines whether copyright applies. Treat Claude Code as a powerful assistant, not the author.
Is Claude Code output copyrightable?
It depends on the degree of human creative input. If a developer provides detailed direction, reviews the output, and makes meaningful modifications, the resulting code is more likely to qualify for copyright protection. Code generated with minimal human oversight is harder to protect. The safest approach is to always review, edit, and document your involvement with any AI-generated code.
Going deeper: UK law specifically
If your development team is based in the UK, or you are procuring software from UK suppliers, the legal position on AI-generated code has some important differences from the US and EU. The UK’s Copyright, Designs and Patents Act 1988 includes a specific provision for computer-generated works that no other major jurisdiction has matched. Our companion article on AI code ownership under UK law covers the CDPA section 9(3) provision in detail, along with employment contract clauses and a practical IP policy template for enterprise teams.
Control your own destiny
Talk Think Do is an industry-leading cloud application development company, offering application innovation services that support clients from project discovery to post go-live support. Our expertise extends to developing software for various operating systems, ensuring seamless integration and performance across different platforms.
During the discovery phase, we work alongside clients to fully define and clarify the goals of the project, ensuring they receive an application that meets all of their unique business requirements. We can advise on whether you might benefit from owning the source code of your application, helping to minimise delivery risk and ensure that every decision is made with your best interests at heart. Book a consultation today to discuss how our application innovation service could help you.
