
What You Can (and Can't) Legally Copy When Replacing SaaS: A UK Guide for CTOs

By Louise Clayton · 20 min read

UK law protects how software is written, not what it does. You can build a product with the same functionality as HubSpot, BambooHR, or any other SaaS tool, provided you do not copy their code or distinctive design. The principle is well established: the Copyright, Designs and Patents Act 1988 (CDPA) and EU case law both confirm that functionality is not copyrightable. Clean-room implementation, terms of service (TOS) review, and data portability planning are the three pillars of a defensible replacement project.

The Copyright, Designs and Patents Act 1988 (CDPA) protects software as a literary work. That protection covers the expression of software: the source code, the object code, and preparatory design materials. It does not protect the underlying functionality, the ideas, or the methods of operation.

This distinction matters enormously for SaaS replacement projects. If copyright protected what software does (rather than how it is written), building a competitor to any existing product would be illegal. That is not the law.

Two cases make this clear.

Navitaire v easyJet (2004)

Navitaire developed an airline booking system. easyJet hired a separate company to build a replacement that replicated the user-facing functionality. Navitaire sued, arguing that copying the look and behaviour of its system infringed copyright.

The High Court ruled that the user interface screens, considered independently from the underlying code, were not protected as literary works. Replicating what the software did, without copying how it was written, did not amount to infringement. The judge noted that copyright cannot grant a monopoly over the functionality of a computer program.

SAS Institute v World Programming Ltd (2012)

SAS developed a statistical analysis language. World Programming Ltd (WPL) built a competing product that could run programs written in the SAS language. SAS argued that WPL had copied the functionality of its software.

The Court of Justice of the European Union (CJEU) confirmed the principle. Neither the functionality of a computer program, nor the programming language, nor the format of data files used by a program is a form of expression protected by copyright. WPL was entitled to study how the SAS system behaved and build its own implementation.

The practical principle

You can observe what software does, from its public-facing behaviour and published documentation, and build your own version that achieves the same result. What you cannot do is copy the source code, reproduce internal documentation, or lift distinctive creative elements such as bespoke graphic designs.

This is the legal foundation for every SaaS replacement project. The remaining questions are about process, contracts, and risk management.

What about terms of service restrictions?

Copyright law may permit you to build a functional equivalent, but your contract with the SaaS vendor might add restrictions that copyright does not.

Common contractual clauses

Many SaaS agreements include one or more of the following:

  • Non-compete clauses: restrictions on building or using a competing product while the SaaS subscription is active
  • Non-replication clauses: prohibitions on developing software that replicates the vendor’s product or specific features
  • Reverse engineering restrictions: prohibitions on decompiling, disassembling, or reverse engineering the software
  • Non-solicitation clauses: restrictions on hiring the vendor’s staff (relevant if you want to recruit someone who knows the system intimately)

These are contractual restrictions, not copyright protections. Their enforceability depends on how they are drafted and the circumstances of your case.

Enforceability under English law

English law treats clauses that restrict trade with scepticism. A non-compete or non-replication clause that is too broad, too long, or disproportionate to the vendor’s legitimate interests may be unenforceable as an unreasonable restraint of trade.

For example, a clause that says “the customer shall not develop any software that performs any function available in the vendor’s platform for a period of five years after termination” would likely be challenged as disproportionate. A narrower clause restricting the customer from building a direct clone using knowledge gained from proprietary API documentation might fare better.

The critical point is that enforceability is assessed on the specific wording. Have a solicitor review the relevant TOS before the replacement project begins. This is not an area where general principles substitute for a specific legal opinion.

Statutory override: decompilation for interoperability

Section 50B of the CDPA 1988 permits decompilation of a computer program where necessary to achieve interoperability with an independently created program. This is a narrowly defined right: it applies to interoperability, not to general reverse engineering or feature replication. It does not override all contractual restrictions, but a contractual clause that attempts to prohibit decompilation for interoperability may be void under section 296A of the CDPA.

In practice, most SaaS replacement projects do not need to decompile anything. You are building a new product, not modifying the old one. But if data migration requires understanding a proprietary file format, this provision may be relevant.

How does clean-room implementation work in practice?

Clean-room implementation is the standard method for building software that replicates functionality without copying expression. The concept is straightforward: the developers who write the new software must never have seen the source code of the original.

The two-team model

The process requires a separation between the specification team and the build team.

The specification team analyses the existing SaaS product from publicly available information only: the user interface, published API documentation, help articles, marketing materials, and their own experience as users. They document what the software does, the workflows it supports, and the data structures it exposes through its public interfaces. They do not access source code, internal documentation, or proprietary technical materials.

The build team receives the functional specification and builds the replacement software from scratch. They have no access to the original product, its documentation, or its codebase. Their work is informed entirely by the specification document.

Why the separation matters

If a vendor alleges that your replacement infringes their copyright, the clean-room process is your primary defence. It demonstrates that the build team could not have copied protected expression because they never had access to it.

Without this separation, even an independently written product can be harder to defend. A developer who has read the original source code and then writes functionally similar code creates an ambiguity: did they copy, or did they independently arrive at the same solution? Clean-room implementation removes that ambiguity.

Documenting the process

Documentation is what turns a clean-room process from a good idea into a legal defence. Record:

  • Who was on each team, with confirmation that no individual served on both
  • What sources the specification team used (list every document, URL, and interface they consulted)
  • What they explicitly excluded (source code, internal wikis, proprietary API documentation behind authentication)
  • The handover: when the specification was delivered, in what form, and the confirmation that no additional materials were shared
  • Version control history showing the build team’s independent development
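One lightweight way to keep this record is as structured data rather than loose notes. The sketch below is illustrative only: the team members, sources, and field names are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CleanRoomRecord:
    """Contemporaneous record of the clean-room boundary (illustrative)."""
    spec_team: set            # who analysed the original, from public sources only
    build_team: set           # who wrote the replacement, from the spec alone
    sources_consulted: list   # every document, URL, and interface the spec team used
    sources_excluded: list    # materials explicitly off-limits
    handover_date: date
    handover_artifacts: list  # e.g. the functional specification document

    def boundary_intact(self):
        # No individual may serve on both teams.
        return not (self.spec_team & self.build_team)

record = CleanRoomRecord(
    spec_team={"A. Analyst", "B. Writer"},
    build_team={"C. Developer", "D. Developer"},
    sources_consulted=["public help centre", "published API documentation"],
    sources_excluded=["vendor source code", "authenticated API documentation"],
    handover_date=date(2026, 4, 1),
    handover_artifacts=["functional-spec-v1.pdf"],
)
```

Checking `record.boundary_intact()` on every update catches the most common failure mode — a specification-team member drifting into build work — before it undermines the defence.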

How Talk Think Do structures this

When we deliver SaaS replacement projects, we formalise the clean-room boundary as part of the project setup. The specification phase produces a functional requirements document derived solely from public-facing behaviour and client-provided business knowledge. The development team works from that document, with AI-assisted development tools configured to exclude any proprietary code from the client’s existing vendor relationship.

What are my data portability rights?

Replacing a SaaS product is not just about building new software. You also need your data. UK data protection law provides some rights here, but they are narrower than many CTOs expect.

UK GDPR Article 20: data portability

Article 20 of the UK General Data Protection Regulation (GDPR) gives data subjects the right to receive their personal data in a structured, commonly used, and machine-readable format. This right applies when processing is based on consent or contract performance, and the processing is carried out by automated means.

For a business replacing a SaaS tool, this means you can request a machine-readable export of personal data that you provided to the platform. Employee records in an HR system, customer contact details in a CRM, and student assessment data in an education platform all fall within scope.

What Article 20 does not cover

The portability right has significant limitations:

  • Derived data: analytics, scores, predictions, and enrichments generated by the SaaS platform from your raw data are generally not covered. The vendor created these, and Article 20 applies to data “provided by” the data subject.
  • Non-personal data: business metrics, aggregated reports, and operational data that cannot be linked to an identifiable individual fall outside GDPR entirely.
  • Technical feasibility: the right requires data to be provided in a structured, machine-readable format, but it does not require the vendor to build a custom export tool. If the vendor’s standard export is a CSV dump, that may satisfy the obligation.
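Before committing to a migration, it is worth sanity-checking the vendor's standard export against the fields you actually need. A minimal sketch in Python — the HR-export column names here are assumptions for illustration, not any real vendor's schema:

```python
import csv
import io

# Hypothetical expected schema for an HR data export.
EXPECTED_COLUMNS = {"employee_id", "name", "email", "start_date"}

def check_export(csv_text):
    """Return the expected columns missing from a CSV export's header row."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    return EXPECTED_COLUMNS - set(header)

sample = "employee_id,name,email\n1,Ada,ada@example.com\n"
missing = check_export(sample)  # the export omits 'start_date'
```

A gap surfaced at this stage is a negotiating point at renewal; the same gap discovered mid-migration is a project delay.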

Practical challenges

Even where you have a legal right to export, practical obstacles are common:

  • API rate limits that throttle bulk data extraction
  • Proprietary data formats that require transformation before the data is usable in a new system
  • Incomplete exports where the standard export tool omits metadata, relationships, or historical records
  • Contractual export fees for data volumes above the standard tier
  • Retention periods where historical data is purged on a rolling basis
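Rate limits and pagination in particular are worth rehearsing before the real extraction. The sketch below simulates a paginated export with exponential backoff; the page size, empty-page termination, and rate-limit signal are assumptions for illustration, not any real vendor's API.

```python
import time

PAGE_SIZE = 100  # assumed page size; real vendors vary

def export_all(fetch_page, max_retries=5):
    """Pull every page of an export, backing off when rate-limited.

    `fetch_page(page)` must return (records, rate_limited); an empty
    page signals the end of the export.
    """
    records, page = [], 0
    while True:
        retries = 0
        while True:
            batch, rate_limited = fetch_page(page)
            if not rate_limited:
                break
            retries += 1
            if retries > max_retries:
                raise RuntimeError("rate-limit retries exhausted")
            time.sleep((2 ** retries) * 0.01)  # exponential backoff (shortened for demo)
        if not batch:
            return records
        records.extend(batch)
        page += 1

def make_simulated_api(total=250):
    """Fake vendor API for testing: every third call hits the rate limit."""
    data, calls = list(range(total)), {"n": 0}
    def fetch_page(page):
        calls["n"] += 1
        if calls["n"] % 3 == 0:
            return [], True
        start = page * PAGE_SIZE
        return data[start:start + PAGE_SIZE], False
    return fetch_page

records = export_all(make_simulated_api())
```

Running the extraction against a test export first reveals throttling behaviour and missing records while there is still time to negotiate, rather than on the weekend of the cutover.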

Plan early

The data export strategy should be defined before the replacement project begins. If you are mid-contract with the SaaS vendor, negotiate explicit data export provisions at the next renewal. Specify the format, the scope (including derived data if needed), and the timeline. This is cheaper and faster than relying on statutory rights after you have decided to leave.

What are the IP risks of AI-assisted development?

AI-assisted development tools (Claude Code, Cursor, GitHub Copilot) are now standard in most development workflows. When building a SaaS replacement, they raise two distinct concerns. The first is that AI models trained on public code repositories might inadvertently reproduce fragments of the vendor’s code. The second, and increasingly common, is that developers deliberately feed the vendor’s UI into an AI tool and ask it to rebuild the interface.

The training data risk

Large language models (LLMs) are trained on public code repositories. If the SaaS vendor’s code, or code substantially similar to it, exists in that training data, the model could generate output that resembles the original. This would undermine the clean-room process even if the human developers never saw the original code.

The probability of this happening with a bespoke SaaS product (as opposed to a widely used open-source library) is low but not zero. The risk is higher for code that follows common patterns in a specific domain.

The screenshot-to-code problem

A more immediate risk comes from a practice that has become routine in vibe coding workflows. A developer screenshots a SaaS product’s interface, pastes it into an AI tool like Cursor or Claude, and prompts: “build something like this.” The AI generates working code that reproduces the layout, colour choices, component arrangement, and interaction patterns visible in the screenshot.

This is not a theoretical concern. It is how a significant proportion of AI-assisted UI development now works. The developer may not think of it as copying, because the AI produces new code. But the legal analysis is straightforward: the AI has directly consumed the vendor’s protectable expression and produced output derived from it.

The clean-room principle depends on a separation between observing what software does and accessing how it is expressed. When you feed a screenshot into an LLM, you are handing the “build team” a copy of the protected expression. The output is informed by specific layout compositions, colour palettes, icon placements, typography, and micro-interactions that may attract copyright or design right protection. The fact that the code is technically new does not matter if the visual output reproduces protectable design elements.

The derivative works risk

Even when AI-generated output is not a pixel-perfect copy, it can still constitute a derivative work. Under the CDPA, a derivative work is one that incorporates a substantial part of an existing protected work. “Substantial” is assessed qualitatively, not just quantitatively. A distinctive colour scheme, a recognisable layout composition, or a characteristic animation sequence could be substantial even if the overall interface looks different.

AI paraphrasing does not make infringement disappear. If the output is close enough to the original that a reasonable observer would recognise the source, the risk of a derivative works claim is real. This applies to UI design, marketing copy, help text, and onboarding flows that a developer might feed into an AI tool as reference material.

What is safe to feed into AI tools

The distinction is between unprotectable functionality and protectable expression:

  • Safe: your own written notes describing what the software does, the workflows it supports, and the business problems it solves. Feature lists derived from your experience as a user. Public API documentation. Generic UX patterns (e.g. “a filterable data table with export functionality”).
  • Not safe: screenshots of the vendor’s interface. Exported HTML or CSS from the vendor’s application. The vendor’s marketing copy, help articles, or onboarding text used as direct input. Proprietary API schemas accessed behind authentication.

The rule is simple: if it is protectable expression, do not feed it to the build team’s AI tools. Describe what you need in your own words. Let the AI generate a fresh design informed by standard UI patterns, not by a copy of the original.

Mitigation for AI-generated code

Treat AI-generated code with the same rigour as any other contribution:

  • Use enterprise-grade AI tools with intellectual property (IP) indemnity provisions. GitHub Copilot Business, Anthropic’s commercial API terms, and similar enterprise agreements typically include indemnity clauses that shift some liability to the tool provider.
  • Run licence scans on all generated code. Integrate tools like FOSSA, Snyk, or Black Duck into your CI/CD pipeline to flag code that matches known open-source fragments. This catches potential licence violations before they reach production.
  • Document the AI tools used and their configuration. Record which models, which versions, and which settings were active during development. This creates provenance documentation for the generated code.
  • Maintain the clean-room boundary for AI tools. Do not feed the original vendor’s source code, screenshots, marketing copy, or proprietary API schemas into the AI tool as context. The model’s output is only as clean as its input.
  • Review AI-generated UI for visual similarity. Before shipping, compare the generated interface against the original. If a reasonable observer would recognise the source, redesign the distinctive elements.
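Dedicated tools such as FOSSA or Snyk do the heavy lifting here, but the underlying idea of a similarity scan can be illustrated with the Python standard library. This is a toy check that flags generated code closely resembling a known fragment; it is not a substitute for a real licence scanner, and the code fragments are invented for the example.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Ratio in [0, 1] of how closely two code fragments match."""
    return SequenceMatcher(None, a, b).ratio()

def flag_matches(generated, known_fragments, threshold=0.8):
    """Return the known fragments that the generated code closely resembles."""
    return [f for f in known_fragments if similarity(generated, f) >= threshold]

# Hypothetical fragments you have flagged as off-limits or licence-sensitive.
known = [
    "def total(items):\n    return sum(i.price for i in items)",
    "class Invoice:\n    def __init__(self, lines):\n        self.lines = lines",
]
generated = "def total(items):\n    return sum(item.price for item in items)"
hits = flag_matches(generated, known)  # the near-identical first fragment is flagged
```

Production scanners match against vast corpora of licensed code and understand tokenisation and renaming; the point of the sketch is only that "technically new code" can still be measurably close to its source.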

For a detailed treatment of AI code ownership and IP attribution, see our guide on who owns AI-written code and the UK legal guide for CTOs on AI code ownership.

Can AI replace the clean-room teams?

The clean-room process described earlier in this guide assumes human teams. A specification team analyses the original product. A build team implements from the specification. The separation between them is the legal defence. In early 2026, a new question emerged: can AI models serve as those teams?

What AI-assisted clean room looks like

The concept is straightforward. One LLM (acting as the specification team) analyses publicly available information about a software product: its user interface, published documentation, API behaviour, and feature descriptions. It produces a functional specification. A second LLM (acting as the build team), with no access to the original product or the first model’s source materials, implements new software from that specification alone. A human orchestrates the process and reviews the output.

In March 2026, a project called MALUS demonstrated this approach as a “clean room as a service.” It used two separate large language models: one to extract functional specifications from source code, and another to implement new code from those specifications. The project was satirical in intent, but technically functional. It highlighted a capability that now exists for anyone with access to commercial AI tools.

Days later, the point was made concrete. When the source code of a major AI coding tool was inadvertently published through a build error, a developer executed a clean-room rewrite of the entire architecture in approximately two hours using AI agent coordination. The resulting project reimplemented the core modules (CLI, query engine, tool execution, multi-agent orchestration) in a different language, from a functional specification derived from public behaviour alone. It attracted over 70,000 GitHub stars within days.

Why it is legally untested

No UK or EU court has ruled on whether an AI model can serve as the “clean team” in a clean-room process. The traditional defence rests on demonstrating that the build team had no access to the original’s protected expression. When the build team is an LLM, this creates several unresolved questions.

Training data contamination. Most large language models have been trained on substantial portions of publicly available code. If the SaaS vendor’s code (or code closely resembling it) exists in the model’s training data, the “clean team” LLM may have already been exposed to the protected expression. Whether training-data exposure is legally equivalent to a human developer having read the original code is an open question. UK IP firm Marks & Clerk flagged this in April 2026 as a genuine grey area that copyright law “struggles to handle.”

The speed question. Courts have historically viewed clean-room claims with greater scepticism when the rewrite was suspiciously fast and clearly triggered by access to the original. A two-hour rewrite, executed in direct response to a source code leak, is an unusual fact pattern for a clean-room defence. Whether speed alone undermines the defence is untested, but it increases scrutiny.

AI-generated originals. If substantial portions of the original software were themselves AI-generated, their copyright status may be weaker. Under UK law (CDPA s.9(3)), the author of a computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken.” If the original code was produced with minimal human creative direction, it may attract limited or no copyright protection. This weakens any infringement claim against the clean-room rewrite, but it also means the rewrite itself may be similarly unprotectable.

How to use AI safely in a clean-room process

The uncertainty does not mean AI has no role. It means the process needs structure to remain defensible.

Preserve the human specification step. Use AI to accelerate the build, not to replace the separation. The specification should still be produced by a human team working from publicly available information. This is the step that creates the defensible boundary. Let AI assist with drafting and organising the specification, but the humans must determine what goes into it and verify that no protected expression has been included.

Keep the build-team LLM clean. Do not feed the original vendor’s source code, screenshots, internal documentation, or proprietary API output into the model used for implementation. Provide only the functional specification and standard technical references (framework documentation, API standards, design system guidelines). Document what was and was not provided as context.

Document the AI workflow. Record which models were used at each stage, what context was provided to each, and the separation between them. This is the AI equivalent of the two-team documentation described in the clean-room section above. If challenged, you need to demonstrate that the build-phase model received only the specification, not the original expression.

Run licence and similarity scans on all output. Automated licence scanning (FOSSA, Snyk, Black Duck) catches code fragments that match known repositories. For UI output, conduct a visual comparison against the original to identify any protectable elements that may have been reproduced.

Treat the result as higher-risk than a traditional clean room. Until courts rule on AI-assisted clean-room claims, the defence is weaker than a traditional two-human-team process. Budget for additional legal review and be prepared to demonstrate the separation more thoroughly than you would with human teams alone.

How Talk Think Do structures AI-assisted clean room

When we deliver SaaS replacement projects, the specification phase is always human-led. Our team analyses the existing product from public-facing behaviour and the client’s business knowledge, producing a functional requirements document. AI tools assist with drafting and organising, but do not replace the human judgement about what constitutes protectable expression versus unprotectable functionality.

The development phase uses AI-augmented development with strict context controls. The AI tools receive the functional specification, framework documentation, and our own codebase. They do not receive the vendor’s source code, screenshots, or proprietary documentation. Licence scanning runs on every pull request, and we document the AI tools, model versions, and context provided at each stage. For projects where the clean-room boundary is commercially important, we recommend independent legal review of the documentation before the build phase begins.

Can I replicate the UI, or just the functionality?

The answer depends on what you mean by “replicate.” Functional equivalence is fine. Visual copying is not.

What is not protectable

Generic user interface patterns are not protected by copyright or design rights. Tables, forms, navigation bars, dashboards with charts, list views with filters, and CRUD (create, read, update, delete) workflows are all standard patterns used across thousands of applications. Building a dashboard that displays the same data categories as the original SaaS tool does not infringe anything.

Workflow logic is similarly unprotectable. If the original tool follows a three-step process (select criteria, review results, export report), building a tool with the same three-step process does not copy any protected expression.

What may be protectable

Distinctive visual design can attract protection as an artistic work under the CDPA or as a registered or unregistered design right. This includes:

  • Bespoke icons and illustrations created specifically for the product
  • Distinctive colour schemes and typography combinations that form a recognisable brand identity
  • Unique layout compositions that go beyond standard UI conventions
  • Animation sequences and micro-interactions with creative originality

The threshold is originality: the design must be the author’s own intellectual creation. A standard Bootstrap layout is not original. A highly distinctive, custom-designed interface may be.

Practical advice

Build your own design system. Use established UI frameworks (Tailwind, Material Design, Fluent UI) as a starting point and apply your own branding. Do not screenshot the original product and attempt to match it pixel by pixel.

If you want the replacement to feel familiar to users, focus on preserving the workflow and information architecture rather than the visual design. Users adapt to new visual styles quickly when the workflow is consistent.

The pre-project checklist

Before starting a SaaS replacement project, work through each of these steps:

  1. Review the existing SaaS terms of service. Identify any non-compete, non-replication, or reverse engineering restrictions. Flag these for solicitor review.
  2. Check for non-compete and non-solicitation clauses. Determine whether they apply during the subscription term only or survive termination. Assess breadth and likely enforceability.
  3. Establish the clean-room process. Define the specification team and build team. Document the boundary. Confirm that no individual serves on both teams.
  4. Audit your data sources. Catalogue the data held in the SaaS platform. Identify which data is covered by UK GDPR portability rights and which is not.
  5. Plan the data export. Request a test export from the vendor. Assess the format, completeness, and any gaps. Negotiate enhanced export provisions if needed.
  6. Configure AI tools for clean-room compliance. Ensure no proprietary code, screenshots, marketing copy, or internal documentation is fed into AI development tools as context. Enable licence scanning in the CI/CD pipeline.
  7. Ban screenshot-to-code from the vendor’s product. Explicitly prohibit the practice of screenshotting the existing SaaS interface and feeding it into AI tools. Describe the required functionality in your own words instead.
  8. Run an IP scan on all AI-generated code. Integrate automated licence and provenance scanning. Review flagged items before merging.
  9. Compare AI-generated UI against the original. Before shipping, visually compare the replacement interface against the vendor’s product. Redesign any elements where a reasonable observer would recognise the source.
  10. Document everything. Maintain a contemporaneous record of the specification sources, the team separation, the AI tools and context used, and the development process.
  11. Get legal sign-off. Have a solicitor qualified in intellectual property law review the TOS analysis, the clean-room documentation, and the data export plan before development begins.

Where to go from here

This guide covers the legal framework. The practical question of whether to replace a SaaS tool, and how to structure the project if you do, is covered in our pillar guide to replacing SaaS with custom AI-built software.

The legal assessment is one part of a broader decision. Costs, timeline, team capacity, and ongoing maintenance obligations all factor in. But the legal picture is clearer than many CTOs assume: UK law gives you broad freedom to build software that does what an existing product does, provided you respect the boundaries around code, design, contracts, and data.

If you are considering a SaaS replacement and want to discuss the legal and technical dimensions together, book a consultation.


This guide provides general information about UK copyright law as it applies to software functionality. It is not legal advice. The cases and principles discussed here reflect the law as at April 2026. Consult a solicitor qualified in intellectual property law for advice specific to your circumstances.

Frequently asked questions

Is it legal to build software that replicates SaaS features?
In most cases, yes. UK and EU copyright law (confirmed in Navitaire v easyJet and SAS Institute v WPL) protects the expression of software, not its functionality. You can legally build software that does the same thing as an existing product, provided you do not copy the code, UI design assets, or other protected expression. Always review the SaaS vendor's terms of service for non-compete or non-replication clauses before starting.
What is clean-room implementation and why does it matter?
Clean-room implementation means the developers writing the replacement software have never seen the source code of the original. A separate team documents the required functionality from public information and user experience alone, then hands that specification to the development team. This creates a defensible paper trail showing independent creation, which is the strongest protection against copyright infringement claims.
Can a SaaS vendor's terms of service prevent me from building a competing product?
Some SaaS agreements include non-compete or non-replication clauses. These are contractual restrictions, not copyright protections, and their enforceability depends on how broadly they are drafted and whether they amount to an unreasonable restraint of trade under English law. Review your specific agreement with a solicitor before proceeding.
Does UK GDPR give me the right to export my data from a SaaS platform?
UK GDPR Article 20 grants data portability rights for personal data processed on the basis of consent or contract performance. This gives you the right to receive your data in a structured, machine-readable format. However, it does not cover all data types (e.g. derived analytics or proprietary enrichments), and the vendor may charge for exports beyond the standard API. Plan your data migration strategy early.
What are the IP risks of using AI to build a SaaS replacement?
AI-generated code carries the same IP considerations as any SaaS replacement. The key risk is that AI models trained on public code repositories might reproduce fragments of copyrighted code. Mitigate this by using enterprise-grade AI tools with indemnity provisions, running licence scans on all generated code, and documenting your clean-room process. See our guide on AI code ownership for a detailed treatment.
Is it legal to screenshot a SaaS product and ask AI to rebuild the interface?
This is legally risky. When you feed a screenshot into an AI tool and prompt it to reproduce the interface, the AI has directly consumed protectable expression (the visual design). Its output is derived from that expression, even though the generated code is technically new. The result may constitute a derivative work if it reproduces distinctive layout compositions, colour schemes, or design elements. The safe approach is to describe the functionality you need in your own words and let the AI generate a fresh design from standard UI patterns.
Can AI tools serve as the 'clean team' in a clean-room implementation?
This is legally untested. No UK or EU court has ruled on whether an LLM can replace the human build team in a clean-room process. The key concern is training data contamination: if the model was trained on the vendor's code or similar code, the separation may not hold. Until courts provide clarity, use AI to accelerate the build phase rather than replace the human-led specification step. Document the AI tools used, the context provided, and the separation between specification and implementation.
Can I copy a SaaS product's user interface design?
Generally, no. While you can replicate the same functionality and workflow patterns, copying specific UI design elements (icons, layouts, colour schemes, typography choices) may infringe on design rights or copyright. Build your own interface informed by good UX practice, not by pixel-matching the original. Common UI patterns (tables, forms, dashboards) are not protectable, but a distinctive visual design may be.

Ready to transform your software?

Let's talk about your project. Contact us for a free consultation and see how we can deliver a business-critical solution at startup speed.