The GDS Service Standard for Private-Sector Delivery Teams
The UK Government’s GDS Service Standard is one of the best-articulated public statements of good digital delivery practice in the world. Most of it transfers directly to private-sector work. A few points need adapting for commercial context. One point, “make source code open”, frequently does not apply and is worth discussing on its own terms. This guide walks through what transfers, what adapts, what does not, and how a private-sector team can build its own adapted standard without the overhead of formal government service assessments.
Why should private-sector delivery teams care about the GDS Service Standard?
The Government Digital Service (GDS) Service Standard is a set of 14 points the UK Government uses to assess digital services built or funded by central government. It is supported by the GDS Service Manual, the Technology Code of Practice, and the GOV.UK Design System. Together, they form one of the most mature, open, and well-reasoned sets of digital delivery guidance published anywhere.
The Standard is unusual in three respects. It is written in plain English rather than enterprise-architect dialect. It is rooted in user research and iterative delivery rather than requirements-and-specification thinking. And it is backed by a real assessment process that has, for over a decade, blocked government services from launch when they fail it. That last point is what makes the Standard unusually honest. Principles that have been argued out in front of assessors, with a live service on the line, tend to be sharper than principles written in a whitepaper.
Private-sector teams deliver software under different constraints: commercial imperatives, shorter investment horizons, proprietary intellectual property, competitive pressure. None of those constraints invalidate good delivery practice. They do, however, mean that adopting the Service Standard wholesale, including formal service assessments and developing in the open, is usually the wrong call. The sensible move is to identify which practices transfer directly, which need adapting, and which are genuinely specific to the public-sector context. That is what the rest of this guide does.
For a companion treatment focused on how AI tooling, agent rules and skills (in tools like Claude Code and Cursor), and the GOV.UK Design System help a delivery team meet the Service Standard in full, see Using AI to Meet the GDS Service Standard.
Which Service Standard practices transfer directly to private-sector work?
Eleven of the 14 points transfer without adjustment. The underlying practice is the same whether the service is a child-benefit application or a fintech onboarding flow. Each of the following is a Service Standard point whose core practice applies to commercial software with no material change.
Understand users and their needs (Point 1). User research produces better services, commercial or public. Interviews, usability testing, journey mapping, and research synthesis are the same techniques whether the user is a citizen or a customer. Private-sector product teams that skip this point routinely ship beautiful services that nobody wants. The Service Manual’s research guidance is, if anything, more honest about the discipline’s limits than most commercial writing.
Solve a whole problem for users (Point 2). Customers, like citizens, rarely arrive at a product with a single task in mind. They arrive with a goal and a context. Thinking beyond the login-click-submit flow to the surrounding journey is universal practice, and the Service Standard articulates it better than most.
Provide a joined-up experience across channels (Point 3). Omnichannel delivery is a standard commercial discipline. The Service Standard’s framing (that a user should not notice which part of the organisation they are interacting with) reads as good customer experience strategy.
Make the service simple to use (Point 4). Simplicity is table stakes commercially. The GOV.UK content style and the “plain English” discipline in the Service Manual are directly applicable to product copy, microcopy, and support content.
Make sure everyone can use the service (Point 5). Accessibility is increasingly required commercially through EN 301 549, the European Accessibility Act, and various regional equivalents. The Service Standard’s accessibility expectation (that the service genuinely works for disabled users, not that an audit has been completed) transfers exactly.
Have a multidisciplinary team (Point 6). The composition expectation (user researcher, designer, content specialist, engineer, product manager, delivery manager) is good commercial practice. Matrix organisations often struggle here; the Service Standard framing gives a clean vocabulary for challenging under-resourced teams.
Use agile ways of working (Point 7). Iterative delivery, small batch size, and regular user feedback are standard commercial agile practice. The Service Standard articulates them more strictly than most private-sector implementations bother to.
Iterate and improve frequently (Point 8). Continuous delivery, feature flags, small pull requests, short release cycles, and fast rollback are standard engineering practice. The Service Standard makes them explicit.
Create a secure service (Point 9). Secure-by-design, threat modelling, least-privilege access, secrets management, and vulnerability management are required commercially. The specific references (CSA STAR, ISO 27001, Cyber Essentials) are often already in place for regulated private-sector work.
Use and contribute to open standards, common components and patterns (Point 13). Adopting open data, API, and interoperability standards reduces lock-in, lowers integration costs, and keeps a service’s architecture recognisable to any engineer who joins. Contributing fixes and improvements back to shared libraries is table-stakes engineering hygiene commercially. It is often the most practical way to live the spirit of Point 12 (make new source code open) without publishing whole product codebases. The reference standards a private-sector team adopts will typically be a mix of:
- GOV.UK Design System patterns (under the Open Government Licence)
- Industry standards (OpenAPI, JSON Schema, OpenTelemetry, Open Referral, iCal)
- The company’s own shared component library
Operate a reliable service (Point 14). Monitoring, structured logging, health checks, incident response, and post-incident reviews are universal engineering practice. The public-sector framing around service-level expectations reads as good commercial site-reliability engineering.
Eleven points, no adaptation needed. For a private-sector team starting from scratch, adopting these eleven points as an explicit standard gives you most of the benefit of the full Service Standard without any of the public-sector overhead.
Which practices need adapting for private-sector context?
Three points need modification rather than wholesale adoption. The underlying intent is universal; the specific reference or expectation needs translating.
Define what success looks like and publish performance data (Point 10). The Service Standard requires publication of performance data on a public GOV.UK dashboard. The underlying practice (agreeing explicit service-level measures and tracking them) is universal. For private-sector teams, the adapted form is a customer-facing status page, internal dashboards for leadership, and a regular operational review. The “public” element becomes “auditable to customers and regulators” rather than “visible to the world”. This is a sensible adaptation, not a downgrade.
Choose the right tools and technology (Point 11). The Service Standard’s Technology Code of Practice lists specific preferences: cloud-first, open standards, avoiding lock-in, documented Architecture Decision Records. All of that transfers. The one adaptation is that specific supplier references (Crown Commercial Service frameworks, G-Cloud, Digital Outcomes) do not apply commercially. The substance (prefer open standards, avoid unjustified lock-in, document architecture decisions) is universal.
Make new source code open (Point 12). This is the point that most often does not transfer, and it is discussed in its own section below. The adapted private-sector form is a deliberate open-by-default or closed-by-default policy per component, not a blanket rule.
For each of these three, the adaptation is to preserve the underlying practice while swapping the specific artefact for a commercial equivalent.
Which practices are genuinely public-sector specific?
Four elements of the full GDS framework are genuinely public-sector specific. Private-sector teams adapting the Service Standard should recognise them as such and not force-fit them.
Formal service assessments. Service assessments are conducted by a lead assessor and specialist assessors from outside the delivery team, gating the end of alpha and the end of beta before a service moves on. The process works because government services reach a mandated gate; commercial products rarely do. The private-sector adapted form is an internal review at the equivalent moments, with reviewers outside the delivery team but inside the organisation. It is useful. It is not the same thing.
Develop in the open as a default. Public-sector services are funded by taxpayers and are accountable to citizens. That rationale does not apply to commercial software. Public versus private on this point is not a matter of maturity; it is a matter of context. More on this below.
Mandatory use of the GOV.UK Design System. Central government services are required to use the GOV.UK Design System. Private-sector teams usually have their own design system and their own brand expression. The Design System remains useful as a reference, and specific components (accessible forms, error summaries, data tables, progress indicators) can be borrowed directly under the Open Government Licence. It is a library, not a requirement. More importantly, the thinking that sits behind a good design system transfers regardless of brand. A versioned component library, accessibility-tested primitives, explicit content patterns, a single error-summary convention, a single progress-indicator pattern, and a documented way to add or retire components are good engineering and design discipline in any organisation. A branded commercial app with its own design system benefits from the same underlying practice the GOV.UK Design System encodes; only the visual shell and brand vocabulary differ.
Public-sector procurement frameworks. G-Cloud, Digital Outcomes, and the wider Crown Commercial Service procurement process do not apply commercially. However, the questions in these frameworks (evidence of ISO 27001, accessibility compliance, data-protection posture, sustainable team composition) are often a useful template for commercial procurement.
A team treating the Service Standard as a starting point should expect to strip these four elements out, or replace them with commercial equivalents, rather than try to replicate them.
Why is “develop in the open” laudable but sometimes wrong for private delivery?
Point 12 of the Service Standard reads, in full: “Make all new source code open and reusable, and publish it under appropriate licences (or provide a convincing explanation as to why this cannot be done for specific subsets of the source code)”. The wording is deliberate and strong. It is, in the government context, a sound principle.
The rationale is threefold. First, taxpayers paid for the code and have a reasonable claim to see and reuse it. Second, publishing code avoids duplicated effort across public bodies that regularly solve the same problem. Third, open code invites peer review and forces a standard of quality that internal-only code often avoids.
All three reasons are genuinely good. All three have limits in a private-sector context.
Commercial intellectual property is a legitimate business asset. The code that encodes a firm’s proprietary pricing algorithm, its customer-matching model, or its bespoke workflow engine is often a source of competitive advantage. Opening it would not be a public benefit; it would be a direct transfer of value to competitors who did not invest in the work. The taxpayer-accountability rationale does not apply, because customers are not taxpayers and the revenue model is different.
The reduced-duplication argument also changes. Two government departments building near-identical case-management tools is a waste of public money. Two commercial firms building near-identical products is normal market dynamics; that is how competition works.
The peer-review argument has more force, and is worth taking seriously. Code quality does tend to improve when engineers know their work will be read by outsiders. This is an argument for a private-sector team to publish selected work (utilities, tooling, framework contributions, patterns) rather than every line of every service.
A sensible private-sector adaptation of Point 12 has four elements.
- Open by default for generic, non-differentiating code. Libraries, build tooling, configuration utilities, sample integrations, and educational content should be open unless there is a specific reason not to be.
- Closed by default for differentiating, customer-specific, or proprietary code. Core product logic, customer-tuned implementations, trade-secret algorithms, and contractual bespoke work should be closed unless there is a specific reason to open a particular component.
- Deliberate per-component decisions, not blanket policy. Every significant codebase should have an explicit position, documented in an Architecture Decision Record, on its openness. “We haven’t decided” is the anti-pattern.
- Open contribution back to dependencies. Private-sector teams should contribute fixes and improvements back to the open-source libraries they depend on. This is the cheapest and clearest form of “develop in the open” discipline, and it benefits the commons without compromising commercial position.
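Written down, the per-component decision above is only a few lines in an Architecture Decision Record. A minimal sketch, with every name, component, and detail hypothetical:

```markdown
# ADR-017: Openness of the SLA-timer library

## Status
Accepted

## Context
The SLA-timer library contains no customer-specific logic or
proprietary policy; it is a generic timing utility.

## Decision
Publish under the MIT licence in its own repository. The core
product that consumes it remains closed.

## Consequences
External issues and pull requests are triaged weekly. The library
must never grow a dependency on closed product code.
```

The value is not the template but the forcing function: a merge that cannot proceed until an openness position exists.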
This adaptation preserves the spirit of Point 12 (that code produced should be examined and, where generic, shared) while recognising the different rationale that applies to commercial work. The Service Standard itself is open about the trade-off by requiring “a convincing explanation” for any closed subset, which is exactly the discipline a private-sector team should apply.
For a related treatment of AI-generated code and its intellectual property consequences, see Who Owns AI-Written Code? and the AI code attribution guide.
What does a private-sector-adapted Service Standard look like?
A private-sector team adopting an adapted Service Standard typically ends up with an 11 + 3 model: eleven points applied directly, three points adapted to commercial context, and the public-sector-specific practices either replaced by internal equivalents or dropped. The discipline is to write this down, review it annually, and treat it as a living document rather than a one-off whitepaper. A rulepack alongside the code (for example in a Cursor .cursor/rules/ directory, or referenced from a CLAUDE.md at the repository root) is an excellent place to encode it, for exactly the reasons described in the companion guide.
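Encoded as an agent rulepack, an adapted standard might look like the sketch below. The file name, paths, and thresholds are illustrative, not prescriptive; each team's numbers will differ:

```markdown
<!-- .cursor/rules/adapted-service-standard.md (illustrative) -->
# Adapted Service Standard: delivery rules

- Accessibility: all new UI meets WCAG 2.2 AA; every pull request
  includes automated accessibility checks and notes any manual
  testing done.
- Open source: each significant component has an ADR stating its
  openness position; "we haven't decided" blocks merge.
- Iteration: prefer pull requests under 400 changed lines; gate
  unfinished work behind feature flags rather than long branches.
- Standards: new APIs are specified in OpenAPI; telemetry uses
  OpenTelemetry semantic conventions.
```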
What does this look like on a real project: an internal fault-reporting app for a property-management firm?
To make the adapted Standard concrete, here is a worked example. The scenario is an internal, branded app used by a property-management firm’s own staff (site managers, regional operations, and the helpdesk team) to log and triage faults across the estate: leaks, heating outages, lift failures, access-control issues, safety defects. The app issues work orders to approved contractors, tracks SLA timers against the firm’s tenant-service commitments, holds photos and site notes, and feeds a regional performance dashboard. It is not a tenant-facing product and not a public service. It is branded in the firm’s own design language and accessed by staff via single sign-on on desktop and mobile.
The scenario is useful because it exercises every Service Standard point naturally: real users under pressure, an end-to-end journey from report to close, regulatory and safety obligations, sensitive data, SLA-driven reliability, offline and photo capture, and integrations with contractor systems.
User research, whole problem, joined-up experience, and simplicity (Points 1 to 4). Staff users are routinely neglected because they “have to use the tool anyway”. Apply the Service Standard exactly. Shadow a site manager on a Monday-morning walk-round. Watch the helpdesk triage a flood report live. Follow a work order from raise to close. Understand the handoff to contractors and the verification after the fix. Write plain-English microcopy so a new starter is productive on day one. Think about the whole journey (report, triage, dispatch, fix, verify, close) rather than optimising one screen at a time. Direct transfer.
Accessibility (Point 5). Staff include disabled employees. Equality Act obligations apply to employment tools as strongly as to customer-facing products, and where the firm trades across the EU the European Accessibility Act and EN 301 549 also bite. WCAG 2.2 AA is the target. On a fault-reporting app specifically: large tap targets for gloved site staff, typography readable in outdoor light, photo and status capture that does not rely on colour alone, and screen-reader-friendly lists for helpdesk triage. Direct transfer.
Multidisciplinary team, agile ways of working, and iteration (Points 6 to 8). The delivery team should include a product manager who has walked the sites, a user researcher, a designer, engineers, a delivery manager, and access to domain expertise in facilities management. Ship iteratively against real incidents, not against a Gantt chart. Direct transfer.
Security (Point 9). The app holds site plans, access-control information, contractor credentials, photos that may include residents or staff, and incident notes with potential legal consequence. Threat-model for insider risk (a compromised contractor account) as well as external attack. Enforce single sign-on, least-privilege access, and immutable audit logging. Treat photo storage as sensitive by default. Direct transfer.
Performance data (Point 10), adapted. There is no public GOV.UK dashboard. The equivalent is an internal regional dashboard showing fault volumes, SLA adherence against the firm’s tenant-service commitments, contractor performance, and re-open rates, visible to operations leadership and auditable to clients where contracts require it. Same practice, internal audience.
Choice of tools and technology (Point 11), adapted. Crown Commercial Service frameworks do not apply; the firm’s existing approved-technology list, vendor security review, and procurement route take their place. The substance (prefer open standards, avoid unjustified lock-in, document Architecture Decision Records for material technology choices) is unchanged.
Open source (Point 12), adapted. The fault-reporting app itself stays closed. It encodes the firm’s triage logic, SLA policy, and contractor-routing rules, all of which are commercially differentiating. Narrow, generic utilities may legitimately be opened on a per-component basis with an Architecture Decision Record — for example, an SLA-timer library, audit-log middleware, or a photo EXIF scrubber. Fixes to the open-source dependencies the app already uses are contributed back.
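To make the "narrow, generic utility" idea concrete, here is a minimal sketch of what an openable SLA-timer library might contain. The class name, priority bands, and durations are invented for illustration; a real firm's tenant-service commitments would set the actual values, and the point is that nothing here encodes proprietary triage or routing logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative SLA bands; real tenant-service commitments would differ.
SLA_BANDS = {
    "emergency": timedelta(hours=4),
    "urgent": timedelta(hours=24),
    "routine": timedelta(days=5),
}


@dataclass
class SlaTimer:
    """Tracks a fault's SLA deadline from the moment it is raised."""

    raised_at: datetime
    priority: str

    @property
    def deadline(self) -> datetime:
        # Deadline is raise time plus the band for this priority.
        return self.raised_at + SLA_BANDS[self.priority]

    def breached(self, now: datetime) -> bool:
        return now > self.deadline

    def remaining(self, now: datetime) -> timedelta:
        # Never report negative time remaining.
        return max(self.deadline - now, timedelta(0))
```

A utility like this is safe to open precisely because the differentiating content (which faults get which band, and what happens on breach) lives elsewhere, in closed product code.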
Open standards and common components (Point 13). Strong direct transfer. Use OpenAPI for the contractor-integration contract, OpenTelemetry for observability, OIDC for identity, and the firm’s own component library for the user interface. Where the firm’s design system does not yet cover a pattern (an accessible error summary, a progress indicator for a multi-step fault report, an accessible data table for helpdesk triage), borrow the GOV.UK Design System component under the Open Government Licence rather than rebuilding it.
Reliable operation (Point 14). If the fault-reporting app is down, site staff cannot log incidents and SLAs slip immediately. Apply the same site-reliability discipline as a customer-facing service: structured logging, health checks, alerting on SLA-timer integrity, paged on-call, post-incident reviews, and a runbook for the realistic failure modes (single-sign-on outage, contractor-API timeout, photo-upload back-pressure).
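Structured logging is the foundation for most of the reliability practices above. A minimal sketch using only the Python standard library; the field names and the `context` convention are invented for illustration, not a prescribed schema:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for machine parsing."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra context (fault id, site, SLA state) attached via `extra=`.
            **getattr(record, "context", {}),
        }
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("fault-app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each event carries the identifiers an on-call engineer will filter on.
logger.info(
    "work order dispatched",
    extra={"context": {"fault_id": "F-1042", "site": "north-3"}},
)
```

The same discipline (one event per line, consistent fields, identifiers on every record) is what makes alerting on SLA-timer integrity and post-incident reconstruction tractable.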
The public-sector-specific elements drop out cleanly for this kind of project. There is no formal service assessment, no develop-in-the-open default, no mandatory GOV.UK Design System, and no Crown Commercial Service procurement. The internal equivalents take their place: an end-of-alpha and end-of-beta review by reviewers outside the delivery team but inside the firm, the firm’s own design system, and the firm’s own procurement. The 11 + 3 shape of the adapted Standard holds, the practice is recognisable to any delivery team that has worked to the Service Standard before, and the commercial context is respected throughout.
What does AI-augmented delivery look like on an adapted Standard?
The mechanics of AI-augmented delivery for a private-sector team are substantially the same as for a public-sector team, with three adjustments.
Accessibility checks reference the adapted standard. A private-sector rule targeting WCAG 2.2 AA is the direct equivalent of a public-sector rule targeting the same. The reference framework (European Accessibility Act, EN 301 549, or voluntary commitment) changes; the underlying practice does not.
Design System references are your own. Where a public-sector rule requires GOV.UK Design System components, the private-sector equivalent requires your own design system components. The rule pattern is identical. A private-sector team can also cite the GOV.UK Design System as a supplementary reference for components their own system has not yet covered, under the Open Government Licence. The transferable part is the design-system discipline itself — versioned primitives, accessibility tested once and reused, documented patterns, clear deprecation — rather than the specific GOV.UK visual language.
Content and procurement rules reflect commercial context. Copywriting rules reference the company’s voice and style guide rather than the GOV.UK content style guide, although borrowing from the Service Manual is a free upgrade for any team still writing in consultant dialect. Procurement rules reflect the commercial contracting context.
The rest of the AI-augmented delivery picture is the same:
- Agent rules enforcing accessibility, security, and iteration discipline
- Agent skills scaffolding consistent artefacts
- AI transcription and synthesis accelerating user research
- Human effort concentrated on judgement-heavy work
For the full treatment, see Using AI to Meet the GDS Service Standard.
What should procurement teams ask suppliers who claim to work to an adapted Standard?
If you are evaluating a private-sector supplier who claims Service Standard-aligned delivery practice, the following questions surface whether the claim is real.
- “Which Service Standard points do you apply directly, and which do you adapt?” A mature supplier will answer specifically, not in generalities. The eleven direct-transfer points are easy to name.
- “What is your position on open source for the work you deliver to us?” A good answer distinguishes generic components (likely open) from customer-specific or proprietary work (likely closed), with a per-component position rather than a blanket rule.
- “Which accessibility standard do you deliver to, and how do you evidence it?” Good answers cite WCAG 2.2 AA or equivalent, automated and manual testing, and evidence at pull-request level.
- “How do you handle user research and service design, given AI does not substitute for these?” A mature supplier will name senior people accountable for research and design, and distinguish clearly where AI helps and where it does not.
- “Do you run internal equivalents of GDS service assessments, and if so, at what cadence?” An end-of-alpha and end-of-beta review by reviewers outside the delivery team is a good answer.
- “What is your posture on iterative delivery, pull-request size, and feature-flagging?” Good answers include concrete numbers (for example, a pull-request size cap or a target deployment frequency).
- “Can you show evidence of a recent delivery against these standards?” A mature supplier can share a sanitised evidence pack from recent work.
These are essentially the same questions a public-sector procurement team would ask, translated for commercial context. That is the point: good delivery practice is not secretly different between sectors. The Service Standard is simply one of the clearest public articulations of it.
How do we use this approach at Talk Think Do?
At Talk Think Do we maintain a shared delivery standard that draws directly from the GDS Service Standard, adapted for our commercial context. The eleven direct-transfer points are baked into our ways of working across private-sector and public-sector delivery alike. The three adapted points are handled with per-project decisions rather than a blanket policy. The public-sector-specific elements apply only where the work genuinely sits inside central government delivery.
Each rule in our rulepack maps to one or more Service Standard points, covering:
- Accessibility
- Content and copywriting
- Performance
- Security posture
- Engineering conventions
When we work with public-sector clients, we layer on additional rules and references to the GOV.UK Design System, the Service Manual, and the Technology Code of Practice. When we work with private-sector clients, we reference the client’s design system and voice. The core practice is shared.
For the wider picture of how we combine this with AI-augmented delivery, see our AI approach, The AI Velocity Report, Shipping AI in the Real World, and Why We Don’t Let AI Ship Code Unsupervised. For the specific case of encoding Service Standard principles into AI-augmented delivery, see Using AI to Meet the GDS Service Standard.
If you are a private-sector delivery leader considering an adapted Service Standard for your team, or a procurement team evaluating a supplier on these terms, book a consultation.
Frequently asked questions
What is the GDS Service Standard?
Can private-sector teams adopt the Service Standard directly?
Why does the Service Standard recommend developing in the open?
Which GDS practices are genuinely public-sector specific?
How does the GOV.UK Design System fit for a private-sector team?
How does AI-augmented delivery change the picture?
Where should a private-sector team start with adopting the Standard?
Related guides
In-House DevOps vs DevOps-as-a-Service: A Cost and Capability Comparison
Should you hire a DevOps engineer or work with a DevOps-as-a-Service partner? A practical comparison of cost, coverage, risk, and how AI-augmented delivery changes the economics.
DevOps Maturity: Where Does Your Team Stand and What Should You Fix First?
A practical DevOps maturity model with AI-augmented practices at each level. Self-assessment questions, DORA metrics, and a prioritised improvement path.
From Prototype to Production: What AI-Built Software Needs to Ship
AI tools make prototyping nearly free. The gap between a working demo and production-grade software is where most projects stall. A practical guide to bridging it.