ISO 27001 and AI: How to Maintain Compliance When Using AI Development Tools
AI coding tools send code snippets and context to external servers, creating new third-party data risks that ISO 27001 controls must address. The key controls are A.5.23 (cloud services), A.8.28 (secure development), and A.5.10 (acceptable use). Document your tools, classify the data they handle, implement a data-in-prompt policy, and maintain an audit trail. A practical 10-point checklist is at the end of this article.
ISO 27001 certification does not prohibit the use of AI development tools. What it requires is that every tool processing information assets be identified, assessed, and controlled within your Information Security Management System (ISMS). Most organisations certified before 2023 have an ISMS that was never designed with AI tools in mind.
The gap is significant. When a developer pastes a function into ChatGPT to debug it, or uses Claude Code to refactor a module, code is transmitted to an external API. That transmission is a data flow. Data flows need to be inventoried, classified, and governed under ISO 27001:2022.
This article maps the relevant controls, identifies the practical risks, and gives you a 10-point checklist to bring AI tool usage into compliance.
For a worked example of how we applied these principles during our own ISO 27001:2022 recertification, see how a custom AI Copilot saved 65 hours and passed with zero non-conformances.
Why do AI development tools create ISO 27001 risks?
The core risk is data leaving your controlled environment in a form you did not anticipate.
When a developer uses an AI coding tool, the tool typically sends:
- The code in the current file or selection (which may contain business logic, database schemas, or API key patterns).
- The surrounding context window (which in tools like Cursor or Claude Code can be hundreds of kilobytes).
- Conversation history and project-level context in some configurations.
This data goes to the AI provider’s infrastructure, is processed by their model, and may be retained for service improvement depending on the subscription tier. For organisations handling personal data, commercially sensitive IP, or systems subject to sector-specific regulations (financial services, healthcare, education), this creates a material risk.
The second risk is training data contamination. Some AI providers use submitted queries to improve their models unless you opt out. If a developer’s code prompts train a future model version, proprietary algorithms, database schemas, or security configuration patterns could theoretically be reproduced in completions generated for other users.
Which ISO 27001 controls apply to AI tool usage?
A.5.23: Information security for use of cloud services
This control, introduced in ISO 27001:2022, requires organisations to identify, assess, and manage the security risks of using cloud services. AI tools accessed via external APIs are cloud services for this purpose.
Your ISMS must document each AI tool, the data classifications it handles, the provider’s security posture (check for their own ISO 27001 or SOC 2 certifications), and the contractual terms governing data processing.
A.5.10: Acceptable use of information and assets
Your acceptable use policy must explicitly address AI tools. Which tools are approved? What categories of data may be entered into them? What is prohibited? Developers need a clear, written policy rather than informal guidance.
A.8.11: Data masking
When developers need to test or debug code involving personal data, AI tools should be fed masked or synthetic data rather than real records. This control applies directly to the practice of copying production data into a prompt.
A.8.28: Secure development
The secure coding guidelines required by A.8.28 should address AI-generated code specifically. Key requirements: AI output must be reviewed before committing, automated licence scanning must be applied, and AI-suggested code must meet the same security standards as hand-written code (static analysis, dependency checking, input validation verification).
A.5.19 and A.5.20: Supplier security requirements
AI tool providers are suppliers. They should appear in your supplier register with a documented risk assessment. Your supplier agreements (or the provider’s terms of service, where bespoke agreements are not available) must address data processing responsibilities, breach notification, and data retention.
How should you handle data when using LLMs in development?
Implement a data classification policy for AI inputs
Define explicitly which data classifications may be used with which tool categories:
| Data classification | External AI tools (e.g. Claude API, Copilot) | Self-hosted models (e.g. Azure AI Foundry) |
|---|---|---|
| Public / internal | Permitted | Permitted |
| Confidential (business IP) | Permitted with review | Permitted |
| Personal data (any GDPR category) | Prohibited | Permitted with controls |
| Special category personal data | Prohibited | Prohibited without legal basis |
| Payment data (PCI DSS scope) | Prohibited | Prohibited |
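A matrix like the one above can also be enforced as policy-as-code, so that tooling (a pre-commit hook, an internal approval portal) gives the same answer as the written policy. This is a minimal sketch; the classification and tool-category names are illustrative choices for this example, not terms from the standard:

```python
# Illustrative policy-as-code version of the classification matrix above.
# Classification names and tool categories are assumptions for this sketch.

EXTERNAL, SELF_HOSTED = "external", "self_hosted"

# Maps (classification, tool_category) -> policy ruling.
POLICY = {
    ("public_internal", EXTERNAL): "permitted",
    ("public_internal", SELF_HOSTED): "permitted",
    ("confidential", EXTERNAL): "permitted_with_review",
    ("confidential", SELF_HOSTED): "permitted",
    ("personal_data", EXTERNAL): "prohibited",
    ("personal_data", SELF_HOSTED): "permitted_with_controls",
    ("special_category", EXTERNAL): "prohibited",
    ("special_category", SELF_HOSTED): "prohibited_without_legal_basis",
    ("payment_data", EXTERNAL): "prohibited",
    ("payment_data", SELF_HOSTED): "prohibited",
}

def check_usage(classification: str, tool_category: str) -> str:
    """Return the policy ruling, defaulting to prohibited for unknown pairs."""
    return POLICY.get((classification, tool_category), "prohibited")
```

Defaulting unknown combinations to "prohibited" mirrors the deny-by-default stance most ISMS policies take for unclassified data.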
Use synthetic and anonymised test data
Establish a process for generating synthetic test data for any debugging or development work that would otherwise require real records. Libraries such as Faker, services such as Mockaroo, and custom generation scripts are all low-cost and straightforward to implement.
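A basic generator can even be built with the standard library alone. This sketch is in the spirit of tools like Faker; the field names and value pools are illustrative, and a seeded generator keeps the test data reproducible across runs:

```python
# Stdlib sketch of synthetic test data, in the spirit of tools like Faker.
# Field names and value pools are illustrative, not a required schema.
import random
import string

rng = random.Random(42)  # seeded so the generated data is reproducible

FIRST = ["Alice", "Bikram", "Chloe", "Dmitri"]
LAST = ["Hughes", "Okafor", "Patel", "Svensson"]

def synthetic_customer() -> dict:
    """Build one fake customer record safe to paste into an AI prompt."""
    first, last = rng.choice(FIRST), rng.choice(LAST)
    ref = "".join(rng.choices(string.ascii_uppercase + string.digits, k=8))
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.com",
        "account_ref": f"ACC-{ref}",
    }

records = [synthetic_customer() for _ in range(3)]
```

Because every value is generated, a developer can share these records with an external AI tool without the prompt ever containing real personal data.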
Opt out of training where available
Enterprise tiers of most major AI tools (GitHub Copilot Enterprise, Claude for Teams, OpenAI API) do not use your data for model training by default. Confirm this in the provider’s data processing addendum and document it in your ISMS. Where opt-out is not available, reassess whether the tool is appropriate for your compliance posture.
Consider self-hosted models for sensitive workloads
For workloads involving highly sensitive data or subject to strict data residency requirements, self-hosted models via Azure AI Foundry eliminate the third-party transmission risk entirely. Data stays within your Azure environment, and you control logging, retention, and access.
What audit trail requirements apply?
ISO 27001 does not prescribe a log format for AI tool usage, but several controls require that you can demonstrate:
- Which tools are in use (asset register, A.5.9).
- What data they process (data flow documentation, forming part of your DPIA where personal data is involved).
- Who approved their use (change management records).
- That your policies are being followed (access logs, CI/CD gate outputs).
Practically, this means:
- Maintain a live inventory of approved AI tools, updated whenever a new tool is adopted.
- Log AI tool usage at the enterprise subscription level (most enterprise tiers provide admin dashboards with usage data).
- Store CI/CD gate outputs, licence scan results, and SAST reports as artefacts attached to each build, so they are available for audit.
- Include AI tool risk assessments in your regular internal audit schedule.
Practical compliance checklist for development teams
Work through this checklist to assess and improve your current posture.
- Inventory all AI tools in use, including free-tier and personal accounts used for work purposes. Shadow AI use is a common compliance gap.
- Add each tool to your cloud services register (A.5.23) with the data classification it is permitted to handle.
- Add AI providers to your supplier register (A.5.19) with a documented risk assessment.
- Update your acceptable use policy (A.5.10) with explicit guidance on which tools are approved and what data may be entered into them.
- Create or update your secure development guidelines (A.8.28) to require human review and automated scanning of all AI-generated code before merging.
- Implement a CI/CD gate for licence scanning (FOSSA, Snyk, or GitHub Advanced Security) that blocks pull requests containing problematic open-source snippets.
- Verify training data opt-out for each tool at the tier your organisation uses, and document this in your ISMS.
- Prohibit personal and payment data from AI tool prompts and implement a synthetic test data process as the approved alternative.
- Train your development team on the acceptable use policy. Documented training completion is an audit requirement.
- Include AI tool usage in your next internal audit with a specific focus on whether the data classification policy is being followed in practice.
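The licence-scanning gate in the checklist can be sketched as a small script that parses a scanner report and returns a non-zero exit code on violations. FOSSA, Snyk, and GitHub Advanced Security each have their own CLIs and report formats, so the report shape and block-list below are assumptions to adapt, not a drop-in integration:

```python
# Hedged sketch of a CI/CD licence gate. Assumes a generic JSON report shaped
# like {"components": [{"name": ..., "license": ...}]} produced by an earlier
# pipeline step; adapt the parsing to your scanner's actual output format.
import json

# Illustrative block-list; your licence policy should define its own.
DISALLOWED = {"AGPL-3.0-only", "SSPL-1.0"}

def violations(report: dict) -> list[str]:
    """List components whose declared licence is on the block-list."""
    return [
        f'{c["name"]}: {c["license"]}'
        for c in report.get("components", [])
        if c.get("license") in DISALLOWED
    ]

def gate(report_path: str) -> int:
    """Return a process exit code: non-zero blocks the pull request."""
    with open(report_path) as fh:
        problems = violations(json.load(fh))
    for problem in problems:
        print("licence gate:", problem)
    return 1 if problems else 0
```

Storing the parsed report and the gate's output as build artefacts also satisfies the audit trail requirement discussed earlier: each merge carries its own evidence that the control ran.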
How Talk Think Do addressed this during ISO 27001:2022 recertification
We went through ISO 27001:2022 recertification while actively using AI tools across our development practice. The key steps were: inventorying every tool in use (including tools developers had adopted informally), updating our acceptable use policy, implementing enterprise subscriptions with training opt-out, and adding AI providers to our supplier risk register.
The result was zero non-conformances at audit. The full account, including how a custom AI Copilot saved 65 hours during the recertification process itself, is in our ISO 27001 compliance case study.
For organisations building or maintaining ISO 27001-compliant systems, our enterprise software development practice includes compliance posture assessment and ongoing ISMS support.