Is Your Organisation Ready for AI? A Practical Readiness Checklist
Most AI projects stall not because the technology fails, but because the organisation was not ready. Data is inaccessible. Sponsorship is vague. The use case is poorly chosen. This guide provides a structured assessment across five dimensions: data, people, process, infrastructure, and governance. Use it to identify where you are ready, where you are not, and what to do about it.
The timeframes in this guide reflect AI-augmented practices as of early 2026. AI tooling is advancing rapidly, and these timelines are compressing quarter by quarter. Treat specific figures as reasonable upper bounds rather than fixed estimates. Book a consultation for current timelines tailored to your situation.
Why readiness matters more than technology
AI tools are mature enough for production use in enterprise environments. Azure OpenAI, Azure AI Foundry, and the surrounding ecosystem provide the models, infrastructure, and security controls that serious organisations require. The technology is not the bottleneck.
The bottleneck is organisational readiness.
Gartner, McKinsey, and every other analyst firm report the same pattern: the majority of AI projects fail to move from pilot to production. The reasons are consistent. The data was not ready. The business case was unclear. Nobody owned the outcome. The organisation did not change its processes to use the AI output.
Readiness assessment is not bureaucratic caution. It is the difference between a pilot scoped to prove value and deliver ROI, and a proof of concept that impresses in a demo but is never used. Total spend depends on scope, integrations, governance overhead, and how many delivery cycles you run. See our pricing for current ranges.
The five dimensions of AI readiness
1. Data readiness
AI applications need data to work on. The question is whether your data is accessible, clean enough, and governed appropriately.
Accessible. Can you get the data out of your systems in a format that an AI application can consume? Data locked in legacy databases, spreadsheets, email inboxes, or third-party SaaS products with limited APIs creates a bottleneck. AI-augmented analysis can map your data landscape faster than manual investigation, but the data still needs to be technically reachable.
Clean enough. Perfect data is not required. But the data needs to be consistent enough for AI to work with. Duplicates, conflicting records, and missing fields degrade AI output quality. Assess the state of your data honestly. Some cleanup may be needed before AI can deliver reliable results.
Governed. Who owns the data? Who can access it? What are the retention and privacy requirements? AI applications that process personal data need GDPR compliance, data processing agreements, and clear access controls. Data governance is not optional for production AI.
Assessment checklist (9 points maximum):
- We know where our data lives and can access it programmatically (3 pts)
- Our data is reasonably clean and consistent (2 pts)
- We have data governance policies and know the privacy requirements (2 pts)
- We have labelled data for classification or training tasks (1 pt)
- Our data is documented with schemas or data dictionaries (1 pt)
Use the interactive assessment above to calculate your score automatically.
2. People readiness
AI projects need sponsorship, skills, and willingness to change.
Executive sponsorship. An AI project without a senior sponsor who owns the business outcome will stall at the first obstacle. The sponsor does not need to understand the technology. They need to care about the outcome and remove blockers.
Technical capability. You need people who can build, deploy, and maintain AI applications. This does not necessarily mean a data science team. AI-augmented software engineering teams can build production AI applications using pre-trained models and platforms like Azure AI Foundry. Data science expertise is only needed for custom model training and fine-tuning.
Change willingness. AI that is not embedded in a workflow is a toy. The people who will use the AI output need to be involved in the design, willing to change their process, and supported through the transition. Bottom-up enthusiasm without top-down support (or vice versa) rarely works.
Assessment checklist (8 points maximum):
- We have an executive sponsor who owns the business outcome (3 pts)
- We have software engineering capability (in-house or partner) (2 pts)
- The team that will use the AI output is engaged and willing to change (2 pts)
- We have (or can access) data science skills for custom model work (1 pt)
3. Process readiness
AI delivers the most value when it is embedded in an existing process, not bolted on as a side project.
Identified opportunities. Map your current processes and identify where AI can add value. The highest-value targets are typically:
- Manual, repetitive tasks with clear rules (classification, data entry, routing)
- Decision support where humans need to synthesise large amounts of information
- Content generation where volume or speed matters
- Search and retrieval across large document or data sets
Measurable baseline. You need to know how the process performs today to measure AI’s impact. How long does the task take? How many errors occur? What is the cost per unit of work? Without a baseline, you cannot demonstrate ROI.
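Establishing such a baseline can be as simple as a few aggregates over existing task records. A minimal sketch with entirely made-up figures (the hourly rate and task data are illustrative, not benchmarks):

```python
from statistics import mean

# Hypothetical records from a manual document-triage process
tasks = [
    {"minutes": 14, "error": False},
    {"minutes": 22, "error": True},
    {"minutes": 9,  "error": False},
    {"minutes": 17, "error": False},
    {"minutes": 25, "error": True},
]

HOURLY_RATE = 30  # assumed fully loaded cost per hour

avg_minutes = mean(t["minutes"] for t in tasks)
error_rate = sum(t["error"] for t in tasks) / len(tasks)
cost_per_task = avg_minutes / 60 * HOURLY_RATE

print(f"avg handling time: {avg_minutes:.1f} min")
print(f"error rate: {error_rate:.0%}")
print(f"cost per task: £{cost_per_task:.2f}")
```

Three numbers like these, captured before the pilot starts, are usually enough to demonstrate ROI afterwards.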
Integration points. Where does the AI output go? Into an existing application? A dashboard? An API? A notification? The delivery mechanism matters. AI that produces output nobody sees or uses delivers no value.
Assessment checklist (8 points maximum):
- We have identified specific processes where AI could add value (2 pts)
- We can measure the current performance of those processes (2 pts)
- We know where the AI output will be consumed (2 pts)
- The process owner is involved in the AI initiative (2 pts)
4. Infrastructure readiness
AI workloads need compute, storage, and connectivity.
Cloud maturity. AI applications run on cloud infrastructure. If your organisation is already on Azure (or another major cloud), the infrastructure foundation is in place. If you are still primarily on-premises, cloud migration may be a prerequisite. Legacy modernisation and cloud migration can run in parallel with AI planning.
API and integration surface. AI applications need to connect to your data sources and business systems. A mature API and integration layer makes this straightforward. If your systems are siloed with no APIs, building the integration surface is a prerequisite.
Security and compliance controls. Azure AI services support virtual networks, private endpoints, managed identity, and customer-managed encryption keys. Your infrastructure team needs to configure these controls before production AI workloads go live. Azure AI Foundry provides a unified platform for managing model deployment, access control, and monitoring.
Assessment checklist (8 points maximum):
- We have Azure (or equivalent cloud) infrastructure in place (3 pts)
- Our systems have APIs or can be connected via integration (2 pts)
- We have security controls suitable for AI workloads (2 pts)
- We have monitoring and alerting infrastructure (1 pt)
5. Governance readiness
Responsible AI is not optional for enterprise deployment.
AI usage policy. A documented policy covering: where AI can be used, what data it can access, how outputs are reviewed, and what human oversight is required. This does not need to be 50 pages. A clear, practical policy that the team actually follows is worth more than a comprehensive document that nobody reads.
Risk framework. Not all AI applications carry the same risk. A chatbot answering general questions is lower risk than an AI making lending decisions. Classify your AI use cases by risk level and apply governance proportionally.
Audit and accountability. Who is accountable when AI produces a wrong or harmful output? How do you trace the input, model, and output for a specific decision? Production AI needs audit trails, logging, and clear lines of accountability.
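As a sketch of the minimum an audit trail should capture per AI call (the field names are illustrative, not a standard schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, model: str, prompt: str, output: str) -> dict:
    """Build a minimal audit entry linking input, model, and output
    so a specific decision can be traced later. Illustrative only."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        # Hash the prompt and output so the record is tamper-evident
        # even if the full text is stored elsewhere for privacy reasons.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output_preview": output[:200],
    }

record = audit_record("jane@example.com", "gpt-4o",
                      "Summarise contract 123", "The contract states...")
print(json.dumps(record, indent=2))
```

Whatever shape the record takes, it must answer the trace question above: which input, which model version, which output, and who was accountable at the time.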
Compliance mapping. If you operate in a regulated industry, map AI governance to your existing compliance frameworks. ISO 27001, Cyber Essentials, FCA rules, DfE requirements, and GDPR all have implications for AI deployment.
Assessment checklist (8 points maximum):
- We have (or can create) an AI usage policy (2 pts)
- We can classify AI use cases by risk level (2 pts)
- We have audit and logging infrastructure for AI outputs (2 pts)
- We have mapped AI governance to our compliance requirements (2 pts)
Interpreting your score
Use the interactive assessment at the top of this page to calculate your score automatically. The assessment covers all five dimensions and provides a personalised recommendation based on your total score (out of a maximum of 41 points).
30+ points: production ready. You have the foundations in place. Focus on problem selection and start with a structured pilot.
20-29 points: targeted pilot. You have gaps, but they are manageable. Start with a low-risk use case that works with your current data and infrastructure. Address governance and process gaps in parallel.
10-19 points: focused assessment. Significant gaps exist, but that does not mean you should wait. A structured AI assessment identifies the highest-value opportunities and the specific gaps to close. This is the best investment at this stage.
Below 10 points: foundational work first. Focus on cloud infrastructure, data accessibility, and executive alignment before AI-specific initiatives. These foundations serve many purposes beyond AI.
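The scoring bands above can be expressed as a simple lookup. A minimal sketch (the thresholds and band names come from this guide; the example dimension scores are invented):

```python
def readiness_band(total_score: int) -> str:
    """Map a total readiness score (0-41) to the recommendation
    bands defined in this guide."""
    if total_score >= 30:
        return "Production ready"
    if total_score >= 20:
        return "Targeted pilot"
    if total_score >= 10:
        return "Focused assessment"
    return "Foundational work first"

# Example: strong data and infrastructure, weaker governance
dimension_scores = {"data": 8, "people": 6, "process": 6,
                    "infrastructure": 7, "governance": 4}
total = sum(dimension_scores.values())
print(total, readiness_band(total))
```

Note that the shape of the score matters as much as the total: a 31 built on strong infrastructure but a governance score of 4 still needs governance work before production.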
Where AI delivers value fastest
If you are ready for a pilot, these use cases consistently deliver measurable value in the shortest time.
Internal knowledge retrieval (RAG). Connect AI to your existing documents, policies, and knowledge base. Employees ask questions in natural language and get accurate, sourced answers. This uses data you already have, requires no model training, and delivers immediate time savings. RAG pipelines on Azure AI Search are the most common starting point for enterprise AI.
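The retrieve-then-generate pattern behind RAG can be illustrated in a few lines. This toy version uses keyword overlap in place of a real vector index such as Azure AI Search, and stops before the model call; the documents and function names are invented for illustration:

```python
documents = {
    "leave-policy.md": "Employees accrue 25 days of annual leave per year.",
    "expenses.md": "Submit expense claims within 30 days with receipts.",
    "security.md": "Report suspected phishing emails to the security team.",
}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question.
    A production system would use embeddings and a vector index."""
    q_words = set(question.lower().split())
    scored = sorted(documents.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the model."""
    sources = retrieve(question)
    context = "\n".join(f"[{s}] {documents[s]}" for s in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days of annual leave do employees get?"))
```

The grounding step is the whole point: the model answers from your documents, with the source identifiers carried through so answers can cite them.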
Document processing and classification. Automate the reading, classification, and extraction of information from documents (invoices, contracts, applications, reports). Azure Document Intelligence handles common document types. Custom models handle domain-specific formats.
Content generation with human review. Draft reports, summaries, communications, or specifications using AI, with human review before publication. This accelerates content-heavy processes without removing human judgement.
Process automation. Identify manual steps in existing workflows (data entry, routing, triage, status updates) and automate them with AI. The key is choosing steps where errors are low-cost and reversible, so the AI can deliver value while trust is established.
The assessment-to-production path
Step 1: assessment (2-4 weeks)
A structured assessment identifies your highest-value AI opportunities, maps them to your readiness across all five dimensions, and produces a prioritised roadmap. This is the single most valuable step. It prevents the most common failure mode: investing in the wrong problem.
Step 2: pilot (6-12 weeks)
Build a focused pilot on the highest-value, lowest-risk opportunity. The pilot proves value on real data with real users. It also proves the delivery model, the governance framework, and the team’s ability to build and operate AI in production.
Step 3: production and scale (3-6 months)
Harden the pilot for production. Add monitoring, logging, and operational controls. Then evaluate the next opportunity on the roadmap. Each successive AI project is faster because the infrastructure, governance, and team capability compound.
AI-augmented delivery compresses each of these steps. The same tools and practices that accelerate software development also accelerate AI application development, because AI applications are software.
Where to start
- Take the interactive assessment at the top of this page. It scores your organisation across all five dimensions and gives you a personalised recommendation. Be honest. The assessment is for you, not for a vendor.
- Identify one specific use case where AI could deliver measurable value with your current data.
- Get a structured assessment. A 2-4 week engagement with an experienced team validates your thinking, identifies gaps you may have missed, and produces an actionable plan.
See our AI development and implementation service for how we approach this, or book a consultation to discuss your readiness and opportunities.
Related guides
AI Code Attribution for Enterprise Procurement Teams
A practical framework for tracking and documenting AI-generated code. Repo-level model logs, PR attribution notes, CI licence gates, SBOM integration, and what procurement teams should require from suppliers.
RAG vs Fine-Tuning vs Prompt Engineering: Choosing the Right AI Architecture
Three approaches to getting your data into AI, each with different costs, timelines, and trade-offs. A practical comparison for enterprise teams evaluating AI architectures on Azure.