Your Development Team Left: A Practical Guide to What Happens Next
Your developers left. Your vendor disappeared. Your contractor finished and moved on. The system is still running and the business still depends on it. This guide provides a step-by-step plan: protect access immediately, assess what you have, establish operational support, and then decide on the future. AI-augmented codebase analysis compresses the scariest part (understanding a system nobody on your team built) from weeks to days.
The timeframes in this guide reflect AI-augmented practices as of early 2026. AI tooling is advancing rapidly, and these timelines are compressing quarter by quarter. Treat specific figures as a reasonable upper bound rather than fixed estimates. Book a consultation for current timelines tailored to your situation.
The first 48 hours
Panic is understandable. But the risk in this situation is not that the system will stop working tomorrow. The risk is that you lose access to something critical or that a problem occurs that nobody can fix. Address both risks immediately.
Protect access
Verify that you have access to every account and service the system depends on. Check each one. Do not assume.
Source code. Do you have the repository? Is it on GitHub, Azure DevOps, or another platform under your organisation’s account? If the repository is under the departing team’s personal or company account, get it transferred now. If you only have a deployed application and no source code, this is a critical gap that needs addressing immediately.
Hosting and infrastructure. Azure portal, AWS console, or wherever the system runs. Verify that your organisation has owner-level access. Check that the account is not tied to an individual’s email address that will be deactivated.
Domain and DNS. Who owns the domain registration? Who controls the DNS records? If these are under the departing vendor’s account, transfer them to yours.
SSL certificates. When do they expire? Who can renew them? An expired certificate takes the site down with a scary browser warning.
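As a quick spot check, the expiry date can be read straight off the live certificate using Python's standard library. A minimal sketch (the hostname passed in is a placeholder for your own domain):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    # Parse the notAfter field returned by ssl.getpeercert(),
    # e.g. "Jun  1 12:00:00 2027 GMT".
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

def check_certificate(hostname: str, port: int = 443) -> int:
    # Connect, complete the TLS handshake, and inspect the served certificate.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

Run `check_certificate("your-domain.example")` against every domain the system serves; anything under 30 days out needs a confirmed renewal path now.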
Third-party services. Payment processors, email services, analytics, monitoring, API keys for external integrations. List every external service the system uses and verify you have account access.
Secrets and credentials. Database passwords, API keys, encryption keys. If these are documented, secure the documentation. If they are only in the deployed environment (environment variables, key vault), verify you can access them.
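One way to surface credentials that live only in code or config files is a simple pattern scan. A minimal sketch (the patterns are illustrative, not exhaustive; dedicated secret-scanning tools go much further):

```python
import re

# Illustrative patterns that often indicate credentials committed to code.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)(password|secret|api[_-]?key)\s*[=:]\s*['\"]?([^\s'\"]{8,})"
    ),
}

def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    # Return (line number, pattern name) for every suspicious line.
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Pointing this at the repository (and any `.env` files) tells you which credentials exist, so you can confirm each one is captured somewhere your organisation controls.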
Ensure continuity
Backups. Verify that database backups are running and that you can restore from them. Test a restore if you have not done so recently.
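A crude but useful freshness check is confirming that the newest file in the backup location is recent. A minimal sketch, assuming file-based dumps land in a single directory (nothing here replaces an actual test restore):

```python
import time
from pathlib import Path

def newest_backup_age_hours(backup_dir: str) -> float:
    # Age in hours of the most recently modified file in the backup directory.
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        raise RuntimeError(f"no backup files found in {backup_dir}")
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return (time.time() - newest.stat().st_mtime) / 3600
```

If the newest dump is older than your backup schedule says it should be, the backup job has silently stopped, which is exactly the kind of failure that goes unnoticed after a team departs.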
Monitoring. Is there any monitoring in place? If the departing team had alerting configured, verify it still works and that alerts go to someone on your team.
Rotate credentials. Change passwords on shared accounts and remove access for departing team members. This is basic security hygiene and should happen regardless of the circumstances of the departure.
Assessing what you have
Once access is secured, the next step is understanding what you are working with. This is traditionally the hardest part: a codebase nobody on your team wrote, with varying levels of documentation.
How AI-augmented assessment changes this
AI tools (Cursor, Claude Code) can read, reason over, and map an unfamiliar codebase in hours. A team using these tools can produce:
- Architecture overview: what components exist, how they connect, what technologies are used
- Dependency inventory: every external library, framework, API, and service the system depends on, with version and support status
- Risk assessment: security vulnerabilities, outdated dependencies, hard-coded credentials, missing error handling
- Data model documentation: database schema, relationships, and data flow
- API inventory: every endpoint, its inputs, outputs, and authentication requirements
- Deployment documentation: how the system is built, tested, and deployed (or the absence of these processes)
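The dependency inventory above can be started with nothing more exotic than parsing the project's own manifest files. A minimal sketch for a Python-style requirements file (the parsing rules are deliberately simplified; lockfiles and other ecosystems need their own handling):

```python
import re

def inventory_requirements(text: str) -> list[tuple[str, str]]:
    # Parse requirements.txt-style content into (package, version) pairs.
    deps = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        match = re.match(r"([A-Za-z0-9_.\-]+)\s*(==|>=|<=|~=|>|<)?\s*(\S+)?", line)
        if match:
            deps.append((match.group(1), match.group(3) or "unpinned"))
    return deps
```

Feeding the resulting list into a vulnerability and end-of-life check is where the real risk picture comes from; unpinned entries are themselves a finding.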
This assessment takes 3-5 days with an AI-augmented team. The same work would take 2-4 weeks with traditional manual investigation. The output gives you an operational picture: what you have, what state it is in, and what the immediate risks are.
What the assessment tells you
The assessment answers three questions:
- Is the system stable? Can it keep running without intervention? Are there ticking time bombs (expiring certificates, filling disks, unpatched vulnerabilities)?
- Can we operate it? Can someone respond to incidents, apply patches, and make small changes? What is needed to get to that point?
- What is the medium-term outlook? Is this a healthy system that needs support, or a fragile system that needs modernisation?
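Some of the ticking time bombs above are cheap to check mechanically. A minimal sketch of a disk-headroom check using Python's standard library (the 15% threshold is an arbitrary example, not a recommendation):

```python
import shutil

def disk_headroom(path: str = "/", min_free_fraction: float = 0.15):
    # Report the free-space fraction on a volume and whether it clears
    # the given threshold. A steadily shrinking fraction is the
    # "filling disk" time bomb worth alerting on.
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return free_fraction, free_fraction >= min_free_fraction
```

Running checks like this on a schedule, with certificate expiry and backup freshness alongside, is the skeleton of the monitoring a support team would formalise.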
Three paths forward
The assessment determines which path is appropriate. Do not choose a path before the assessment. Decisions made in a crisis without data lead to wasted budget.
Path 1: stabilise and support
When it fits: The system is fundamentally sound. It runs on supported technology, has reasonable code quality, and does not have critical security gaps. It needs an owner, not a rewrite.
What happens: A managed application support team takes operational ownership. They set up monitoring (if missing), establish incident response procedures, apply security patches, and handle bug fixes and small changes. The system is stable and supported.
Timeline: 1-2 weeks to establish operational support after the assessment.
Cost: Scales with system complexity and SLA requirements. See our pricing for current ranges.
Path 2: stabilise and modernise
When it fits: The system works but has significant technical debt, security gaps, or architectural limitations that will create problems over time. It needs support now and investment soon.
What happens: Managed support stabilises the system immediately. A modernisation roadmap is developed based on the assessment findings. Modernisation proceeds incrementally (strangler fig pattern) while the system remains operational. See our guides on legacy system costs and modernise, rebuild, or replace for decision frameworks.
Timeline: Support established in 1-2 weeks. Modernisation roadmap in 4-6 weeks. Modernisation delivery in 3-12 months depending on scope.
Cost: Support retainer plus modernisation project costs. AI-augmented delivery compresses the modernisation timeline by 40-50%. See pricing for indicative ranges.
Path 3: rebuild
When it fits: The system is on end-of-life technology, has fundamental architectural problems, or is so poorly built that ongoing support costs would exceed the cost of a replacement. The assessment makes this clear.
What happens: The existing system is supported at a minimum viable level while a replacement is built. The old system serves as a living specification. AI-augmented teams analyse the legacy codebase to extract business rules, data flows, and edge cases, then rebuild on modern architecture. See our guide on prototype to production or custom software development for the approach.
Timeline: Support immediately. Rebuild 3-12 months with AI-augmented delivery.
Cost: Support retainer plus rebuild project costs.
What to look for in a support partner
If you need to bring in a partner (and in this situation, you almost certainly do), evaluate them on:
Takeover experience. Have they taken over systems they did not build before? How many? This is a specific skill that not all agencies have. Ask for references from similar situations.
AI-augmented capability. A team that uses AI tools to analyse unfamiliar codebases reaches operational capability dramatically faster. Ask about their tools, process, and assessment timeline.
SLA and responsiveness. In the immediate term, you need someone who can respond to incidents. Understand their response times, on-call coverage, and escalation process.
Honesty about scope. A good partner will tell you what they can and cannot support. If the technology stack is outside their expertise, they should say so and help you find the right partner, rather than promise and underdeliver.
Support-to-development path. The partner should be able to transition from pure support to active development if the assessment reveals that modernisation or rebuilding is needed. A partner who can only keep the lights on but cannot improve the system creates a dead end.
Where to start
- Protect access today. Go through the access checklist above. Do not wait until something breaks.
- Get an assessment this week. A 3-5 day AI-augmented assessment gives you the operational picture. Without it, you are making decisions blind.
- Stabilise before you strategise. Get operational support in place before deciding whether to modernise or rebuild. Crisis decisions are bad decisions.
See our managed application support service for how we handle system takeovers, or our internal systems handover service for the structured approach. Book a consultation to discuss your situation.
Frequently asked questions
What should I do first when my development team leaves?
How quickly can a new team take over a system they did not build?
How much does it cost to hand over a system to a new support team?
What if there is no documentation?
Should I modernise the system or just support it?
Can you take over a system built with any technology?
Related guides
Choosing a Software Development Partner in the Age of AI
How to evaluate software development agencies when AI-augmented delivery is the new baseline. Eight criteria that matter, how to assess AI maturity, and red flags to watch for.
Managed Support vs Hiring: When to Outsource Application Maintenance
Should you hire a developer to maintain your software or use a managed support partner? A practical cost, risk, and capability comparison with AI-augmented economics.