Legacy .NET Application Assessment: A Scoring Framework for Modernisation Readiness
Before committing budget to a .NET modernisation programme, you need an honest assessment of what you are working with. This framework scores legacy .NET applications across six dimensions to determine modernisation readiness, prioritise components, and estimate effort.
- Score each application across six dimensions: codebase health, dependency risk, test coverage, architectural complexity, business alignment, and team capability.
- Each dimension scores 1-5. Applications scoring 18+ out of 30 are good modernisation candidates; 24+ are strong candidates.
- Low scores are not failures. They indicate that a different strategy (rebuild, replace, retire) may be more appropriate.
- AI-assisted analysis compresses the technical scoring from weeks to days.
- The framework works for individual applications and for prioritising across a multi-application estate.
Most modernisation projects fail at the assessment stage, not the migration stage
The pattern is predictable. A team commits to modernising a legacy .NET application based on a rough estimate. Three weeks in, they discover the application has 40 undocumented WCF endpoints, a custom ORM built in 2009, and zero integration tests. The timeline doubles. The budget request goes back to the board. Confidence evaporates.
A structured assessment prevents this. It is not a planning document or a project proposal. It is a technical and strategic evaluation that answers one question: is this application a good candidate for modernisation, and if so, what does the migration actually involve?
This framework is what we use internally. It is not theoretical. It is the output of assessing dozens of .NET estates and learning which indicators predict a successful modernisation and which predict an expensive surprise.
The six dimensions
Each dimension scores from 1 (poor) to 5 (excellent). The total score out of 30 determines the overall modernisation readiness.
Dimension 1: Codebase health (1-5)
What you are measuring: code structure, naming consistency, separation of concerns, and adherence to established patterns.
| Score | Indicator |
|---|---|
| 5 | Clean separation of concerns, consistent patterns (MVC, repository, service layer), minimal code duplication, clear naming conventions |
| 4 | Mostly well-structured with some inconsistency. Business logic occasionally leaks into controllers or data access code |
| 3 | Mixed quality. Some areas are well-structured, others are tangled. Multiple patterns used inconsistently |
| 2 | Significant structural issues. Business logic spread across layers, high coupling, inconsistent patterns |
| 1 | No discernible architecture. God classes, circular dependencies, copy-paste duplication throughout |
How AI helps: Claude Code can analyse a codebase and produce a structural report in hours. It identifies pattern usage, coupling metrics, duplication, and naming inconsistencies. This replaces days of manual code review.
Dimension 2: Dependency risk (1-5)
What you are measuring: how many dependencies are unsupported, unmaintained, or unavailable on .NET 10.
| Score | Indicator |
|---|---|
| 5 | All NuGet packages have .NET 10 equivalents. No Windows-only dependencies. No custom native interop |
| 4 | Most packages have equivalents. 1-2 packages need replacement with well-known alternatives |
| 3 | Several packages need replacement. Some Windows-only dependencies that need workarounds |
| 2 | Significant dependency challenges. Custom native interop, abandoned packages, or deep dependency on Windows-only APIs |
| 1 | Heavy reliance on unsupported frameworks (WCF, .NET Remoting, Web Forms) with no clear migration path for custom components |
Tool support: Install the .NET Upgrade Assistant with `dotnet tool install -g upgrade-assistant`, then run its analysis against each project to produce an automated dependency report. Cross-reference the output with Claude Code's analysis for a complete picture.
Dimension 3: Test coverage (1-5)
What you are measuring: the extent and quality of the existing test suite.
| Score | Indicator |
|---|---|
| 5 | 70%+ code coverage. Unit tests, integration tests, and end-to-end tests. Tests are reliable and run in CI |
| 4 | 50-70% coverage. Good unit tests for core business logic. Some integration tests. Tests are mostly reliable |
| 3 | 30-50% coverage. Tests exist but are patchy. Some tests are flaky or disabled. No integration tests |
| 2 | Under 30% coverage. Tests exist for some components but provide low confidence in correctness |
| 1 | No meaningful test coverage. Manual testing only |
Why this matters for migration: Test coverage directly determines how confidently you can verify that migrated code behaves identically to the original. Low test coverage means more manual verification, longer timelines, and higher risk of regression. Consider investing in test coverage before starting migration if the score is below 3.
Dimension 4: Architectural complexity (1-5)
What you are measuring: how complex the migration will be from an architectural perspective. Lower complexity scores higher.
| Score | Indicator |
|---|---|
| 5 | Simple web application (MVC or API). Single database. Standard authentication. No background processing |
| 4 | Web application with some complexity: background jobs, caching, multiple data stores, or moderate WCF usage |
| 3 | Moderate complexity: multiple services, WCF with custom bindings, Service Fabric hosting, or significant Windows service components |
| 2 | High complexity: distributed transactions, custom message queuing, heavy .NET Remoting, or COM interop |
| 1 | Extreme complexity: custom runtime hosting, deep Win32 interop, real-time systems, or undocumented proprietary protocols |
Dimension 5: Business alignment (1-5)
What you are measuring: how well the modernisation aligns with business priorities.
| Score | Indicator |
|---|---|
| 5 | Modernisation is directly blocking a revenue-generating initiative. Executive sponsorship is strong. Budget is allocated |
| 4 | Modernisation supports a strategic priority (compliance, cost reduction, talent retention). Sponsorship exists |
| 3 | Modernisation is agreed to be necessary but is not tied to a specific initiative. Budget is available but not allocated |
| 2 | Modernisation is acknowledged but competes with other priorities. No specific sponsorship |
| 1 | No business case. The system works and nobody is asking for change |
Why this matters: Technical readiness without business alignment leads to stalled projects. A score of 1-2 here means the project will likely lose funding or priority mid-stream, regardless of how well the technical migration goes.
Dimension 6: Team capability (1-5)
What you are measuring: whether the team has (or can access) the skills needed for migration.
| Score | Indicator |
|---|---|
| 5 | Team has .NET 10 experience, CI/CD maturity, and migration experience. Familiar with AI-assisted development tooling |
| 4 | Team knows .NET well and has some modern .NET experience. Comfortable with CI/CD. Open to AI tooling |
| 3 | Team is .NET Framework-experienced but has not worked with .NET 10. CI/CD is basic or manual. No AI tooling experience |
| 2 | Team is small, stretched, or partially available. Limited modern .NET exposure. Manual deployment processes |
| 1 | Original development team is gone. No institutional knowledge of the system. Maintenance-only team |
Bridging the gap: Low team capability scores are not blockers if you bring in external expertise. A delivery partner with .NET modernisation experience (and AI-assisted development capability) can execute the migration while upskilling your internal team. This is a common pattern and often the most efficient one.
Interpreting the total score
| Total Score (out of 30) | Recommendation |
|---|---|
| 24-30 | Strong modernisation candidate. Proceed with confidence. AI-assisted migration will deliver the most value here |
| 18-23 | Good candidate with caveats. Address the lowest-scoring dimensions before or during migration |
| 12-17 | Mixed signals. Conduct a deeper assessment. Consider whether targeted modernisation (migrating specific components rather than the whole application) is more appropriate |
| 6-11 | Weak candidate for modernisation. Evaluate rebuild, replace, or retire as alternatives. See our Modernise, Rebuild, or Replace guide |
Critical rule: A score of 1 on business alignment (Dimension 5) overrides the total. Without business alignment, do not start a modernisation programme regardless of technical readiness.
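The interpretation table and the critical override rule can be sketched as a small function. This is illustrative only: the thresholds and the business-alignment override come directly from the table above, while the dictionary keys and function name are invented for the example.

```python
def interpret(scores: dict[str, int]) -> str:
    """Map six 1-5 dimension scores to a modernisation recommendation.

    Expected keys (illustrative): codebase_health, dependency_risk,
    test_coverage, architectural_complexity, business_alignment,
    team_capability.
    """
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each dimension scores 1-5")

    # Critical rule: business alignment of 1 overrides the total score.
    if scores["business_alignment"] == 1:
        return "Do not start: no business alignment"

    total = sum(scores.values())
    if total >= 24:
        return "Strong modernisation candidate"
    if total >= 18:
        return "Good candidate with caveats"
    if total >= 12:
        return "Mixed signals: deeper assessment needed"
    return "Weak candidate: evaluate rebuild, replace, or retire"


# The insurance example in the next section totals 18:
example = {
    "codebase_health": 3, "dependency_risk": 3, "test_coverage": 2,
    "architectural_complexity": 3, "business_alignment": 4,
    "team_capability": 3,
}
print(interpret(example))  # Good candidate with caveats
```

Note that an application scoring 26 out of 30 still returns "Do not start" if business alignment is 1; the override deliberately sits before the total is computed.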
Worked example: scoring a real application
Consider a mid-size insurance company with a .NET Framework 4.6.2 policy administration system:
| Dimension | Score | Reasoning |
|---|---|---|
| Codebase health | 3 | MVC frontend is well-structured. Service layer exists but business logic leaks into controllers in some areas. Repository pattern used inconsistently |
| Dependency risk | 3 | Heavy use of WCF (12 service contracts). EF6 data access. Most NuGet packages have .NET 10 equivalents. Some custom XML serialisation |
| Test coverage | 2 | Unit tests exist for the service layer (~40% coverage there) but no integration tests and no frontend tests. Tests are not in CI |
| Architectural complexity | 3 | WCF services add complexity. SQL Server stored procedures handle some business logic. Windows service for background jobs |
| Business alignment | 4 | Compliance deadline requires cloud hosting within 12 months. Budget allocated. CTO is the sponsor |
| Team capability | 3 | Strong .NET developers but no .NET 10 experience. Manual deployments. No AI tooling experience |
| Total | 18 | Good candidate with caveats |
This application scores 18: a good candidate with caveats. The action plan would be:
- Invest in test coverage before migration (raise Dimension 3 from 2 to at least 3)
- Bring in external migration expertise to bridge the team capability gap (Dimension 6)
- Start with language migration (Pillar 1) and WCF-to-gRPC conversion for the service layer
- Use the compliance deadline as the forcing function for business alignment
Prioritising across a multi-application estate
When assessing multiple applications, score each one independently and then sort by a simple priority formula:
Priority = Total Score x Business Alignment Score
This weights applications that are both technically ready and business-critical. An application scoring 22 total with a business alignment of 5 (priority: 110) should migrate before an application scoring 25 total with a business alignment of 2 (priority: 50).
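A minimal sketch of this prioritisation, using the formula above (the application names and scores are invented for illustration):

```python
# Each tuple: (application name, total score out of 30, business alignment 1-5)
estate = [
    ("Claims portal", 22, 5),
    ("Reporting service", 25, 2),
    ("Policy admin", 18, 4),
]

# Priority = Total Score x Business Alignment Score
prioritised = sorted(estate, key=lambda app: app[1] * app[2], reverse=True)

for name, total, alignment in prioritised:
    print(f"{name}: priority {total * alignment}")
# Claims portal: priority 110
# Policy admin: priority 72
# Reporting service: priority 50
```

The technically strongest application (25/30) ranks last here because its weak business alignment makes it the most likely to stall mid-migration.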
What we have seen in practice
[CLIENT EXAMPLE: Government department with 8 .NET Framework applications ranging from 50k to 400k LOC. Assessment scored each application independently. Three scored 20+ and were migrated in parallel. Two scored 12-15 and were deferred for deeper analysis. Two scored below 10 and were recommended for replacement. One was retired entirely. The structured assessment prevented a “migrate everything” approach that would have wasted budget on applications better served by other strategies.]
[CLIENT EXAMPLE: Retail company with a single 280k LOC .NET Framework 4.7.2 e-commerce platform. Initial assessment scored 16 overall, with the lowest scores in test coverage (1) and team capability (2). We recommended a 4-week test-writing sprint before starting migration. After improving test coverage to ~55%, the reassessed score was 21 and migration proceeded successfully over 10 weeks.]
Running your own assessment
You can apply this framework internally. The technical dimensions (1-4) can be partially automated using the .NET Upgrade Assistant and Claude Code. The strategic dimensions (5-6) require honest internal evaluation.
For a more thorough assessment with external objectivity and migration-specific expertise, book a free Legacy .NET Assessment consultation. We will apply this framework to your estate and produce a detailed migration plan with timelines and cost estimates.
This guide is part of our .NET modernisation playbook. For the business case behind modernisation investment, see The Real Cost of Legacy .NET.
Frequently asked questions
How long does a legacy .NET assessment take?
AI-assisted analysis compresses the technical scoring (Dimensions 1-4) from weeks to days. The strategic dimensions (5-6) depend on stakeholder availability rather than tooling.
Can we do the assessment ourselves?
Yes. The technical dimensions can be partially automated with the .NET Upgrade Assistant and Claude Code; the strategic dimensions require honest internal evaluation. An external assessment adds objectivity and migration-specific expertise.
What if our application scores poorly on every dimension?
A low score is not a failure. It indicates that a different strategy (rebuild, replace, or retire) is likely more appropriate than modernisation. See the Modernise, Rebuild, or Replace guide.
Should we assess every application in our estate?
Yes, but score each application independently and prioritise using Priority = Total Score x Business Alignment Score. This surfaces the applications that are both technically ready and business-critical, and flags those better served by replacement or retirement.
How does AI-assisted analysis work in the assessment phase?
Claude Code analyses the codebase to identify pattern usage, coupling, duplication, and naming inconsistencies, while the .NET Upgrade Assistant produces an automated dependency analysis. Together they replace days of manual review.
Related guides
Modernise, Rebuild, or Replace: A Decision Framework for Legacy Systems
Six modernisation strategies explained in plain language. Decision criteria, cost and risk comparisons, and how AI-augmented delivery changes which options are viable.
Signs Your Legacy System Is Costing You More Than You Think
Legacy systems hide their true costs in maintenance burden, talent risk, security exposure, and missed opportunities. Eight warning signs and how AI-augmented analysis reveals the full picture.