
Legacy .NET Application Assessment: A Scoring Framework for Modernisation Readiness

9 min read Matt Hammond

Before committing budget to a .NET modernisation programme, you need an honest assessment of what you are working with. This framework scores legacy .NET applications across six dimensions to determine modernisation readiness, prioritise components, and estimate effort.

  • Score each application across six dimensions: codebase health, dependency risk, test coverage, architectural complexity, business alignment, and team capability.
  • Each dimension scores 1-5. Applications scoring 18+ out of 30 are good modernisation candidates; those scoring 24+ are strong candidates.
  • Low scores are not failures. They indicate that a different strategy (rebuild, replace, retire) may be more appropriate.
  • AI-assisted analysis compresses the technical scoring from weeks to days.
  • The framework works for individual applications and for prioritising across a multi-application estate.

Most modernisation projects fail at the assessment stage, not the migration stage

The pattern is predictable. A team commits to modernising a legacy .NET application based on a rough estimate. Three weeks in, they discover the application has 40 undocumented WCF endpoints, a custom ORM built in 2009, and zero integration tests. The timeline doubles. The budget request goes back to the board. Confidence evaporates.

A structured assessment prevents this. It is not a planning document or a project proposal. It is a technical and strategic evaluation that answers one question: is this application a good candidate for modernisation, and if so, what does the migration actually involve?

This framework is what we use internally. It is not theoretical. It is the output of assessing dozens of .NET estates and learning which indicators predict a successful modernisation and which predict an expensive surprise.

The six dimensions

Each dimension scores from 1 (poor) to 5 (excellent). The total score out of 30 determines the overall modernisation readiness.
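The scoring mechanics can be sketched in a few lines of Python. This is an illustrative helper, not published tooling: the dimension keys mirror the six dimensions listed above, and the function name is our own.

```python
# The six dimensions from the framework, each scored 1 (poor) to 5 (excellent).
DIMENSIONS = (
    "codebase_health",
    "dependency_risk",
    "test_coverage",
    "architectural_complexity",
    "business_alignment",
    "team_capability",
)

def total_score(scores: dict) -> int:
    """Validate that every dimension is scored 1-5 and return the total out of 30."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    for name in DIMENSIONS:
        if not 1 <= scores[name] <= 5:
            raise ValueError(f"{name} must score 1-5, got {scores[name]}")
    return sum(scores[name] for name in DIMENSIONS)
```

An application scoring 3 on every dimension lands at 18 out of 30, the bottom of the "good candidate with caveats" band.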

Dimension 1: Codebase health (1-5)

What you are measuring: code structure, naming consistency, separation of concerns, and adherence to established patterns.

| Score | Indicator |
|-------|-----------|
| 5 | Clean separation of concerns, consistent patterns (MVC, repository, service layer), minimal code duplication, clear naming conventions |
| 4 | Mostly well-structured with some inconsistency. Business logic occasionally leaks into controllers or data access code |
| 3 | Mixed quality. Some areas are well-structured, others are tangled. Multiple patterns used inconsistently |
| 2 | Significant structural issues. Business logic spread across layers, high coupling, inconsistent patterns |
| 1 | No discernible architecture. God classes, circular dependencies, copy-paste duplication throughout |

How AI helps: Claude Code can analyse a codebase and produce a structural report in hours. It identifies pattern usage, coupling metrics, duplication, and naming inconsistencies. This replaces days of manual code review.

Dimension 2: Dependency risk (1-5)

What you are measuring: how many dependencies are unsupported, unmaintained, or unavailable on .NET 10.

| Score | Indicator |
|-------|-----------|
| 5 | All NuGet packages have .NET 10 equivalents. No Windows-only dependencies. No custom native interop |
| 4 | Most packages have equivalents. 1-2 packages need replacement with well-known alternatives |
| 3 | Several packages need replacement. Some Windows-only dependencies that need workarounds |
| 2 | Significant dependency challenges. Custom native interop, abandoned packages, or deep dependency on Windows-only APIs |
| 1 | Heavy reliance on unsupported frameworks (WCF, .NET Remoting, Web Forms) with no clear migration path for custom components |

Tool support: Run `dotnet tool install -g upgrade-assistant` and use .NET Upgrade Assistant to produce an automated dependency analysis. Cross-reference with Claude Code's analysis for a complete picture.

Dimension 3: Test coverage (1-5)

What you are measuring: the extent and quality of the existing test suite.

| Score | Indicator |
|-------|-----------|
| 5 | 70%+ code coverage. Unit tests, integration tests, and end-to-end tests. Tests are reliable and run in CI |
| 4 | 50-70% coverage. Good unit tests for core business logic. Some integration tests. Tests are mostly reliable |
| 3 | 30-50% coverage. Tests exist but are patchy. Some tests are flaky or disabled. No integration tests |
| 2 | Under 30% coverage. Tests exist for some components but provide low confidence in correctness |
| 1 | No meaningful test coverage. Manual testing only |

Why this matters for migration: Test coverage directly determines how confidently you can verify that migrated code behaves identically to the original. Low test coverage means more manual verification, longer timelines, and higher risk of regression. Consider investing in test coverage before starting migration if the score is below 3.

Dimension 4: Architectural complexity (1-5)

What you are measuring: how complex the migration will be from an architectural perspective. Lower complexity scores higher.

| Score | Indicator |
|-------|-----------|
| 5 | Simple web application (MVC or API). Single database. Standard authentication. No background processing |
| 4 | Web application with some complexity: background jobs, caching, multiple data stores, or moderate WCF usage |
| 3 | Moderate complexity: multiple services, WCF with custom bindings, Service Fabric hosting, or significant Windows service components |
| 2 | High complexity: distributed transactions, custom message queuing, heavy .NET Remoting, or COM interop |
| 1 | Extreme complexity: custom runtime hosting, deep Win32 interop, real-time systems, or undocumented proprietary protocols |

Dimension 5: Business alignment (1-5)

What you are measuring: how well the modernisation aligns with business priorities.

| Score | Indicator |
|-------|-----------|
| 5 | Modernisation is directly blocking a revenue-generating initiative. Executive sponsorship is strong. Budget is allocated |
| 4 | Modernisation supports a strategic priority (compliance, cost reduction, talent retention). Sponsorship exists |
| 3 | Modernisation is agreed to be necessary but is not tied to a specific initiative. Budget is available but not allocated |
| 2 | Modernisation is acknowledged but competes with other priorities. No specific sponsorship |
| 1 | No business case. The system works and nobody is asking for change |

Why this matters: Technical readiness without business alignment leads to stalled projects. A score of 1-2 here means the project will likely lose funding or priority mid-stream, regardless of how well the technical migration goes.

Dimension 6: Team capability (1-5)

What you are measuring: whether the team has (or can access) the skills needed for migration.

| Score | Indicator |
|-------|-----------|
| 5 | Team has .NET 10 experience, CI/CD maturity, and migration experience. Familiar with AI-assisted development tooling |
| 4 | Team knows .NET well and has some modern .NET experience. Comfortable with CI/CD. Open to AI tooling |
| 3 | Team is .NET Framework-experienced but has not worked with .NET 10. CI/CD is basic or manual. No AI tooling experience |
| 2 | Team is small, stretched, or partially available. Limited modern .NET exposure. Manual deployment processes |
| 1 | Original development team is gone. No institutional knowledge of the system. Maintenance-only team |

Bridging the gap: Low team capability scores are not blockers if you bring in external expertise. A delivery partner with .NET modernisation experience (and AI-assisted development capability) can execute the migration while upskilling your internal team. This is a common pattern and often the most efficient one.

Interpreting the total score

| Total score (out of 30) | Recommendation |
|-------------------------|----------------|
| 24-30 | Strong modernisation candidate. Proceed with confidence. AI-assisted migration will deliver the most value here |
| 18-23 | Good candidate with caveats. Address the lowest-scoring dimensions before or during migration |
| 12-17 | Mixed signals. Conduct a deeper assessment. Consider whether targeted modernisation (migrating specific components rather than the whole application) is more appropriate |
| 6-11 | Weak candidate for modernisation. Evaluate rebuild, replace, or retire as alternatives. See our Modernise, Rebuild, or Replace guide |

Critical rule: A score of 1 on business alignment (Dimension 5) overrides the total. Without business alignment, do not start a modernisation programme regardless of technical readiness.
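The interpretation bands and the business-alignment override reduce to a small decision function. A minimal sketch (the function name and return strings are illustrative, not part of any tooling):

```python
def recommendation(total: int, business_alignment: int) -> str:
    """Map a /30 total to a recommendation band, applying the override rule:
    a business-alignment score of 1 blocks modernisation regardless of total."""
    if business_alignment == 1:
        return "Do not start: no business alignment, regardless of technical readiness"
    if total >= 24:
        return "Strong modernisation candidate"
    if total >= 18:
        return "Good candidate with caveats"
    if total >= 12:
        return "Mixed signals: deeper assessment needed"
    return "Weak candidate: evaluate rebuild, replace, or retire"
```

Note the order of the checks: the override is evaluated first, so even a technically excellent application (say, 28/30) with no business case is rejected.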

Worked example: scoring a real application

Consider a mid-size insurance company with a .NET Framework 4.6.2 policy administration system:

| Dimension | Score | Reasoning |
|-----------|-------|-----------|
| Codebase health | 3 | MVC frontend is well-structured. Service layer exists but business logic leaks into controllers in some areas. Repository pattern used inconsistently |
| Dependency risk | 3 | Heavy use of WCF (12 service contracts). EF6 data access. Most NuGet packages have .NET 10 equivalents. Some custom XML serialisation |
| Test coverage | 2 | Unit tests exist for the service layer (~40% coverage there) but no integration tests and no frontend tests. Tests are not in CI |
| Architectural complexity | 3 | WCF services add complexity. SQL Server stored procedures handle some business logic. Windows service for background jobs |
| Business alignment | 4 | Compliance deadline requires cloud hosting within 12 months. Budget allocated. CTO is the sponsor |
| Team capability | 3 | Strong .NET developers but no .NET 10 experience. Manual deployments. No AI tooling experience |
| Total | 18 | Good candidate with caveats |

This application scores 18: a good candidate with caveats. The action plan would be:

  1. Invest in test coverage before migration (raise Dimension 3 from 2 to at least 3)
  2. Bring in external migration expertise to bridge the team capability gap (Dimension 6)
  3. Start with language migration (Pillar 1) and WCF-to-gRPC conversion for the service layer
  4. Use the compliance deadline as the forcing function for business alignment

Prioritising across a multi-application estate

When assessing multiple applications, score each one independently and then sort by a simple priority formula:

Priority = Total Score x Business Alignment Score

This weights applications that are both technically ready and business-critical. An application scoring 22 total with a business alignment of 5 (priority: 110) should migrate before an application scoring 25 total with a business alignment of 2 (priority: 50).
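The estate ranking can be sketched in a few lines. The application names and scores below are invented for illustration; only the formula comes from the framework:

```python
def priority(total_score: int, business_alignment: int) -> int:
    """Priority = total score x business alignment score."""
    return total_score * business_alignment

# Hypothetical estate: (name, total score /30, business alignment /5)
estate = [
    ("policy-admin", 22, 5),
    ("claims-portal", 25, 2),
    ("reporting", 18, 3),
]

# Rank applications by descending priority.
ranked = sorted(estate, key=lambda app: priority(app[1], app[2]), reverse=True)
```

Here policy-admin ranks first (22 x 5 = 110) despite claims-portal's higher raw total (25 x 2 = 50): technical readiness without business pull loses.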

What we have seen in practice

A government department held 8 .NET Framework applications ranging from 50k to 400k LOC. The assessment scored each application independently. Three scored 20+ and were migrated in parallel. Two scored 12-15 and were deferred for deeper analysis. Two scored below 10 and were recommended for replacement, and one was retired entirely. The structured assessment prevented a "migrate everything" approach that would have wasted budget on applications better served by other strategies.

A retail company ran a single 280k LOC .NET Framework 4.7.2 e-commerce platform. The initial assessment scored 16 overall, with the lowest scores in test coverage (1) and team capability (2). We recommended a 4-week test-writing sprint before starting migration. After improving test coverage to ~55%, the reassessed score was 21 and migration proceeded successfully over 10 weeks.

Running your own assessment

You can apply this framework internally. The technical dimensions (1-4) can be partially automated using the .NET Upgrade Assistant and Claude Code. The strategic dimensions (5-6) require honest internal evaluation.

For a more thorough assessment with external objectivity and migration-specific expertise, book a free Legacy .NET Assessment consultation. We will apply this framework to your estate and produce a detailed migration plan with timelines and cost estimates.

This guide is part of our .NET modernisation playbook. For the business case behind modernisation investment, see The Real Cost of Legacy .NET.

Frequently asked questions

How long does a legacy .NET assessment take?
A structured assessment typically takes 2-3 weeks. AI-assisted codebase analysis compresses the technical analysis from weeks to days. The remaining time is spent on business context gathering, stakeholder interviews, and strategic planning.
Can we do the assessment ourselves?
You can apply this scoring framework internally. The technical dimensions (codebase health, dependency risk, test coverage) can be measured with tooling. The strategic dimensions (business alignment, team capability) require honest internal evaluation. External assessment adds objectivity and migration-specific expertise.
What if our application scores poorly on every dimension?
A uniformly low score often indicates that a rebuild is more appropriate than modernisation. The framework helps you reach that conclusion with evidence rather than gut feeling. See our Modernise, Rebuild, or Replace guide for the full decision framework.
Should we assess every application in our estate?
Start with the applications that are causing the most pain or blocking the most important initiatives. Assess 2-3 applications first, then use those assessments to calibrate your approach before tackling the full estate.
How does AI-assisted analysis work in the assessment phase?
Claude Code reads the codebase and produces structured reports on dependency trees, API usage patterns, unsupported APIs, test coverage gaps, and code complexity metrics. This replaces weeks of manual code review with hours of automated analysis, reviewed and interpreted by a senior architect.

Ready to transform your software?

Let's talk about your project. Contact us for a free consultation and see how we can deliver a business-critical solution at startup speed.