ALTIMI INSIGHTS
Legacy Modernization Playbook
Research-Driven Guide • March 2026

The AI-Powered Legacy Modernization Playbook

How Engineering Teams Are Cutting Migration Timelines in Half Without the Big-Bang Risk

Executive Summary

Legacy technical debt has crossed a critical inflection point. In 2025-2026, enterprise organizations spend an estimated 72% of their IT budgets merely maintaining aging systems[1] — leaving less than 30 cents of every technology dollar for innovation, new features, and competitive differentiation. The global cost of application maintenance now exceeds $1.68 trillion annually[2], and this figure compounds at roughly 15-20% year-over-year for platforms running end-of-life frameworks[3].

Meanwhile, AI-augmented development workflows have matured from experimental novelties into measurable productivity multipliers. Rigorous studies from GitHub, Google, and peer-reviewed academic research demonstrate 30-60% productivity improvements for migration and refactoring tasks specifically[6][8][12] — but only when implemented as structured team workflows with human oversight, not as ad-hoc individual tool usage[9][11].

This playbook synthesizes evidence from over 40 industry studies, academic papers, and enterprise case studies to make the case for a specific approach: phased, frontend-first modernization combined with AI-assisted development workflows. It is the strategy that consistently delivers the lowest risk, fastest time-to-value, and strongest return on investment — and it is the methodology that Altimi's engineering teams apply to every modernization engagement.

72%: IT budget consumed by legacy maintenance
47%: Productivity gain on migration tasks with AI
85%: Success rate for phased migrations

This guide is structured in three sections. Section 1 quantifies the legacy debt crisis across financial, talent, and security dimensions — establishing why the cost of inaction now exceeds the cost of modernization. Section 2 examines the real-world evidence for AI-assisted development productivity, separating rigorous measurements from vendor marketing claims, with special focus on migration and refactoring tasks. Section 3 presents the methodological case for phased, frontend-first modernization with incremental backend extraction, backed by named case studies from companies like Shopify, GitHub, and Spotify. Each section includes data visualizations generated from peer-reviewed sources and industry research published between 2024 and 2026.

Section 1

The Legacy Debt Crisis: Why "Do Nothing" Is No Longer an Option

Legacy systems aren't just a technical inconvenience — they represent a compounding financial, talent, and security liability that grows exponentially once a framework passes its end-of-life date.

The Financial Burden

The numbers are stark. According to synthesized data from Gartner IT Key Metrics, IDC Worldwide Black Book, and Forrester's technical debt research (2024-2025)[1][2][3], enterprise organizations allocate an average of 72% of their total IT budgets to maintaining and operating existing systems. This leaves a mere 28% for building new capabilities, exploring emerging technologies, or responding to competitive pressures. For organizations running end-of-life frameworks like AngularJS (EOL December 2021) or legacy .NET Framework, this ratio skews even further — with some teams reporting 85-90% of engineering capacity consumed by maintenance, patching workarounds, and keeping fragile integrations alive.

The global scale of this problem is staggering. Total worldwide spending on application maintenance and modernization exceeds $1.68 trillion annually[2]. This figure has been growing at approximately 8-12% year-over-year, driven by the compounding nature of technical debt — what researchers term the "technical debt interest rate." Forrester's 2025 analysis models this rate at approximately 18% per year[3] for systems running unsupported frameworks, meaning that the cost of maintaining a legacy platform roughly doubles every four years after its core framework reaches end-of-life.
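The doubling claim follows directly from compound growth: at an 18% annual debt interest rate, the cost index multiplies by 1.18 each year, reaching 1.18^4 ≈ 1.94 after four years. A minimal sketch of that arithmetic (the numbers are the illustrative rates cited above, not measurements):

```typescript
// Compound "technical debt interest": relative cost index after n years
// at annual rate r. Rates are illustrative, per the ~18%/year figure above.
function debtCostIndex(baseCost: number, rate: number, years: number): number {
  return baseCost * Math.pow(1 + rate, years);
}

// At 18%/year, a platform costing 1.0 today costs roughly 1.94x in four years,
// i.e. maintenance spend approximately doubles post-EOL:
const fourYearIndex = debtCostIndex(1.0, 0.18, 4); // ≈ 1.94
```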

The Compounding Cost of Legacy Debt

Relative cost index over years after framework EOL. The crossover at Year 4 marks where cumulative "do nothing" costs exceed a structured modernization investment.

Sources: Gartner IT Key Metrics, Forrester Tech Debt Research, IDC (2024-2025). Curve is synthesized from multiple data points.

The Talent Drain

Beyond the direct financial costs, legacy platforms impose a devastating hidden tax on engineering teams: the talent drain. Aggregate data from StackOverflow Developer Surveys (2023-2025), JetBrains State of Developer Ecosystem reports, and Hired State of Software Engineers[4][18] consistently shows that over 65% of developers actively reject job offers that require maintaining legacy PHP, AngularJS, jQuery-heavy applications, or ColdFusion codebases. This rejection rate climbs above 80% for developers under 30, who overwhelmingly prefer modern frameworks like React, Next.js, and TypeScript-based ecosystems.

The financial consequence is a 25-30% salary premium required to attract and retain engineers willing to work on dying frameworks. Combined with the 45% slower PR cycle times and 60% lower deployment frequency that legacy-heavy organizations report[5], the talent problem creates a vicious cycle: fewer developers willing to work on the legacy stack, which means slower delivery, which means the platform falls further behind market expectations, which makes it even harder to recruit. Organizations running AngularJS frontends paired with older backend frameworks face particularly acute hiring challenges, as these frameworks have shrinking developer communities and minimal new educational content being produced. It is worth noting that not all "legacy" frameworks are equally dead: Meteor, for instance, released a modernized v3.0 in 2024 with native async/await support and modern Node.js compatibility — making an in-place upgrade a viable path alongside frontend modernization, rather than requiring full abandonment.

Developer Time Allocation: Legacy vs. Modern Stacks

How developers spend their time on legacy vs. modern stacks. On legacy platforms, nearly 70% of capacity is consumed by maintenance.

Production Footprint of Aging / EOL Frameworks

Millions of enterprise applications still running on EOL or aging frameworks. The installed base is shrinking far more slowly than most technology leaders assume.

Security and Compliance Exposure

Running end-of-life frameworks is not merely a velocity problem — it is an active security liability. AngularJS alone carries more than 35 known, unpatched high- and medium-severity CVEs in the framework and its legacy ecosystem. With no official patches released post-EOL, organizations must either develop custom patches (expensive and error-prone) or accept the risk. The average time from vulnerability disclosure to active exploitation in unsupported web frameworks dropped to under 4 days in 2024, making the response window dangerously narrow.

Regulatory pressure amplifies the urgency. SOC 2 and ISO 27001 auditors routinely flag systems running unpatched or unsupported software as critical non-conformities. The EU's Digital Operational Resilience Act (DORA), effective January 2025, imposes strict penalties on financial entities for ICT risks stemming from outdated, unsupportable technology architectures. For organizations in regulated industries — financial services, healthcare, government — the compliance dimension alone can justify the modernization investment. Companies like Equifax (2017 breach costing $575M+ in settlements) and Southwest Airlines (December 2022 operational meltdown attributed to 1990s-era crew scheduling systems) serve as stark cautionary tales of what happens when legacy modernization is deferred too long.

The Scale of the Installed Base

The installed base of aging frameworks in production is enormous, and it is shrinking far more slowly than most technology leaders assume. Synthesized web crawl telemetry and analyst estimates for 2025 place legacy .NET (pre-.NET 6) at approximately 8.5 million active enterprise applications, legacy PHP (pre-8.0) at 7.2 million, jQuery as a primary framework (not merely a utility library) at 5 million, and AngularJS — four years past its official end-of-life — still powering an estimated 1.8 million production applications. Knockout.js, Backbone.js, and legacy Ember collectively account for roughly 400,000 production deployments. (Meteor.js is intentionally excluded from this "dead frameworks" count — while Meteor 2.x with synchronous fibers is genuinely legacy, Meteor 3.x is actively maintained and offers a viable upgrade path.) These numbers represent a massive surface area of risk, each application accumulating compounding debt with every month it remains on an unsupported stack.

The retirement rate of these frameworks has been stubbornly slow, in part because the switching costs are perceived as prohibitively high. Organizations that attempted big-bang rewrites and failed often retreat to "maintenance mode," accepting the compounding costs rather than risking another expensive transformation attempt. This creates a paradox: the longer modernization is deferred, the more expensive it becomes, but the more expensive it becomes, the more organizations hesitate to invest. Breaking this cycle requires a fundamentally different approach — one that delivers value incrementally, reduces risk at each step, and leverages modern AI tooling to compress traditionally lengthy migration timelines.

The Tipping Point: When Legacy Debt Becomes a Board-Level Crisis

Research models indicate a system has breached the critical tipping point when any two of these heuristics are true:

Velocity Death: Maintenance consumes >60% of developer hours
Talent Evaporation: Framework EOL >2 years; hiring pipeline shrinks 50%
Security Paralysis: Mean-Time-To-Patch exceeds 30 days
Cost Inversion: 3-year maintenance cost exceeds one-time migration cost
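The "any two of four" rule is simple enough to encode as a checklist. A hypothetical sketch, with thresholds taken from the list above and field names invented purely for illustration:

```typescript
// Hypothetical health snapshot for a legacy system; field names are illustrative.
interface LegacyHealth {
  maintenanceShare: number;        // fraction of developer hours on maintenance
  yearsPastEol: number;            // years since framework end-of-life
  meanTimeToPatchDays: number;     // mean days from disclosure to patch
  threeYearMaintenanceCost: number;
  oneTimeMigrationCost: number;
}

// Tipping point: any two of the four heuristics from the list above hold.
function pastTippingPoint(h: LegacyHealth): boolean {
  const flags = [
    h.maintenanceShare > 0.6,                            // velocity death
    h.yearsPastEol > 2,                                  // talent evaporation
    h.meanTimeToPatchDays > 30,                          // security paralysis
    h.threeYearMaintenanceCost > h.oneTimeMigrationCost, // cost inversion
  ];
  return flags.filter(Boolean).length >= 2;
}
```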
Section 2

AI-Assisted Development: Evidence vs. Hype

AI coding tools promise to revolutionize software development. But what do the rigorous studies actually show — especially for the complex, messy work of legacy migration?

The Major Productivity Studies

The narrative surrounding AI developer productivity has evolved significantly since GitHub's initial 2022 experiment showing developers completing tasks 55% faster with Copilot. That early study[6], while influential, measured a trivial JavaScript HTTP server task with 95 developers — conditions that disproportionately favored autocomplete capabilities and bore little resemblance to enterprise legacy codebases. Since then, a more nuanced picture has emerged.

Longitudinal enterprise telemetry from GitHub's 2024-2025 update[7] tracked actual repository metrics — pull requests, commits, and builds — across randomized enterprise groups, revealing a more modest but significant 26% increase in completed tasks. Google's internal DIDACT methodology[8], applied to a massive 32-to-64-bit integer ID migration across their monorepo, demonstrated that 80% of code modifications were AI-authored, yielding an estimated 50% reduction in total migration time. However, this required proprietary, fine-tuned models trained on deep historical monorepo data — not off-the-shelf tools.

Perhaps the most sobering finding comes from the 2025 METR study[11], a controlled trial of experienced open-source developers working on real issues in mature repositories averaging over 1 million lines of code. Using frontier models (Claude 3.5/3.7 Sonnet via Cursor), developers actually worked 19% slower on complex tasks — despite subjectively perceiving a 20-24% speedup. This perception-reality disconnect underscores how benchmarks built on isolated tasks systematically overestimate AI capabilities in sprawling enterprise environments. The approach to AI enablement matters: ad-hoc usage fails where systematic integration succeeds.

AI Productivity Gains by Development Task Type

Horizontal bars show central estimates; error bars show the range from conservative to optimistic findings. Color indicates evidence confidence level.

Sources: GitHub (2022-2025), Google DIDACT (2024), McKinsey (2025), Science/CSH (2025), METR (2025), GlobalLogic (2025), Experian/AWS (2026).

Migration and Refactoring: Where AI Delivers Real Value

Legacy modernization represents the most economically valuable application of AI in software engineering. Unlike greenfield development, migration requires deep contextual awareness, deterministic transformations, and strict preservation of existing business logic. The evidence shows AI excels at specific migration subtasks: a 2026 case study involving Experian[12] achieved an 80% automation rate across 687,600 lines of .NET code, reducing 7 enterprise application upgrades from 15 sprints to 8 sprints each — a rigorously measured 47% productivity gain. GlobalLogic's AngularJS-to-Angular 15 migration[13] demonstrated 40% reduction in manual effort and 15-20% overall time savings.

For code comprehension — the primary bottleneck where developers spend an estimated 52-70% of their time on legacy codebases — AI tools show transformative potential. The 2026 LegacyCodeBench[14] demonstrated that domain-specialized systems achieve 92% accuracy in extracting behavioral documentation from COBOL code. Enterprise modernization platforms report a 60-80% reduction in manual effort during discovery and comprehension phases. This is where structured engineering practices combined with AI tooling create the greatest leverage.

Measured AI Impact on Migration Tasks

| Task | Scenario | Tool | Impact | Confidence |
| --- | --- | --- | --- | --- |
| Framework Migration | .NET 6 to .NET 8 (Experian, 7 apps) | AWS Transform for .NET | 49 sprints saved; 47% productivity gain | HIGH |
| Framework Migration | AngularJS to Angular 15 (B2B marketplace) | GlobalLogic AI Pipeline | 40% less manual effort; 15-20% time savings | MEDIUM |
| Code Comprehension | COBOL business logic extraction | Hexaview Legacy Insights | 92% accurate documentation generation | HIGH |
| Codemod Generation | 32-to-64-bit integer ID update | Google Rosie / DIDACT | >75% monorepo merge success rate | HIGH |
| Test Generation | Java unit test coverage | Diffblue Cover | 54-69% line coverage efficiency | MEDIUM |

The Failure Modes: Why Human Oversight Is Non-Negotiable

Credibility demands acknowledging AI's severe limitations. A comprehensive 2025 meta-analysis found that while coding speed increased by up to 55%, AI-generated code contained security flaws in roughly 45% of cases when developers did not provide explicit security instructions. Apiiro security research[20] tracked a 10x spike in security vulnerabilities introduced via AI-generated code over six months in 2025. GitClear's longitudinal analysis[21] revealed a 60% collapse in refactoring activity and an eightfold increase in code duplication between 2021 and 2024, suggesting AI tools incentivize appending new code rather than understanding existing architectures.

The "AI Paradox," documented by GitLab's 2025/2026 Global DevSecOps report[17], is equally concerning: while AI accelerates initial code drafting, organizations reported losing an average of 7 hours per team member per week to AI-related inefficiencies and tool sprawl. Research from Faros AI[19] showed teams with high AI adoption merged 98% more pull requests, but their PR review time spiked by 91% — resulting in flat DORA metrics. The conclusion is clear: AI is a force multiplier for well-structured teams, and a productivity trap for undisciplined ones. This is why robust DevOps practices and governance are prerequisites, not afterthoughts.

The Honest Bottom Line

AI-augmented development delivers measurable 30-60% productivity improvements for migration and refactoring tasks — but exclusively when deployed as a systematic, governed team workflow. The "smaller team, faster delivery" vendor narrative is partially valid: effective team capacity increases by 20-30% once coordination costs and review overhead are factored in. The key differentiator is not the AI model, but the engineering team's ability to structure context, enforce review gates, and maintain human oversight at architectural decision points.

Section 3

Phased Migration vs. Big-Bang Rewrites: What the Evidence Says

The temptation to "just rewrite everything" is powerful. But decades of evidence — from Netscape's catastrophic 1997 decision to modern enterprise SaaS migrations — consistently shows that phased approaches win.

The Big-Bang Failure Rate

The Standish Group's CHAOS reports[23] have consistently documented that large, monolithic IT projects — which encompass big-bang rewrites — achieve a success rate of less than 30%. Over 50% are "challenged" (over budget and behind schedule), and roughly 20% fail entirely or are cancelled. Joel Spolsky's landmark 2000 essay "Things You Should Never Do, Part I"[24] coined the industry's most enduring axiom against rewrites, arguing that throwing away code and starting from scratch is "the single worst strategic mistake any software company can make." Martin Fowler estimates rewrites typically take 2-3x longer than original optimistic estimates due to hidden complexity in the legacy system.

The canonical failure is Netscape. In 1997, Netscape decided to rewrite Navigator from scratch rather than refactor the 4.x codebase. The rewrite consumed three years during which no meaningful updates reached users. Microsoft's Internet Explorer captured the market. By the time Netscape 6 launched, the browser war was irreversibly lost. More recently, the UK's NHS National Programme for IT (NPfIT)[27] attempted a massive top-down replacement of localized healthcare systems with a centralized monolith, was officially dismantled in 2011 after spending over £10 billion, and stands as one of history's most expensive IT project failures.

The failure modes are well-documented: the second-system effect (over-engineering the replacement), the moving target problem (the old system keeps evolving while you rewrite), knowledge loss (undocumented business rules embedded in legacy code), team burnout (months of work with zero visible progress), and the feature parity trap (the new system must replicate 100% of old functionality before launch, which is nearly impossible to achieve on schedule).

Value Delivered Over Time: Big-Bang vs. Phased Approach

Left: Big-bang rewrite delivers zero user value for 18 months, with escalating risk. Right: Phased migration delivers incremental value starting month 2-4, with risk capped at manageable levels throughout.

Sources: Standish Group CHAOS Reports, Joel Spolsky, Martin Fowler, Shopify Engineering Blog.

The Strangler Fig Pattern and Frontend-First Modernization

Martin Fowler's Strangler Fig Application pattern[25] (2004) provides the architectural blueprint: instead of replacing a system in one shot, gradually grow a new system around the old one, intercepting requests at the edge and routing them to new components as they're built. Over time, the legacy system "strangles" away as more traffic is handled by the modern stack. This approach has been validated at massive scale by companies like Shopify[26] (migrating from a monolithic Rails application to modular services over multiple years while maintaining 99.99% uptime), GitHub (incrementally replacing their Rails frontend with React), and Spotify (decomposing their monolith into hundreds of microservices over a five-year period).
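At its core, the strangler fig pattern is an edge router: each incoming request is dispatched either to the legacy system or to the new stack, and the routing table grows as modules are migrated. A minimal sketch of that dispatch logic (the path prefixes are placeholders, not from any real deployment):

```typescript
// Strangler-fig edge routing: paths listed here are served by the new stack;
// everything else falls through to the legacy system. Prefixes are placeholders.
const migratedPrefixes = ["/account", "/search"];

type Upstream = "modern" | "legacy";

function routeRequest(path: string): Upstream {
  return migratedPrefixes.some((p) => path === p || path.startsWith(p + "/"))
    ? "modern"
    : "legacy";
}

// As each module ships, its prefix is added to migratedPrefixes, and the
// legacy system gradually handles less and less traffic.
```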

Within the phased approach, starting with the frontend delivers the highest return on effort. The frontend layer is typically less coupled to core business logic than the backend, meaning it can be replaced with a modern React/Next.js application that consumes existing APIs through a thin adapter layer (Backend-for-Frontend pattern). This produces immediate visible value for end users and stakeholders — building trust and political momentum for subsequent phases. A mid-size platform running AngularJS on the frontend with a Meteor/MongoDB backend, for instance, can have its entire user-facing layer rebuilt in React/Next.js within 4-6 months while the backend continues operating unchanged — and the Meteor backend itself can be upgraded to v3 (with modern async/await patterns) in parallel, rather than being replaced entirely. Users experience a modern, responsive interface; the engineering team gains a type-safe, testable codebase; and the organization builds confidence before tackling the more complex backend service extraction.
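The "thin adapter layer" can be as small as a Backend-for-Frontend function that reshapes a legacy API response into the DTO the new React frontend expects. A hedged sketch, with both shapes invented for illustration (the nested profile/emails layout loosely mimics a Meteor/MongoDB user document, but is not any real schema):

```typescript
// Hypothetical legacy response shape (e.g. returned by an old backend method).
interface LegacyUser {
  _id: string;
  profile: { firstName: string; lastName: string };
  emails: { address: string; verified: boolean }[];
}

// The flatter, typed DTO the new frontend consumes.
interface UserDto {
  id: string;
  fullName: string;
  email: string | null;
}

// BFF adapter: the only place in the new stack that knows the legacy shape.
function toUserDto(u: LegacyUser): UserDto {
  const primary = u.emails.find((e) => e.verified) ?? u.emails[0];
  return {
    id: u._id,
    fullName: `${u.profile.firstName} ${u.profile.lastName}`.trim(),
    email: primary ? primary.address : null,
  };
}
```

Because the adapter owns the translation, the legacy shape can later be swapped for a modernized backend response without touching any frontend component.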

The counter-argument — that frontend-first creates a "beautiful facade on a crumbling foundation" — is valid but manageable. The key is that the frontend migration phase includes establishing a clean API layer (REST or GraphQL) that abstracts the legacy backend. This API layer then becomes the interface through which backend services are gradually extracted and modernized. Each backend module is refactored and deployed independently behind the stable API contract, using patterns like Branch by Abstraction, Parallel Run (both old and new implementations run simultaneously, comparing outputs), and Feature Flag Rollout (gradually routing traffic from old to new).

The Discovery Phase: The Most Undervalued Investment

Successful phased migrations share one critical commonality: they invest 4-8 weeks upfront in structured discovery before writing a single line of new code. This discovery phase produces a dependency map of the existing system, extracts and documents undiscovered business rules from "spaghetti code," identifies technical debt that should be abandoned rather than migrated, and validates architectural assumptions through proof-of-concept spikes. The data consistently shows that this investment reduces total project duration by 15-25% and dramatically improves cost predictability — precisely because it surfaces hidden complexity early, before it becomes expensive scope creep mid-project.

AI tools are particularly effective during discovery. Using tools like Cursor, Sourcegraph Cody, or Claude for codebase analysis, teams can index an entire monolith — identifying hidden dependencies between controllers and methods, mapping data flow across modules, and flagging outdated patterns that should be abandoned rather than ported. What previously took a senior engineer weeks of manual code archaeology can now be accomplished in days, with AI handling the tedious dependency mapping while human experts validate the findings and make architectural decisions. This combination of AI-accelerated analysis and human expertise is the foundation of modern AI-enabled engineering workflows.

The Combination: Phased Migration + AI = Optimal Approach

The final piece of the puzzle is recognizing why phased migration is the ideal environment for AI-assisted development. Each migration slice — converting one AngularJS module to React, wrapping one Meteor method in a REST API, generating tests for one business workflow — is a well-scoped, bounded task. These are precisely the tasks where AI tools deliver their strongest measurable gains (40-70% for codemods and test generation, 40-50% for framework conversion). A big-bang rewrite, by contrast, requires holistic system understanding that exceeds current AI context window limitations and triggers the hallucination and quality degradation patterns documented in the METR and Faros AI studies.
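The bounded nature of a migration slice is easy to picture: even a toy codemod that rewrites two AngularJS template idioms into their JSX equivalents shows why such tasks suit automation, whether the transform is AI-generated or hand-written. This toy handles only `ng-click` and `{{...}}` interpolation and is nowhere near a production converter (real migrations use AST-based tooling, not string rewriting):

```typescript
// Toy codemod: rewrite two AngularJS template idioms into JSX-ish syntax.
// Illustration only; a real converter would parse the template into an AST.
function angularTemplateToJsx(template: string): string {
  return template
    .replace(/ng-click="([^"]+)"/g, "onClick={() => $1}")
    .replace(/\{\{\s*([^}]+?)\s*\}\}/g, "{$1}");
}
```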

The teams that execute this combined strategy — phased migration architecture, AI-augmented workflows, human-in-the-loop governance — consistently report delivery timelines 40-50% shorter than traditional migration approaches, with smaller teams and lower total cost. It is not a silver bullet, but it is the most evidence-backed approach available in 2026. To see how this methodology has been applied to real-world modernization projects, explore Altimi's case studies.

Governance and Measuring Progress

Successful migration projects establish clear governance frameworks from day one. The most effective teams track a core set of migration-specific KPIs: percentage of user traffic routed through the new system (the north-star metric), feature parity completion rate, error rate comparison between old and new implementations, deployment frequency on the new stack, and developer velocity metrics including PR cycle time and time-to-merge. Progressive rollout strategies — starting at 1% of traffic, scaling to 5%, 25%, 50%, and finally 100% — provide natural checkpoints where the team can validate behavior, catch regressions, and make go/no-go decisions with real production data rather than assumptions.
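Progressive rollout depends on stable assignment: a given user must land in the same bucket every time, so raising the percentage only ever adds users to the new system. A common sketch hashes a stable user ID into the range 0-99 and compares it against the current rollout percentage (the hash here is a simple FNV-1a variant, chosen for brevity rather than as a recommendation):

```typescript
// Deterministic bucketing: the same userId always maps to the same bucket,
// so raising rolloutPercent from 1 -> 5 -> 25 -> 100 only ever adds users.
function bucketFor(userId: string): number {
  let h = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV-1a 32-bit prime
  }
  return (h >>> 0) % 100; // bucket in 0..99
}

function useNewSystem(userId: string, rolloutPercent: number): boolean {
  return bucketFor(userId) < rolloutPercent;
}
```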

The dual-running period, where both old and new systems handle production traffic simultaneously, is the most operationally complex phase but also where the phased approach's safety advantage is most visible. Techniques like parallel run comparison (routing identical requests to both systems and comparing responses) and shadow mode deployment (the new system processes requests but only the old system's responses are returned to users) allow teams to validate correctness at scale before committing to the cutover. This is fundamentally impossible in a big-bang rewrite, where the first time the new system faces production load is also the highest-stakes moment of the entire project. Teams that combine these governance practices with robust CI/CD pipelines and infrastructure automation consistently achieve smoother transitions and faster rollback capabilities when issues arise.

Key Takeaway

Phased, frontend-first modernization with incremental backend extraction consistently outperforms full rewrites in delivery success rate (~85% vs. ~30%), time-to-first-value (months 2-4 vs. month 18+), risk profile (peak risk 4/10 vs. 10/10), and total cost of ownership — and this advantage is amplified by 30-50% when combined with structured AI-assisted development workflows.

FAQ

Frequently Asked Questions

Common questions from engineering leaders, CTOs, and product owners evaluating legacy modernization strategies for their organizations. These answers draw on the research, data, and real-world case studies presented throughout this playbook, synthesizing the practical implications for technical decision-makers considering their first concrete step toward platform modernization.

How long does a phased migration typically take?
A phased migration for a mid-size platform (50-200K lines of code) typically spans 8-14 months from discovery to substantial completion. The key difference from a big-bang rewrite is that users see improvements starting in month 2-4, not month 18. Phase 1 (discovery and frontend modernization) usually delivers visible value within 3-4 months, while backend refactoring continues incrementally in subsequent phases.
Can AI fully automate a legacy migration?
No. The 2025 METR study demonstrated that experienced developers actually worked 19% slower on complex repository tasks when relying heavily on AI, despite perceiving a 20-24% speedup. AI excels at bounded, well-scoped tasks: converting AngularJS templates to React components, generating test suites, automating syntax upgrades. But architectural decisions, business logic validation, and integration testing require human expertise. The optimal model is 'human-directed, AI-accelerated' with developers acting as directors, delegators, and validators.
What about the database layer? Should we migrate that first?
It depends on constraints. If your database version is approaching end-of-life (e.g., MongoDB 4.x), the upgrade may be a prerequisite. However, database upgrades are generally lower-risk than full application rewrites since modern databases maintain backward compatibility. The recommended approach is: upgrade the database in-place (if needed for security/support), then modernize the frontend (immediate user value), then refactor backend services to use the database more effectively.
How do you measure progress during a phased migration?
Successful migration projects track: percentage of user traffic routed through the new system, feature parity completion rate, error rate comparison (old vs. new), deployment frequency on the new stack, and developer velocity metrics (PR cycle time, time to merge). The most critical leading indicator is 'time-to-first-value' — how quickly end users experience improvements on the new platform.
What team size is needed for AI-augmented modernization?
AI-augmented workflows enable smaller teams to deliver at the pace of traditionally larger ones. A typical Phase 1 team includes: 1 Solution Architect, 1-2 Senior Frontend Developers, 1 Backend/Meteor Developer, 1 QA Engineer, and 1 Project Manager — with an AI Software Engineering Lead coordinating the tooling workflow. This is roughly 40-60% smaller than a traditional migration team, with the AI toolset handling code comprehension, test generation, and boilerplate conversion that would otherwise require additional headcount.
When is a big-bang rewrite actually appropriate?
A full rewrite can succeed under rare, specific conditions: the codebase is small (<30K LOC), the team has deep domain expertise, requirements can be frozen during the rewrite, and the old system can remain in maintenance mode indefinitely. Basecamp's successful version transitions are the canonical example — they built a new product rather than rewriting the old one, and allowed existing users to stay on the legacy version. For most enterprise platforms with active users and evolving requirements, phased migration remains the safer choice.

Ready to Modernize Without the Big-Bang Risk?

Altimi's engineering teams combine phased migration methodology with AI-augmented workflows to deliver measurable results — faster timelines, smaller teams, lower risk. Let's discuss your platform's modernization roadmap.

References

Sources & Citations

[1] Gartner, 'IT Key Metrics Data: IT Spending and Staffing Report,' 2024/2025 Edition.
[2] IDC, 'Worldwide Black Book: IT Spending Forecast,' 2025.
[3] Forrester Research, 'The State of Technical Debt,' 2025.
[4] StackOverflow, 'Developer Survey Results,' 2023-2025 Editions.
[5] DORA Team (Google Cloud), 'Accelerate State of DevOps Report,' 2024.
[6] GitHub Next, 'Research: Quantifying GitHub Copilot's Impact on Developer Productivity and Happiness,' 2022.
[7] GitHub / Microsoft, 'Copilot Enterprise Longitudinal Telemetry Study,' 2024-2025.
[8] Google Research, 'DIDACT: Large-Scale AI-Assisted Code Migrations,' ACM, 2024.
[9] McKinsey & Company, 'The State of AI in 2025,' Global Survey Report, 2025.
[10] Complexity Science Hub (CSH), 'Generative AI and Software Development Productivity,' Science, 2025.
[11] METR, 'Measuring the Impact of AI on Experienced Open-Source Developer Productivity,' 2025.
[12] AWS, 'Experian .NET Modernization Case Study: AWS Transform for .NET,' 2026.
[13] GlobalLogic, 'AI-Assisted AngularJS to Angular 15 Migration: B2B Marketplace Case Study,' 2025.
[14] Hexaview Technologies & Kalmantic AI, 'LegacyCodeBench: Evaluating AI Comprehension of COBOL,' 2026.
[15] Diffblue, 'Automated Unit Test Generation Benchmark: Deterministic vs. LLM-Based Approaches,' 2025.
[16] Google, 'Rosie: Large-Scale Change Infrastructure for Monorepo Migrations,' internal report cited in DIDACT, 2024.
[17] GitLab, 'Global DevSecOps Report 2025/2026,' surveying 3,200+ professionals.
[18] JetBrains, 'State of Developer Ecosystem 2025,' survey of 24,534 developers.
[19] Faros AI, 'Developer Productivity in the Age of AI: 10,000 Developer Telemetry Study,' 2025.
[20] Apiiro Security Research, 'AI-Generated Code Vulnerability Trends,' 2025.
[21] GitClear, 'Coding on Copilot: Code Quality Analysis 2021-2024,' longitudinal study, 2025.
[22] DX (formerly GetDX), 'Q4 2025 Developer Experience Impact Report,' telemetry from 135,000 developers.
[23] Standish Group, 'CHAOS Report: Project Success and Failure Trends,' multiple editions (2015-2024).
[24] Joel Spolsky, 'Things You Should Never Do, Part I,' Joel on Software, April 2000.
[25] Martin Fowler, 'StranglerFigApplication,' martinfowler.com, 2004 (updated 2019).
[26] Shopify Engineering, 'Deconstructing the Monolith: Designing Software that Maximizes Developer Productivity,' 2019-2023 blog series.
[27] UK National Audit Office, 'The National Programme for IT in the NHS: An Update,' Report HC 888, 2011.
[28] Gartner, 'Predicts 2026: AI Agents Will Transform Enterprise Application Development,' 2025.