Legacy Modernization Discovery: AI Across Code, Tickets, and Runbooks
Engineering
Apr 2, 2026
8 min read

Everyone talks about AI-generated code. Fewer teams use AI systematically for discovery—the slow, expensive phase where architects guess what the system actually does in production.

Why this is different from Copilot

You need cross-artifact reasoning: source code, configuration, deploy scripts, ticketing history, and on-call notes. The output is not code; it is a decision-grade map that tells you where modernization investment will pay off.

How to solve it

1. Build a unified index. Ingest repositories with commit metadata, open/closed tickets linked to components, and runbooks from Confluence or Wikis. Tag by service boundary where known.

2. Generate service narratives. For each module, produce: stated purpose, inbound/outbound dependencies, data stores touched, known failure modes from incidents, and "hot spots" by change frequency.
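A minimal prompt template for this step might look like the following; the template text and helper name are assumptions, and the evidence sections would be filled from the index built earlier.

```python
NARRATIVE_TEMPLATE = """\
Module: {module}

Using ONLY the evidence below, produce a service narrative covering:
1. Stated purpose (from code and runbooks)
2. Inbound/outbound dependencies
3. Data stores touched
4. Known failure modes (from linked incidents)
5. Hot spots by change frequency

## Code summary
{code_summary}

## Incident history
{incidents}

## Change frequency (commits per file, last 12 months)
{churn}
"""

def render_narrative_prompt(module: str, code_summary: str,
                            incidents: str, churn: str) -> str:
    """Assemble one narrative prompt per module from indexed evidence."""
    return NARRATIVE_TEMPLATE.format(
        module=module, code_summary=code_summary,
        incidents=incidents, churn=churn)
```

The "ONLY the evidence below" constraint is deliberate: it pushes the model to cite what the index actually contains rather than invent plausible-sounding architecture.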

3. Propose strangler seams. Ask models to suggest boundaries where coupling is lowest and business capability is cohesive—then have senior engineers validate.
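"Lowest coupling" can be made concrete enough for engineers to argue about. A toy scoring function, assuming a dependency graph as an adjacency dict: the fraction of a candidate group's edges that stay inside the group (higher means a cleaner seam). This is a sketch for ranking AI-proposed boundaries, not a substitute for senior review.

```python
def seam_score(graph: dict[str, list[str]], group: set[str]) -> float:
    """Score a candidate strangler boundary: internal edges / all edges
    touching the group. 1.0 = fully self-contained, 0.0 = pure pass-through."""
    internal = external = 0
    for src, dsts in graph.items():
        for dst in dsts:
            if src in group and dst in group:
                internal += 1
            elif (src in group) != (dst in group):  # crosses the boundary
                external += 1
    total = internal + external
    return internal / total if total else 0.0
```

Scoring several candidate groups and sorting by `seam_score` gives reviewers a shortlist instead of a blank whiteboard.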

4. Quantify risk. Combine static analysis signals (cyclomatic complexity, test coverage) with dynamic hints from logs if you can feed them safely.
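The combination can be as simple as a normalized weighted sum. The weights and caps below are placeholders to be calibrated against your own incident history, not recommended values.

```python
def risk_score(complexity: float, coverage: float, churn: float,
               max_complexity: float = 50, max_churn: float = 100) -> float:
    """Blend static signals into one comparable number per module.
    complexity: cyclomatic complexity; coverage: test coverage in [0, 1];
    churn: commits over the analysis window. Weights are illustrative."""
    c = min(complexity / max_complexity, 1.0)   # more complex -> riskier
    u = 1.0 - coverage                          # less tested  -> riskier
    h = min(churn / max_churn, 1.0)             # more churn   -> riskier
    return 0.4 * c + 0.35 * u + 0.25 * h
```

Even a crude score like this is useful for triage: it turns "which module scares us most?" into a sortable column rather than a hallway debate.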

5. Maintain a living backlog. Treat AI-generated maps as hypotheses with owner assignments and expiry—systems drift weekly.
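Treating map entries as expiring hypotheses can be encoded directly. A sketch, with invented names (`MapHypothesis`, `is_stale`): each claim carries an owner and a time-to-live, after which it must be re-verified or dropped.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class MapHypothesis:
    claim: str           # e.g. "billing calls ledger only via REST"
    owner: str           # who re-verifies this claim
    created: date
    ttl_days: int = 30   # systems drift; default expiry is illustrative

    def is_stale(self, today: date) -> bool:
        """Past its expiry, the claim is no longer decision-grade."""
        return today > self.created + timedelta(days=self.ttl_days)
```

A weekly job that lists stale hypotheses per owner is usually enough to keep the map honest.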

Pitfalls

- Scanning code without production context: you will beautify the wrong thing.
- Exposing secrets or PII in prompts: sanitize inputs and use private endpoints.
- Trusting dependency graphs built from imports alone when runtime calls differ.
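The sanitization pitfall is the easiest to guard against mechanically. A minimal redaction pass before any text reaches a model, using two example patterns (an AWS access key ID prefix and an email address); a real deployment would need a much broader pattern set and ideally a dedicated secret scanner.

```python
import re

# Illustrative patterns only; extend for your own secret and PII formats.
REDACTIONS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def sanitize(text: str) -> str:
    """Redact known secret/PII patterns before text is sent to a model."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Run this on every artifact at ingestion time, not at prompt time, so nothing sensitive ever lands in the index to begin with.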

Outcome

Executives get a prioritized modernization roadmap tied to business capability and operational pain—not just a static architecture diagram.
