You want to use an AI coding agent to streamline legacy code maintenance. You point it at the codebase, and immediately hit a wall.
"Why is it implemented this way? I have no idea."
Human Teams Had "The Person Who Remembers"
In traditional development teams, this problem was often solved through tribal knowledge. Ask "Why does this method return a String?" and a senior member would answer: "Oh, that's because the external sales management system only accepts strings via SOAP. We've recognized it as tech debt since 2019."
Teams with an ADR (Architecture Decision Record) culture could sometimes trace the reasoning from their records. But in practice, few teams sustain ADR workflows. For humans, the recording overhead is high, and the practice tends to fall into disuse: a structural problem inherent to manual documentation.
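For readers unfamiliar with the format, a minimal ADR looks something like this. The contents are illustrative, built from the example earlier in this article:

```markdown
# ADR-0007: Return customer IDs as a comma-separated String

## Status
Accepted (recognized as tech debt since 2019)

## Context
The external sales management system only accepts strings via SOAP.

## Decision
The export method returns a comma-separated String rather than a List.

## Consequences
Callers must parse the string themselves; migrate to a typed List
once the SOAP interface is retired.
```

The value is entirely in the Context and Consequences sections, the parts a spec never captures. The problem, as noted above, is that maintaining these records by hand rarely survives contact with deadlines.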
Whether relying on tribal knowledge or ADRs, the assumption was always that design intent existed somewhere. But AI agents have no team memory, no tacit knowledge. The code and documentation you hand them is their entire world.
Specs Only Capture the "How"
Having documentation doesn't automatically solve the problem.
Specs and API definitions describe what was implemented and how: the How. "Return type is String," "Input is a comma-separated string." These describe implementation details but never explain why String was chosen, or why comma-separated was the format.
AI agents try to make sense of code using only How information. They infer context from variable names and comments, but when comments have drifted from the actual implementation, or when naming conventions carry historical baggage, the AI builds on false assumptions. It cannot detect that a comment is lying.
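A hypothetical sketch of the problem, using the String-return example from above. The class and method names are invented for illustration; the point is that the comment states one delimiter while the code uses another, and an agent reading only the How has no way to tell which is authoritative:

```java
import java.util.List;

// Hypothetical legacy class. The spec says only:
// "exportIds returns customer IDs as a single String."
class CustomerExporter {

    /** Joins IDs with commas. */
    // ^ This comment has drifted. The delimiter was switched to
    //   semicolons for a downstream system, but nobody updated the
    //   comment. An AI agent reading it builds on a false assumption.
    public String exportIds(List<Integer> ids) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < ids.size(); i++) {
            if (i > 0) sb.append(";"); // actual delimiter: semicolon
            sb.append(ids.get(i));
        }
        return sb.toString();
    }
}
```

Nothing in this file explains why a String is returned at all, and the one comment that describes behavior is wrong.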
Where to Find Clues to the "Why"
So how do you excavate lost design intent? In legacy codebases, clues to the Why are scattered in unexpected places.
Issue trackers and fix PRs are valuable sources. Discussions often capture why a fix was needed and what alternatives were considered. Even when the commit message is just "fix," the linked Issue may contain the decision trail.
Commented-out old code and TODO comments can also provide clues. Fragments like // For backward compatibility with legacy batch system or // TODO: Migrate to List preserve traces of past design decisions.
The challenge is that this information is dispersed and difficult to search systematically. A human developer can connect the dots through contextual reasoning, but simply handing the codebase to an AI agent won't bridge those gaps.
Making Design Intent Searchable
sqlew is a tool that structures and accumulates the reasoning behind design decisions, enabling AI agents to search "Why is it implemented this way?" via MCP.
For example, when starting legacy code maintenance, you register the Whys you've excavated from Issues and comments into sqlew. When the AI agent reads the code and flags an unusual return type, it searches sqlew and instantly finds: "This was a design decision driven by interface constraints with the external system, recognized as tech debt since 2019."
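To make the idea concrete, here is a sketch of the kind of structured record involved. This is an illustration only, not sqlew's actual schema; the field names and the record type are invented, populated with the example from this article:

```java
// Illustrative only -- not sqlew's actual data model. A persisted
// "Why" pairs the observable How with the reasoning behind it.
record DesignDecision(
        String subject,   // what the decision is about
        String how,       // what the code observably does
        String why,       // the reasoning a spec never captures
        String status     // current standing of the decision
) {}

class WhyExample {
    static DesignDecision legacyExport() {
        return new DesignDecision(
                "exportIds return type",
                "returns a comma-separated String",
                "external sales system only accepts strings via SOAP",
                "recognized as tech debt since 2019");
    }
}
```

Whatever the concrete schema, the essential move is the same: the Why becomes a searchable artifact instead of tribal knowledge, so an agent that flags the unusual return type can retrieve the constraint that produced it.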
This is design intent search that git blame can't provide: a system where the AI can reference the decision trail even when the commit message just says "fix." We believe legacy code maintenance is precisely where persisting the Why delivers the most value.
References
- sqlew Examples: "Why?" Search β https://github.com/sqlew-io/sqlew-examples/tree/main/04-why-search





