How to Make AI Answer "Why Did You Choose This Design?"

With AI coding now mainstream, more developers find themselves squinting at AI-generated pull requests and wondering: "Why on earth did the AI go with this design?"

One session it diligently follows the repository pattern; the next, it's writing raw SQL queries. Yesterday it respected the agreed-upon error handling strategy; today it acts as if that conversation never happened. This lack of consistency in AI-generated code drives up review costs and ultimately slows development down.

The root cause is straightforward: the AI doesn't know the project's design decisions or the reasoning behind them.

Instructions Alone Can't Ensure Consistency

The common approach is to write directives in CLAUDE.md: "Use the repository pattern," "Handle errors through the centralized handler." But LLM research on instruction following shows that compliance rates vary widely and tend to degrade as reasoning chains grow longer. The drift you've noticed — where the AI gradually deviates from established patterns as a session progresses — is a well-documented phenomenon.

Instructions tell the AI what to do, but not why. And rules followed without understanding the reason are easily broken the moment context shifts.

Providing the "Why"

Software engineering has an established practice called ADR (Architecture Decision Record) — a document that captures design decisions using three elements: Context, Decision, and Consequences. AWS Prescriptive Guidance (2022) highlights ADRs as a method for streamlining project decision-making.
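To make the three elements concrete, here is a minimal sketch of an ADR as a Python data structure. The field contents and the `ADR` class itself are illustrative, not a formal schema:

```python
from dataclasses import dataclass

# Minimal sketch of an ADR's three core elements (names and contents
# are illustrative, not a formal ADR schema).
@dataclass
class ADR:
    context: str       # the forces and constraints at decision time
    decision: str      # what was decided
    consequences: str  # trade-offs that follow from the decision

adr = ADR(
    context="Data access code is spread across services, and the team "
            "may migrate from PostgreSQL to a managed store later.",
    decision="Adopt the repository pattern for all data access.",
    consequences="One extra abstraction layer, but the storage backend "
                 "stays swappable without touching business logic.",
)
```

Note that the `decision` field alone is just an instruction; it is the `context` and `consequences` that carry the "why" discussed below.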

The guide emphasizes that the power of ADRs lies in focusing on the reason behind a decision, not the implementation. When the rationale is clear, decisions are far less likely to be casually overridden.

This same principle applies to AI. Instead of commanding "Use the repository pattern," you provide the rationale: "We adopted the repository pattern to ensure the data access layer remains swappable." The AI moves from complying with instructions to understanding the design intent behind them.

Guiding through reasoning rather than controlling through commands — this approach makes a fundamental difference in the consistency of AI-generated code.

What sqlew Is Building Toward

sqlew is an MCP tool that accumulates design decisions in an ADR-inspired structure and lets AI agents retrieve the relevant rationale just in time via MCP. No RAG or embedding setup is required; it works out of the box.
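The idea of just-in-time retrieval can be sketched as a simple lookup: given keywords describing the task an agent is working on, return the stored decisions whose tags overlap. This is a hypothetical illustration only, not sqlew's actual API or storage format; all names and records here are made up:

```python
# Hypothetical sketch of just-in-time rationale retrieval.
# DECISIONS, its record shape, and relevant_decisions() are all
# illustrative; they are NOT sqlew's real data model or API.
DECISIONS = [
    {
        "tags": {"data-access", "repository"},
        "decision": "Use the repository pattern for all data access.",
        "rationale": "Keeps the data access layer swappable.",
    },
    {
        "tags": {"errors", "handler"},
        "decision": "Route errors through the centralized handler.",
        "rationale": "One place to attach logging and user-facing messages.",
    },
]

def relevant_decisions(task_keywords: set[str]) -> list[dict]:
    """Return decisions whose tags overlap the current task's keywords."""
    return [d for d in DECISIONS if d["tags"] & task_keywords]

# An agent about to touch data-access code would receive only the
# repository decision, rationale included, rather than the whole corpus.
hits = relevant_decisions({"repository", "orders"})
```

The point of the sketch is the shape of the interaction: the agent pulls the decision plus its rationale at the moment it is needed, instead of relying on a long instruction file loaded up front.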

Giving AI not just the "what" but the "why." Extending the concept AWS championed for human teams to human-AI collaboration — that's the idea we've built into sqlew.

sqlew OSS

  • Retain your projects' Memories
  • No external transaction
  • Open source & free forever
View on GitHub

sqlew Cloud

  • Team collaboration ready
  • Easy setup, including audit features
  • 14-day free trial available
Try for Free