Write a function that returns a string.
Give this instruction to an AI coding tool, and the implementation you get back varies wildly depending on context. In C++, you might get char*, or you might get std::string. Both satisfy "returns a string," but the right answer depends entirely on the use case.
Now, what if you said this instead?
"Write a function that returns a string as a JSON response."
With that context, the reasoning is clear: JSON payloads are dynamically sized, so std::string is the natural choice. This works the same way for humans and AI alike. When you know not just what to build but why it's needed, design decisions become sharper.
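As a minimal sketch of that reasoning (the function name and payload shape here are illustrative, not from any real API): a JSON response is assembled at runtime and varies in size, so the function returns an owning, dynamically sized std::string rather than a raw pointer.

```cpp
#include <string>

// Hypothetical handler. The use case — a JSON response — dictates the
// return type: the payload is built at runtime with a size that depends
// on its inputs, so the function returns an owning std::string.
std::string make_json_response(const std::string& status, int code) {
    return "{\"status\":\"" + status + "\",\"code\":" +
           std::to_string(code) + "}";
}
```

A char* here would force the caller to reason about who frees the buffer; the "why" (a dynamically sized payload handed to a caller) makes std::string the obvious fit.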
Work Without Intent Confuses Humans Too
This isn't an AI-specific problem. Work without visible intent degrades quality for humans as well.
An extreme example: forcing someone to dig a hole and fill it back in, repeatedly, has historically been recognized as a form of psychological torture. The burden isn't the physical labor — it's the absence of purpose. In software development, the familiar experience of "I built exactly what the spec said" that turns out to be wrong comes from the same root. Specs document how, but often omit why.
Without purpose, humans execute mechanically and quality drops. AI behaves the same way. There's a meaningful gap between following instructions and acting with understanding.
"Why" Elevates the Design Level
This gap becomes especially pronounced in languages with strict type systems.
Returning to the "return a string" example: in C++, the options include char* (raw pointer), std::string (standard library string), and std::string_view (lightweight read-only reference). When you know what the result will be used for, decisions about memory management, lifetime considerations, and performance trade-offs follow naturally.
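The three options can be sketched side by side (function names here are illustrative). Each signature encodes a different answer to "why does the caller need this string?" — ownership transfer, a borrowed view, or a C-style boundary:

```cpp
#include <string>
#include <string_view>

// Caller keeps the result beyond this call: transfer ownership
// of a freshly built, dynamically sized string.
std::string build_greeting(std::string_view name) {
    return "Hello, " + std::string(name);
}

// Caller only inspects long-lived storage: a std::string_view costs
// no copy and no allocation, but must not outlive what it refers to.
std::string_view protocol_name() {
    static const std::string name = "https";
    return name;
}

// const char* mainly survives at C API boundaries; ownership and
// lifetime must then be documented by hand, which is why modern C++
// prefers the two options above.
const char* protocol_name_c() {
    return "https";
}
```

Knowing the use case selects among these without trial and error: "for a JSON response" rules out the view and the raw pointer immediately.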
The same applies to AI coding. LLMs have learned from vast codebases and are adept at pattern-matching "for this use case, this type is conventional." But without knowing the use case, that pattern-matching doesn't activate.
Communicating design intent is the trigger that unlocks AI's reasoning capabilities.
What the Experiment Revealed: Concrete Impact of Intent
So how much difference does an environment where design intent is recorded and referenced actually make?
We conducted a controlled experiment building a TypeScript workflow approval system across 12 sequential tasks, under two conditions:
- Recorded and referenced design intent via sqlew
- Operated without design intent
Both executed identical tasks for comparison.
Development Time: 22.4% Reduction Overall
Overall, the condition with design intent completed development in 398 minutes versus 513 minutes without — a 22.4% reduction. This gap wasn't uniform: for tasks where design intent was most relevant, such as feature additions and architectural changes, reductions exceeded 50%. Roughly 80% of individual tasks completed faster when design intent was available.
What's particularly interesting is that this gap widened over time rather than appearing upfront. In the first few tasks, the overhead of recording design intent made it appear slightly disadvantaged. But as the project grew complex, the ability to reference past design decisions produced an accelerating advantage.
How AI's "Thinking" Changed
More revealing than the numbers was the qualitative shift in AI reasoning patterns.
Without design intent, AI tended to fall into a trial-and-error loop: write → fail → fix → retry. With design intent provided, a context-informed reasoning pattern emerged: reference past decisions → predict conflicts → avoid and implement.
This manifested clearly in prior decision reference frequency. Analyzing AI's thought process, the density of keywords referencing past design decisions diverged at an accelerating rate: +21% early on, +64% mid-project, and +137% by the final tasks. As recorded design intent accumulated, AI increasingly leveraged it for new decisions — a positive feedback loop.
Instructions about what to do are not enough to stabilize AI behavior on their own. Providing the why — the rationale behind the work — changes AI's reasoning patterns at a fundamental level. This finding underscores how critical communicating design intent is in AI-assisted development.
AI Makes the Habit of Recording Intent Practical
The importance of recording design intent has been recognized since Michael Nygard proposed ADR (Architecture Decision Record) in 2011. But the operational overhead for humans made sustained adoption difficult.
The proliferation of AI coding tools is changing this equation. Automating the recording of design decisions and delivering just the right context at just the right time — sqlew realizes this through MCP.
Communicating not just what to build but why to build it. That simple shift changes AI reasoning patterns and improves both development efficiency and quality. The experimental data backs this up.
References
- "Rediscovering Architectural Decision Records: How Persistent Design Context Improves LLM Code Generation" — Shingo Kitayama (2026) — sqlew Efficacy Study
- "LLMs Get Lost In Multi-Turn Conversation" — Bao et al. (2025) — arXiv:2505.06120
- "Context Length Alone Hurts LLM Performance Despite Perfect Retrieval" — ACL Findings (EMNLP 2025) arXiv:2510.05381