You keep adding rules to CLAUDE.md: coding standards, architecture policies, naming conventions, testing guidelines. Before you know it, it's hundreds of lines long. And then you notice: the AI isn't following half of what's written there.
It's become like a corporate mission statement framed on the office wall. It exists. It says all the right things. But nobody actually follows it in their daily work.
Long Mission Statements Don't Get Read
Chroma Research's "Context Rot" study confirmed, across 18 LLM models, that simply increasing input token count degrades output quality, even when task complexity remains constant. ACL Findings (EMNLP 2025) went further, showing that even when the relevant information is correctly present in context, the sheer volume of surrounding tokens degrades reasoning accuracy.
You might think, "But recent models have doubled their context windows, so this shouldn't be a problem anymore." However, a wider window and actually following what's inside it are two different things. A larger context window means you can pass more instructions; it doesn't mean the AI will follow more instructions.
Think of it like parenting. If you tell a child "go study" every single day, eventually they tune it out. The words still reach their ears, but the repeated instruction fades into background noise. Rules piled up in CLAUDE.md work the same way: the more you add, the less weight each individual rule carries.
Plans Reflect It, but Implementation Doesn't
Observing Claude Code's behavior reveals an interesting pattern. CLAUDE.md content is actively referenced during Plan Mode, when the agent constructs its implementation plan. But once the plan is finalized and the agent enters the implementation phase, it primarily follows the generated plans file rather than CLAUDE.md itself.
This is actually rational behavior. During implementation, what matters is "what to do for this specific task," not the entire project rulebook. The problem is that CLAUDE.md content isn't always accurately reflected in the plans file. The longer your CLAUDE.md grows, the more likely it is that rules get lost in translation.
This is why you should carefully review the plans Claude Code proposes. The plan is the only bridge between your CLAUDE.md and actual implementation. Anything missed at this stage flows directly into implementation as a deviation.
From Mission Statement to Enforceable Rules
The root cause of CLAUDE.md becoming a "decorative mission statement" is that it's just a static list of instructions. It tells the AI what to do but carries no enforcement mechanism, and there's no built-in system for the AI to proactively reference the right rules at the right time.
sqlew's Constraint feature offers one approach to this problem. Constraints are managed as independent entities with a priority field. AI agents automatically retrieve them via MCP at the start of each task. Instead of "read the entire document and find the relevant section," only the constraints relevant to the current task are delivered just-in-time.
For example, registering a constraint like "All API endpoints must use the /v2/ prefix" means that every time an API-related task begins, the AI retrieves and applies this rule. The compliance rate is substantially different from a single line buried somewhere in a lengthy CLAUDE.md.
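To make the just-in-time idea concrete, here is a minimal sketch of a tag-scoped constraint store with priorities. All names here are illustrative assumptions, not sqlew's actual API; the point is the retrieval pattern: register rules once, then deliver only the ones matching the current task.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Constraint:
    rule: str
    priority: int          # lower number = higher priority
    tags: frozenset[str]   # task categories this rule applies to


class ConstraintStore:
    """Hypothetical just-in-time constraint store (illustrative, not sqlew's API)."""

    def __init__(self) -> None:
        self._constraints: list[Constraint] = []

    def register(self, rule: str, priority: int, tags: set[str]) -> None:
        self._constraints.append(Constraint(rule, priority, frozenset(tags)))

    def for_task(self, task_tags: set[str]) -> list[str]:
        # Deliver only rules whose tags overlap the current task,
        # highest-priority first -- not the entire rulebook.
        relevant = [c for c in self._constraints if c.tags & task_tags]
        return [c.rule for c in sorted(relevant, key=lambda c: c.priority)]


store = ConstraintStore()
store.register("All API endpoints must use the /v2/ prefix", priority=1, tags={"api"})
store.register("Use snake_case for database column names", priority=2, tags={"db"})

# An API-related task retrieves only the API rule:
print(store.for_task({"api"}))
```

The design choice being illustrated: because retrieval is scoped and ordered, each rule arrives with full weight at exactly the moment it applies, instead of competing with hundreds of unrelated lines for the model's attention.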
You Don't Need to Abandon CLAUDE.md
To be clear, sqlew doesn't replace CLAUDE.md; it coexists with it. CLAUDE.md remains well-suited for describing project-wide invariant policies. But for rules you need strictly enforced, and for design decisions that accumulate as a project progresses, a structured mechanism is more effective.
There's a reason compliance differs between a mission statement on the wall and a checklist reviewed at the start of every shift. Is your CLAUDE.md still just a mission statement?
References
- "Context Rot: How Increasing Input Tokens Impacts LLM Performance," Chroma Research, https://research.trychroma.com/context-rot
- "Context Length Alone Hurts LLM Performance Despite Perfect Retrieval," ACL Findings (EMNLP 2025), https://aclanthology.org/2025.findings-emnlp.1264.pdf
- "Agent READMEs: A First Empirical Analysis of Context Files," Singh et al. (2025), arXiv:2506.15631