Why AI Coding Review Takes Longer: What's Happening Behind 10x Faster Implementation

Adopting a coding agent makes development faster. That's not wrong per se. Measured by implementation speed alone, gains of 10x or more over hand-written code aren't unusual.

But something curious is happening on the ground. Despite adopting AI, engineers' working hours aren't decreasing; they're actually increasing.

Implementation Got Faster, But...

A 2025 study by METR (Model Evaluation & Threat Research) reported that experienced open-source developers using AI coding tools actually took 19% longer to complete tasks. This was despite the developers themselves feeling they had become faster.

Several factors contribute to this gap between perception and measurement. But one of the largest is the growing cost of code review.

The volume of code AI generates is staggering. What takes a human an hour to write, an agent produces in minutes. But verifying whether that code is correct remains a human responsibility. Even if implementation speed increases 10x, review speed doesn't increase 10x. The result is that the bottleneck in the entire development process has simply shifted from "writing" to "reading."

Why Reviewing AI-Generated Code Is So Difficult

Reviewing hand-written code and reviewing AI-generated code are qualitatively different tasks. When reviewing a colleague's code, you can generally infer their intent and design philosophy. With AI-generated code, that assumption doesn't hold.

You can't be sure it matches the request. AI "interprets" instructions before implementing. There's no guarantee that interpretation aligns with the requester's intent. Asking for an API design and getting UI components thrown in is an everyday occurrence. Even when the output appears to meet requirements, the scope may be subtly different, forcing reviewers to untangle what was requested from what the AI decided on its own.

It edits unrelated code. Agent-style AI will modify files not mentioned in the instructions if it deems them necessary for task completion. While reviewing the diff for the requested feature, you discover changes to config files, test code, and sometimes entirely unrelated modules. The more the diff bloats, the higher the cost of deciding "where to focus."

It frequently ignores rules altogether. You communicate project coding conventions, naming rules, and architectural guidelines to the AI, yet they're often not followed. LLM instruction following has structural limitations: the more instructions there are, and the deeper they sit in the context, the more likely they are to be ignored. Reviewers end up checking for rule violations from scratch every time.

Techniques for Getting AI to Write Reviewable Code


Various approaches have emerged to control the quality of AI-generated code. Each has its merits, but all carry structural limitations.

Writing Rules in CLAUDE.md / AGENTS.md

The simplest approach is consolidating rules in a context file at the project root. You write coding conventions, architectural guidelines, and prohibitions in CLAUDE.md or AGENTS.md.

While easy to start with, the file inevitably bloats over time. CLAUDE.md files exceeding several hundred lines are common, and the longer they get, the more the LLM's attention disperses: rules written toward the end are more likely to be ignored. The file also occupies significant space in the context window, creating context pollution that crowds out room for the actual task instructions and implementation code.
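For a sense of what this looks like in practice, a context file of this kind typically collects conventions, architecture notes, and prohibitions in one place (the rules below are invented examples, not a recommended set):

```markdown
# CLAUDE.md

## Coding conventions
- Use snake_case for functions and variables.
- Keep functions under 50 lines.

## Architecture
- Domain logic lives in `core/`; handlers only translate I/O.

## Prohibitions
- Never hand-edit generated migration files.
```

Every incident tends to add another rule, and each added rule competes with the task itself for the model's attention.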

Splitting Rules into a Rules Directory

Another approach uses separate rule files, like Cursor Rules or .claude/rules/, managed by category. Splitting improves human readability.

However, in many implementations, these rule files are loaded in their entirety every time, just like system prompts. Even if files are split, the total context volume fed to the LLM remains unchanged. As rules accumulate, you arrive at the same problem as CLAUDE.md. Splitting improves manageability for humans but doesn't reduce cognitive load on the LLM side.
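The loading behavior can be sketched in a few lines of Python. The directory layout and file contents below are hypothetical; the point is that concatenating split files feeds the LLM the same context volume as one monolithic file:

```python
import tempfile
from pathlib import Path

# Hypothetical split-rules layout; directory and file names are illustrative.
rules_dir = Path(tempfile.mkdtemp()) / ".claude" / "rules"
rules_dir.mkdir(parents=True)
(rules_dir / "naming.md").write_text("Use snake_case for functions.\n")
(rules_dir / "architecture.md").write_text("Keep domain logic out of handlers.\n")

# Many tools load every rule file into the prompt on every run, so
# splitting helps human organization but not the LLM's context budget.
prompt_context = "\n".join(p.read_text() for p in sorted(rules_dir.glob("*.md")))
```

The total is simply the sum of the parts, however the parts are filed.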

Individual Files in docs/rules/*.md, Loaded on Demand

A more sophisticated approach places rules as individual files in a documentation directory, loaded by the AI only when needed. This is just-in-time context delivery: providing only the necessary information at the right moment.

Sounds ideal, but the operational hurdle is high. Deciding "which rules to load when" must be done manually, and since this depends on task type and code context, it can't be fully systematized in advance. You can try having the AI automatically find relevant rules, but there's no guarantee an AI unaware of a rule's existence will locate the right file. In practice, humans end up spending effort on rule selection, and the cost saved in review simply resurfaces as management cost.
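A rough sketch of what manual on-demand selection amounts to, assuming a hand-maintained keyword index (the file paths and keywords are invented for illustration):

```python
# Hypothetical on-demand loader: a hand-maintained index mapping task
# keywords to rule files. Paths and keywords are invented for illustration.
RULE_INDEX = {
    "api": "docs/rules/api-design.md",
    "test": "docs/rules/testing.md",
    "naming": "docs/rules/naming.md",
}

def select_rules(task_description: str) -> list[str]:
    """Return only the rule files whose keyword appears in the task text."""
    text = task_description.lower()
    return [path for keyword, path in RULE_INDEX.items() if keyword in text]

print(select_rules("Add a REST API endpoint and naming for its routes"))
# → ['docs/rules/api-design.md', 'docs/rules/naming.md']
```

The index itself becomes the management burden: every new rule needs an entry, and a task phrased without the expected keyword silently loads nothing.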

That's Why We Built sqlew

The approaches above share a common challenge: they all try to solve the problem of "controlling the quantity and quality of information passed to the LLM" through file-based mechanisms.

sqlew addresses this as a database-driven MCP tool.

Register your project's design guidelines, coding conventions, and constraints in sqlew, and AI agents can selectively reference only what they need via MCP. No need to load all rules every time. Only the guidelines relevant to the current task are delivered at the right moment.

This prevents context pollution while improving rule compliance. The reviewer's recurring stress of "checking for the same rule violations again" is structurally reduced.

Furthermore, configuring sqlew with a MySQL backend makes information sharing across the team seamless. Guidelines and constraints registered by any member are accessible to everyone's AI agents. And because design guidelines are stored in a structured database rather than in the repository alongside code, rule files never cause merge conflicts, even in git worktree-based multi-branch development.
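Conceptually, the retrieval involved resembles a tagged query against a rule store. The sketch below uses SQLite and an invented schema purely to illustrate the idea of selective, database-backed delivery; it is not sqlew's actual schema or MCP API:

```python
import sqlite3

# Conceptual sketch only: the table, tags, and query are invented to
# illustrate database-backed rule retrieval, not sqlew's implementation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE guidelines (tag TEXT, rule TEXT)")
conn.executemany(
    "INSERT INTO guidelines VALUES (?, ?)",
    [
        ("api", "Version endpoints under /v1/."),
        ("naming", "Use snake_case for functions."),
        ("testing", "Every bug fix ships with a regression test."),
    ],
)

def rules_for(tags: list[str]) -> list[str]:
    """Fetch only the guidelines tagged as relevant to the current task."""
    placeholders = ",".join("?" for _ in tags)
    query = f"SELECT rule FROM guidelines WHERE tag IN ({placeholders}) ORDER BY rowid"
    return [row[0] for row in conn.execute(query, tags)]

# An agent working on an API task pulls two short rules, not every rule.
print(rules_for(["api", "naming"]))
# → ['Version endpoints under /v1/.', 'Use snake_case for functions.']
```

The context the agent sees scales with the task at hand, not with the total number of rules the team has accumulated.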

If the bottleneck in AI coding has shifted from "writing" to "reading," it's worth investing in mechanisms that get AI to write more readable code. sqlew is one such mechanism.


sqlew OSS

  • Retain your projects' Memories
  • No external transaction
  • Open source & free forever
View on GitHub

sqlew Cloud

  • Team collaboration ready
  • Easy to set up, including audit features
  • 14-day free trial available
Try for Free