What Is Context Engineering?

If you've ever used AI coding tools and felt that no matter how you refine your prompts, the AI still doesn't quite do what you want — you're not alone.

Writing better prompts certainly helps. But from late 2025 into early 2026, a growing consensus has emerged among AI practitioners: prompt improvement alone has limits. From this realization, a new concept has gained traction: Context Engineering.

This article explains what context engineering is, how it differs from other approaches, and why it matters — written for those encountering the term for the first time.

What Is Context Engineering?

Context engineering is the discipline of filling the AI's context window with the right information, at the right time, in the right amount — so that the AI can generate optimal output for its next step.

The term gained widespread recognition in June 2025, when Andrej Karpathy (former Tesla AI Director and OpenAI co-founder) endorsed it in a post on X. He described context engineering as "the delicate art and science of filling the context window with just the right information for the next step," advocating its adoption over "prompt engineering." This sparked an industry-wide conversation, and by late 2025 through early 2026, context engineering had become established as a central concept in AI-assisted development.

Context isn't just task descriptions or instructions. It encompasses few-shot examples, RAG-retrieved data, tool definitions, conversation history, project design decisions, and everything else that influences the AI's reasoning. Too little context and the AI lacks understanding; too much and noise degrades output quality. Designing this balance has become a recognized discipline in its own right.

How It Differs from Prompt Engineering

Prompt engineering focuses on optimizing the instruction text — writing clearer directives, providing examples, encouraging step-by-step reasoning. It's about getting the best response from a single request.

Context engineering takes a broader view. Beyond the prompt itself, it designs the entire information environment the AI operates in: documents, past decisions, project constraints, external tool outputs — and determines what, when, and how much to provide.

Think of it this way: if prompt engineering is polishing the script, context engineering is designing the entire stage — sets, lighting, props, and sound. A brilliant script on an empty stage struggles to deliver. A simple script on a well-designed stage can produce remarkable results.

How It Differs from Spec-Driven Development

In the latter half of 2025, spec-driven development (SDD) also gained attention — an approach where formal specifications are written before AI generates code. Tools like GitHub Spec Kit and AWS Kiro emerged, offering structured workflows of Specify → Plan → Tasks → Implement.


Spec-driven development excels at telling the AI "What" to build. It formalizes requirements into specifications that serve as precise guides for AI implementation.

Context engineering is a higher-level concept. It subsumes specs as one type of context, while also encompassing design rationale (why that spec exists), historical decision records, and project-specific constraints — the full set of information AI needs to make good judgments. Where specs define the "What," context engineering also covers the "Why."

Research demonstrates that this "Why" makes a tangible difference. In a controlled experiment by Kitayama (2026), both conditions used an identical CLAUDE.md instruction: "Develop using TDD." When the rationale for adopting TDD was also recorded as an ADR, the AI spontaneously generated up to 25 E2E tests; with the instruction alone, it generated none.

Kitayama, S. (2026). "Rediscovering Architectural Decision Records: How Persistent Design Context Improves LLM Code Generation" — https://doi.org/10.36227/techrxiv.177205025.54351571/v1

The Effectiveness of Context Engineering

Research confirms that context quality significantly impacts AI output.

Chroma Research's "Context Rot" study evaluated 18 LLMs and found that simply increasing input token count — with task complexity held constant — degrades output quality. Liu et al. (2024) further showed that even when relevant information exists in the context, excessive surrounding tokens degrade reasoning quality.

In other words, "having" the right information and "leveraging" it effectively are fundamentally different problems. Context engineering addresses this by delivering the right information, at the right time, in the right amount.

Anthropic's official technical blog also recommends a hybrid model of static context (such as CLAUDE.md) and dynamic context (just-in-time retrieval via MCP or CLI tools), positioning context engineering as the natural evolution of prompt engineering.

Anthropic. (2025). "Effective Context Engineering for AI Agents" — https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents

What Should We Actually Do?


Here are practical steps to put context engineering into practice.

Start with a context audit. Review the information you're feeding your AI (CLAUDE.md, system prompts, etc.). Is every piece relevant to the current task? Look for stale instructions, contradictions, or bloated sections that add noise without value.
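An audit like this can be partially automated. The sketch below flags top-level sections of a markdown context file that may be bloated; the ~4-characters-per-token heuristic and the 500-token budget are illustrative assumptions, not fixed rules:

```python
# Rough context audit: flag sections of a markdown context file
# (e.g., CLAUDE.md) that may be adding noise. The 4-chars-per-token
# heuristic and the 500-token budget are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: roughly 4 characters per token."""
    return len(text) // 4

def audit_sections(markdown: str, budget_per_section: int = 500) -> list[str]:
    """Return warnings for top-level (#) sections exceeding the budget."""
    warnings = []
    current_header, current_lines = "(preamble)", []

    def flush():
        tokens = estimate_tokens("\n".join(current_lines))
        if tokens > budget_per_section:
            warnings.append(
                f"{current_header}: ~{tokens} tokens (budget {budget_per_section})"
            )

    for line in markdown.splitlines():
        if line.startswith("# "):
            flush()  # close out the previous section
            current_header, current_lines = line[2:].strip(), []
        else:
            current_lines.append(line)
    flush()
    return warnings
```

Anything this flags is a candidate for trimming or for moving out of the always-loaded context into task-specific retrieval.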

Next, separate static and dynamic context. Instead of stuffing everything into a single file, distinguish between always-needed baseline policies (static) and task-specific past decisions and design rationale (dynamic).
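The split can be sketched in code. In this minimal illustration, a static CLAUDE.md is always loaded, while decision records are pulled in only when they look relevant to the current task. The file names and the naive keyword match are placeholder assumptions standing in for real retrieval (for example, via an MCP server):

```python
# Sketch: assemble context from a static baseline plus dynamically
# retrieved decision records. Paths and the keyword-match "retrieval"
# are illustrative placeholders, not a real retrieval system.
from pathlib import Path

def load_static_context(path: str = "CLAUDE.md") -> str:
    """Always-loaded baseline policies."""
    return Path(path).read_text()

def retrieve_dynamic_context(task: str, adr_dir: str = "docs/adr") -> list[str]:
    """Pull in only the decision records that mention words from the task."""
    hits = []
    for f in sorted(Path(adr_dir).glob("*.md")):
        text = f.read_text()
        if any(word.lower() in text.lower() for word in task.split()):
            hits.append(text)
    return hits

def build_context(task: str) -> str:
    """Static baseline first, then task-relevant records."""
    parts = [load_static_context()] + retrieve_dynamic_context(task)
    return "\n\n---\n\n".join(parts)
```

The design point is the shape, not the matching logic: the static file stays small and stable, while the dynamic store can grow without inflating every request.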

Finally, make recording "Why" a habit. Document not just what you decided, but why you decided it. As the research shows, rationale-backed context produces qualitatively different AI behavior compared to instruction alone.

The practical key to context engineering is building "external memory" outside the context window and supplying the right information at the right scope and volume. An effective method for this is ADRs (Architecture Decision Records) — structuring and recording design decisions and their rationale so that AI can reference them across sessions. You can write ADRs manually, or automate the recording and retrieval process using an MCP tool like sqlew.
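A minimal ADR, following the widely used Nygard-style template (the decision and its details below are invented purely for illustration), might look like:

```markdown
# ADR-0001: Adopt test-driven development

## Status
Accepted (2026-01-15)

## Context
Regressions were repeatedly caught late in review, after integration.

## Decision
All new modules are developed test-first; CI rejects untested code paths.

## Consequences
Slower initial implementation, but the recorded rationale lets both
humans and AI tools understand *why* tests are mandatory here.
```

The Context section is what carries the "Why" — the part that, per the research above, changes AI behavior compared to a bare instruction.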

Context engineering doesn't require special tools to get started. What matters most is the mindset of consciously designing what information your AI sees.

