A few years ago, many AI researchers (even the most reputable) predicted that prompt engineering would be a temporary skill that would quickly disappear. They were completely wrong. Not only has it not disappeared, but it has evolved into something much more sophisticated: Context Engineering.
And no, it’s not just another buzzword. It’s a natural evolution that reflects the real complexity of working with LLMs in production applications.
From prompt engineering to context engineering
The problem with the term “prompt engineering” is that many people confuse it with blind prompting - simply writing a question in ChatGPT and expecting a result. That’s not engineering, that’s using a tool.
Context Engineering encompasses the entire process of architecting the context that an LLM needs to function effectively. It’s not just writing a prompt; it’s designing a complete system of contextual information.
What Context Engineering includes
According to the definition that convinces me most, context engineering encompasses:
- Design of prompt chains for complex flows
- Optimization of instructions and system prompts
- Management of dynamic elements (user inputs, date/time, state)
- Search and preparation of relevant knowledge (RAG)
- Definition of tools and their instructions
- Structuring of inputs and outputs (delimiters, JSON schemas)
- Management of short- and long-term memory
- Optimization of context to eliminate irrelevant information
In short: optimizing all the information you provide in the LLM’s context window.
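The pieces above can be assembled programmatically. Here's a minimal sketch of composing a context window from its parts; all names (`ContextParts`, `buildContext`) are illustrative, not from any specific framework:

```typescript
// Illustrative composition of an LLM context from its components.
interface ContextParts {
  systemPrompt: string;
  retrievedKnowledge: string[]; // RAG results
  memory: string[];             // short/long-term memory entries
  userInput: string;
}

function buildContext(parts: ContextParts): string {
  return [
    parts.systemPrompt,
    `Current date: ${new Date().toISOString()}`, // dynamic element
    parts.retrievedKnowledge.length
      ? `Relevant knowledge:\n${parts.retrievedKnowledge.join("\n")}`
      : "",
    parts.memory.length ? `Memory:\n${parts.memory.join("\n")}` : "",
    `<user_query>${parts.userInput}</user_query>`, // delimited input
  ]
    .filter(Boolean)
    .join("\n\n");
}
```

Each component is optional except the system prompt and the delimited user input; empty sections are simply dropped rather than injected as noise.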
A practical example: Research agent
I’ve been particularly interested in this practical example by Elvis Saravia about a research agent. Let’s break it down to understand the key principles.
The planner agent
The system includes a “Search Planner” that breaks down complex queries into search subtasks. Here’s the system prompt:
```
You are an expert research planner. Your task is to break down a complex research query (delimited by <user_query></user_query>) into specific search subtasks, each focusing on a different aspect or source type.

The current date and time is: {{ $now.toISO() }}

For each subtask, provide:
1. A unique string ID for the subtask (e.g., 'subtask_1', 'news_update')
2. A specific search query that focuses on one aspect of the main query
3. The source type to search (web, news, academic, specialized)
4. Time period relevance (today, last week, recent, past_year, all_time)
5. Domain focus if applicable (technology, science, health, etc.)
6. Priority level (1-highest to 5-lowest)

Create 2 subtasks that together will provide comprehensive coverage of the topic.
```
Context engineering dissection
This prompt has several layers of context engineering worth analyzing:
1. Clear and specific instructions
It doesn’t just say “plan searches”, but defines exactly what that means: breaking down into specific subtasks with different approaches.
2. Dynamic temporal context
```
The current date and time is: {{ $now.toISO() }}
```
This is crucial. Without the current date, the LLM cannot correctly interpret terms like “last week” or “recent”.
3. Specific output structure
It defines exactly what fields each subtask needs and what type of values it expects. Leaves nothing to interpretation.
4. Use of delimiters
```
<user_query></user_query>
```
Delimiters prevent confusion between different types of information in the prompt.
5. Structured data schema
```json
{
  "subtasks": [
    {
      "id": "openai_latest_news",
      "query": "latest OpenAI announcements and news",
      "source_type": "news",
      "time_period": "recent",
      "domain_focus": "technology",
      "priority": 1,
      "start_date": "2025-06-03T06:00:00.000Z",
      "end_date": "2025-06-11T05:59:59.999Z"
    }
  ]
}
```
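That schema can be mirrored on the consuming side so malformed planner output is caught before it drives searches. This is a hypothetical TypeScript mirror with a minimal runtime check, not part of the original system (I've written the "last week" period as `last_week` for a valid identifier):

```typescript
// Hypothetical TypeScript mirror of the subtask schema above.
type SourceType = "web" | "news" | "academic" | "specialized";
type TimePeriod = "today" | "last_week" | "recent" | "past_year" | "all_time";

interface SearchSubtask {
  id: string;
  query: string;
  source_type: SourceType;
  time_period: TimePeriod;
  domain_focus?: string;
  priority: number;    // 1 (highest) to 5 (lowest)
  start_date?: string; // ISO 8601
  end_date?: string;
}

// Minimal runtime validation before acting on planner output.
function isValidSubtask(s: SearchSubtask): boolean {
  return s.id.length > 0 && s.query.length > 0 && s.priority >= 1 && s.priority <= 5;
}
```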
The key components of context engineering
1. Instructions
The foundation of everything. But not just “do X”, but “do X in this specific way, considering Y and Z”.
```
// ❌ Basic instruction
"Generate a summary of the text"

// ✅ Context engineering
"Generate an executive summary of maximum 200 words that includes:
1) The 3 main key points
2) Business implications
3) Recommended next steps
The target audience is CTOs with 10+ years of experience."
```
2. Structured inputs and outputs
LLMs work better when they know exactly what format to use:
```typescript
interface TaskResult {
  success: boolean;
  data?: any;
  error?: string;
  confidence: number; // 0-1
  sources: string[];
}
```
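For example, a tool call might wrap its outcome in that shape so downstream code can branch on `success` and weigh `confidence`. This is a sketch; `summarizeSearch`, the hit shape, and the confidence heuristic are all invented for illustration:

```typescript
// Re-declared here so the sketch is self-contained.
interface TaskResult {
  success: boolean;
  data?: any;
  error?: string;
  confidence: number; // 0-1
  sources: string[];
}

// Hypothetical tool wrapper producing a TaskResult.
function summarizeSearch(hits: { url: string; snippet: string }[]): TaskResult {
  if (hits.length === 0) {
    return { success: false, error: "no results", confidence: 0, sources: [] };
  }
  return {
    success: true,
    data: hits.map((h) => h.snippet).join(" "),
    confidence: Math.min(1, hits.length / 10), // crude proxy: more hits, more confidence
    sources: hits.map((h) => h.url),
  };
}
```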
3. Tool management
In agentic systems, you need to define clearly which tools are available and when to use each one:
```
Available tools:
- search_web(query: string, date_range?: string)
- search_academic(query: string, fields?: string[])
- get_company_data(company: string)

Use search_web for general information and recent news.
Use search_academic for research papers and studies.
Use get_company_data only when specifically asked about company metrics.
```
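On the application side, those definitions usually map to a registry that dispatches the model's tool calls. A minimal sketch, with stub handlers standing in for real search and data APIs:

```typescript
// Hypothetical tool registry; the handlers are stubs for illustration.
type ToolHandler = (args: Record<string, unknown>) => string;

const tools: Record<string, ToolHandler> = {
  search_web: (args) => `web results for "${args.query}"`,
  search_academic: (args) => `papers on "${args.query}"`,
  get_company_data: (args) => `metrics for ${args.company}`,
};

// Dispatch a tool call coming back from the model.
function callTool(name: string, args: Record<string, unknown>): string {
  const handler = tools[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

Rejecting unknown tool names explicitly matters: models occasionally hallucinate tools that were never declared.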
4. RAG and memory
Context engineering includes deciding what external information to inject:
Relevant context from previous conversations:
- User is working on a React project
- Prefers TypeScript over JavaScript
- Uses Tailwind CSS for styling
- Working in a team of 5 developers
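Injecting that kind of memory is itself a selection problem: you store many facts but only surface the relevant ones. A minimal sketch, where the relevance scores are placeholders for whatever retrieval mechanism you actually use:

```typescript
// Illustrative memory injection; scoring is assumed to come from elsewhere.
interface MemoryEntry {
  fact: string;
  relevance: number; // 0-1, e.g. from embedding similarity
}

function injectMemory(basePrompt: string, memories: MemoryEntry[], limit = 4): string {
  const selected = memories
    .filter((m) => m.relevance > 0.5)      // drop weakly related facts
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, limit)
    .map((m) => `- ${m.fact}`);
  if (selected.length === 0) return basePrompt;
  return `${basePrompt}\n\nRelevant context from previous conversations:\n${selected.join("\n")}`;
}
```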
5. State and history
For complex applications, managing previous state:
Current session state:
- Search queries executed: 3
- Results found: 47 documents
- User feedback: "Need more recent sources"
- Last successful query: "climate change policies 2024"
Practical application in real projects
Case 1: Documentation assistant
```typescript
const systemPrompt = `
You are a technical documentation assistant for a TypeScript/Node.js project.

Current project context:
- Framework: ${framework}
- Database: ${database}
- Testing: ${testingFramework}
- Current file: ${currentFile}

When generating documentation:
1. Use JSDoc format for functions
2. Include practical examples
3. Note any breaking changes since v${lastVersion}
4. Reference related functions in the same module

Output format: markdown with code blocks using the appropriate language syntax.
`;
```
Case 2: Code reviewer
```typescript
const codeReviewPrompt = `
You are a senior code reviewer with expertise in ${language}.

Review criteria:
- Security vulnerabilities (OWASP Top 10)
- Performance issues
- Code style adherence to ${styleGuide}
- Test coverage gaps
- Documentation completeness

For each issue found, provide:
{
  "severity": "low|medium|high|critical",
  "category": "security|performance|style|testing|documentation",
  "line": number,
  "description": string,
  "suggestion": string,
  "example": string?
}
`;
```
Common problems and solutions
1. Context window overflow
```typescript
// ❌ Problem: too much context
const hugePrompt = systemPrompt + allDocuments + fullHistory;

// ✅ Solution: context management
const relevantContext = selectRelevantContext(userQuery, documents, {
  maxTokens: 4000,
});
```
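One possible shape for such a `selectRelevantContext` helper: score each document against the query and keep only what fits in the token budget. The keyword-overlap scoring and 4-chars-per-token estimate below are crude stand-ins for embeddings and a real tokenizer:

```typescript
// Sketch of budget-aware context selection; scoring is a naive stand-in.
interface Doc {
  text: string;
}

// Rough heuristic: ~4 characters per token for English text.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function selectRelevantContext(query: string, docs: Doc[], opts: { maxTokens: number }): Doc[] {
  const terms = query.toLowerCase().split(/\s+/);
  const scored = docs
    .map((doc) => ({
      doc,
      score: terms.filter((t) => doc.text.toLowerCase().includes(t)).length,
    }))
    .filter((s) => s.score > 0)            // drop documents with no overlap
    .sort((a, b) => b.score - a.score);    // best matches first

  const selected: Doc[] = [];
  let used = 0;
  for (const { doc } of scored) {
    const cost = estimateTokens(doc.text);
    if (used + cost > opts.maxTokens) break; // stop at the budget
    selected.push(doc);
    used += cost;
  }
  return selected;
}
```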
2. Obsolete information
```typescript
// ✅ Context aging
const contextMetadata = {
  timestamp: Date.now(),
  source: 'user_documentation',
  relevanceScore: 0.87,
  lastValidated: '2025-01-15'
};
```
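With that metadata in place, stale entries can be dropped before they reach the prompt. A sketch, with an arbitrary 30-day freshness window you'd tune per source:

```typescript
// Illustrative staleness check built on the lastValidated metadata.
interface ContextItem {
  text: string;
  lastValidated: string; // ISO date
}

function isFresh(item: ContextItem, maxAgeDays = 30, now = Date.now()): boolean {
  const ageMs = now - new Date(item.lastValidated).getTime();
  return ageMs <= maxAgeDays * 24 * 60 * 60 * 1000;
}
```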
3. Irrelevant context
```typescript
// ✅ Context filtering
const filteredContext = contextItems
  .filter(item => item.relevanceScore > 0.7)
  .slice(0, 10); // Top 10 most relevant
```
The future of context engineering
Context automation
We’re already starting to see tools that automatically optimize context:
- DSPy - Automatic prompt optimization
- Prompt flow (Microsoft) - Visual context management
- Context compression - Techniques to reduce tokens while maintaining information
Context engineering as a service
I imagine we’ll soon see:
- Context optimization APIs
- A/B testing tools specific to prompts
- Metrics dashboards for context engineering
- Context stores - Repositories of optimized contexts
Security considerations
Context engineering also implies risks:
Prompt injection
```typescript
// ❌ Vulnerable
const prompt = `Summarize: ${userInput}`;

// ✅ Protected
const prompt = `
Summarize the following text (delimited by triple quotes):
"""
${sanitizeInput(userInput)}
"""
`;
```
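One possible `sanitizeInput` for this pattern strips the delimiter the prompt relies on, so user text cannot close the quoted block early. Note this is defense in depth, not a complete mitigation for prompt injection; the escaping rules and length cap below are illustrative choices:

```typescript
// Sketch of input sanitization for the triple-quote delimiter pattern.
function sanitizeInput(userInput: string): string {
  return userInput
    .replace(/"""/g, "'''") // neutralize the delimiter used by the prompt
    .replace(/\r/g, "")     // normalize line endings
    .slice(0, 8000);        // cap length to bound token usage
}
```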
Conclusion
Context Engineering is not just a terminological evolution of prompt engineering - it’s a more mature discipline that recognizes the real complexity of building productive LLM applications.
As developers, we need to treat context as an architectural system, not as a marginal note. This means:
- Systematic design of context
- Continuous evaluation of its effectiveness
- Data-driven iteration to optimize results
- Version management of context like any other code
The future belongs to those who master these skills. Not because it’s hype, but because it’s the difference between LLM applications that work and applications that really solve problems.
Are you already applying context engineering in your projects? What patterns have you found most effective? I’d love to know your experience.
Was this article useful to you? Share it with other developers working with LLMs. And if you have any questions or experience to share about context engineering, don’t hesitate to contact me.





