
2 posts tagged with "Agent Design Patterns"


Context Engineering: The Missing Layer in Agentic Applications

· 8 min read
Dinesh Gopal
Technology Leader, AI Enthusiast and Practitioner

In this post from The Agentic Advantage series, we explore one of the most important — and often overlooked — aspects of building AI agents: Context Engineering.

If you're new to agents, check out the previous post in the series to learn the basics of building agents.


Why Context Engineering Matters

Building AI agents has become significantly easier thanks to frameworks, SDKs, and ADKs that abstract much of the complexity involved in creating LLM-powered applications.

However, the hardest part of building reliable agentic systems isn't creating the agent itself — it's managing the context.

Unlike traditional chat applications, many AI agents are built to perform specific tasks. These systems often cannot rely on long conversational exchanges to gradually gather context. Instead, the agent must receive all necessary information at the right moment in order to make the correct decision and complete its task.

As AI systems grow in complexity, context becomes the primary engineering challenge.


As Andrej Karpathy put it, LLMs are like a new kind of operating system: the LLM is the CPU, and its context window is the RAM, serving as the model's working memory.

What Is Context Engineering?

Context Engineering is the practice of designing systems that provide the right information, in the right format, at the right time to help an LLM complete a task.

Context can include:

  • System instructions
  • User messages
  • Retrieved documents
  • Tool definitions
  • Tool outputs
  • Memory
  • Skills
  • External data
  • Messages from other agents

Every LLM has a finite context window (8K, 32K, 128K tokens or more). All the information above must fit inside that window during reasoning.

Our job as AI engineers is to ensure that only the information relevant to the current task occupies that space.

(Figure 1: Components of a Context Window)
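One way to picture this is as a budgeting problem: each component competes for a slice of the finite window, and lower-priority material gets dropped when the budget runs out. The sketch below illustrates the idea; the component names, priority order, and the rough 4-characters-per-token estimate are illustrative assumptions, not a real framework API.

```python
# Sketch: assembling a context window from prioritized components under a
# token budget. Components are added in priority order; anything that would
# overflow the budget is dropped rather than truncated.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def assemble_context(components: list[tuple[str, str]], budget: int) -> str:
    parts, used = [], 0
    for name, text in components:
        cost = estimate_tokens(text)
        if used + cost > budget:
            break  # out of budget: lower-priority components are excluded
        parts.append(f"[{name}]\n{text}")
        used += cost
    return "\n\n".join(parts)

components = [
    ("system", "You are a helpful coffee-making agent."),
    ("user", "Make me a cappuccino."),
    ("memory", "User prefers oat milk and a large mug."),
    ("docs", "Cappuccino recipe: espresso, steamed milk, milk foam."),
]

# With a tight budget, only the highest-priority components survive.
context = assemble_context(components, budget=20)
```

With `budget=20`, only the system instructions and the user message fit; the memory and document components are excluded. Real systems use an actual tokenizer and smarter selection (ranking, summarization), but the constraint is the same.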


Example Scenario: A Coffee-Making Agent

Imagine we are building a simple coffee-making agent.

The user says:

Make me a cappuccino.

At first glance, this seems simple. But the agent may need to consider:

  • Coffee type
  • Milk availability
  • Machine status
  • User preference history
  • Mug size
  • Temperature preference
  • Dietary restrictions

If we dump all of this information into a single prompt, the context window quickly becomes bloated.

Instead, a well-designed system retrieves only what is needed.

That is context engineering.
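A minimal sketch of that selective retrieval might look like the following. The knowledge store and the drink-to-fact routing rules are hypothetical, made up for this example; the point is that the agent pulls in only the facts the current request depends on.

```python
# Sketch: retrieve only the context a request actually needs, instead of
# dumping every known fact into the prompt.

KNOWLEDGE = {
    "machine_status": "Espresso machine is descaled and ready.",
    "milk_availability": "Oat and whole milk in stock; no soy.",
    "user_preferences": "Prefers cappuccino with oat milk, extra hot.",
    "mug_inventory": "Four 12oz mugs are clean.",
    "bean_catalog": "Arabica and robusta beans available.",
}

# Which knowledge entries each drink type depends on (illustrative).
RELEVANT_KEYS = {
    "cappuccino": ["machine_status", "milk_availability", "user_preferences"],
    "espresso": ["machine_status", "bean_catalog"],
}

def build_context(request: str) -> list[str]:
    """Return only the knowledge entries relevant to the requested drink."""
    for drink, keys in RELEVANT_KEYS.items():
        if drink in request.lower():
            return [KNOWLEDGE[k] for k in keys]
    return []  # unknown request: retrieve nothing rather than everything

context = build_context("Make me a cappuccino.")
```

For "Make me a cappuccino." the agent retrieves three facts and ignores the mug inventory and bean catalog entirely. In practice the routing would be done by embeddings or an LLM rather than keyword matching, but the discipline is identical: right information, right moment.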


Getting Started with Building AI Agents

· 7 min read
Dinesh Gopal
Technology Leader, AI Enthusiast and Practitioner

Welcome to The Agentic Advantage.

We live in a world where we’re still speculating about who the next James Bond (Agent 007) might be. While we may not know who the next agent is, we’re increasingly confident about what the next generation of applications will look like: agentic applications.

A couple of years ago, everything revolved around building Retrieval-Augmented Generation (RAG) applications. Today, with rapid advancements in large language models and tooling, almost every company is talking about agents—or as some like to say, that agent life.

In a recent discussion, the Y Combinator team predicted that there will eventually be an agent for almost every SaaS application. That’s a bold claim—but if you’ve been watching how software is evolving, it doesn’t sound that far-fetched.


Starting a New Series: The Agentic Advantage

It’s been a while since I last published a blog post—and that’s something I want to correct.

I’m starting a new series called “The Agentic Advantage”, where I’ll explore:

  • Core agent concepts
  • Agentic design patterns
  • Practical proofs of concept (POCs)
  • Lessons learned moving from demos to production

If you’d like to catch up on my earlier posts, you can find them here.

Before we start building agents, it’s important to clearly understand what an agent is not.


What Is Not an AI Agent

Any system that performs a task once, end-to-end, in a single pass is not an agent.

For example:

PROMPT = """
Write me an essay of 100 words about different types of coffee beans.
"""

This is a one-shot prompt. The system receives an instruction, generates an output, and stops. There is:

  • No planning
  • No iteration
  • No reflection
  • No autonomy

This is useful—but it’s not agentic.


So, What Is an AI Agent?

An AI agent is autonomous. It can:

  • Break down a high-level objective into smaller tasks
  • Decide how to execute those tasks
  • Use tools or APIs when needed
  • Reflect on outcomes and improve its results

In simpler terms:

Agents don’t just respond — they decide, act, and adapt.

Unlike a traditional RAG application that responds directly to a user query, an agent:

  • Understands context
  • Plans a course of action
  • Executes within defined boundaries
  • Iterates until it reaches the goal

A fully independent agent can:

  • Determine the steps needed
  • Identify and use tools
  • Iterate without continuous human intervention
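The loop described above can be sketched in a few lines. Everything here is a stub for illustration: the hard-coded plan, the two fake tools, and the string-based reflection check stand in for what a real agent would delegate to an LLM.

```python
# Sketch of the plan / act / reflect loop of an agent. A real agent would
# ask an LLM to produce the plan, choose tools, and judge outcomes; here
# those parts are hard-coded stubs.

def check_milk() -> str:
    return "oat milk available"

def brew_cappuccino() -> str:
    return "cappuccino brewed"

TOOLS = {"check_milk": check_milk, "brew": brew_cappuccino}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    # 1. Break the high-level goal into smaller tasks (stubbed plan).
    plan = ["check_milk", "brew"]
    results = []
    for step in plan[:max_steps]:
        # 2. Decide how to execute the task: dispatch to a tool.
        outcome = TOOLS[step]()
        # 3. Reflect on the outcome; stop early if a precondition failed.
        if "available" not in outcome and "brewed" not in outcome:
            break
        results.append(outcome)
    return results

log = run_agent("Make me a cappuccino")
```

The shape is what matters: the agent decomposes the goal, acts through tools, checks each outcome before continuing, and is bounded by `max_steps` so it cannot loop forever. Swap the stubs for LLM calls and real tool APIs and this becomes the skeleton of a production agent loop.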