
Sub Agents — Context Isolation

Context Perspective: A Sub Agent creates an isolated context to tackle a specific sub-task, preventing contamination of the main context.

The previous chapter covered orchestration patterns—how to organize steps. This chapter looks at the execution unit: when the main Agent needs a clean environment for a sub-task, it spawns a Sub Agent.

[Figure: Sub Agent Context Isolation Protocol. The main Agent (high noise level: chat history, system prompts) hands off a task needing isolation (1. Goal, 2. Constraints, 3. Context) to a Sub Agent: a zero-history zone with inherited System Instructions. The Sub Agent thinks, calls tools, and merges a compressed summary back into the main context.]

The Problem: Context Gets Dirty

Remember "context pollution" from The First Principle?

The longer a conversation goes, the longer the messages array gets. Early explorations, rejected solutions, irrelevant tool outputs… all piling up. When you ask the Agent to perform a precise sub-task in this noise—say, "write an integration test based on the latest API schema"—the LLM’s attention gets diluted. It might reference outdated code or follow a deprecated convention.

You need a clean room.

That said, not every task needs one. Can a single Command trigger handle it? Use that; it's simpler. Does the capability need to stay active throughout the session? A Skill loads once and persists. Is the sub-task independent, with a clear goal? Spawn a Sub Agent right away; no need to wait until the context gets noisy.
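The decision above can be sketched as a simple rule. The task attributes here are illustrative stand-ins for whatever signals your Agent actually uses:

```python
from types import SimpleNamespace

def choose_mechanism(task):
    """Sketch of the decision rule above; attribute names are illustrative."""
    if task.fits_single_command:
        return "command"      # one trigger is enough: simplest option
    if task.needs_persistent_capability:
        return "skill"        # loads once and stays active in context
    if task.is_independent and task.has_clear_goal:
        return "sub_agent"    # clean room: spawn right away
    return "main_agent"       # otherwise, keep working inline

write_tests = SimpleNamespace(
    fits_single_command=False,
    needs_persistent_capability=False,
    is_independent=True,
    has_clear_goal=True,
)
print(choose_mechanism(write_tests))  # sub_agent
```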

Spawning a Sub Agent

A Sub Agent is that clean room.

The main Agent can spawn one or more Sub Agents. Each Sub Agent’s messages history starts from zero—it can’t see what the main Agent discussed with you. But "clean" doesn’t mean "blank": Sub Agents typically inherit the main Agent’s System Instructions. The project standards, coding conventions, and safety rails you wrote in CLAUDE.md—the Sub Agent follows those too.

What’s isolated is the conversation history, not the project rules.

One more thing that’s easy to miss: the Sub Agent’s initial prompt is usually constructed by the main Agent automatically. You give the main Agent a big task. The main Agent analyzes it, decides "this sub-task needs isolated handling," and constructs an initial prompt for the Sub Agent. You can guide this through System Instructions—for example, "when delegating, always include file paths and constraints." The more precise your instructions, the better the prompt it constructs.

What a Good Initial Prompt Looks Like

The most common mistake when delegating to a Sub Agent is dumping the entire chat history.

The right approach — a focused task description with just three things:

  1. Goal: Be specific. "Fix the login bug in the auth module."
  2. Constraints: State the boundaries. "Do not touch the DB schema. Do not add new dependencies."
  3. Key Context: Provide only what's necessary. "Relevant files are A and B. Error logs are in C."

Dumping context is lazy. The Sub Agent will get lost in the noise.
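The three-part note can be sketched as a small structure. The field names and rendering here are illustrative, not any particular framework's API:

```python
from dataclasses import dataclass

@dataclass
class HandoffNote:
    """A focused task description for a Sub Agent (hypothetical structure)."""
    goal: str                # one specific, verifiable objective
    constraints: list[str]   # hard boundaries the Sub Agent must not cross
    key_context: list[str]   # only the files/logs/snippets the task needs

    def to_prompt(self) -> str:
        """Render the note as the Sub Agent's initial user message."""
        lines = [f"GOAL: {self.goal}", "CONSTRAINTS:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("KEY CONTEXT:")
        lines += [f"- {k}" for k in self.key_context]
        return "\n".join(lines)

note = HandoffNote(
    goal="Fix the login bug in the auth module.",
    constraints=["Do not touch the DB schema.", "Do not add new dependencies."],
    key_context=["Relevant files: auth/*", "Error logs: server.log"],
)
print(note.to_prompt())
```

Everything the Sub Agent needs fits on one screen; nothing from the parent transcript leaks in.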

[Figure: Handoff Note vs Context Dump, two side-by-side cards feeding a Sub Agent with very different signal-to-noise. Left (recommended): a self-contained handoff note with a goal ("Fix login bug, auth module"), constraints ("No DB schema changes, no new deps"), and key context ("Files: auth/*, logs: server.log"); outcome: focused execution, usable summary. Right (anti-pattern): dumping the full transcript (200+ messages, rejected experiments and forks, irrelevant tool output, outdated assumptions); outcome: diluted attention, drift, wasted tokens. Rule: send only the minimum that makes the task executable.]

How It Works

The main Agent delegating to a Sub Agent boils down to three steps:

[Figure: Sub Agent Workflow: Delegate, Execute, Return. Step 1: the main Agent, its conversation history accumulating noise, delegates with a focused prompt (goal, constraints, key context). Step 2: the Sub Agent executes independently in its isolated context (read_file(schema.json), write_file(createUser.test.ts), run_test fails with "expected 201...", retry, run_test passes). Step 3: the main Agent receives only the compressed summary ("tests created, 201 + 400 covered, file: tests/integration/createUser.test.ts"); the Sub Agent's internal struggles stay in the audit log.]
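The three steps can be sketched as a loop. `call_llm` and `tools` are hypothetical stand-ins for your LLM client and tool registry; this is a sketch, not a specific vendor API:

```python
def spawn_sub_agent(system_instructions, handoff_note, call_llm, tools):
    """Run a Sub Agent in an isolated context; return only its summary."""
    # Step 1: fresh history -- the handoff note is the ONLY user message.
    messages = [{"role": "user", "content": handoff_note}]

    while True:
        reply = call_llm(system=system_instructions, messages=messages)
        messages.append(reply)
        if not reply.get("tool_calls"):
            # Step 3: the final assistant message is the compressed summary.
            return reply["content"]
        # Step 2: execute tool calls inside the isolated context.
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
```

Note that `messages` is a local variable: the Sub Agent's history is created here and discarded (or archived) here, never merged into the parent's.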

── Inside the Sub Agent ──

The Sub Agent’s messages start from zero, but its system prompt inherits the project’s System Instructions:

Round 1: Receiving the task

```json
// → REQUEST (Sub Agent → LLM API)
{
  "system": "You are a code assistant.\n\n[Project System Instructions]\n- TypeScript strict mode\n- Tests use Vitest\n- No any types\n...",
  "messages": [
    { "role": "user", "content": "You are a QA Engineer. Here is the API schema: {...}\nWrite an integration test for createUser..." }
  ]
}
```

Notice messages has exactly one entry—clean, focused, no baggage from the main Agent’s history.

Round 2: Context grows after tool calls

The Sub Agent reads the schema, writes a test file, runs the test, sees a failure, and corrects:

```json
// → REQUEST (Sub Agent → LLM API, Round 2)
{
  "messages": [
    { "role": "user", "content": "You are a QA Engineer..." },
    { "role": "assistant", "tool_calls": [{ "name": "read_file", "arguments": { "path": "src/api/v2/schema.json" } }] },
    { "role": "tool", "content": "{ \"endpoints\": { \"createUser\": { ... } } }" },
    { "role": "assistant", "tool_calls": [{ "name": "write_file", "arguments": { "path": "tests/createUser.test.ts", "content": "..." } }] },
    { "role": "tool", "content": "File written." },
    { "role": "assistant", "tool_calls": [{ "name": "bash", "arguments": { "command": "vitest run createUser" } }] },
    { "role": "tool", "content": "FAIL: expected 201 but got 500..." },
    { "role": "assistant", "content": "Test failed with 500. Fixing test mock..." }
  ]
}
```

The Sub Agent might go through a dozen rounds of tool calls internally—reading specs, writing code, running tests, fixing bugs. All of this happens in the isolated context. The main Agent can’t see it and isn’t disturbed by it.

The Final Return

When the Sub Agent finishes, it returns a summary to the main Agent—not dozens of raw messages, but a compressed result. Think of it like git stash: stash your current complex context, do an atomic task on a clean branch, then switch back with the output.

What the main Agent receives is just: "Tests created, covering 201 and 400, file at tests/integration/createUser.test.ts." Whatever struggles the Sub Agent went through in between—the main Agent doesn’t need to know.
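In message terms, the entire Sub Agent run collapses to a single entry in the main Agent's history. A hypothetical sketch; the `spawn_sub_agent` tool name is illustrative:

```python
# The main Agent's (already noisy) history, abbreviated:
main_messages = [
    {"role": "user", "content": "...long conversation..."},
    {"role": "assistant", "tool_calls": [
        {"name": "spawn_sub_agent",
         "arguments": {"goal": "Write an integration test for createUser"}},
    ]},
]

# The Sub Agent ran a dozen internal rounds; only one message comes back:
main_messages.append({
    "role": "tool",
    "content": ("Tests created, covering 201 and 400, "
                "file at tests/integration/createUser.test.ts"),
})

# The internal rounds never entered this history.
print(len(main_messages))  # 3
```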

[Figure: Isolation Boundary (Sub Agents). Inherited across the boundary: System Instructions (project rules, safety rails, conventions). Isolated: conversation history; the main Agent's messages[] grows with exploration, rejects, and irrelevant tool output, while the Sub Agent's messages[] starts fresh, holding only the handoff note (goal, constraints, key context), never the raw transcript. Returned: a compressed summary. Stored: a full audit trail of tool outputs, intermediate steps, and failures, kept in the session log for drill-down when the summary looks wrong.]

Connecting Back to the First Principle

A Sub Agent’s performance depends on two things:

  1. The quality of System Instructions: The project rules you wrote in CLAUDE.md—the Sub Agent consumes those too. Good rules mean the Sub Agent’s behavior aligns with project standards. This is the knowledge feeding rule layer at work inside Sub Agents.

  2. The quality of the initial prompt the main Agent constructs: This loops back to The First Principle—context quality determines output quality. A good initial prompt must be:

    • Self-contained: Not reliant on hidden information from the parent context.
    • Complete: Including all necessary background materials (code snippets, file paths, clear objectives).
    • Focused: Only information relevant to the sub-task, no noise.
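These three qualities can even be spot-checked mechanically. A hypothetical linter, with thresholds and trigger phrases chosen purely for illustration:

```python
def lint_handoff_prompt(prompt: str) -> list[str]:
    """Flag common signs that an initial prompt is a context dump."""
    warnings = []
    lowered = prompt.lower()
    if len(prompt) > 8_000:  # illustrative threshold
        warnings.append("very long: are you dumping the transcript?")
    for phrase in ("as discussed", "as mentioned above", "see earlier"):
        if phrase in lowered:
            warnings.append(
                f"not self-contained: references hidden context ({phrase!r})")
    if "goal" not in lowered:
        warnings.append("incomplete: no explicit goal stated")
    return warnings

print(lint_handoff_prompt("As discussed, just fix it."))
```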

What you can do: give the main Agent clear instructions and sufficient background. The better the raw material the main Agent has, the better the prompt it constructs for the Sub Agent.

Key Takeaways

  • Context flow: The main Agent extracts information from its own context to construct the initial prompt → Sub Agent executes independently in isolated context → returns a summary that gets injected back into the main Agent’s context. Runtime isolation and full logging coexist—they don’t conflict.
  • Risk: Isolation is a double-edged sword. If the main Agent omits a key constraint when constructing the prompt, the Sub Agent works without critical information and may produce non-compliant code. Over-splitting also has costs—each Sub Agent needs to rebuild context from scratch, and coordination overhead accumulates.
  • Auditability: Each Sub Agent’s full session log is saved independently and can be traced. When a summary looks wrong, you can drill down into the Sub Agent’s complete context to investigate. A summary is compression, not truth.

Next up: Human-in-the-Loop—your role in the workflow: when to let go, when to step in.