Claude Code Advanced Workflow: Subagents, Commands & Multi-Session
Master Claude Code subagents, custom slash commands, multi-session workflows, and AGENTS.md setup with production-tested patterns from a 14-agent AI company.
Most Claude Code tutorials stop at "write a good CLAUDE.md and let Claude handle the rest." That advice is fine for getting started, but it leaves the most powerful features untouched: subagents that run in isolated contexts, custom slash commands that encode your team's workflows, multi-session patterns that multiply your throughput, and prompting techniques that consistently produce better results.
At Effloow, we run a fully AI-powered content company with 14 agents orchestrated through Paperclip. Every agent runs Claude Code. We have been iterating on advanced workflow patterns for months, and the difference between basic usage and optimized usage is not incremental — it changes what is possible.
This guide covers the advanced patterns we use daily. If you have not set up your CLAUDE.md yet, start with our CLAUDE.md setup guide first, then come back here.
Why Context Management Is Everything
Before diving into specific features, you need to understand the single constraint that drives every advanced pattern: Claude's context window fills up fast, and performance degrades as it fills.
Every file Claude reads, every command output, every conversation turn consumes tokens from a fixed budget. When that budget runs low, Claude starts compacting — summarizing earlier parts of the conversation to free space. Important details get lost. Instructions from your CLAUDE.md compete with accumulated conversation for attention.
This is why every advanced technique in this guide exists: to keep your context clean, focused, and efficient. Subagents isolate exploration so your main context stays pristine. Custom commands encode workflows so Claude does not need lengthy explanations. Multi-session patterns let you split work across fresh contexts instead of cramming everything into one.
Subagents: Isolated Execution for Complex Tasks
Subagents are the single most underused feature in Claude Code. They run in their own context window with their own set of allowed tools, and they report back a summary — keeping your main conversation clean.
When to Use Subagents
Use subagents whenever a task requires reading many files or exploring a codebase. Without subagents, investigation tasks fill your context with file contents you will never reference again.
Use subagents to investigate how our authentication system handles token
refresh, and whether we have any existing OAuth utilities I should reuse.
The subagent explores the codebase, reads files, and reports findings — all without cluttering your main conversation with hundreds of lines of code.
You can also use subagents for verification after implementation:
Use a subagent to review the rate limiter I just wrote for edge cases
and race conditions.
This gives you a fresh perspective from a context that is not biased toward the code it just wrote.
Defining Custom Subagents
Beyond the built-in subagent types, you can define specialized agents in .claude/agents/. Each agent gets its own system prompt and tool restrictions:
# .claude/agents/security-reviewer.md
---
name: security-reviewer
description: Reviews code for security vulnerabilities
tools: Read, Grep, Glob, Bash
model: opus
---
You are a senior security engineer. Review code for:
- Injection vulnerabilities (SQL, XSS, command injection)
- Authentication and authorization flaws
- Secrets or credentials in code
- Insecure data handling
Provide specific line references and suggested fixes.
Once defined, tell Claude to use it: "Use the security-reviewer subagent to audit the new API endpoints."
Real Patterns from Effloow
At Effloow, we use custom subagents for several recurring patterns:
Content QA Agent: Before publishing any article, a subagent reviews the Markdown for broken links, missing frontmatter fields, SEO issues, and factual consistency. This runs in isolation so the publishing agent's context stays focused on the actual deployment.
Dependency Auditor: When updating packages, a subagent checks each dependency for breaking changes, security advisories, and compatibility issues. The main session only sees the summary: "3 packages updated safely, 1 requires manual migration."
Code Exploration: When onboarding a new agent to a part of the codebase, we use the Explore subagent type rather than having the main agent read dozens of files. The exploration report becomes a compact briefing that fits cleanly into context.
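As an illustration, a content QA agent definition following the same format as the security reviewer above might look like this (the field values are a sketch, not our exact production file):

```markdown
# .claude/agents/content-qa.md
---
name: content-qa
description: Reviews Markdown articles before publishing
tools: Read, Grep, Glob
---
You are a content QA reviewer. Check the given Markdown article for:
- Broken or missing internal links
- Missing or malformed frontmatter fields
- SEO issues (title length, missing description)
- Factual inconsistencies within the article
Report findings as a checklist with file and line references.
```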
Custom Slash Commands with Skills
Skills are reusable workflows stored in .claude/skills/ that Claude loads on demand. Unlike CLAUDE.md instructions that load every session, skills only activate when relevant — keeping your baseline context lean.
Creating a Skill
Create a directory with a SKILL.md file:
# .claude/skills/fix-issue/SKILL.md
---
name: fix-issue
description: Fix a GitHub issue end-to-end
disable-model-invocation: true
---
Analyze and fix the GitHub issue: $ARGUMENTS.
1. Use `gh issue view` to get the issue details
2. Understand the problem described in the issue
3. Search the codebase for relevant files
4. Implement the necessary changes to fix the issue
5. Write and run tests to verify the fix
6. Ensure code passes linting and type checking
7. Create a descriptive commit message
8. Push and create a PR
Run it with /fix-issue 1234. The disable-model-invocation: true flag ensures this workflow only runs when you explicitly invoke it, not when Claude decides to use it autonomously.
Skills vs. CLAUDE.md
The distinction matters for context efficiency:
| Put in CLAUDE.md | Put in a Skill |
|---|---|
| Build commands (pnpm test, npm run lint) | Multi-step workflows (deploy, fix-issue) |
| Code style rules that apply everywhere | Domain knowledge for specific task types |
| Repo conventions (branch naming, PR format) | Integration-specific procedures |
| Things Claude needs every session | Things Claude needs sometimes |
Practical Skill Examples
Deploy skill for consistent deployment:
# .claude/skills/deploy/SKILL.md
---
name: deploy
description: Deploy the current branch to staging or production
disable-model-invocation: true
---
Deploy to $ARGUMENTS (default: staging).
1. Run the test suite and abort if any test fails
2. Build the project with `pnpm build`
3. If deploying to production, create a git tag with today's date
4. Push to the appropriate remote branch
5. Monitor deployment status and report results
Review skill for consistent code reviews:
# .claude/skills/review/SKILL.md
---
name: review
description: Review a pull request
disable-model-invocation: true
---
Review PR $ARGUMENTS.
1. Use `gh pr view $ARGUMENTS` to get PR details
2. Use `gh pr diff $ARGUMENTS` to read the changes
3. Check for: correctness, edge cases, test coverage, style consistency
4. Post a review comment with findings
Multi-Session Workflows
Running multiple Claude sessions in parallel is where productivity gains become dramatic. Instead of one session juggling implementation, testing, and review, you split concerns across fresh contexts.
The Writer/Reviewer Pattern
This is the most immediately useful multi-session pattern:
| Session A (Writer) | Session B (Reviewer) |
|---|---|
| "Implement rate limiting for API endpoints" | — |
| — | "Review the rate limiter in src/middleware/rateLimiter.ts. Check for edge cases, race conditions, and consistency with existing middleware." |
| "Address this review feedback: [paste Session B output]" | — |
Session B has clean context — it is not biased toward the implementation because it did not write it. This catches issues that single-session workflows miss.
The Test-First Pattern
Have one session write tests, then another write code to pass them:
- Session A: "Write comprehensive tests for a user registration endpoint. Cover validation, duplicate emails, password requirements, and error responses."
- Session B: "Make all tests in tests/registration.test.ts pass. Do not modify the test file."
This produces better test coverage because the test writer is not influenced by implementation shortcuts.
Non-Interactive Mode for Automation
claude -p "prompt" runs Claude without a session, which is how you integrate it into scripts, CI pipelines, and batch operations:
# Analyze a file
claude -p "Explain what this project does"
# Structured output for scripts
claude -p "List all API endpoints" --output-format json
# Batch migration across files
for file in $(cat files-to-migrate.txt); do
claude -p "Migrate $file from React class components to hooks. Return OK or FAIL." \
--allowedTools "Edit,Bash(git commit *)"
done
The --allowedTools flag restricts what Claude can do during unattended runs — essential for batch operations.
Fan-Out Pattern for Large Migrations
For tasks that touch hundreds of files, the fan-out pattern is transformative:
- Have Claude generate a task list: "List all Python files that need migrating from unittest to pytest"
- Write a script that loops through the list, calling claude -p for each file
- Test on 2-3 files, refine your prompt based on results
- Run at scale
This turns a multi-day manual migration into an overnight batch job.
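A minimal driver for those steps might look like the following sketch. Here `run_migration` is a stub standing in for the real `claude -p "Migrate $1 ..." --allowedTools ...` call, so the loop-and-logging structure can be seen and tested on its own:

```shell
#!/bin/sh
# Hypothetical fan-out driver. run_migration is a stub; in real use, replace
# its body with the `claude -p` call shown earlier in this section.
run_migration() {
  case "$1" in
    *.tsx) echo "OK" ;;   # stub: pretend .tsx files migrate cleanly
    *)     echo "FAIL" ;; # stub: everything else needs manual attention
  esac
}

printf 'Header.tsx\nlegacy.js\n' > files-to-migrate.txt
: > migration-results.txt

while IFS= read -r file; do
  printf '%s\t%s\n' "$file" "$(run_migration "$file")" >> migration-results.txt
done < files-to-migrate.txt

# Summarize: list files that need manual follow-up
grep 'FAIL$' migration-results.txt
```

Keeping a results file like this is what makes the pattern safe to run overnight: the next morning you review the FAIL lines instead of re-checking every file.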
AGENTS.md for Team Setups
When multiple agents or team members work on the same codebase, AGENTS.md provides agent-specific instructions that layer on top of CLAUDE.md. Each agent gets context tailored to its role without bloating the shared configuration.
How AGENTS.md Works
AGENTS.md files live alongside your CLAUDE.md and provide role-specific instructions. While CLAUDE.md contains universal project rules, AGENTS.md carries instructions specific to one agent's responsibilities.
At Effloow, each of our 14 agents has its own instruction file that defines:
- Its role and responsibilities
- Which parts of the codebase it should focus on
- How it should communicate with other agents
- Specific workflows and checklists it must follow
Structuring Agent Instructions
A well-structured agent instruction file covers:
# Role Definition
You are the Publisher agent. You handle the final publishing pipeline.
# Responsibilities
1. Validate article frontmatter and content quality
2. Commit and push to the content repository
3. Cross-post to external platforms
# Checklists
## Pre-Publish Checklist
- [ ] Verify frontmatter is complete
- [ ] Confirm no [PLACEHOLDER] tags remain
- [ ] Check all internal cross-links
- [ ] Validate Markdown formatting
# Communication Protocol
- Report publishing results to Editor-in-Chief
- If blocked, escalate with specific blocker details
The key principle: each agent's instructions should be self-contained enough that it can operate independently, but aware enough of the team structure to collaborate effectively.
Advanced Prompting Techniques
Beyond workflow patterns, how you phrase prompts significantly impacts output quality.
The Explore-Plan-Implement Pattern
This three-phase approach, recommended in the official best practices, prevents Claude from jumping to implementation before understanding the problem:
- Explore (Plan Mode): "Read /src/auth and understand how we handle sessions and login. Also look at how we manage environment variables for secrets."
- Plan (Plan Mode): "I want to add Google OAuth. What files need to change? What is the session flow? Create a plan."
- Implement (Normal Mode): "Implement the OAuth flow from your plan. Write tests for the callback handler, run the test suite and fix any failures."
Plan Mode (toggle with Shift+Tab) lets Claude explore and plan without making changes. This separation prevents wasted implementation effort on the wrong approach.
The Interview Pattern
For larger features where requirements are ambiguous, have Claude interview you instead of trying to specify everything upfront:
I want to build a notification system. Interview me in detail using
the AskUserQuestion tool.
Ask about technical implementation, UI/UX, edge cases, concerns,
and tradeoffs. Don't ask obvious questions — dig into the hard
parts I might not have considered.
Keep interviewing until we've covered everything, then write a
complete spec to SPEC.md.
Once the spec is complete, start a fresh session to implement it. The new session has clean context focused entirely on execution.
Scoped Investigation
Open-ended prompts like "investigate this" cause Claude to read hundreds of files, filling context. Always scope your investigations:
# Bad — unbounded exploration
Investigate the authentication system.
# Good — scoped investigation
Check how token refresh works in src/auth/refresh.ts and whether
it handles expired refresh tokens gracefully.
When you genuinely need broad investigation, delegate it to a subagent so the exploration does not consume your main context.
Course Correction Techniques
Claude is not always right on the first try. The key is correcting efficiently:
- Esc: Stop Claude mid-action. Context is preserved so you can redirect.
- Esc + Esc or /rewind: Restore to a previous checkpoint — conversation, code, or both.
- /clear: Reset context between unrelated tasks. This is the most underused command.
If you have corrected Claude more than twice on the same issue, the context is polluted with failed approaches. Run /clear and start fresh with a better prompt that incorporates what you learned. A clean session with a better prompt almost always outperforms a long session with accumulated corrections.
Hooks: Guaranteed Actions
Unlike CLAUDE.md instructions which are advisory, hooks are deterministic scripts that run at specific points in Claude's workflow.
Write a hook that runs eslint after every file edit.
Claude can write hooks for you and configure them in .claude/settings.json. Use hooks when an action must happen every time with zero exceptions — linting after edits, running tests before commits, blocking writes to protected directories.
The distinction between CLAUDE.md instructions and hooks is important: instructions can be forgotten as context fills up, but hooks execute regardless.
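As a sketch, a PostToolUse hook in .claude/settings.json that lints after every edit might look like this. The matcher string and lint command are assumptions to adapt for your project; the hook receives the tool call as JSON on stdin, which is why jq extracts the file path:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx eslint --fix"
          }
        ]
      }
    ]
  }
}
```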
Putting It All Together
Here is how these techniques combine in a real workflow at Effloow:
- Skills define our repeatable workflows: /publish, /review, /deploy
- Custom subagents handle specialized tasks: content QA, security review, dependency auditing
- Multi-session patterns separate writing from reviewing — our Writer agent produces articles in one context, while our Publisher agent validates and deploys in another
- AGENTS.md gives each of our 14 agents role-specific instructions without bloating the shared CLAUDE.md
- Hooks enforce non-negotiable rules: every article must pass frontmatter validation before publishing
The result is a system where each agent operates with a clean, focused context and clear responsibilities, using advanced Claude Code features to stay efficient rather than fighting context limits.
What to Try First
If you are coming from basic Claude Code usage, here is a progression:
- Start with subagents: Next time you need to investigate code, say "use subagents to investigate X" instead of asking Claude directly. Notice how much cleaner your context stays.
- Create one skill: Pick your most common multi-step workflow and encode it as a skill. You will use it more than you expect.
- Try the Writer/Reviewer pattern: Run two sessions for your next feature. The review quality from a fresh context is noticeably better.
- Add a hook: Pick one rule that Claude occasionally forgets and make it a hook instead.
Each of these is a small change that compounds over time. The teams that get the most from Claude Code are not using more advanced prompts — they are using these structural features to work with Claude's architecture instead of against it.
Related Resources
- The Perfect CLAUDE.md: How to Set Up Your Project for Agentic Coding — Start here if you have not configured your CLAUDE.md yet
- How to Build a Custom MCP Server for Claude Code — Extend Claude with external tool integrations
- What Is Vibe Coding? The Developer Trend Redefining How We Build Software — The broader trend behind these advanced workflows
- Official Claude Code Best Practices — Anthropic's own guide to effective usage