A workshop on building a research agent from scratch with the Deep Agents framework. The repo ships both an interactive notebook that walks through the concepts step by step and a standalone agent wired up for LangSmith Studio.
- Creating a basic Deep Agent with built-in filesystem and planning tools
- Adding custom tools (web search via Tavily)
- Understanding backends: StateBackend, FilesystemBackend, StoreBackend, CompositeBackend
- Delegating work to subagents for context isolation
- Human-in-the-loop approval for sensitive operations
- Long-term memory with `/memories/*` routing across threads
- AGENTS.md for persistent agent identity (always loaded)
- Skills (SKILL.md) for on-demand capabilities via progressive disclosure
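Progressive disclosure is easiest to see in miniature: the agent is shown only each skill's name and one-line description up front, and the full SKILL.md body is loaded only when a skill is actually invoked. The sketch below is a hypothetical stdlib-only illustration of that pattern; the directory layout and `name:`/`description:` frontmatter keys are assumptions, not the framework's actual format.

```python
from pathlib import Path

def skill_summaries(skills_dir: str) -> list[str]:
    """List each skill as 'name: description' without loading full bodies."""
    summaries = []
    for skill_file in sorted(Path(skills_dir).glob("*/SKILL.md")):
        meta = {}
        for line in skill_file.read_text().splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
            if len(meta) >= 2:  # only the name and description are needed up front
                break
        summaries.append(
            f"{meta.get('name', skill_file.parent.name)}: {meta.get('description', '')}"
        )
    return summaries

def load_skill(skills_dir: str, name: str) -> str:
    """Load the full SKILL.md body only when the agent actually needs it."""
    return (Path(skills_dir) / name / "SKILL.md").read_text()
```

The point of the split is context economy: dozens of skills cost the agent one line each until one is selected.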
1. Clone the repo

   ```bash
   git clone https://github.com/langchain-ai/interrupt26-deepagents.git
   cd interrupt26-deepagents
   ```

2. Install dependencies

   ```bash
   uv sync
   ```

3. Configure environment

   ```bash
   cp .env.example .env
   ```

Fill in your API keys in `.env`. At minimum:

- `ANTHROPIC_API_KEY` (or swap providers; see `utils/models.py`)
- `TAVILY_API_KEY` (free at tavily.com)
Optional but recommended:

- `LANGSMITH_API_KEY` plus `LANGSMITH_TRACING=true` for full trace observability
Launch the notebook:

```bash
uv run jupyter notebook
```

Open `deep_agent.ipynb` and run the cells top to bottom. Each of the 8 parts takes roughly 30 seconds to a couple of minutes to run.
The repo ships a production-shaped agent at `agent/agent.py`, wired up via `langgraph.json`. Start the local LangGraph API and Studio with one command:

```bash
uv run langgraph dev
```

You'll see something like:
- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
Open the Studio URL in a browser. The Deep Agent graph appears in the sidebar. From there you can:
- Chat with the agent and watch each tool call land in real time
- Inspect intermediate state, the virtual filesystem, and the agent's todo list
- Step through threads, fork them, and edit messages mid-conversation
- See `/memories/*` files persist across threads (`langgraph dev` provides the checkpointer and store automatically)
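That persistence behavior comes from routing by path: writes under `/memories/` land in a durable store shared across threads, while everything else lives in per-thread state. The real CompositeBackend has its own API; the class below is only a dict-backed sketch of the routing idea, with all names hypothetical.

```python
class CompositeFS:
    """Route file paths to different backends by prefix (sketch, not the real API)."""

    def __init__(self):
        self.state = {}   # per-thread scratch space, discarded between threads
        self.store = {}   # durable store, survives across threads

    def _backend(self, path: str) -> dict:
        return self.store if path.startswith("/memories/") else self.state

    def write(self, path: str, content: str) -> None:
        self._backend(path)[path] = content

    def read(self, path: str) -> str:
        return self._backend(path)[path]

    def new_thread(self) -> None:
        """Simulate starting a fresh thread: state resets, the store persists."""
        self.state = {}
```

So a preferences file written to `/memories/prefs.md` in one conversation is still readable in the next, while scratch files vanish with their thread.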
When you're ready to deploy, `langgraph.json` is already shaped for LangSmith Deployments: use the LangGraph CLI to deploy your agent directly from your terminal with `uv run langgraph deploy`.
The default model is Anthropic (`claude-haiku-4-5`). To switch, edit `utils/models.py`; commented-out sections are included for OpenAI, Azure OpenAI, AWS Bedrock, and Google Vertex AI (Gemini). For non-default providers, install the matching extra:
```bash
uv sync --extra azure    # Azure OpenAI
uv sync --extra bedrock  # AWS Bedrock
uv sync --extra vertex   # Google Vertex AI
```
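If you script the provider switch, a small guard can map the provider name to the install command it needs. The mapping mirrors the list above; the helper itself is hypothetical (in the repo you switch providers by editing `utils/models.py` directly), and treating OpenAI as part of the base install is an assumption.

```python
# Hypothetical helper: map a provider name to the `uv sync` command it requires.
EXTRAS = {
    "anthropic": None,       # default provider, no extra needed
    "openai": None,          # assumed to be covered by the base install
    "azure": "azure",        # Azure OpenAI
    "bedrock": "bedrock",    # AWS Bedrock
    "vertex": "vertex",      # Google Vertex AI
}

def install_hint(provider: str) -> str:
    """Return the install command for a provider, or raise on unknown names."""
    if provider not in EXTRAS:
        raise ValueError(f"Unknown provider: {provider}")
    extra = EXTRAS[provider]
    return "uv sync" if extra is None else f"uv sync --extra {extra}"
```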