Welcome to DeerFlow
DeerFlow (Deep Exploration and Efficient Research Flow) is an open-source super agent harness that orchestrates sub-agents, memory, and sandboxes to do almost anything — powered by extensible skills.

DeerFlow 2.0 is a ground-up rewrite built on LangGraph and LangChain. It shares no code with v1, which is maintained on the 1.x branch.

What is DeerFlow?
DeerFlow started as a Deep Research framework — but developers pushed it far beyond research. They’ve used it to build data pipelines, generate slide decks, spin up dashboards, and automate complex workflows — things we never anticipated. DeerFlow 2.0 is no longer just a framework you wire together. It’s a super agent harness — batteries included, fully extensible. It ships with everything an agent needs out of the box:
- Isolated sandbox environments with full filesystem access
- Persistent memory that learns your preferences and context
- Extensible skills for specialized workflows
- Sub-agent orchestration for complex, multi-step tasks
- Tool ecosystem including web search, file operations, and bash execution
Use it as-is for immediate productivity, or tear it apart and make it yours.
Core Capabilities
Agent Orchestration
Lead agent delegates complex tasks to specialized sub-agents that run in parallel, each with scoped context and tools
Sandbox Execution
Each task runs in an isolated environment with full filesystem, bash access, and code execution — all auditable and sandboxed
Extensible Skills
Structured capability modules for research, report generation, slide creation, web pages, and more — or add your own
Persistent Memory
Long-term memory across sessions that learns your profile, preferences, and workflows — stored locally under your control
Architecture Overview
DeerFlow is built on a modern, scalable architecture.

System Components
LangGraph Server (Port 2024)
The core agent runtime built on LangGraph for robust multi-agent workflow orchestration.

Responsibilities:
- Agent creation and configuration
- Thread state management with checkpointing
- Middleware chain execution (10 middlewares)
- Tool execution orchestration
- SSE streaming for real-time responses
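To make the streaming responsibility concrete, here is a minimal sketch of how a client might parse the server's SSE stream. The field handling follows the generic SSE wire format; the event names and payload shapes DeerFlow actually emits are not specified here, so the `messages` event in the demo is a hypothetical example.

```python
# Minimal SSE event parser (sketch). Parses the generic SSE wire format;
# DeerFlow's actual event names/payloads may differ.
def parse_sse(lines):
    """Yield (event, data) tuples from an iterable of SSE lines."""
    event, data = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []

stream = ["event: messages", 'data: {"content": "hello"}', ""]
print(list(parse_sse(stream)))  # [('messages', '{"content": "hello"}')]
```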
src/agents/lead_agent/agent.py:make_lead_agent

Gateway API (Port 8001)
FastAPI REST application for non-agent operations and configuration management.

Endpoints:
- /api/models - List and configure LLM models
- /api/mcp - Manage MCP (Model Context Protocol) servers
- /api/skills - Skill discovery, installation, and management
- /api/memory - Memory system access and configuration
- /api/threads/{id}/uploads - File upload with document conversion
- /api/threads/{id}/artifacts - Serve generated artifacts
Frontend (Port 3000)
Next.js web application providing an intuitive chat interface.

Features:
- Real-time streaming chat with SSE
- File upload and management
- Artifact preview and download
- Model and skill configuration
- Memory inspection
Nginx (Port 2026)
Unified reverse proxy entry point that routes traffic:
- /api/langgraph/* → LangGraph Server
- /api/* (other) → Gateway API
- /* (non-API) → Frontend
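The routing precedence above can be expressed as an ordered prefix table — most specific prefix first. The sketch below mirrors that logic in Python for illustration only; it is not the actual Nginx configuration, and the upstream host names are made up.

```python
# Illustrative route table mirroring the proxy's precedence — most
# specific prefix wins. Upstream names are hypothetical.
ROUTES = [
    ("/api/langgraph/", "langgraph-server:2024"),
    ("/api/",           "gateway-api:8001"),
    ("/",               "frontend:3000"),
]

def upstream(path: str) -> str:
    """Return the first upstream whose prefix matches the request path."""
    for prefix, target in ROUTES:
        if path.startswith(prefix):
            return target
    return "frontend:3000"

print(upstream("/api/langgraph/threads"))  # langgraph-server:2024
print(upstream("/api/models"))             # gateway-api:8001
print(upstream("/chat"))                   # frontend:3000
```

Ordering matters: if `/api/` were checked before `/api/langgraph/`, agent traffic would never reach the LangGraph Server.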
Key Features Deep Dive
Skills & Tools
Skills are what make DeerFlow do almost anything. A skill is a structured capability module — a Markdown file (SKILL.md) that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for:
- Research - Deep web research with source verification
- Report Generation - Structured document creation
- Slide Creation - Presentation generation
- Web Pages - HTML/CSS site building
- Image & Video Generation - Media creation workflows
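Since a skill is just a SKILL.md file, a custom one can be sketched in plain Markdown. The structure below is only illustrative — the frontmatter fields and section names are assumptions, not the documented schema:

```markdown
---
name: changelog-writer
description: Draft a release changelog from recent commits
---

# Changelog Writer

## Workflow
1. Collect commit messages since the last release tag.
2. Group them by type (feature, fix, chore).
3. Draft a Markdown changelog and save it as an artifact.

## Best Practices
- Link each entry to its commit or PR where possible.
- Keep each entry to a single line.
```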
Sub-Agents
Complex tasks rarely fit in a single pass. DeerFlow decomposes them. The lead agent spawns sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.

Built-in Sub-Agent Types:
- general-purpose - Full toolset for complex multi-step tasks
- bash - Command execution specialist
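The delegation pattern described above can be sketched conceptually: the lead agent hands each sub-agent only its own scoped task, runs them in parallel, and collects structured results. This is not DeerFlow's actual API — `run_subagent` and the result shape are stand-ins.

```python
# Conceptual sketch of lead-agent delegation — not DeerFlow's real API.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(kind: str, task: str) -> dict:
    """Stand-in for dispatching one sub-agent with a scoped task."""
    return {"agent": kind, "task": task, "result": f"done: {task}"}

subtasks = [
    ("general-purpose", "research topic A"),
    ("bash", "collect system stats"),
]

# Sub-agents run in parallel when possible; results come back in order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: run_subagent(*t), subtasks))

print([r["agent"] for r in results])  # ['general-purpose', 'bash']
```

The lead agent would then synthesize these structured results into one coherent output.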
Sandbox & File System
DeerFlow doesn’t just talk about doing things. It has its own computer. Each task runs inside an isolated environment with a full filesystem: the agent works against a virtual filesystem (the agent's view), which is mapped onto the host filesystem.
Context Engineering
DeerFlow manages context aggressively to stay sharp across long tasks.

Isolated Sub-Agent Context
Each sub-agent runs in its own isolated context, unable to see the main agent or other sub-agents. This ensures focus on the task at hand.
Automatic Summarization
Within a session, DeerFlow summarizes completed sub-tasks when approaching token limits, compressing what’s no longer immediately relevant.
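The idea can be illustrated with a toy sketch (not DeerFlow internals): once the running context exceeds a token budget, transcripts of completed sub-tasks are replaced with short summaries while in-progress work is kept verbatim. Token counting here is naively word-based for demonstration.

```python
# Toy sketch of threshold-triggered summarization — not DeerFlow's
# actual implementation; "tokens" are just whitespace-split words here.
def compress_history(history, budget):
    """Summarize completed entries until the history fits the budget."""
    def tokens(entry):
        return len(entry["text"].split())

    out = [dict(e) for e in history]
    for entry in out:
        if sum(tokens(e) for e in out) <= budget:
            break  # already under budget — stop compressing
        if entry["done"]:
            entry["text"] = f"[summary] {entry['text'].split()[0]} ..."
    return out

history = [
    {"done": True,  "text": "searched ten sources and compared findings"},
    {"done": False, "text": "drafting the final report"},
]
compressed = compress_history(history, budget=8)
print(sum(len(e["text"].split()) for e in compressed))  # 7
```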
Long-Term Memory
Most agents forget everything when a conversation ends. DeerFlow remembers. Across sessions, DeerFlow builds a persistent memory of:
- Your profile and work context
- Preferences and behaviors
- Accumulated knowledge and facts
- Recurring workflows and patterns
Memory is stored locally in backend/.deer-flow/memory.json and stays under your control. The more you use DeerFlow, the better it knows you.
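Because the store is a local JSON file, inspecting it takes only a few lines. The path comes from the docs; the schema of the file (e.g. a `preferences` key) is an assumption, so treat the demo key as hypothetical.

```python
# Minimal sketch of inspecting the local memory store. Path from the
# docs; the JSON schema is an assumption and may differ in practice.
import json
from pathlib import Path

MEMORY_PATH = Path("backend/.deer-flow/memory.json")

def load_memory() -> dict:
    """Return the persisted memory, or an empty dict if none exists yet."""
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return {}

memory = load_memory()
print(memory.get("preferences", {}))  # {} on a fresh install
```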
Recommended Models
DeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-compatible API. That said, it performs best with models that support:

Long Context
100k+ token windows for deep research and multi-step tasks
Reasoning
Adaptive planning and complex task decomposition capabilities
Multimodal
Image understanding and video comprehension support
Strong Tool Use
Reliable function calling and structured output generation
Models that work well include:
- OpenAI (GPT-4, GPT-4o)
- Anthropic (Claude 3.5 Sonnet)
- Google (Gemini 2.5 Pro)
- DeepSeek (V3 with thinking support)
- Volcengine (Doubao-Seed-1.8)
- Any OpenAI-compatible API (Novita AI, Kimi, etc.)
What’s Next?
Quick Start
Get DeerFlow running in minutes with our step-by-step guide
Configuration
Learn how to configure models, tools, and skills
Architecture
Deep dive into DeerFlow’s technical architecture
API Reference
Complete API documentation for Gateway and LangGraph