## Configuration Structure
Models are configured in the `models` section of `config.yaml`:
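A sketch of that section under assumed key names (`provider` for the import path, `model` for the provider-side identifier; the field descriptions below define the contents, but the exact keys may differ):

```yaml
models:
  - name: gpt-4                              # unique identifier
    provider: langchain_openai:ChatOpenAI    # assumed key for the import path
    model: gpt-4                             # provider-side model identifier
    api_key: $OPENAI_API_KEY                 # resolved from the environment
    max_tokens: 4000
    temperature: 0.7
```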
## Required Fields
- `name`: Unique identifier for the model. Used in API calls and UI selection.
- Import path to the LangChain chat model class, in the format `module.path:ClassName`. Examples:
  - `langchain_openai:ChatOpenAI`
  - `langchain_anthropic:ChatAnthropic`
  - `langchain_google_genai:ChatGoogleGenerativeAI`
  - `src.models.patched_deepseek:PatchedChatDeepSeek`
- The model identifier expected by the provider (e.g., `gpt-4`, `claude-3-5-sonnet-20241022`).

## Optional Fields
- Human-readable name shown in the UI. Defaults to the `name` field if not specified.
- Optional description of the model's capabilities or use case.
- `api_key`: API key for the provider. Always use environment variables (e.g., `$OPENAI_API_KEY`) to avoid committing secrets.
- `max_tokens`: Maximum number of tokens to generate in responses.
- Sampling temperature (0.0 to 2.0). Lower values are more deterministic; higher values are more creative.
- Custom API endpoint URL. Required for OpenAI-compatible providers. Example: `https://api.novita.ai/openai`
- Alternative field name for the base URL (used by some providers, such as DeepSeek).
## Advanced Features
### Vision Support
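A model entry with vision enabled might look like this (the `supports_vision` flag name is an assumption; check the project's schema):

```yaml
  - name: vision-model
    provider: langchain_openai:ChatOpenAI   # assumed key names throughout
    model: gpt-4o                           # illustrative vision-capable model
    supports_vision: true                   # assumed flag name
```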
Enable the model to process images. When enabled, the agent can use tools like `view_image`.

### Thinking/Reasoning Support
Enable extended reasoning mode (for models like DeepSeek-R1, o1).
Additional parameters passed to the model when thinking mode is enabled.
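For example (a sketch; `supports_thinking` is an assumed flag name, and the keys under `when_thinking_enabled` are illustrative):

```yaml
  - name: deepseek-reasoner
    model: deepseek-reasoner      # illustrative model id
    supports_thinking: true       # assumed flag name
    when_thinking_enabled:        # extra params sent while thinking is on
      temperature: 1.0            # illustrative; correct keys vary by provider
```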
The `when_thinking_enabled` configuration varies by provider. Check your provider's documentation for the correct format.

### Reasoning Effort
Enable adjustable reasoning effort levels (for models that support it).
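A sketch (the flag name is an assumption):

```yaml
  - name: o1
    model: o1
    supports_reasoning_effort: true   # assumed flag name; effort levels vary by model
```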
## Provider Examples
### OpenAI
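A plausible entry (`provider`, `model`, and `api_key` are assumed key names):

```yaml
models:
  - name: gpt-4
    provider: langchain_openai:ChatOpenAI
    model: gpt-4
    api_key: $OPENAI_API_KEY
```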
### Anthropic Claude
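Likewise for Claude (same assumed keys):

```yaml
  - name: claude
    provider: langchain_anthropic:ChatAnthropic
    model: claude-3-5-sonnet-20241022
    api_key: $ANTHROPIC_API_KEY
```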
### Google Gemini
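A sketch (note `google_api_key`; the other keys are assumptions):

```yaml
  - name: gemini
    provider: langchain_google_genai:ChatGoogleGenerativeAI
    model: gemini-1.5-pro            # illustrative model id
    google_api_key: $GOOGLE_API_KEY
```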
Google models use `google_api_key` instead of `api_key`.

### DeepSeek (with Thinking)
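A sketch (the `supports_thinking` flag and other key names are assumptions):

```yaml
  - name: deepseek-reasoner
    provider: src.models.patched_deepseek:PatchedChatDeepSeek
    model: deepseek-reasoner       # illustrative model id
    api_key: $DEEPSEEK_API_KEY
    supports_thinking: true        # assumed flag name
```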
### Novita AI (OpenAI-Compatible)
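OpenAI-compatible providers reuse the OpenAI client with a custom endpoint (key names assumed):

```yaml
  - name: novita
    provider: langchain_openai:ChatOpenAI
    model: meta-llama/llama-3.1-70b-instruct   # illustrative model id
    api_key: $NOVITA_API_KEY
    base_url: https://api.novita.ai/openai     # endpoint from the example above
```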
### Volcengine Doubao
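A sketch (model id and endpoint are illustrative; verify with Volcengine's docs):

```yaml
  - name: doubao
    provider: langchain_openai:ChatOpenAI                # OpenAI-compatible client
    model: doubao-pro-32k                                # illustrative model id
    api_key: $VOLCENGINE_API_KEY
    base_url: https://ark.cn-beijing.volces.com/api/v3   # illustrative endpoint
```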
### Moonshot Kimi
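A sketch (model id and endpoint are illustrative; verify with Moonshot's docs):

```yaml
  - name: kimi
    provider: langchain_openai:ChatOpenAI    # OpenAI-compatible client
    model: moonshot-v1-8k                    # illustrative model id
    api_key: $MOONSHOT_API_KEY
    base_url: https://api.moonshot.cn/v1     # illustrative endpoint
```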
## Model Selection
The first model in the `models` list is used as the default model for:
- Conversation responses
- Title generation (unless overridden)
- Summarization (unless overridden)
- Memory updates (unless overridden)
## Programmatic Access
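A hypothetical sketch of reading the parsed configuration (the project may expose its own loader; here `config` stands in for the dict that parsing `config.yaml` would produce, and the `provider`/`model` key names are assumptions):

```python
# `config` stands in for the parsed config.yaml; key names are assumptions
# based on the field descriptions above, not this project's verified schema.
config = {
    "models": [
        {"name": "gpt-4", "provider": "langchain_openai:ChatOpenAI",
         "model": "gpt-4"},
        {"name": "claude", "provider": "langchain_anthropic:ChatAnthropic",
         "model": "claude-3-5-sonnet-20241022"},
    ]
}

def default_model(cfg: dict) -> dict:
    """The first entry in `models` is the default (see Model Selection)."""
    return cfg["models"][0]

def provider_class(entry: dict) -> tuple:
    """Split the `module.path:ClassName` import path into its parts."""
    module_path, class_name = entry["provider"].split(":")
    return module_path, class_name

entry = default_model(config)
print(entry["name"])          # -> gpt-4
print(provider_class(entry))  # -> ('langchain_openai', 'ChatOpenAI')
```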
Model configuration can be accessed from Python via the loaded config mapping.

## Best Practices
### Use Environment Variables for API Keys
Never hardcode API keys in configuration files. Always use environment variables:
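For example:

```yaml
# Good: the key is resolved from the environment at load time
api_key: $OPENAI_API_KEY

# Bad: a secret committed to version control
api_key: sk-proj-1234567890abcdef
```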
### Configure Multiple Models
Set up multiple models for different use cases:
- Fast, cost-effective model as default
- Powerful model for complex tasks
- Vision-enabled model for image processing
- Thinking model for reasoning tasks
### Set Appropriate Token Limits
Configure `max_tokens` based on your use case:
- Lower values (1000-4000) for quick interactions
- Higher values (8000+) for code generation and long documents
### Test Vision Support
Not all models support vision. Test image processing capabilities before enabling:
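One way to probe this (a sketch using the OpenAI-style multimodal message shape; adapt to your provider, and treat the helper name as hypothetical):

```python
import base64

# Sketch only: builds an OpenAI-style multimodal message. Send it through
# your configured model and check whether the reply describes the image;
# a provider without vision support will error or ignore the image part.
def image_message(image_bytes, prompt):
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": "data:image/png;base64," + b64}},
        ],
    }

msg = image_message(b"\x89PNG...placeholder...", "What is in this image?")
print(msg["content"][1]["image_url"]["url"][:22])  # -> data:image/png;base64,
```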
## Environment Variables Reference
Common environment variables for model configuration:

| Variable | Provider | Description |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic | Anthropic API key |
| `GOOGLE_API_KEY` | Google | Google AI API key |
| `DEEPSEEK_API_KEY` | DeepSeek | DeepSeek API key |
| `NOVITA_API_KEY` | Novita AI | Novita AI API key |
| `VOLCENGINE_API_KEY` | Volcengine | Volcengine (Doubao) API key |
| `MOONSHOT_API_KEY` | Moonshot | Moonshot (Kimi) API key |
## Troubleshooting
### `ValueError: Environment variable not found`
This error occurs when a referenced environment variable is not set:
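The fix is to export the variable before starting the app (shown for OpenAI; substitute your provider's variable):

```shell
# Export the key for the current shell session (replace with your real key)
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is set before launching the app
echo "${OPENAI_API_KEY:+set}"   # -> set
```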
### `ModuleNotFoundError: No module named 'langchain_xxx'`
Install the required LangChain provider package:
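For example, for the providers used above:

```shell
# Package names follow the langchain_<provider> import paths used in config.yaml
pip install langchain-openai langchain-anthropic langchain-google-genai
```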
### Model doesn't support vision/thinking
Check the provider's documentation for supported features; not all models support vision or extended reasoning. Set the capability flags accordingly:
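For instance (both flag names are assumptions):

```yaml
  - name: gpt-4
    model: gpt-4
    supports_vision: true      # assumed flag; enable only if the model supports it
    supports_thinking: false   # assumed flag
```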
## Next Steps
- **Sandbox Configuration**: Configure execution environments
- **Tools Configuration**: Set up tools and capabilities