ReasoningMixin

ReasoningMixin adds support for reasoning models such as OpenAI o3/o4-mini, Claude 3.7 Sonnet, and DeepSeek R1. Set reasoning = ReasoningConfig(...) on an agent class to enable it.

ReasoningConfig

from djangosdk.providers.schemas import ReasoningConfig

config = ReasoningConfig(
    effort="medium",              # "low" | "medium" | "high" — for o3/o4-mini
    budget_tokens=8000,           # Token budget for DeepSeek R1
    extended_thinking=False,      # Enable for Claude 3.7 Sonnet
    thinking_budget=10000,        # Token budget for Claude 3.7 extended thinking
    stream_thinking=False,        # Include thinking_delta chunks in streams
)

OpenAI o3 / o4-mini

from djangosdk.agents.base import Agent
from djangosdk.providers.schemas import ReasoningConfig

class ReasoningAgent(Agent):
    provider = "openai"
    model = "o4-mini"
    reasoning = ReasoningConfig(effort="high")

agent = ReasoningAgent()
response = agent.handle("What is the optimal solution to the travelling salesman problem for 10 cities?")
print(response.text)

The effort parameter maps to the reasoning_effort parameter in the OpenAI API.

Claude 3.7 Extended Thinking
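For Claude 3.7 Sonnet, enable extended thinking and set a thinking-token budget via the two ReasoningConfig fields shown above. A minimal sketch following the same pattern as the OpenAI example (the model identifier string is an assumption):

```python
from djangosdk.agents.base import Agent
from djangosdk.providers.schemas import ReasoningConfig

class ThinkingAgent(Agent):
    provider = "anthropic"
    model = "claude-3-7-sonnet-latest"  # model identifier is an assumption
    reasoning = ReasoningConfig(
        extended_thinking=True,   # turn on extended thinking
        thinking_budget=10000,    # cap the thinking tokens
    )

agent = ThinkingAgent()
response = agent.handle("Prove that the square root of 2 is irrational.")
print(response.text)
```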

DeepSeek R1
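For DeepSeek R1, the budget_tokens field controls the reasoning budget. A sketch under the same assumptions (the model identifier is an assumption):

```python
from djangosdk.agents.base import Agent
from djangosdk.providers.schemas import ReasoningConfig

class R1Agent(Agent):
    provider = "deepseek"
    model = "deepseek-reasoner"  # model identifier is an assumption
    reasoning = ReasoningConfig(budget_tokens=8000)  # reasoning token budget

agent = R1Agent()
response = agent.handle("How many distinct Hamiltonian cycles does K5 have?")
print(response.text)
```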

Streaming Thinking Content

To stream the reasoning trace along with the final response, set stream_thinking=True.

When streaming, chunks with chunk.type == "thinking_delta" will be emitted before the final text_delta chunks.
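A sketch of consuming such a stream. The chunk.type values come from the text above; the stream() method name and the chunk.delta attribute are assumptions:

```python
from djangosdk.agents.base import Agent
from djangosdk.providers.schemas import ReasoningConfig

class StreamingAgent(Agent):
    provider = "openai"
    model = "o4-mini"
    reasoning = ReasoningConfig(effort="medium", stream_thinking=True)

agent = StreamingAgent()
# stream() and chunk.delta are assumed names; chunk.type is documented above
for chunk in agent.stream("Plan the shortest delivery route through 3 cities."):
    if chunk.type == "thinking_delta":
        print(f"[thinking] {chunk.delta}", end="")
    elif chunk.type == "text_delta":
        print(chunk.delta, end="")
```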

Accessing Thinking Content

Thinking content is available on the response object after a call completes.
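A sketch of reading it back, reusing the agent from the earlier example; the thinking attribute name is an assumption:

```python
agent = ReasoningAgent()
response = agent.handle("What is the optimal solution to the travelling salesman problem for 10 cities?")

print(response.text)      # the final answer
print(response.thinking)  # the reasoning trace (attribute name is an assumption)
```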

Parameter Mapping

LiteLLMProvider injects the correct API parameters automatically:

| Model family | ReasoningConfig field | API parameter |
| --- | --- | --- |
| o3, o4-mini | effort | reasoning_effort |
| Claude 3.7 | extended_thinking, thinking_budget | thinking.type, thinking.budget_tokens |
| DeepSeek R1 | budget_tokens | budget_tokens |
| Gemini 2.5 | effort | thinking_config.thinking_budget (via litellm) |
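The mapping above can be sketched as a plain function. This is a simplification for illustration, not the actual LiteLLMProvider implementation, and the Gemini 2.5 row is omitted because the effort-to-budget translation is not specified here:

```python
def reasoning_params(model: str, effort=None, budget_tokens=None,
                     extended_thinking=False, thinking_budget=None) -> dict:
    """Translate ReasoningConfig-style fields into provider API kwargs."""
    if model.startswith(("o3", "o4-mini")):
        # OpenAI reasoning models take a reasoning_effort string
        return {"reasoning_effort": effort or "medium"}
    if model.startswith("claude-3-7") and extended_thinking:
        # Anthropic extended thinking takes a nested thinking object
        return {"thinking": {"type": "enabled", "budget_tokens": thinking_budget}}
    if "deepseek" in model:
        # DeepSeek R1 takes the budget directly
        return {"budget_tokens": budget_tokens}
    return {}  # non-reasoning models get no extra parameters

print(reasoning_params("o4-mini", effort="high"))
# {'reasoning_effort': 'high'}
```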
