
Orchestration modes

Supervisor vs Grounded topologies, mode_config settings, NodeConfig per-node customization, and resource_id for grounded context anchoring.

Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers

Learning outcomes by role

Stakeholders

  • Contrast supervisor versus grounded modes for complexity and safety postures.

Business analysts

  • Capture mode_config and node-level customization in acceptance language.

Solution architects

  • Explain resource_id anchoring and topology constraints for integrations.

Developers

  • Apply mode_config and NodeConfig fields when building orchestrator configs.

Testers

  • Validate mode switches, grounded context, and failure handling per mode.

An orchestration mode defines the routing topology inside the agent graph. It determines how the orchestrator classifies intent, calls tools, validates outputs, and assembles the final response. Mode is set at creation alongside framework_type — both are immutable.

| Mode | Frameworks | Use case |
| --- | --- | --- |
| supervisor | langgraph, google_adk | Open-ended assistant with multi-step tool routing, planning, and synthesis |
| grounded | langgraph | Answers scoped to a single business record or anchor object (resource_id required per chat request) |
| coordinator | langgraph | Multi-agent coordination topology |
| handoff | langgraph | Sequential agent handoffs |

openai_agents uses flat tool invocation — no topology mode applies.
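Because mode and framework_type are validated together at creation time (a mismatch yields a 422), client code can pre-check the pairing before calling the API. A minimal sketch assuming the compatibility table above is complete; `SUPPORTED_MODES` and `mode_is_valid` are illustrative names, and `GET /api/frameworks/{type}/supported-providers` remains the authoritative source:

```python
# Hypothetical compatibility map derived from the table above; consult
# GET /api/frameworks/{type}/supported-providers for the live answer.
SUPPORTED_MODES = {
    "langgraph": {"supervisor", "grounded", "coordinator", "handoff"},
    "google_adk": {"supervisor"},
    "openai_agents": set(),  # flat tool invocation, no topology mode
}

def mode_is_valid(framework_type: str, mode: str) -> bool:
    """Return True when the mode/framework pair would pass creation validation."""
    return mode in SUPPORTED_MODES.get(framework_type, set())
```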

flowchart TD
IN[Incoming message] --> CL[Classifier node]
CL --> PL[Planner node]
PL --> AG[Agent / tool calls]
AG --> |tool results| PL
PL --> |max_agent_hops reached or done| SY[Synthesizer node]
SY --> VA{enabled_llm_validation ?}
VA -->|yes| VN[Validation node]
VA -->|no| OUT[Response]
VN --> OUT

The supervisor topology classifies intent, plans tool sequences, iterates up to max_agent_hops times, then synthesizes a final answer. Optional clarification (enabled_clarification_intent) intercepts ambiguous queries before planning.
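The plan/act/synthesize loop above can be reduced to a few lines of control flow. This is an illustrative sketch of the hop-limit behavior, not the actual langgraph implementation; all function names are hypothetical:

```python
def run_supervisor(plan_step, call_tools, synthesize, max_agent_hops=10):
    """Plan, call tools, and repeat until the planner signals completion
    or the hop budget is spent; then force synthesis of a final answer."""
    tool_results = []
    for _hop in range(max_agent_hops):
        step = plan_step(tool_results)         # planner node
        if step is None:                       # planner decides it is done
            break
        tool_results.append(call_tools(step))  # agent / tool calls feed back
    return synthesize(tool_results)            # synthesizer node
```

Note that hitting max_agent_hops does not raise; the synthesizer simply runs with whatever tool results have accumulated so far.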

mode_config is nested inside the orchestrator’s config JSON object:

orchestrator config.mode_config
{
  "default_llm_config_id": "<uuid>",
  "mode_config": {
    "max_agent_hops": 10,
    "enabled_parallel_tool_calls": true,
    "invoke_timeout": 60,
    "supervisor_timeout": 60,
    "enabled_llm_validation": false
  }
}

From BaseOrchestratorSettings — apply regardless of mode:

| Field | Default | Description |
| --- | --- | --- |
| node_execution_timeout | 60 | Per-node execution timeout in seconds |
| max_context_window | 16000 | Maximum token context window |
| message_context_window | 10 | Number of past messages included in context (supervisor overrides to 5) |
| enabled_auto_compact | false | Auto-compact context when it exceeds the window |
| enabled_suggestion | false | Enable follow-up suggestion generation after each response |

From BaseSupervisorSettings:

| Field | Default | Description |
| --- | --- | --- |
| max_agent_hops | 10 | Maximum recursive tool-call cycles before forced synthesis |
| enabled_parallel_tool_calls | true | Allow parallel tool calls in a single planning step |
| enabled_llm_validation | false | Run a validation LLM pass on the draft response |
| enabled_clarification_intent | true | Intercept and clarify ambiguous user intent before planning |
| message_context_window | 5 | Supervisor overrides the base default of 10 |
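The layering of these defaults can be modeled as successive merges: base defaults, then mode defaults, then explicit mode_config values. A sketch using plain dicts (the real settings are pydantic models; `effective_supervisor_settings` is a made-up helper):

```python
# Defaults mirroring the two tables above (illustrative only).
BASE_DEFAULTS = {
    "node_execution_timeout": 60,
    "max_context_window": 16000,
    "message_context_window": 10,
    "enabled_auto_compact": False,
    "enabled_suggestion": False,
}

SUPERVISOR_DEFAULTS = {
    "max_agent_hops": 10,
    "enabled_parallel_tool_calls": True,
    "enabled_llm_validation": False,
    "enabled_clarification_intent": True,
    "message_context_window": 5,  # supervisor override of the base default
}

def effective_supervisor_settings(mode_config: dict) -> dict:
    """Later dicts win: base defaults < supervisor defaults < explicit mode_config."""
    return {**BASE_DEFAULTS, **SUPERVISOR_DEFAULTS, **mode_config}
```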

From BaseGroundedSettings:

| Field | Default | Description |
| --- | --- | --- |
| scope_rules | "" | Free-text constraints injected into the agent’s system context |
| max_tool_rounds | 5 | Maximum tool-call rounds before forced synthesis |
| enabled_validator | true | Run a validator pass to check answer quality against the resource |

Each logical node in the graph can have its own LLM config, model, and prompt override, independent of the orchestrator’s default_llm_config_id.

cadence/engine/base/node_config.py
from typing import Optional

from pydantic import BaseModel


class NodeConfig(BaseModel):
    llm_config_id: Optional[str] = None   # Override org-level default for this node
    model_name: Optional[str] = None      # Model override (e.g. "gpt-4o-mini" for fast nodes)
    prompt: Optional[str] = None          # System prompt override
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    timeout: Optional[int] = None         # Per-node timeout in seconds

Available nodes per mode:

| Mode | Nodes |
| --- | --- |
| Supervisor | classifier_node, planner_node, synthesizer_node, validation_node, clarifier_node, responder_node, autocompact, suggestion_node, error_handler_node |
| Grounded | agent_node, synthesizer_node, autocompact, suggestion_node, error_handler_node |

Example: cheap model for classification, stronger model for synthesis

orchestrator config
{
  "default_llm_config_id": "<gpt4o-config-uuid>",
  "mode_config": {
    "classifier_node": {
      "model_name": "gpt-4o-mini"
    },
    "synthesizer_node": {
      "model_name": "gpt-4o",
      "temperature": 0.3
    }
  }
}
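Resolution of per-node settings follows a simple fallback chain: the node's own override wins, otherwise the orchestrator-wide default applies. A hypothetical resolver (not the engine's actual lookup logic), using a config shaped like the example above:

```python
def resolve_node_model(mode_config: dict, node: str, default_model: str) -> str:
    """Return the node's model_name override if set, else the orchestrator default."""
    node_cfg = mode_config.get(node) or {}
    return node_cfg.get("model_name") or default_model

# Config mirroring the example above.
cfg = {
    "classifier_node": {"model_name": "gpt-4o-mini"},
    "synthesizer_node": {"model_name": "gpt-4o", "temperature": 0.3},
}
```

Nodes with no entry in mode_config (planner_node here, for instance) silently inherit the default model.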

Grounded mode requires a resource_id on every chat request. This is the anchor object (a document ID, URL, database record ID, or any string) that constrains the agent’s scope.

cadence/schemas/chat.py
from typing import Optional

from pydantic import BaseModel


class ChatCompletionRequest(BaseModel):
    message: str                           # Required; 1–1000 characters
    conversation_id: Optional[str] = None  # Resumes an existing thread
    stream: bool = True
    resource_id: Optional[str] = None      # Required for grounded mode
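Client-side, the grounded-mode requirement can be checked before sending the request. A sketch mirroring the schema above (illustrative; `validate_chat_request` is a made-up helper, and the server performs its own validation regardless):

```python
def validate_chat_request(body: dict, mode: str) -> list:
    """Collect validation errors for a chat request body, per the schema above."""
    errors = []
    message = body.get("message") or ""
    if not 1 <= len(message) <= 1000:
        errors.append("message must be 1-1000 characters")
    if mode == "grounded" and not body.get("resource_id"):
        errors.append("resource_id is required for grounded mode")
    return errors
```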

The resource_id is written into the graph’s initial state (state.resource_id). Plugin tool nodes can read it to scope their queries:

plugin tool handler (example)
resource_id = state.get("resource_id")  # anchor written into the initial graph state
results = await db.query(doc_id=resource_id, question=query)
A complete creation request for a supervisor orchestrator:

POST /api/orgs/{org_id}/orchestrators
{
  "name": "General Purpose Assistant",
  "framework_type": "langgraph",
  "mode": "supervisor",
  "active_plugin_ids": ["com.example.web-search"],
  "tier": "hot",
  "config": {
    "default_llm_config_id": "<uuid>",
    "mode_config": {
      "max_agent_hops": 8,
      "enabled_parallel_tool_calls": true,
      "enabled_clarification_intent": true,
      "enabled_llm_validation": false
    }
  }
}
Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| 422 on create — mode not valid for framework | google_adk + grounded, or openai_agents + any mode | Check GET /api/frameworks/{type}/supported-providers |
| Agent loops without completing | max_agent_hops too high or tool always returns partial results | Reduce max_agent_hops; fix tool return logic |
| Grounded mode ignores resource_id | Plugin tool doesn’t read state.resource_id | Update the plugin to scope queries using state["resource_id"] |
| Chat fails in grounded mode without error message | resource_id missing from request | Always pass resource_id in the chat body for grounded orchestrators |
| Classifier sends all queries to the same agent | enabled_clarification_intent catching too broadly | Set enabled_clarification_intent: false or tune the classifier prompt via classifier_node.prompt |
| Validation node rejects valid responses | Validation prompt too strict | Set enabled_llm_validation: false or override validation_node.prompt |
| Need to change mode | Mode is immutable | Create a new orchestrator and update clients to reference it |