# Orchestration modes
Supervisor vs Grounded topologies, mode_config settings, NodeConfig per-node customization, and resource_id for grounded context anchoring.
Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers
## Learning outcomes by role

**Stakeholders**
- Contrast supervisor versus grounded modes for complexity and safety postures.

**Business analysts**
- Capture mode_config and node-level customization in acceptance language.

**Solution architects**
- Explain resource_id anchoring and topology constraints for integrations.

**Developers**
- Apply mode_config and NodeConfig fields when building orchestrator configs.

**Testers**
- Validate mode switches, grounded context, and failure handling per mode.
An orchestration mode defines the routing topology inside the agent graph. It determines
how the orchestrator classifies intent, calls tools, validates outputs, and assembles the final
response. Mode is set at creation alongside framework_type — both are immutable.
## Mode overview

| Mode | Frameworks | Use case |
|---|---|---|
| supervisor | langgraph, google_adk | Open-ended assistant with multi-step tool routing, planning, and synthesis |
| grounded | langgraph | Answers scoped to a single business record or anchor object (resource_id required per chat request) |
| coordinator | langgraph | Multi-agent coordination topology |
| handoff | langgraph | Sequential agent handoffs |
openai_agents uses flat tool invocation — no topology mode applies.
## Architecture overview

### Supervisor

```mermaid
flowchart TD
    IN[Incoming message] --> CL[Classifier node]
    CL --> PL[Planner node]
    PL --> AG[Agent / tool calls]
    AG --> |tool results| PL
    PL --> |max_agent_hops reached or done| SY[Synthesizer node]
    SY --> VA{enabled_llm_validation ?}
    VA -->|yes| VN[Validation node]
    VA -->|no| OUT[Response]
    VN --> OUT
```

The supervisor topology classifies intent, plans tool sequences, iterates up to
max_agent_hops times, then synthesizes a final answer. Optional clarification
(enabled_clarification_intent) intercepts ambiguous queries before planning.
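The hop budget described above can be pictured as a plain loop. This is a minimal sketch, not the framework's actual implementation; `run_supervisor`, `plan_step`, and `synthesize` are hypothetical names chosen for illustration:

```python
# Minimal sketch of the supervisor hop budget. All names here are
# hypothetical; the real planner/synthesizer nodes live inside the graph.
def run_supervisor(plan_step, synthesize, max_agent_hops=10):
    """Plan and call tools until done, or until the hop budget forces synthesis."""
    context = []
    for _ in range(max_agent_hops):
        done, tool_results = plan_step(context)  # one planning step plus tool calls
        context.extend(tool_results)
        if done:
            break
    # Forced synthesis: runs whether the planner finished or the hops ran out.
    return synthesize(context)
```

The key property is that synthesis always runs: either the planner signals completion, or max_agent_hops caps the loop and the synthesizer works with whatever context was gathered.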
### Grounded

```mermaid
flowchart TD
    IN[Incoming message + resource_id] --> AG[Agent node]
    AG --> |tool calls scoped to resource| TL[Tool layer]
    TL --> AG
    AG --> |max_tool_rounds reached| SY[Synthesizer node]
    SY --> VA{enabled_validator ?}
    VA -->|yes| VN[Validator]
    VA -->|no| OUT[Response]
    VN --> OUT
```

Every grounded chat request must include resource_id. The mode constrains tool execution
and context to that anchor. scope_rules injects constraints into the agent’s system context.
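One way to picture the scope_rules injection is as system-prompt assembly. This is a sketch under assumed names (`build_grounded_system_prompt` is hypothetical; the real injection format is internal to the graph):

```python
# Hypothetical sketch of grounded context assembly. scope_rules comes from
# mode_config; resource_id arrives with each chat request.
def build_grounded_system_prompt(base_prompt, scope_rules, resource_id):
    parts = [base_prompt]
    if scope_rules:
        parts.append(f"Constraints: {scope_rules}")
    parts.append(f"Ground every answer in resource {resource_id}.")
    return "\n".join(parts)
```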
## Data model — mode_config

mode_config is nested inside the orchestrator’s config JSON object:

```json
{
  "default_llm_config_id": "<uuid>",
  "mode_config": {
    "max_agent_hops": 10,
    "enabled_parallel_tool_calls": true,
    "invoke_timeout": 60,
    "supervisor_timeout": 60,
    "enabled_llm_validation": false
  }
}
```

## Base settings (all modes)

From BaseOrchestratorSettings — these apply regardless of mode:
| Field | Default | Description |
|---|---|---|
| node_execution_timeout | 60 | Per-node execution timeout in seconds |
| max_context_window | 16000 | Maximum token context window |
| message_context_window | 10 | Number of past messages included in context (supervisor overrides to 5) |
| enabled_auto_compact | false | Auto-compact context when it exceeds the window |
| enabled_suggestion | false | Enable follow-up suggestion generation after each response |
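The effect of message_context_window can be sketched as a simple trim over the conversation history (a sketch only; `select_context` is a hypothetical name, and the real selection logic may differ):

```python
# Hypothetical sketch of context-window trimming by message count.
def select_context(messages, message_context_window=10):
    # Keep only the most recent N past messages (supervisor narrows N to 5).
    if message_context_window <= 0:
        return []
    return messages[-message_context_window:]
```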
## Supervisor-specific settings

From BaseSupervisorSettings:
| Field | Default | Description |
|---|---|---|
| max_agent_hops | 10 | Maximum recursive tool-call cycles before forced synthesis |
| enabled_parallel_tool_calls | true | Allow parallel tool calls in a single planning step |
| enabled_llm_validation | false | Run a validation LLM pass on the draft response |
| enabled_clarification_intent | true | Intercept and clarify ambiguous user intent before planning |
| message_context_window | 5 | Supervisor overrides the base default of 10 |
## Grounded-specific settings

From BaseGroundedSettings:
| Field | Default | Description |
|---|---|---|
| scope_rules | "" | Free-text constraints injected into the agent’s system context |
| max_tool_rounds | 5 | Maximum tool-call rounds before forced synthesis |
| enabled_validator | true | Run a validator pass to check answer quality against the resource |
## NodeConfig — per-node LLM customization

Each logical node in the graph can have its own LLM config, model, and prompt override,
independent of the orchestrator’s default_llm_config_id.

```python
from typing import Optional

from pydantic import BaseModel

class NodeConfig(BaseModel):
    llm_config_id: Optional[str] = None  # Override org-level default for this node
    model_name: Optional[str] = None     # Model override (e.g. "gpt-4o-mini" for fast nodes)
    prompt: Optional[str] = None         # System prompt override
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    timeout: Optional[int] = None        # Per-node timeout in seconds
```

Available nodes per mode:
| Mode | Nodes |
|---|---|
| Supervisor | classifier_node, planner_node, synthesizer_node, validation_node, clarifier_node, responder_node, autocompact, suggestion_node, error_handler_node |
| Grounded | agent_node, synthesizer_node, autocompact, suggestion_node, error_handler_node |
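Building per-node overrides can be sketched with a plain-dataclass stand-in for NodeConfig (the real model is a pydantic BaseModel; `NodeOverride` and `to_sparse_json` are illustrative names only):

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Plain-dataclass stand-in for NodeConfig, for illustration only.
@dataclass
class NodeOverride:
    llm_config_id: Optional[str] = None
    model_name: Optional[str] = None
    prompt: Optional[str] = None
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    timeout: Optional[int] = None

def to_sparse_json(cfg: NodeOverride) -> dict:
    # Drop unset fields so the per-node override stays sparse in config JSON.
    return {k: v for k, v in asdict(cfg).items() if v is not None}

# Per-node entries keyed by the node names listed in the table above.
mode_config = {
    "classifier_node": to_sparse_json(NodeOverride(model_name="gpt-4o-mini")),
    "synthesizer_node": to_sparse_json(NodeOverride(model_name="gpt-4o", temperature=0.3)),
}
```

Dropping unset fields keeps the stored config minimal, so any field left out falls back to the orchestrator's default_llm_config_id behavior.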
## Example: cheap model for classification, stronger model for synthesis

```json
{
  "default_llm_config_id": "<gpt4o-config-uuid>",
  "mode_config": {
    "classifier_node": { "model_name": "gpt-4o-mini" },
    "synthesizer_node": { "model_name": "gpt-4o", "temperature": 0.3 }
  }
}
```

## Grounded mode — resource_id

Grounded mode requires a resource_id on every chat request. This is the anchor object
(a document ID, URL, database record ID, or any string) that constrains the agent’s scope.

```python
from typing import Optional

from pydantic import BaseModel

class ChatCompletionRequest(BaseModel):
    message: str                    # Required; 1–1000 characters
    conversation_id: Optional[str]  # Resumes an existing thread
    stream: bool = True
    resource_id: Optional[str]      # Required for grounded mode
```

The resource_id is written into the graph’s initial state (state.resource_id).
Plugin tool nodes can read it to scope their queries:
```python
resource_id = state.get("resource_id")
results = await db.query(doc_id=resource_id, question=query)
```

## Creating orchestrators in each mode
Section titled “Creating orchestrators in each mode”{ "name": "General Purpose Assistant", "framework_type": "langgraph", "mode": "supervisor", "active_plugin_ids": ["com.example.web-search"], "tier": "hot", "config": { "default_llm_config_id": "<uuid>", "mode_config": { "max_agent_hops": 8, "enabled_parallel_tool_calls": true, "enabled_clarification_intent": true, "enabled_llm_validation": false}}}{ "name": "Document QA", "framework_type": "langgraph", "mode": "grounded", "active_plugin_ids": ["com.example.doc-retrieval"], "tier": "warm", "config": { "default_llm_config_id": "<uuid>", "mode_config": { "scope_rules": "Answer only from the provided document. Do not use general knowledge.", "max_tool_rounds": 3, "enabled_validator": true}}}Chat requests must include "resource_id": " <document-id>".
Supervisor (google_adk):

```json
{
  "name": "Gemini Support Agent",
  "framework_type": "google_adk",
  "mode": "supervisor",
  "active_plugin_ids": ["com.example.crm-tools"],
  "tier": "cold",
  "config": {
    "default_llm_config_id": "<google-llm-config-uuid>",
    "mode_config": {
      "max_agent_hops": 6
    }
  }
}
```

## Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| 422 on create — mode not valid for framework | google_adk + grounded, or openai_agents + any mode | Check GET /api/frameworks/{type}/supported-providers |
| Agent loops without completing | max_agent_hops too high or tool always returns partial results | Reduce max_agent_hops; fix tool return logic |
| Grounded mode ignores resource_id | Plugin tool doesn’t read state.resource_id | Update the plugin to scope queries using state["resource_id"] |
| Chat fails in grounded mode without error message | resource_id missing from request | Always pass resource_id in the chat body for grounded orchestrators |
| Classifier sends all queries to the same agent | enabled_clarification_intent catching too broadly | Set enabled_clarification_intent: false or tune the classifier prompt via classifier_node.prompt |
| Validation node rejects valid responses | Validation prompt too strict when enabled_llm_validation is on | Set enabled_llm_validation: false or override validation_node.prompt |
| Need to change mode | Mode is immutable | Create a new orchestrator; update central points to reference it |
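Several of the failure rows above can be caught client-side before a request is sent. This is a minimal sketch; `validate_chat_request` is a hypothetical helper, not part of the API, and it mirrors only the constraints stated in this document (message length 1-1000 characters, resource_id required in grounded mode):

```python
# Hypothetical client-side guard mirroring the documented request constraints.
def validate_chat_request(mode: str, body: dict) -> None:
    if mode == "grounded" and not body.get("resource_id"):
        # Grounded orchestrators fail without resource_id (see troubleshooting).
        raise ValueError("grounded mode requires resource_id in the chat body")
    message = body.get("message", "")
    if not 1 <= len(message) <= 1000:
        raise ValueError("message must be 1-1000 characters")
```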