Orchestration backends
LangGraph, OpenAI Agents SDK, and Google ADK — framework capabilities, provider compatibility, orchestrator CRUD, and config reference.
Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers
Learning outcomes by role
Stakeholders
- Compare LangGraph, OpenAI Agents SDK, and Google ADK for vendor alignment.
Business analysts
- Record backend-specific configuration fields for each orchestrator product.
Solution architects
- Map provider credentials and network egress for each backend option.
Developers
- Configure orchestrator instances per backend capabilities and limits.
Testers
- Run backend-specific integration tests and compatibility matrices.
An orchestration backend is the runtime engine that executes your agent graph — tool routing,
model calls, memory, and handoffs. Cadence supports three backends: langgraph, openai_agents,
and google_adk. The backend (framework_type) and mode are set at orchestrator creation and
cannot change — create a new orchestrator to switch engines.
Architecture overview
```mermaid
flowchart TD
    CREATE["POST /api/orgs/{org_id}/orchestrators"] --> VALIDATE["Validate framework_type and mode"]
    VALIDATE --> DB[(OrchestratorInstance in PostgreSQL)]
    DB --> POOL["Pool: load instance into memory"]
    POOL --> ENGINE{framework_type?}
    ENGINE -->|langgraph| LG["LangGraph engine"]
    ENGINE -->|openai_agents| OA["OpenAI Agents SDK engine"]
    ENGINE -->|google_adk| GA["Google ADK engine"]
    LG --> MODEL1["LangChain BaseChatModel via LLMModelFactory"]
    OA --> MODEL2["OpenAIChatCompletionsModel / LitellmModel"]
    GA --> MODEL3["Gemini model name / LiteLlm"]
    MODEL1 --> CHAT["Chat request"]
    MODEL2 --> CHAT
    MODEL3 --> CHAT
```
Framework capabilities
| Framework | Modes | Provider support | Notes |
|---|---|---|---|
| langgraph | supervisor, grounded | All 10 providers | Full LangChain ecosystem; stateful graphs |
| google_adk | supervisor | google, anthropic, litellm, bifrost | Google ADK native |
| openai_agents | (none — flat tool use) | openai, litellm, bifrost | OpenAI Agents SDK; swarm-style handoffs |
Provider × framework compatibility
| Provider | langgraph | openai_agents | google_adk |
|---|---|---|---|
| openai | ✓ | ✓ | ✗ |
| anthropic | ✓ | ✗ | ✓ |
| google | ✓ | ✗ | ✓ |
| azure | ✓ | ✗ | ✗ |
| groq | ✓ | ✗ | ✗ |
| litellm | ✓ | ✓ | ✓ |
| tensorzero | ✓ | ✗ | ✗ |
| bifrost | ✓ | ✓ | ✓ |

Source: FRAMEWORK_SUPPORTED_PROVIDERS in cadence/core/constants/framework.py. The API
validates this at orchestrator creation — mismatches return 422.
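The check implied by that constant can be sketched in a few lines. The mapping below is transcribed from the matrix above; the function name and exception are illustrative, not Cadence's actual code (the real API surfaces the mismatch as an HTTP 422):

```python
# Illustrative compatibility check. The mapping mirrors the matrix above;
# the function and exception are a sketch, not Cadence's actual code.
FRAMEWORK_SUPPORTED_PROVIDERS = {
    "langgraph": {"openai", "anthropic", "google", "azure",
                  "groq", "litellm", "tensorzero", "bifrost"},
    "openai_agents": {"openai", "litellm", "bifrost"},
    "google_adk": {"google", "anthropic", "litellm", "bifrost"},
}


def check_compatibility(framework_type: str, provider: str) -> None:
    """Raise (as the API would with a 422) on a framework/provider mismatch."""
    supported = FRAMEWORK_SUPPORTED_PROVIDERS.get(framework_type)
    if supported is None:
        raise ValueError(f"unknown framework_type: {framework_type}")
    if provider not in supported:
        raise ValueError(
            f"provider '{provider}' is not supported by '{framework_type}'"
        )
```

For example, `check_compatibility("openai_agents", "anthropic")` raises, matching the ✗ cell in the matrix.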
Data model
OrchestratorInstance

| Field | Type | Notes |
|---|---|---|
| instance_id | UUID7 | Primary key |
| org_id | UUID7 | Owning org |
| name | string | 5–200 characters |
| framework_type | string | `langgraph` \| `openai_agents` \| `google_adk` — immutable |
| mode | string | `supervisor` \| `grounded` \| `coordinator` \| `handoff` — immutable |
| status | string | `active` \| `suspended` \| `inactive` |
| tier | string | `hot` \| `warm` \| `cold` — target pool tier |
| whoami | string \| null | System-prompt identity context for the orchestrator |
| config | JSON | Mutable Tier 4 settings (see below) |
| plugin_settings | JSON | Active plugin settings map (pid → {id, name, settings}) |
| config_hash | string \| null | SHA-256 of the effective config (cache invalidation key) |
| is_ready | boolean | true if instance is loaded in the pool (computed at query time) |
| is_deleted | boolean | Soft delete |
| created_at | ISO-8601 | UTC |
| updated_at | ISO-8601 | UTC |
config object
The config field is a JSON object with these common keys:

| Key | Type | Purpose |
|---|---|---|
| default_llm_config_id | UUID string | Which LLM configuration to use |
| mode_config | object | Mode-specific settings (see Orchestration modes doc) |
| mode_config.max_agent_hops | int | Maximum recursive agent invocations |
| mode_config.parallel_tool_calls | boolean | Allow parallel tool calls |
| mode_config.invoke_timeout | int | Request timeout in seconds |
| mode_config.supervisor_timeout | int | Supervisor-specific timeout |
| mode_config.use_llm_validation | boolean | Validate outputs with LLM |
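The shape of this object can be modeled with plain dataclasses. This is a hedged sketch of the table above, not Cadence's actual schema: field optionality, the `to_json_obj` helper, and the omission of unset keys are all assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional


# Hedged stdlib sketch of the config object documented above.
# Cadence's real schema may differ; optionality here is an assumption.
@dataclass
class ModeConfig:
    max_agent_hops: Optional[int] = None
    parallel_tool_calls: Optional[bool] = None
    invoke_timeout: Optional[int] = None
    supervisor_timeout: Optional[int] = None
    use_llm_validation: Optional[bool] = None


@dataclass
class OrchestratorConfig:
    default_llm_config_id: Optional[str] = None
    mode_config: ModeConfig = field(default_factory=ModeConfig)

    def to_json_obj(self) -> dict:
        # Emit the nested shape shown in the table, dropping unset keys.
        mc = {k: v for k, v in vars(self.mode_config).items() if v is not None}
        obj: dict = {}
        if self.default_llm_config_id is not None:
            obj["default_llm_config_id"] = self.default_llm_config_id
        if mc:
            obj["mode_config"] = mc
        return obj
```

For example, `OrchestratorConfig("llm-1", ModeConfig(max_agent_hops=5)).to_json_obj()` yields a config body with only the keys that were set.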
API reference
| Method | Path | Permission | Description |
|---|---|---|---|
| POST | /api/orgs/{org_id}/orchestrators | cadence:org:orchestrators:write | Create a new orchestrator instance |
| GET | /api/orgs/{org_id}/orchestrators | cadence:org:orchestrators:read | List org orchestrators; members get reduced info |
| GET | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:read | Get full orchestrator details |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:write | Update name, tier, whoami, default_llm_config_id |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id}/config | cadence:org:orchestrators:write | Replace the mutable config object |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id}/status | cadence:org:orchestrators:write | Set status to active or suspended |
| GET | /api/orgs/{org_id}/orchestrators/{instance_id}/graph | cadence:org:orchestrators:read | Return Mermaid graph definition (loaded instances only) |
| DELETE | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:write | Deactivate (org_admin) or soft-delete (sys_admin) |
| DELETE | /api/orgs/{org_id}/orchestrators/{instance_id}/purge | cadence:system:admin | Permanently delete a soft-deleted orchestrator |
Creating an orchestrator
1. Create an LLM configuration for the org with your provider credentials and note its id. See LLM configuration.
2. Check framework–provider and framework–mode compatibility at GET /api/frameworks/{framework_type}/supported-providers.
3. POST to /api/orgs/{org_id}/orchestrators with all required fields.
4. Load the instance into the pool when ready to serve traffic. See Hot-reload and orchestrator pool.
```python
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, Field


class CreateOrchestratorRequest(BaseModel):
    name: str = Field(..., min_length=5, max_length=200)
    framework_type: str = Field(..., pattern="^(langgraph|openai_agents|google_adk)$")
    mode: str = Field(..., pattern="^(supervisor|coordinator|handoff|grounded)$")
    active_plugin_ids: List[str] = Field(..., min_length=1)
    tier: str = Field(default="cold", pattern="^(hot|warm|cold)$")
    whoami: Optional[str] = None
    config: Optional[Dict[str, Any]] = Field(default_factory=dict)
```

Example create body
```json
{
  "name": "Customer Support Agent",
  "framework_type": "langgraph",
  "mode": "supervisor",
  "active_plugin_ids": ["com.example.support-tools"],
  "tier": "hot",
  "config": {
    "default_llm_config_id": "01936a8f-...",
    "mode_config": {
      "max_agent_hops": 5,
      "parallel_tool_calls": true,
      "invoke_timeout": 30,
      "use_llm_validation": false
    }
  }
}
```

How it works — is_ready flag
Section titled “How it works — is_ready flag”is_ready is computed at query time (not stored) by checking whether the pool has the
instance loaded:
```python
def _check_is_ready(pool, instance_id: str, status: str) -> bool:
    if status != "active":
        return False
    if pool is None:
        return False
    return pool.is_loaded(instance_id)
```

An orchestrator with is_ready = false will reject chat requests. Load it first via
POST /api/orgs/{org_id}/orchestrators/{instance_id}/load.
How it works — config_hash
Section titled “How it works — config_hash”Every time the orchestrator config is modified, a config_hash (SHA-256 of the effective
configuration JSON) is recomputed. The pool uses this hash to detect stale loaded instances:
if the hash of a loaded instance differs from the database row, the pool triggers a reload.
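A plausible implementation of that hash, assuming the effective config is serialized canonically before hashing (key order must not change the digest; the exact serialization Cadence uses is not documented here):

```python
import hashlib
import json


def compute_config_hash(config: dict) -> str:
    # Sort keys and use compact separators so semantically equal
    # configs always produce the same SHA-256 hex digest.
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Under this scheme the pool only needs a string comparison against the stored config_hash to decide whether a loaded instance is stale.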
Updating an orchestrator
Section titled “Updating an orchestrator”Mutable fields (name, tier, whoami, default_llm_config_id) can be patched without reloading.
The full config object can be replaced via PATCH .../config. Neither operation affects
currently-running conversations — the new config takes effect on the next pool load.
```python
from typing import Optional

from pydantic import BaseModel, Field


class UpdateOrchestratorMetadataRequest(BaseModel):
    name: Optional[str] = Field(None, min_length=10, max_length=200)
    tier: Optional[str] = Field(None, pattern="^(hot|warm|cold)$")
    whoami: Optional[str] = None
    default_llm_config_id: Optional[str] = None
```

Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| 422 on create — framework_type | Typo or unsupported backend | Use langgraph, openai_agents, or google_adk |
| 422 on create — mode | Mode not supported by the chosen framework | Check GET /api/frameworks/{framework_type}/supported-providers |
| 422 on create — active_plugin_ids | Empty list | Provide at least one plugin ID |
| is_ready = false after create | Instance was not loaded into pool | Call POST .../load; see Hot-reload |
| Need to switch framework | framework_type is immutable | Create a new orchestrator; update central points to reference it |
| Graph endpoint returns empty | Orchestrator not loaded or framework doesn’t expose a graph | Load first; google_adk supervisor may not expose a graph |
| 409 on delete | Orchestrator is referenced by a central point | Remove central point references before deleting |
| 410 Gone on get | Orchestrator has been deactivated | Reactivate via PATCH .../status with {"status": "active"} |