Orchestration backends

LangGraph, OpenAI Agents SDK, and Google ADK — framework capabilities, provider compatibility, orchestrator CRUD, and config reference.

Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers

Learning outcomes by role

Stakeholders

  • Compare LangGraph, OpenAI Agents SDK, and Google ADK for vendor alignment.

Business analysts

  • Record backend-specific configuration fields for each orchestrator product.

Solution architects

  • Map provider credentials and network egress for each backend option.

Developers

  • Configure orchestrator instances per backend capabilities and limits.

Testers

  • Run backend-specific integration tests and compatibility matrices.

An orchestration backend is the runtime engine that executes your agent graph — tool routing, model calls, memory, and handoffs. Cadence supports three backends: langgraph, openai_agents, and google_adk. The backend (framework_type) and mode are set at orchestrator creation and cannot change — create a new orchestrator to switch engines.

```mermaid
flowchart TD
    CREATE["POST /api/orgs/org_id/orchestrators"] --> VALIDATE["Validate framework_type and mode"]
    VALIDATE --> DB[(OrchestratorInstance in PostgreSQL)]
    DB --> POOL["Pool: load instance into memory"]
    POOL --> ENGINE{framework_type?}
    ENGINE -->|langgraph| LG["LangGraph engine"]
    ENGINE -->|openai_agents| OA["OpenAI Agents SDK engine"]
    ENGINE -->|google_adk| GA["Google ADK engine"]
    LG --> MODEL1["LangChain BaseChatModel via LLMModelFactory"]
    OA --> MODEL2["OpenAIChatCompletionsModel / LitellmModel"]
    GA --> MODEL3["Gemini model name / LiteLlm"]
    MODEL1 --> CHAT["Chat request"]
    MODEL2 --> CHAT
    MODEL3 --> CHAT
```
| Framework | Modes | Provider support | Notes |
|---|---|---|---|
| langgraph | supervisor, grounded | All 10 providers | Full LangChain ecosystem; stateful graphs |
| google_adk | supervisor | google, anthropic, litellm, bifrost | Google ADK native |
| openai_agents | (none — flat tool use) | openai, litellm, bifrost | OpenAI Agents SDK; swarm-style handoffs |
| Provider | langgraph | openai_agents | google_adk |
|---|---|---|---|
| openai | ✓ | ✓ | ✗ |
| anthropic | ✓ | ✗ | ✓ |
| google | ✓ | ✗ | ✓ |
| azure | ✓ | ✗ | ✗ |
| groq | ✓ | ✗ | ✗ |
| litellm | ✓ | ✓ | ✓ |
| tensorzero | ✓ | ✗ | ✗ |
| bifrost | ✓ | ✓ | ✓ |

Source: FRAMEWORK_SUPPORTED_PROVIDERS in cadence/core/constants/framework.py. The API validates this at orchestrator creation — mismatches return 422.
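That validation can be sketched as a simple set lookup. The mapping below mirrors the matrix above rather than the actual constant in cadence/core/constants/framework.py, and the helper name is hypothetical; a mismatch raises the error the API surfaces as 422:

```python
# Sketch of the framework/provider compatibility check.
# This mapping mirrors the matrix above; the real constant is
# FRAMEWORK_SUPPORTED_PROVIDERS in cadence/core/constants/framework.py.
FRAMEWORK_SUPPORTED_PROVIDERS = {
    "langgraph": {"openai", "anthropic", "google", "azure", "groq",
                  "litellm", "tensorzero", "bifrost"},
    "openai_agents": {"openai", "litellm", "bifrost"},
    "google_adk": {"google", "anthropic", "litellm", "bifrost"},
}

def validate_provider(framework_type: str, provider: str) -> None:
    """Raise ValueError on a mismatch (the API maps this to HTTP 422)."""
    allowed = FRAMEWORK_SUPPORTED_PROVIDERS.get(framework_type)
    if allowed is None:
        raise ValueError(f"unknown framework_type: {framework_type}")
    if provider not in allowed:
        raise ValueError(f"provider {provider!r} is not supported by {framework_type!r}")
```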

| Field | Type | Notes |
|---|---|---|
| instance_id | UUIDv7 | Primary key |
| org_id | UUIDv7 | Owning org |
| name | string | 5–200 characters |
| framework_type | string | langgraph \| openai_agents \| google_adk; immutable |
| mode | string | supervisor \| grounded \| coordinator \| handoff; immutable |
| status | string | active \| suspended \| inactive |
| tier | string | hot \| warm \| cold — target pool tier |
| whoami | string \| null | System-prompt identity context for the orchestrator |
| config | JSON | Mutable Tier 4 settings (see below) |
| plugin_settings | JSON | Active plugin settings map (pid → {id, name, settings}) |
| config_hash | string \| null | SHA-256 of the effective config (cache invalidation key) |
| is_ready | boolean | true if instance is loaded in the pool (computed at query time) |
| is_deleted | boolean | Soft delete |
| created_at | ISO-8601 | UTC |
| updated_at | ISO-8601 | UTC |

The config field is a JSON object with these common keys:

| Key | Type | Purpose |
|---|---|---|
| default_llm_config_id | UUID string | Which LLM configuration to use |
| mode_config | object | Mode-specific settings (see Orchestration modes doc) |
| mode_config.max_agent_hops | int | Maximum recursive agent invocations |
| mode_config.parallel_tool_calls | boolean | Allow parallel tool calls |
| mode_config.invoke_timeout | int | Request timeout in seconds |
| mode_config.supervisor_timeout | int | Supervisor-specific timeout |
| mode_config.use_llm_validation | boolean | Validate outputs with LLM |
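As a stdlib-only sketch, the mode_config keys above can be checked and merged over defaults like this. The helper name and the default values are illustrative, not Cadence's actual schema:

```python
def validate_mode_config(cfg: dict) -> dict:
    """Sketch: validate the mode_config keys listed above.

    Rejects unknown keys, requires positive integer timeouts/hops,
    and returns the config merged over illustrative defaults.
    """
    defaults = {
        "max_agent_hops": 5,           # illustrative default
        "parallel_tool_calls": True,
        "invoke_timeout": 30,          # seconds
        "supervisor_timeout": None,
        "use_llm_validation": False,
    }
    unknown = set(cfg) - set(defaults)
    if unknown:
        raise ValueError(f"unknown mode_config keys: {sorted(unknown)}")
    merged = {**defaults, **cfg}
    for key in ("max_agent_hops", "invoke_timeout"):
        if not isinstance(merged[key], int) or merged[key] < 1:
            raise ValueError(f"{key} must be a positive int")
    return merged
```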
| Method | Path | Permission | Description |
|---|---|---|---|
| POST | /api/orgs/{org_id}/orchestrators | cadence:org:orchestrators:write | Create a new orchestrator instance |
| GET | /api/orgs/{org_id}/orchestrators | cadence:org:orchestrators:read | List org orchestrators; members get reduced info |
| GET | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:read | Get full orchestrator details |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:write | Update name, tier, whoami, default_llm_config_id |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id}/config | cadence:org:orchestrators:write | Replace the mutable config object |
| PATCH | /api/orgs/{org_id}/orchestrators/{instance_id}/status | cadence:org:orchestrators:write | Set status to active or suspended |
| GET | /api/orgs/{org_id}/orchestrators/{instance_id}/graph | cadence:org:orchestrators:read | Return Mermaid graph definition (loaded instances only) |
| DELETE | /api/orgs/{org_id}/orchestrators/{instance_id} | cadence:org:orchestrators:write | Deactivate (org_admin) or soft-delete (sys_admin) |
| DELETE | /api/orgs/{org_id}/orchestrators/{instance_id}/purge | cadence:system:admin | Permanently delete a soft-deleted orchestrator |
  1. Create an LLM configuration for the org with your provider credentials and note its id. See LLM configuration.

  2. Check framework–provider and framework–mode compatibility at GET /api/frameworks/{framework_type}/supported-providers.

  3. Send POST /api/orgs/{org_id}/orchestrators with all required fields.

  4. Load the instance into the pool when ready to serve traffic. See Hot-reload and orchestrator pool.
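The creation call in step 3 can be sketched as a payload builder (stdlib-only; the helper name, org ID, and plugin ID are placeholders, and field names follow CreateOrchestratorRequest on this page):

```python
# Sketch: assemble the create-orchestrator request from the steps above.
# Field names match CreateOrchestratorRequest; the IDs are placeholders.
def build_create_request(org_id: str, llm_config_id: str) -> tuple:
    path = f"/api/orgs/{org_id}/orchestrators"
    body = {
        "name": "Customer Support Agent",       # 5-200 characters
        "framework_type": "langgraph",          # immutable after creation
        "mode": "supervisor",                   # must be valid for the framework
        "active_plugin_ids": ["com.example.support-tools"],  # at least one
        "tier": "hot",
        "config": {
            "default_llm_config_id": llm_config_id,
            "mode_config": {"max_agent_hops": 5, "invoke_timeout": 30},
        },
    }
    return path, body
```

Send the body with any HTTP client; a 422 response indicates a framework/mode or framework/provider mismatch.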

cadence/schemas/orchestrator.py

```python
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, Field


class CreateOrchestratorRequest(BaseModel):
    name: str = Field(..., min_length=5, max_length=200)
    framework_type: str = Field(
        ..., pattern="^(langgraph|openai_agents|google_adk)$"
    )
    mode: str = Field(
        ..., pattern="^(supervisor|coordinator|handoff|grounded)$"
    )
    active_plugin_ids: List[str] = Field(..., min_length=1)
    tier: str = Field(default="cold", pattern="^(hot|warm|cold)$")
    whoami: Optional[str] = None
    config: Optional[Dict[str, Any]] = Field(default_factory=dict)
```
POST /api/orgs/{org_id}/orchestrators

```json
{
  "name": "Customer Support Agent",
  "framework_type": "langgraph",
  "mode": "supervisor",
  "active_plugin_ids": ["com.example.support-tools"],
  "tier": "hot",
  "config": {
    "default_llm_config_id": "01936a8f-...",
    "mode_config": {
      "max_agent_hops": 5,
      "parallel_tool_calls": true,
      "invoke_timeout": 30,
      "use_llm_validation": false
    }
  }
}
```

is_ready is computed at query time (not stored) by checking whether the pool has the instance loaded:

cadence/api/orchestrator/crud.py

```python
def _check_is_ready(pool, instance_id: str, status: str) -> bool:
    if status != "active":
        return False
    if pool is None:
        return False
    return pool.is_loaded(instance_id)
```

An orchestrator with is_ready = false will reject chat requests. Load it first via POST /api/orgs/{org_id}/orchestrators/{instance_id}/load.

Every time the orchestrator config is modified, a config_hash (SHA-256 of the effective configuration JSON) is recomputed. The pool uses this hash to detect stale loaded instances: if the hash of a loaded instance differs from the database row, the pool triggers a reload.
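A minimal sketch of that mechanism, assuming the effective config is canonicalized as sorted-key JSON before hashing (Cadence's exact serialization may differ, and the helper names here are hypothetical):

```python
import hashlib
import json


def compute_config_hash(config: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the config.

    Sorted keys and compact separators make the hash deterministic,
    so semantically identical configs always hash the same.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def needs_reload(loaded_hash: str, db_hash: str) -> bool:
    # The pool reloads an instance when its loaded hash no longer
    # matches the hash stored on the database row.
    return loaded_hash != db_hash
```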

Mutable fields (name, tier, whoami, default_llm_config_id) can be patched without reloading. The full config object can be replaced via PATCH .../config. Neither operation affects currently-running conversations — the new config takes effect on the next pool load.
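For example, a body for PATCH .../config replacing the mutable settings might look like this (illustrative values; keys from the config table above):

```json
{
  "default_llm_config_id": "01936a8f-...",
  "mode_config": {
    "max_agent_hops": 8,
    "parallel_tool_calls": true,
    "invoke_timeout": 45
  }
}
```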

cadence/schemas/orchestrator.py

```python
from typing import Optional

from pydantic import BaseModel, Field


class UpdateOrchestratorMetadataRequest(BaseModel):
    name: Optional[str] = Field(None, min_length=5, max_length=200)
    tier: Optional[str] = Field(None, pattern="^(hot|warm|cold)$")
    whoami: Optional[str] = None
    default_llm_config_id: Optional[str] = None
```
| Symptom | Cause | Fix |
|---|---|---|
| 422 on create — framework_type | Typo or unsupported backend | Use langgraph, openai_agents, or google_adk |
| 422 on create — mode | Mode not supported by the chosen framework | Check GET /api/frameworks/{framework_type}/supported-providers |
| 422 — active_plugin_ids | Empty list | Provide at least one plugin ID |
| is_ready = false after create | Instance was not loaded into the pool | Call POST .../load; see Hot-reload |
| Need to switch framework | framework_type is immutable | Create a new orchestrator; update central points to reference it |
| Graph endpoint returns empty | Orchestrator not loaded, or framework doesn't expose a graph | Load first; google_adk supervisor may not expose a graph |
| 409 on delete | Orchestrator is referenced by a central point | Remove central point references before deleting |
| 410 Gone on get | Orchestrator has been deactivated | Reactivate via PATCH .../status with {"status": "active"} |