LLM configuration

Per-org LLM provider settings and the settings cascade (overview).

Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers

Learning outcomes by role

Stakeholders

  • Relate per-org LLM settings to cost control and data residency narratives.

Business analysts

  • Document cascade levels (global, org, user) for configuration acceptance tests.

Solution architects

  • Map provider credentials and secrets handling to enterprise key management.

Developers

  • Apply TenantSetting keys and LLM config APIs when wiring models.

Testers

  • Verify fallback and override behavior across cascade tiers.

Each organization can choose which LLM providers and models to use (within tier and platform rules). Settings merge with global defaults and apply when orchestrators run workloads. Validate org-admin versus member access and tier limits on /api/orgs/{org_id}/llm-configs. Merged settings flow through SettingsService and OrganizationLLMConfigRepository at runtime.
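The merge described above can be sketched in a few lines. This is illustrative only: the real merge is performed by SettingsService and OrganizationLLMConfigRepository, and the setting keys shown are hypothetical.

```python
# Minimal sketch of the global -> org settings cascade.
# Key names ("provider", "model", "temperature") are assumptions for illustration.
def merge_llm_settings(global_defaults: dict, org_configs: dict) -> dict:
    """Org-level entries override global defaults key by key."""
    merged = dict(global_defaults)  # start from platform-wide defaults
    merged.update(org_configs)      # org entries win where present
    return merged

defaults = {"provider": "openai", "model": "gpt-4o", "temperature": 0.2}
org_overrides = {"model": "gpt-4o-mini"}  # hypothetical org-level override
print(merge_llm_settings(defaults, org_overrides))
```

Keys absent from the org config fall through to the global default, which is what makes fallback testing across cascade tiers meaningful.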

```mermaid
flowchart LR
  G[Global defaults] --> M[Merge]
  O[Org LLM configs] --> M
  M --> I[Orchestrator instance]
```

1. Ensure the caller has cadence:org:llm-configs:read or :write (see router docstrings for BYOK-style changes).
2. Use GET/PATCH/... under /api/orgs/{org_id}/llm-configs as documented in OpenAPI for your deployment.
3. Create or update orchestrator instances so they pick up the merged configuration (see Orchestrator instances).
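Steps 1 and 2 can be sketched as small helpers: one builds the endpoint URL, the other maps an HTTP method to the permission scope it requires. The base URL and the read/write split by verb are assumptions for illustration; consult the router docstrings and your deployment's OpenAPI spec for the authoritative rules.

```python
# Hedged sketch of the endpoint path and permission mapping.
def llm_config_url(base: str, org_id: str) -> str:
    """Builds the per-org LLM config endpoint path."""
    return f"{base}/api/orgs/{org_id}/llm-configs"

def required_scope(method: str) -> str:
    """Assumed mapping: safe verbs need the read scope, mutating verbs need write."""
    read_only = {"GET", "HEAD", "OPTIONS"}
    suffix = "read" if method.upper() in read_only else "write"
    return f"cadence:org:llm-configs:{suffix}"

print(llm_config_url("https://api.example.com", "org_123"))
print(required_scope("PATCH"))
```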
| Term | Meaning |
| --- | --- |
| Cascade | Org-level entries override global defaults when the merge rules allow it. |
| BYOK | Bring-your-own API keys for providers, stored and validated per policy. |
| Tier | Subscription tier may cap which models or providers an org may use. |
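Tier gating from the table above can be sketched as an allowlist filter. The tier names and model sets here are hypothetical; the actual caps come from platform rules for your deployment.

```python
# Illustrative tier gating: intersect an org's requested models with a
# hypothetical per-tier allowlist (not the platform's real limits).
TIER_MODELS = {
    "free": {"gpt-4o-mini"},
    "pro": {"gpt-4o-mini", "gpt-4o"},
}

def allowed_models(tier: str, requested: set) -> set:
    """Returns the subset of requested models the tier permits."""
    return requested & TIER_MODELS.get(tier, set())

print(allowed_models("free", {"gpt-4o", "gpt-4o-mini"}))
```

An unknown tier yields an empty set, which is a conservative default for a sketch like this.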

Permissions: cadence:org:llm-configs:read and :write; the router docstrings describe who may change BYOK-style settings.

For request/tenant context, see Multi-tenancy. For orchestrator validation at create time, see Orchestrator instances.