What is Cadence?
Product scope, main components, and how the Python service fits in.
Intended audience: Stakeholders, Business analysts, Solution architects, Developers, Testers
Learning outcomes by role
Stakeholders
- Describe Cadence as a multi-tenant AI orchestration backend and its main cost and risk trade-offs (shared infra, isolation model).
Business analysts
- Explain tenant boundaries, orchestrator instances, and chat/API surfaces to stakeholders without implementation detail.
Solution architects
- Position Cadence alongside Postgres, Redis, and optional RabbitMQ in an enterprise reference architecture.
Developers
- Name major API surface areas and where orchestration, plugins, and tenancy logic live at a high level.
Testers
- Identify major capability areas to map test suites (tenancy, auth, chat, plugins, orchestrators).
Cadence is the backend for multi-company AI agent setups: organizations, users, configurable AI runtimes (“orchestrator instances”), plugins, and chat-style HTTP APIs—enforced with tenant scope, RBAC, and rate limits.
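The tenant-scope enforcement described above can be sketched as follows. This is an illustrative stand-in, not Cadence's actual middleware: the X-Org-Id and X-User-Id header names and the TenantScope type are assumptions made for this example.

```python
# Illustrative sketch only; the real scope resolution lives in Cadence's
# middleware and domain services. Header names here are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class TenantScope:
    org_id: str
    user_id: str


def resolve_scope(headers: dict) -> TenantScope:
    """Derive the tenant scope that every downstream query is filtered by."""
    org_id = headers.get("X-Org-Id")
    user_id = headers.get("X-User-Id")
    if not org_id or not user_id:
        # Reject requests that arrive without a tenant identity.
        raise PermissionError("request is missing tenant scope headers")
    return TenantScope(org_id=org_id, user_id=user_id)


scope = resolve_scope({"X-Org-Id": "acme", "X-User-Id": "u-1"})
```

The point of the sketch is that scope is resolved once at the edge and then carried through, rather than re-derived ad hoc inside each handler.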
Summary for stakeholders
- One service, many customers — Organizations share a deployment but data and settings are isolated by design; scope is enforced in middleware and services (see Multi-tenancy).
- Composable AI runtimes — Teams plug in agent frameworks via orchestrator instances and plugins instead of forking the core for every product line.
- Operational reality — Uptime stories tie to PostgreSQL, Redis, and optionally RabbitMQ; Redis loss degrades sessions and limits—plan capacity accordingly.
Business analysis
- Problem space — Teams need a production API that combines multi-tenancy, pluggable agent frameworks, and operational controls (sessions, quotas, observability) without rebuilding that stack per product.
- Primary capabilities — Tenant orgs, orchestrator lifecycle, plugin catalog, chat/completion HTTP APIs, RBAC, rate limits—each maps to documented feature pages for acceptance decomposition.
Architecture and integration
How the pieces fit
Logical view; router modules are registered in cadence.core.router.register_api_routers.
```mermaid
flowchart TB
    subgraph clients [Clients]
        B[Browser / BFF]
        S[Scripts / integrations]
    end
    subgraph cadence [Cadence API]
        A[Auth and tenant middleware]
        T[Tenant APIs]
        C[Chat / engine]
    end
    subgraph data [Infrastructure]
        P[(Postgres)]
        R[(Redis)]
        Q[RabbitMQ optional]
    end
    B --> A
    S --> A
    A --> T
    A --> C
    T --> P
    C --> P
    A --> R
    T --> Q
```
Cadence delivers:
- Multi-tenant organizations — Each org has isolated data, settings, LLM configs, and orchestrator instances. Scope is carried on requests (paths and headers) and enforced in middleware and domain services.
- Orchestrator instances — Configurable agent runtimes (e.g. LangGraph, Google ADK, OpenAI Agents) with a pool that can load hot-tier instances at startup and react to events.
- Plugins — System and tenant plugin catalogs, storage on disk or S3, validation and dependency resolution before orchestrators run.
- Chat and completion — HTTP APIs that route to the engine layer and respect org context, rate limits, and quotas.
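The per-org rate limiting mentioned in the last bullet can be modeled as a fixed-window counter. Cadence backs its limits with Redis; the in-memory version below is only a concept sketch under that assumption, and FixedWindowLimiter and its keying are hypothetical names.

```python
import time


class FixedWindowLimiter:
    """In-memory stand-in for a Redis-backed limiter: allow at most
    `limit` requests per `window` seconds, keyed by e.g. (org, route)."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self._counts = {}  # key -> (window start, count)

    def allow(self, key, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self._counts.get(key, (now, 0))
        if now - start >= self.window:
            # Window expired: reset the counter.
            start, count = now, 0
        if count >= self.limit:
            return False
        self._counts[key] = (start, count + 1)
        return True


limiter = FixedWindowLimiter(limit=2, window=60.0)
key = ("org-acme", "/v1/chat")
print([limiter.allow(key, now=0.0) for _ in range(3)])  # [True, True, False]
```

A Redis implementation replaces the dict with INCR plus an expiry, which is what makes the limits survive across API workers.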
The service is a FastAPI application. Persistent state lives primarily in PostgreSQL; Redis backs sessions, rate limiting, and some stats; RabbitMQ drives orchestrator-related events when available.
Implementation notes
At code level, HTTP routers are registered in cadence.core.router.register_api_routers; the FastAPI app is built in cadence.main. Startup wiring (pools, services, orchestrator pool, optional broker) runs in create_lifespan_handler—see How the platform works.
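The startup wiring that create_lifespan_handler performs can be sketched in plain asyncio as an async context manager, the same shape FastAPI lifespan handlers take: acquire resources before serving, release them in reverse on exit. The resource names below are placeholders, not Cadence's actual objects.

```python
# Hedged sketch of a lifespan handler's shape; not Cadence's real wiring.
import asyncio
from contextlib import asynccontextmanager

events = []


@asynccontextmanager
async def lifespan(app):
    # Startup: open pools and warm the orchestrator pool.
    events.append("open db pool")
    events.append("open redis")
    events.append("load hot-tier orchestrators")
    try:
        yield  # the app serves requests while suspended here
    finally:
        # Shutdown: release resources in reverse order.
        events.append("close redis")
        events.append("close db pool")


async def main():
    async with lifespan(app=None):
        events.append("serving")


asyncio.run(main())
```

The context-manager shape guarantees shutdown runs even if serving raises, which is why FastAPI adopted it over separate startup/shutdown hooks.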
Verification and quality
- Not a frontend — Configure CADENCE_CORS_ORIGINS for browser clients; the API does not ship a UI.
- Redis-dependent features — Sessions and rate limits expect Redis; behavior degrades by design when Redis is unavailable—cover this in staging drills.
- RabbitMQ optional — Without the broker, orchestrator event messaging is off; the rest of the API can still run (startup logs a warning).
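The broker-optional behavior in the last bullet follows a familiar pattern: attempt the connection, log a warning on failure, and keep serving. A minimal sketch, where connect_broker and start_messaging are hypothetical stand-ins for Cadence's real startup code:

```python
# Sketch of "broker optional" startup; function names are illustrative only.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("cadence.startup")


def connect_broker(url):
    if not url:
        raise ConnectionError("no broker configured")
    return {"url": url}


def start_messaging(url):
    """Return a broker handle, or None when messaging is unavailable."""
    try:
        return connect_broker(url)
    except ConnectionError as exc:
        # Degrade gracefully: orchestrator events are off, the API still runs.
        log.warning("RabbitMQ unavailable (%s); orchestrator events disabled", exc)
        return None


broker = start_messaging(None)  # -> None; the rest of startup continues
```

Downstream code then checks the handle for None before publishing, so the absence of a broker never blocks request handling.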