OpenRouter Integration
OpenRouter uses the OpenAI-compatible API, so you can use wrap_openai() to add automatic policy enforcement. Your dashboard policies control which models and providers agents can access through OpenRouter.
Setup
pip install controlzero openai
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai
cz = controlzero.init()
# Point OpenAI SDK at OpenRouter and wrap it
client = wrap_openai(
    openai.OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-your-openrouter-key",
    ),
    cz,
)
# Use normally -- all calls are governed by your dashboard policies
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello"}],
)
Because OpenRouter exposes an OpenAI-compatible API, wrap_openai() works identically here. The full model name, including the provider prefix such as anthropic/ or openai/, is used as the resource in policy checks.
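As a concrete illustration, the policy resource is the model parameter prefixed with model/. A minimal sketch — resource_for_model is a hypothetical helper shown for clarity, not part of the SDK:

```python
def resource_for_model(model: str) -> str:
    # Hypothetical helper: mirrors how the wrapper derives the policy
    # resource from the OpenAI-style model parameter, prefix included.
    return f"model/{model}"

print(resource_for_model("anthropic/claude-sonnet-4-20250514"))
# model/anthropic/claude-sonnet-4-20250514
print(resource_for_model("openai/gpt-4"))
# model/openai/gpt-4
```

Because the provider prefix is part of the resource, a single policy can distinguish the same underlying model routed through different providers.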
Example Policy
Allow specific models through OpenRouter while blocking expensive ones:
{
  "name": "openrouter-model-policy",
  "description": "Control which models can be used via OpenRouter",
  "rules": [
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/anthropic/claude-sonnet-4-20250514"
    },
    { "effect": "allow", "action": "llm.generate", "resource": "model/openai/gpt-4" },
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/meta-llama/llama-3-70b-instruct"
    },
    {
      "effect": "deny",
      "action": "llm.generate",
      "resource": "model/anthropic/claude-opus-4-20250514"
    },
    { "effect": "deny", "action": "llm.generate", "resource": "model/*" }
  ]
}
Runtime behavior:
# ALLOWED -- matches rule 1
client.chat.completions.create(model="anthropic/claude-sonnet-4-20250514", ...)
# BLOCKED -- matches rule 4
client.chat.completions.create(model="anthropic/claude-opus-4-20250514", ...)
# Raises PolicyDeniedError
# BLOCKED -- matches catch-all deny (rule 5)
client.chat.completions.create(model="google/gemini-pro", ...)
# Raises PolicyDeniedError
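The decisions above follow first-match evaluation over the rule list. A minimal sketch of that matching logic, assuming glob-style wildcards and a default deny when nothing matches — the real evaluator runs inside Control Zero and may differ in both semantics and wildcard syntax:

```python
from fnmatch import fnmatch

# Rules from the example policy, as (effect, resource pattern) pairs.
RULES = [
    ("allow", "model/anthropic/claude-sonnet-4-20250514"),
    ("allow", "model/openai/gpt-4"),
    ("allow", "model/meta-llama/llama-3-70b-instruct"),
    ("deny", "model/anthropic/claude-opus-4-20250514"),
    ("deny", "model/*"),
]

def decide(model: str) -> str:
    """Return the effect of the first rule whose pattern matches."""
    resource = f"model/{model}"
    for effect, pattern in RULES:
        if fnmatch(resource, pattern):
            return effect
    return "deny"  # assumed default when no rule matches

print(decide("anthropic/claude-sonnet-4-20250514"))  # allow
print(decide("anthropic/claude-opus-4-20250514"))    # deny (rule 4)
print(decide("google/gemini-pro"))                   # deny (catch-all)
```

Note that the catch-all deny only fires after the explicit allows, which is why ordering specific rules before wildcard rules matters.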
Why Use Control Zero with OpenRouter?
OpenRouter aggregates many providers, which increases the surface for cost overruns and unauthorized model usage. Control Zero adds governance:
- Model allowlists: Only approved models can be used, even if OpenRouter supports hundreds more.
- Cost control: Block expensive models (Opus, GPT-4 Turbo) while allowing cost-effective ones.
- Per-agent policies: Different agents get different model access through conditions.
- Audit trail: Every model call is logged with agent, model, and decision.
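A per-agent rule might look like the sketch below. The conditions field and its agent key are assumptions about the policy schema made for illustration; see the Policies documentation for the actual syntax:

```json
{
  "effect": "allow",
  "action": "llm.generate",
  "resource": "model/openai/gpt-4",
  "conditions": { "agent": "research-assistant" }
}
```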
Node.js
import { ControlZero } from '@controlzero/sdk';
import OpenAI from 'openai';
const cz = new ControlZero();
await cz.initialize();
const client = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

// Manual enforcement for Node.js
async function generate(prompt: string, model: string) {
  await cz.enforce({
    action: 'llm.generate',
    resource: `model/${model}`,
  });

  const response = await client.chat.completions.create({
    model,
    messages: [{ role: 'user', content: prompt }],
  });

  return response.choices[0].message.content;
}
Next Steps
- OpenAI Integration -- Same wrap_openai() pattern with more details.
- LiteLLM Integration -- Another multi-provider approach with callback enforcement.
- Policies -- How to construct dashboard policies.