# OpenAI Integration
Add governance to your OpenAI calls with one line of code. The SDK wraps your OpenAI client and automatically enforces whatever policies you have defined in the Control Zero dashboard.
## Setup

```shell
pip install controlzero openai
```

```python
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai

cz = controlzero.init()
client = wrap_openai(openai.OpenAI(), cz)

# Use client exactly as before -- all calls are now governed by your dashboard policies
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```
That's it. Two lines added (`cz = controlzero.init()` and `wrap_openai()`). Your application code stays the same.
## What Gets Enforced Automatically

The wrapper intercepts these API methods and checks them against your policies:

| API Method | Policy Action | Policy Resource |
|---|---|---|
| `chat.completions.create(model="gpt-4")` | `llm.generate` | `model/gpt-4` |
| `embeddings.create(model="text-embedding-3-small")` | `embedding.generate` | `model/text-embedding-3-small` |
The wrapper extracts the `model` parameter from every call, maps it to the corresponding policy action and resource, and checks it against your active policies. If the call is denied, `PolicyDeniedError` is raised before the request ever reaches OpenAI.

All other OpenAI client methods (models, files, assistants, etc.) pass through unchanged.
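The interception pattern itself is simple to picture: a thin proxy that pulls `model` out of the keyword arguments, runs an enforcement hook, and only then delegates to the real method. The sketch below is an illustration of the general technique, not the SDK's actual implementation; the `enforce` callable is a placeholder for whatever the SDK uses internally.

```python
class GovernedMethod:
    """Proxy for a single client method: run a policy check, then delegate."""

    def __init__(self, inner, enforce, action):
        self._inner = inner      # the real bound method, e.g. completions.create
        self._enforce = enforce  # callable(action, resource) that raises on deny
        self._action = action    # e.g. "llm.generate"

    def __call__(self, *args, **kwargs):
        model = kwargs.get("model")
        if model is not None:
            # Map the call onto (action, resource) and check it before sending
            self._enforce(self._action, f"model/{model}")
        return self._inner(*args, **kwargs)
```

Conceptually, wrapping swaps `client.chat.completions.create` for such a proxy; methods with no policy mapping are left untouched, which is why everything else passes through unchanged.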
## Example Policy: Model Allowlist

Define this policy in the Control Zero dashboard:

```json
{
  "name": "openai-model-governance",
  "description": "Control which OpenAI models agents can use",
  "rules": [
    { "effect": "allow", "action": "llm.generate", "resource": "model/gpt-4" },
    { "effect": "allow", "action": "llm.generate", "resource": "model/gpt-3.5-turbo" },
    { "effect": "deny", "action": "llm.generate", "resource": "model/gpt-4-turbo*" },
    {
      "effect": "allow",
      "action": "embedding.generate",
      "resource": "model/text-embedding-3-small"
    }
  ]
}
```
What happens at runtime with this policy:

```python
# ALLOWED -- matches "allow llm.generate model/gpt-4"
client.chat.completions.create(model="gpt-4", ...)

# ALLOWED -- matches "allow llm.generate model/gpt-3.5-turbo"
client.chat.completions.create(model="gpt-3.5-turbo", ...)

# BLOCKED -- matches "deny llm.generate model/gpt-4-turbo*"
client.chat.completions.create(model="gpt-4-turbo", ...)  # raises PolicyDeniedError

# BLOCKED -- no matching allow rule (default deny)
client.chat.completions.create(model="gpt-4o", ...)  # raises PolicyDeniedError

# ALLOWED -- matches "allow embedding.generate model/text-embedding-3-small"
client.embeddings.create(model="text-embedding-3-small", input="hello")
```
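The evaluation semantics implied by these examples -- glob-style wildcards in resources, deny rules taking precedence over allows, and default deny when nothing matches -- can be modeled in a few lines with Python's `fnmatch`, which implements shell-style pattern matching. This is an illustrative model of the observable behavior, not the SDK's actual policy engine:

```python
from fnmatch import fnmatch

def is_allowed(rules, action, resource):
    """Deny-overrides evaluation: any matching deny blocks the call,
    otherwise a matching allow permits it, otherwise default deny."""
    matching = [
        r for r in rules
        if r["action"] == action and fnmatch(resource, r["resource"])
    ]
    if any(r["effect"] == "deny" for r in matching):
        return False
    return any(r["effect"] == "allow" for r in matching)
```

Under this model, `model/gpt-4-turbo` matches the deny pattern `model/gpt-4-turbo*` and is blocked, while `model/gpt-4o` matches nothing and falls through to the default deny.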
## Handling Denied Actions

Catch `PolicyDeniedError` to provide a graceful fallback:

```python
try:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "Hello"}],
    )
except controlzero.PolicyDeniedError:
    # Fall back to an approved model
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )
```
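When several models might be denied depending on the active policy, this try/except pattern generalizes to a helper that walks an ordered preference list. A minimal sketch: the helper name and parameters are illustrative, and the denial exception class is passed in so the snippet stands on its own.

```python
def complete_with_fallback(models, make_call, denied_exc):
    """Try each model in preference order; skip any the policy denies."""
    last_denial = None
    for model in models:
        try:
            return make_call(model)
        except denied_exc as exc:
            last_denial = exc  # this model was denied; try the next one
    raise RuntimeError(f"all models denied by policy: {models}") from last_denial
```

With the SDK this might be invoked as `complete_with_fallback(["gpt-4-turbo", "gpt-4"], lambda m: client.chat.completions.create(model=m, messages=msgs), controlzero.PolicyDeniedError)`, keeping the preference order in one place instead of nested try/except blocks.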
## Per-Agent Wrapping

If you have multiple agents with different privileges, pass `agent_id` when wrapping:

```python
# Premium support agent -- gets access to GPT-4
premium_client = wrap_openai(openai.OpenAI(), cz, agent_id="premium-support")

# Basic agent -- restricted to GPT-3.5
basic_client = wrap_openai(openai.OpenAI(), cz, agent_id="basic-support")
```
Then in your dashboard policy, use conditions to differentiate:

```json
{
  "rules": [
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-4",
      "conditions": { "agent_id": "premium-*" }
    },
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-3.5-turbo"
    },
    {
      "effect": "deny",
      "action": "llm.generate",
      "resource": "model/*"
    }
  ]
}
```
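Condition values such as `premium-*` appear to use the same glob-style wildcards as resources. How a single rule could be matched against a request, conditions included, can be sketched with Python's `fnmatch` as a stand-in for the SDK's matcher (the actual matching semantics may differ):

```python
from fnmatch import fnmatch

def rule_matches(rule, action, resource, agent_id=None):
    """Does one policy rule apply to this request, conditions included?"""
    if rule["action"] != action or not fnmatch(resource, rule["resource"]):
        return False
    agent_pattern = rule.get("conditions", {}).get("agent_id")
    if agent_pattern is not None and not fnmatch(agent_id or "", agent_pattern):
        return False  # rule is scoped to agents this request doesn't match
    return True
```

Under this model, the `model/gpt-4` allow applies only to agents matching `premium-*`, while the unconditioned `model/gpt-3.5-turbo` allow applies to every agent.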
## Node.js

```shell
npm install @controlzero/sdk openai
```

```typescript
import { ControlZero } from '@controlzero/sdk';
import OpenAI from 'openai';

const cz = new ControlZero();
await cz.initialize();

const openai = new OpenAI();

// Use the openai client with manual enforcement for now --
// Node.js auto-wrapping is coming soon
async function ask(prompt: string, model: string = 'gpt-4') {
  await cz.enforce({
    action: 'llm.generate',
    resource: `model/${model}`,
  });
  const response = await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: prompt }],
  });
  return response.choices[0].message.content;
}
```
## Using the Secrets Vault

Let Control Zero manage your OpenAI API key instead of environment variables:

```python
cz = controlzero.ControlZero(
    api_key="cz_live_your_key_here",
    secrets_enabled=True,
)
cz.initialize()

openai_key = cz.get_secret("openai")  # in-memory lookup, no network call
client = wrap_openai(openai.OpenAI(api_key=openai_key), cz)
```
## Next Steps
- Anthropic Integration -- Same pattern for Claude API calls.
- LangChain Integration -- Callback handler for LangChain chains and agents.
- Policies -- Learn how to construct policies in the dashboard.
- RAG Guide -- Build a policy-enforced RAG pipeline.