Advanced Playbooks
This guide covers advanced patterns for using Control Zero in production environments: multi-agent systems, environment-based policies, gradual rollouts, custom action conventions, and error handling strategies.
Playbook 1: Multi-Agent System with Per-Agent Policies
When you have multiple agents with different trust levels, use conditions to apply different rules per agent.
The Setup
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai
cz = controlzero.init()
# Each agent gets its own wrapped client with a unique agent_id
premium_client = wrap_openai(openai.OpenAI(), cz, agent_id="premium-support")
basic_client = wrap_openai(openai.OpenAI(), cz, agent_id="basic-support")
admin_client = wrap_openai(openai.OpenAI(), cz, agent_id="admin-internal")
The Policy
{
  "name": "multi-agent-governance",
  "description": "Different model access per agent tier",
  "rules": [
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-4",
      "conditions": { "agent_id": "premium-*" }
    },
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-4",
      "conditions": { "agent_id": "admin-*" }
    },
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-3.5-turbo"
    },
    {
      "effect": "deny",
      "action": "llm.generate",
      "resource": "model/gpt-4",
      "conditions": { "agent_id": "basic-*" }
    },
    {
      "effect": "deny",
      "action": "llm.generate",
      "resource": "model/*"
    }
  ]
}
Behavior:
- `premium-support` can use GPT-4 and GPT-3.5 Turbo.
- `basic-support` can only use GPT-3.5 Turbo (GPT-4 is explicitly denied for `basic-*`).
- `admin-internal` can use GPT-4 and GPT-3.5 Turbo.
- All agents are denied any other model.
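The interplay of rule order, conditions, and wildcards can be sketched with a toy first-match evaluator. This is not Control Zero's actual engine; the sketch assumes first-match-wins semantics and shell-style wildcards (Python's `fnmatch`), which is how the policy above reads:

```python
from fnmatch import fnmatch

def evaluate(rules, action, resource, agent_id=""):
    """Return the effect of the first rule matching the request.

    Toy model only: assumes first-match-wins evaluation and
    fnmatch-style wildcards; the real engine may differ.
    """
    for rule in rules:
        if not fnmatch(action, rule["action"]):
            continue
        if not fnmatch(resource, rule["resource"]):
            continue
        cond = rule.get("conditions", {})
        if "agent_id" in cond and not fnmatch(agent_id, cond["agent_id"]):
            continue
        return rule["effect"]
    return "deny"  # default-deny when nothing matches

rules = [
    {"effect": "allow", "action": "llm.generate", "resource": "model/gpt-4",
     "conditions": {"agent_id": "premium-*"}},
    {"effect": "allow", "action": "llm.generate", "resource": "model/gpt-4",
     "conditions": {"agent_id": "admin-*"}},
    {"effect": "allow", "action": "llm.generate", "resource": "model/gpt-3.5-turbo"},
    {"effect": "deny", "action": "llm.generate", "resource": "model/gpt-4",
     "conditions": {"agent_id": "basic-*"}},
    {"effect": "deny", "action": "llm.generate", "resource": "model/*"},
]

print(evaluate(rules, "llm.generate", "model/gpt-4", "premium-support"))   # allow
print(evaluate(rules, "llm.generate", "model/gpt-4", "basic-support"))     # deny
print(evaluate(rules, "llm.generate", "model/gpt-4o", "premium-support"))  # deny
```

Note that `model/gpt-4o` falls through to the final `model/*` deny for every agent, which is why the catch-all rule belongs last.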
Playbook 2: Environment-Based Policies
Use different policies for development, staging, and production. The same code runs everywhere -- only the policies change.
Your Code (Same Everywhere)
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai
cz = controlzero.init() # reads CONTROLZERO_PROJECT_ID from env
client = wrap_openai(openai.OpenAI(), cz)
# This code is identical in dev, staging, and production
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
Your Policies (Different per Environment)
Development project (proj_dev_001):
{
  "name": "dev-permissive",
  "rules": [{ "effect": "allow", "action": "*", "resource": "*" }]
}
Staging project (proj_staging_001):
{
  "name": "staging-moderate",
  "rules": [
    { "effect": "allow", "action": "llm.generate", "resource": "model/*" },
    { "effect": "deny", "action": "data.write", "resource": "*" },
    { "effect": "deny", "action": "mcp.tool.call", "resource": "mcp://shell/*" }
  ]
}
Production project (proj_prod_001):
{
  "name": "production-strict",
  "rules": [
    { "effect": "allow", "action": "llm.generate", "resource": "model/gpt-4" },
    { "effect": "deny", "action": "llm.generate", "resource": "model/*" },
    { "effect": "deny", "action": "mcp.tool.call", "resource": "mcp://shell/*" },
    { "effect": "deny", "action": "data.write", "resource": "*" },
    { "effect": "deny", "action": "*", "resource": "*" }
  ]
}
How to switch: Change CONTROLZERO_PROJECT_ID in your environment variables. The code stays the same.
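One common way to wire this up is to map a deployment-environment variable to the right project ID before `init()` runs. The `DEPLOY_ENV` variable and the mapping below are our own convention for illustration, not part of the SDK; only `CONTROLZERO_PROJECT_ID` comes from the guide:

```python
import os

# Hypothetical mapping from deployment environment to Control Zero project.
# The project IDs match the examples above; DEPLOY_ENV is our own convention.
PROJECTS = {
    "development": "proj_dev_001",
    "staging": "proj_staging_001",
    "production": "proj_prod_001",
}

env = os.environ.get("DEPLOY_ENV", "development")
os.environ["CONTROLZERO_PROJECT_ID"] = PROJECTS[env]

# controlzero.init() would now pick up the project for this environment.
print(os.environ["CONTROLZERO_PROJECT_ID"])
```

In practice most teams set `CONTROLZERO_PROJECT_ID` directly in their deployment configuration (container env, secrets manager) rather than in code; the mapping above just makes the switch explicit.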
Playbook 3: Gradual Rollout with Log-Only Mode
When deploying Control Zero for the first time, start in log-only mode to see what would be blocked without actually blocking anything.
Step 1: Enable Log-Only Mode
In the dashboard, go to your project settings and set Log-Only Mode to true.
Step 2: Deploy with Governance
cz = controlzero.init()
client = wrap_openai(openai.OpenAI(), cz)
# All calls are evaluated against policies and logged,
# but NOTHING is blocked. PolicyViolationError is never raised.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
# Works fine even if the policy would deny it
Step 3: Review the Audit Log
In the dashboard, check the audit log. You will see entries like:
| Action | Resource | Decision | Mode |
|---|---|---|---|
| llm.generate | model/gpt-4 | allow | log-only |
| llm.generate | model/gpt-4-turbo | deny (logged only) | log-only |
Step 4: Tune Policies
Adjust your policies based on the audit data until you are confident nothing critical will be blocked.
Step 5: Switch to Enforcement Mode
In the dashboard, set Log-Only Mode to false. Now denials are enforced for real.
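Conceptually, the toggle changes what happens after a deny decision: in log-only mode the violation is recorded and the call proceeds; in enforcement mode it raises. The real switch lives in the dashboard and is applied by the platform; the `decide` stub, `log_only` flag, and audit list below are stand-ins for illustration only:

```python
audit_log = []

def decide(action, resource):
    # Stand-in for a real policy decision; pretend gpt-4-turbo is denied.
    return "deny" if resource == "model/gpt-4-turbo" else "allow"

def enforce(action, resource, log_only):
    """Record every decision; raise only when enforcement is on."""
    decision = decide(action, resource)
    audit_log.append((action, resource, decision, "log-only" if log_only else "enforce"))
    if decision == "deny" and not log_only:
        raise PermissionError(f"{action} on {resource} denied by policy")

# In log-only mode the denied call is recorded but not blocked.
enforce("llm.generate", "model/gpt-4-turbo", log_only=True)
print(audit_log[-1])  # ('llm.generate', 'model/gpt-4-turbo', 'deny', 'log-only')
```

This is why the audit log entries in Step 3 show `deny (logged only)`: the decision was computed and recorded even though nothing was blocked.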
Playbook 4: Custom Action Conventions
You are not limited to the built-in action names. Define your own action conventions for your domain:
Financial Services Example
# Custom actions for a trading agent
cz.enforce(action="trade.execute", resource="asset/AAPL")
cz.enforce(action="trade.execute", resource="asset/BTC")
cz.enforce(action="report.generate", resource="report/portfolio-summary")
cz.enforce(action="notification.send", resource="channel/slack-alerts")
{
  "name": "trading-agent-policy",
  "rules": [
    { "effect": "allow", "action": "trade.execute", "resource": "asset/AAPL" },
    { "effect": "allow", "action": "trade.execute", "resource": "asset/MSFT" },
    { "effect": "deny", "action": "trade.execute", "resource": "asset/BTC" },
    { "effect": "allow", "action": "report.generate", "resource": "report/*" },
    { "effect": "allow", "action": "notification.send", "resource": "channel/slack-*" },
    { "effect": "deny", "action": "notification.send", "resource": "channel/email-*" }
  ]
}
Healthcare Example
cz.enforce(action="patient.read", resource="record/demographics")
cz.enforce(action="patient.read", resource="record/lab-results")
cz.enforce(action="patient.write", resource="record/notes")
{
  "name": "healthcare-agent-policy",
  "rules": [
    { "effect": "allow", "action": "patient.read", "resource": "record/demographics" },
    { "effect": "allow", "action": "patient.read", "resource": "record/lab-results" },
    { "effect": "deny", "action": "patient.read", "resource": "record/billing" },
    { "effect": "allow", "action": "patient.write", "resource": "record/notes" },
    { "effect": "deny", "action": "patient.write", "resource": "record/*" }
  ]
}
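Note the ordering of the write rules: the specific allow for `record/notes` sits before the broad `record/*` deny. Assuming first-match-wins evaluation (consistent with how the other playbooks read), swapping the two rules would deny every write, including notes. A minimal illustration of that ordering effect:

```python
from fnmatch import fnmatch

# Just the two patient.write rules, in the order the policy lists them.
# First-match-wins is an assumption here, not confirmed SDK behavior.
write_rules = [
    {"effect": "allow", "resource": "record/notes"},
    {"effect": "deny", "resource": "record/*"},
]

def first_match(rules, resource):
    for rule in rules:
        if fnmatch(resource, rule["resource"]):
            return rule["effect"]
    return "deny"  # default-deny when nothing matches

print(first_match(write_rules, "record/notes"))    # allow
print(first_match(write_rules, "record/billing"))  # deny
```

With the rules reversed, `record/*` would match `record/notes` first and the agent could not write notes at all.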
Playbook 5: Error Handling Patterns
Pattern A: Fail Fast (Default)
Let PolicyViolationError propagate. The action stops immediately:
# If denied, the error propagates up and the request stops
response = client.chat.completions.create(model="gpt-4-turbo", ...)
Pattern B: Fallback to Approved Alternative
try:
    response = client.chat.completions.create(model="gpt-4-turbo", ...)
except controlzero.PolicyViolationError:
    response = client.chat.completions.create(model="gpt-3.5-turbo", ...)
Pattern C: Check Before Calling
Use check() instead of enforce() to avoid exceptions entirely:
decision = cz.check(action="llm.generate", resource="model/gpt-4-turbo")
if decision.allowed:
    response = client.chat.completions.create(model="gpt-4-turbo", ...)
else:
    # Use a different model or skip the action
    print(f"Not allowed: {decision.reason}")
Pattern D: Graceful Degradation with Logging
import logging

logger = logging.getLogger("agent")

def safe_generate(prompt: str, preferred_model: str, fallback_model: str) -> str:
    try:
        response = client.chat.completions.create(
            model=preferred_model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except controlzero.PolicyViolationError as e:
        logger.warning(
            "Model %s denied by policy, falling back to %s: %s",
            preferred_model, fallback_model, e.message,
        )
        response = client.chat.completions.create(
            model=fallback_model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
Playbook 6: Combining Auto-Wrap with Manual Enforcement
In a real application, you will use both patterns:
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai
cz = controlzero.init()
client = wrap_openai(openai.OpenAI(), cz, agent_id="research-agent")
def research(query: str) -> str:
    # AUTOMATIC: LLM calls are governed by the wrapper.
    # The policy controls which models this agent can use.

    # MANUAL: vector store access is a custom action.
    cz.enforce(action="data.read", resource="vectorstore/research-papers")

    # MANUAL: external API access is a custom action.
    cz.enforce(action="api.request", resource="https://api.scholar.google.com/*")

    # Proceed with the research pipeline
    # (LLM calls within this pipeline are auto-enforced).
    results = search_vector_store(query)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {results}"}],
    )
    return response.choices[0].message.content
The rule of thumb:
- Auto-wrap for LLM API calls (OpenAI, Anthropic, LangChain).
- Manual `enforce()` for everything else (data sources, APIs, tools, custom actions).
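One way to keep the manual `enforce()` calls from cluttering business logic is a small decorator that governs a tool function. The decorator pattern and the stubbed client below are our own sketch, not SDK API; in real code the stub would be the `cz` handle from `controlzero.init()`:

```python
import functools

class PolicyViolationError(Exception):
    """Stand-in for controlzero.PolicyViolationError."""

class StubCZ:
    """Stand-in for a Control Zero client: allows everything except shell tools."""
    def enforce(self, action, resource):
        if resource.startswith("mcp://shell/"):
            raise PolicyViolationError(f"{action} on {resource} denied")

cz = StubCZ()

def governed(action, resource):
    """Call cz.enforce(action, resource) before running the wrapped function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            cz.enforce(action=action, resource=resource)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@governed(action="data.read", resource="vectorstore/research-papers")
def search_vector_store(query):
    return f"results for {query}"

print(search_vector_store("agent governance"))  # results for agent governance
```

The governance check stays declarative at the definition site, and a denied resource raises before the tool body ever runs.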
Next Steps
- Policies -- Full reference for constructing policies.
- MCP Tool Control -- Govern MCP tools in detail.
- Projects -- Organize agents into separate governance domains.