# LiteLLM Integration

Enforce Control Zero policies when using LiteLLM as a unified LLM interface or proxy.
## Overview

LiteLLM provides a unified API for calling 100+ LLMs through the same interface. Control Zero integrates with LiteLLM via a callback handler that enforces policies on every LLM call, regardless of the underlying provider.
## Installation

```bash
pip install controlzero litellm
```
## Quick Setup

```python
import litellm

import controlzero
from controlzero.integrations.litellm import ControlZeroLiteLLMCallback

cz = controlzero.init()

# Register the callback -- all LiteLLM calls are now enforced
litellm.callbacks = [ControlZeroLiteLLMCallback(cz, agent_id="analyst-agent")]
```
The callback intercepts `log_pre_api_call` to enforce the `llm.generate` action before each request. It also detects embedding calls and enforces `embedding.generate` for those.
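The routing between those two actions can be sketched as follows. This is a simplified illustration of the behavior described above, not the actual callback implementation; the `call_type` values here are assumptions modeled on LiteLLM's method names:

```python
# Illustrative sketch: map a LiteLLM call type to the Control Zero action
# the callback would enforce. The real logic lives in
# controlzero.integrations.litellm and may differ.

def action_for_call(call_type: str) -> str:
    """Return the Control Zero action to enforce for a LiteLLM call type."""
    # Embedding requests (sync or async) map to embedding.generate;
    # everything else is treated as text generation.
    if call_type in ("embedding", "aembedding"):
        return "embedding.generate"
    return "llm.generate"
```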
## Basic Usage

Once the callback is registered, all LiteLLM calls are automatically enforced:
```python
import litellm

import controlzero

# This call goes through policy enforcement automatically
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the quarterly report"}],
)

# Switch providers -- still enforced
response = litellm.completion(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Analyze the data trends"}],
)

# If the policy denies this model, a PolicyDeniedError is raised
try:
    response = litellm.completion(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "Write a long document"}],
    )
except controlzero.PolicyDeniedError as e:
    print(f"Blocked: {e.message}")
```
## LiteLLM Proxy Integration

If you run LiteLLM as a proxy server, add policy enforcement to the proxy configuration:
```python
# litellm_proxy_config.py
import litellm

import controlzero

cz = controlzero.ControlZero()
cz.initialize()

class ControlZeroProxyCallback(litellm.CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs):
        # Extract agent/user info from proxy request metadata
        metadata = kwargs.get("litellm_params", {}).get("metadata", {})
        agent_id = metadata.get("agent_id", "unknown")
        cz.enforce(
            action="llm.generate",
            resource=f"model/{model}",
            context={"agent_id": agent_id},
        )
```
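To have the proxy actually load this callback, LiteLLM's proxy can reference a callback instance from its YAML config. A sketch, assuming the module above also defines an instance `cz_callback = ControlZeroProxyCallback()` (the instance name and module path here are illustrative; check your LiteLLM version's custom-callback docs for the exact config key):

```yaml
# config.yaml for the LiteLLM proxy (illustrative)
litellm_settings:
  callbacks: litellm_proxy_config.cz_callback
```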
## Example Policy
```json
{
  "name": "litellm-model-governance",
  "rules": [
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-4"
    },
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/claude-sonnet-4-20250514"
    },
    {
      "effect": "deny",
      "action": "llm.generate",
      "resource": "model/gpt-4-turbo"
    }
  ]
}
```
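To illustrate how these rules resolve, here is a minimal first-match evaluator in plain Python. This is a sketch of the apparent semantics, not Control Zero's actual engine; first-match ordering and a default-deny fallback are both assumptions:

```python
# Illustrative first-match policy evaluator. Assumes rules are checked in
# order and that a request matching no rule is denied by default -- both
# assumptions, not documented Control Zero behavior.

POLICY = {
    "name": "litellm-model-governance",
    "rules": [
        {"effect": "allow", "action": "llm.generate",
         "resource": "model/gpt-4"},
        {"effect": "allow", "action": "llm.generate",
         "resource": "model/claude-sonnet-4-20250514"},
        {"effect": "deny", "action": "llm.generate",
         "resource": "model/gpt-4-turbo"},
    ],
}

def evaluate(policy: dict, action: str, resource: str) -> str:
    """Return the effect of the first rule matching action and resource."""
    for rule in policy["rules"]:
        if rule["action"] == action and rule["resource"] == resource:
            return rule["effect"]
    return "deny"  # assumed default-deny fallback
```

Under these assumptions, `gpt-4` resolves to allow and `gpt-4-turbo` to deny, matching the `PolicyDeniedError` shown in Basic Usage.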
## Next Steps
- See the OpenRouter integration for another multi-provider setup.
- Explore the Python SDK for the full API reference.