AI Governance for the Agentic Era

Wrap your LLM client. Define policies in the dashboard. Every action -- from model calls to custom domain operations -- is governed automatically.

main.py
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai

cz = controlzero.init()
client = wrap_openai(openai.OpenAI(), cz)

# Every LLM call is now governed by your dashboard policies
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

How It Works

1. Wrap Your Client

Install the SDK and wrap your LLM client in one line. LLM calls are automatically governed from that point on.

pip install controlzero
2. Define Policies in the Dashboard

Create rules that control which models, tools, and custom actions your agents can use. Policies sync to the SDK automatically.

3. Ship with Confidence

Every LLM call and custom action is checked against your policies before execution. Violations are blocked and logged. No code changes needed.
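The check-before-execute flow above can be sketched in plain Python. This is an illustrative mock, not the Control Zero SDK: `govern`, `PolicyViolation`, and `fake_create` are hypothetical names standing in for the real wrapper, and the `is_allowed` lambda stands in for a dashboard-synced policy.

```python
class PolicyViolation(Exception):
    """Raised when a requested action is blocked by policy."""
    pass

def govern(call, is_allowed):
    """Wrap `call` so every invocation is policy-checked first."""
    def wrapped(**kwargs):
        model = kwargs.get("model", "")
        if not is_allowed(model):
            # Violation: block execution before the call is made.
            raise PolicyViolation(f"model {model!r} blocked by policy")
        return call(**kwargs)
    return wrapped

# Stand-in for a real LLM client method:
def fake_create(model, messages):
    return f"response from {model}"

# Policy stand-in: only gpt-4 is allowed.
create = govern(fake_create, is_allowed=lambda m: m == "gpt-4")
print(create(model="gpt-4", messages=[{"role": "user", "content": "Hello"}]))
```

Because the wrapper preserves the original call signature, calling code stays unchanged, which is the point of step 1: governance is added around the client, not inside your business logic.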

Built for Production AI

Auto-Wrapping SDKs

Wrap your OpenAI, Anthropic, or LangChain client in one line. Every LLM call is automatically checked against your dashboard policies before execution. No code changes to your business logic.

Custom Actions

Go beyond LLM governance. Define your own action conventions for vector stores, databases, APIs, MCP tools, or any domain-specific operation. Use enforce() to gate any action with policy checks.

Policy Engine

Define granular, composable policies in the dashboard. Specify allowed actions, target resources, conditions, and effects. Policies are encrypted, cached locally, and tamper-proof.
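Rule matching of the kind described above can be sketched with standard-library glob patterns. This is a hypothetical illustration of first-match, default-deny evaluation, not the engine's actual algorithm; the rule shape mirrors the `effect`/`action`/`resource` fields shown in the Custom Actions example below.

```python
from fnmatch import fnmatch

def matches(rule, action, resource):
    """A rule applies when both its action and resource patterns match.
    Wildcards (*) in the rule are treated as glob patterns."""
    return fnmatch(action, rule["action"]) and fnmatch(resource, rule["resource"])

def evaluate(policy, action, resource):
    """First matching rule wins; default-deny when nothing matches."""
    for rule in policy:
        if matches(rule, action, resource):
            return rule["effect"]
    return "deny"

policy = [
    {"effect": "allow", "action": "data.read", "resource": "vectorstore/*"},
    {"effect": "deny", "action": "trade.execute", "resource": "asset/*"},
]

print(evaluate(policy, "data.read", "vectorstore/docs"))  # -> allow
print(evaluate(policy, "trade.execute", "asset/AAPL"))    # -> deny
```

Default-deny is the conservative choice for an agent gate: an action your dashboard has never heard of is blocked rather than silently permitted.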

Audit Trail

Every policy decision is recorded with full context -- the requested action, the matching rule, and the outcome. Gain complete, searchable visibility into how your AI agents operate.
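An audit record with the context described above might look like the following. The field names (`matched_rule`, `outcome`, and so on) are illustrative assumptions, not the SDK's actual schema.

```python
from datetime import datetime, timezone

def audit_record(action, resource, rule, outcome):
    """Capture one policy decision: what was requested, which rule
    matched, and what happened."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "resource": resource,
        "matched_rule": rule,
        "outcome": outcome,
    }

rec = audit_record(
    "data.read",
    "vectorstore/docs",
    {"effect": "allow", "action": "data.read", "resource": "vectorstore/*"},
    "allow",
)
```

Recording the matched rule alongside the outcome is what makes the trail searchable by cause, not just by effect: you can ask "which decisions did this rule produce?" as well as "what happened to this action?"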

Works With Your Stack

Python -- Auto-Wrapping
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai

cz = controlzero.init()
client = wrap_openai(openai.OpenAI(), cz)

# Use normally -- all calls governed by dashboard policies
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
Python -- Custom Actions
import controlzero

cz = controlzero.init()

# Define your own action conventions for any domain
cz.enforce(action="data.read", resource="vectorstore/docs")
cz.enforce(action="trade.execute", resource="asset/AAPL")
cz.enforce(action="patient.read", resource="record/labs")

# Each maps to a rule in the dashboard policy:
# { "effect": "allow", "action": "data.read",
#   "resource": "vectorstore/docs" }
Node.js
import { ControlZero } from "@controlzero/sdk";
import OpenAI from "openai";

const cz = new ControlZero();
await cz.initialize();

// Manual enforcement for Node.js
await cz.enforce({
  action: "llm.generate",
  resource: "model/gpt-4",
});

// Custom actions work the same way
await cz.enforce({
  action: "mcp.tool.call",
  resource: "mcp://filesystem/read_file",
});

Ready to govern your AI agents?

Get started with Control Zero in under five minutes.

Get Started