Govern an existing AI app without code changes
Surfaces used: gateway proxy
Modes supported: Hosted, Hybrid (self-hosted gateway)
Tiers: Free, Solo, Teams
What you'll do
Change one environment variable. Every LLM request your app makes now flows through the Control Zero gateway, which enforces your policy on prompts and tool calls in flight and writes an audit trail, all without you touching a line of application code.
Why this is the right path for you
- If you have an existing AI app in production (any language, any framework) and you cannot, or do not want to, modify it, this is the fastest path.
- If your agent calls OpenAI, Anthropic, Google, Cohere, Mistral, DeepSeek, or an OpenAI-compatible endpoint (Ollama, TGI, vLLM), the gateway handles it.
- If you are building a new app and want finer control, prefer the Python SDK or Node SDK.
When NOT to use this approach
The gateway enforces at the LLM boundary. It does not see tool calls your app makes directly, outside the LLM round-trip; for those, use the SDK. It also does not cover browser-based usage (claude.ai, ChatGPT); for that, use the browser extension.
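The distinction can be sketched in a few lines. This is illustrative only, with hypothetical function names, not Control Zero APIs: traffic addressed to the provider passes through the gateway because the base URL points at it, while a tool your app invokes directly never does.

```python
# Illustrative only; both function names are hypothetical, not Control Zero APIs.

def call_llm(prompt: str) -> str:
    # Goes to the provider's API, so it is routed through the gateway
    # (the base URL env var points at gateway.controlzero.ai) and is
    # policy-checked and audited.
    return f"[gateway-enforced response to: {prompt}]"

def run_local_tool(path: str) -> str:
    # Executed directly by your app; never crosses the gateway, so the
    # gateway cannot see or block it. Use the SDK for this case.
    return f"[unaudited local read of {path}]"

print(call_llm("summarize report"))
print(run_local_tool("/etc/passwd"))
```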
5-minute setup
Get your gateway URL and API key from the dashboard -> Settings -> Gateway.
Anthropic (Claude)
# Before:
export ANTHROPIC_API_KEY=sk-ant-...
# After:
export ANTHROPIC_BASE_URL=https://gateway.controlzero.ai/anthropic
export ANTHROPIC_API_KEY=sk-ant-... # your real key
export CONTROLZERO_API_KEY=cz_live_... # sent as X-ControlZero-API-Key
OpenAI
export OPENAI_BASE_URL=https://gateway.controlzero.ai/openai/v1
export OPENAI_API_KEY=sk-...
export CONTROLZERO_API_KEY=cz_live_...
If your SDK does not pick up CONTROLZERO_API_KEY automatically, add the header:
X-ControlZero-API-Key: cz_live_...
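If you are constructing requests by hand, the gateway only needs that one extra key-value pair alongside the provider's usual auth header. A minimal stdlib sketch; the key values are placeholders, and the Authorization scheme shown is OpenAI's bearer-token style:

```python
import os

# Placeholder credentials; in a real app these come from your environment.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
os.environ.setdefault("CONTROLZERO_API_KEY", "cz_live_...")

def gateway_headers() -> dict:
    """Headers for a request routed through the Control Zero gateway."""
    return {
        # Provider auth, unchanged from a direct call:
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        # The one addition the gateway needs:
        "X-ControlZero-API-Key": os.environ["CONTROLZERO_API_KEY"],
        "Content-Type": "application/json",
    }

headers = gateway_headers()
print(sorted(headers))
```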
Restart your app. That's it.
Test it
curl https://gateway.controlzero.ai/anthropic/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "X-ControlZero-API-Key: $CONTROLZERO_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "hello"}]
  }'
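The same smoke test can be driven from Python with only the standard library. Sending requires a live gateway and real keys, so the sketch below only builds the request (endpoint, headers, and payload match the curl above; the key values are placeholders); uncomment the last lines to actually send it.

```python
import json
import urllib.request

API_KEY = "sk-ant-..."   # placeholder: your real Anthropic key
CZ_KEY = "cz_live_..."   # placeholder: your Control Zero key

payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "hello"}],
}

req = urllib.request.Request(
    "https://gateway.controlzero.ai/anthropic/v1/messages",
    data=json.dumps(payload).encode(),
    headers={
        "x-api-key": API_KEY,
        "X-ControlZero-API-Key": CZ_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)

# Uncomment to send against a live gateway:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```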
Expected: a normal Anthropic response. If the policy denies the request, you get an error body tagged [Control Zero] with the reason for the denial.
Verifying it's working
- Open the dashboard -> Audit. You should see every request with provider, model, token counts, and decision.
- Add a DLP rule (e.g., deny SSN). Send a prompt containing a test SSN. Confirm the request is denied and the audit row shows the matched rule.
- Remove the gateway URL temporarily, send the same request, and confirm it goes direct and nothing lands in Audit. Put the gateway URL back.
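For the DLP check, any string in the standard 3-2-4 dashed format works as a test SSN. A sketch, assuming the deny rule matches that common pattern (the actual rule syntax is configured in the dashboard and may differ):

```python
import re

# Assumed illustrative pattern; your DLP rule's real matcher is configured
# in the dashboard and may be stricter.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# 078-05-1120 is a well-known decommissioned test SSN, safe to use.
test_prompt = "My SSN is 078-05-1120, please file my taxes."
clean_prompt = "Summarize this quarterly report."

print(bool(SSN_PATTERN.search(test_prompt)))   # matches -> request denied
print(bool(SSN_PATTERN.search(clean_prompt)))  # no match -> passes through
```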
Common follow-ups
- "I want to set DLP rules" -> Set up DLP rules
- "I want to run the gateway myself" -> Self-Hosted
- "I want cost caps and model allowlists" -> Advanced policy playbooks
- "I want to alert Slack on denies" -> Alert channels
Reference
- Surface page: Gateway proxy
- Concepts: Policies, Projects
- API: API reference