Govern an AI app I'm building in Node.js

Surfaces used: Node.js SDK, Vercel AI SDK middleware, framework integrations
Modes supported: Local, Hosted, Hybrid
Tiers: Free (baseline), Solo, Teams (hosted features)

What you'll do

Drop a guard() check in front of every LLM call and tool invocation in your Node.js app. Works standalone or as Vercel AI SDK middleware; enforces policy and writes an audit trail.

Why this is the right path for you

  • If you are writing a Node.js or Next.js app that calls an LLM and you own the code, the SDK gives you fine-grained control.
  • If you use the Vercel AI SDK, we ship middleware that wraps generateText / streamText transparently.
  • If you cannot change the code, use the gateway.
  • If your app is Python, see Govern an AI app in Python.

When NOT to use this approach

caution

The SDK requires a code change at each decision point. If you want zero-code interception of every LLM request, switch the base URL to the gateway.

5-minute setup

npm install @controlzero/sdk

Standalone guard

import { Client } from '@controlzero/sdk';

const cz = new Client({ apiKey: process.env.CONTROLZERO_API_KEY });

const decision = await cz.guard({
  tool: 'shell',
  arguments: { command: 'rm -rf /' },
});

if (!decision.allowed) {
  throw new Error(`Blocked by policy: ${decision.reason}`);
}

Expected output on a blocked call:

Error: Blocked by policy: Destructive shell commands are not allowed.
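Once several call sites need this check, it helps to centralize the allow/deny handling. A minimal sketch, assuming the decision shape shown above (`allowed` plus an optional `reason`); `enforce` is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Hypothetical Decision shape, assumed from the guard() example above.
interface Decision {
  allowed: boolean;
  reason?: string;
}

// Run an action only if the policy decision allows it; otherwise throw
// with the policy reason so every call site fails loudly and uniformly.
async function enforce<T>(decision: Decision, run: () => Promise<T>): Promise<T> {
  if (!decision.allowed) {
    throw new Error(`Blocked by policy: ${decision.reason ?? 'no reason given'}`);
  }
  return run();
}
```

In app code this would wrap the tool invocation itself, e.g. `await enforce(decision, () => runShell(cmd))`.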

Vercel AI SDK middleware

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { withControlZero } from '@controlzero/sdk/vercel-ai';

const model = withControlZero(anthropic('claude-sonnet-4-6'), {
  apiKey: process.env.CONTROLZERO_API_KEY,
});

const { text } = await generateText({
  model,
  prompt: 'What is in my .env file?',
});

Policy-denied tool calls are returned to the model as a tool error, so your agent loop handles them gracefully.

Local mode (no account)

const cz = new Client({ mode: 'local', policyPath: './policy.yaml' });

Same YAML format as the Python SDK.

Verifying it's working

  1. Trigger a request that should be denied. Expect decision.allowed === false and a reason.
  2. Trigger one that should pass. Expect decision.allowed === true and an allow audit event.
  3. In Hosted mode, open the dashboard -> Audit and watch events stream in.
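The checklist above can be scripted as a small smoke test. A sketch assuming only a guard-like function that returns `{ allowed, reason? }`; the function name and command strings are illustrative:

```typescript
type Decision = { allowed: boolean; reason?: string };
type GuardFn = (call: { tool: string; arguments: unknown }) => Promise<Decision>;

// Expect one deny and one allow; throw if the policy behaves unexpectedly.
async function smokeTest(guard: GuardFn): Promise<void> {
  const denied = await guard({ tool: 'shell', arguments: { command: 'rm -rf /' } });
  if (denied.allowed) throw new Error('expected a deny decision');

  const allowed = await guard({ tool: 'shell', arguments: { command: 'ls' } });
  if (!allowed.allowed) throw new Error('expected an allow decision');
}
```

In a real app you would pass `(call) => cz.guard(call)` as the guard function and run this in CI or at startup.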

Common follow-ups

Reference