# Google Gemini Integration

Add governance to your Google Gemini calls with the Control Zero SDK. The wrapper intercepts requests and enforces your dashboard policies on every interaction.
## Overview
Google Gemini provides multimodal models via the Generative AI API. Control Zero supports both direct SDK wrapping and gateway proxy routing for Gemini, giving you policy enforcement, DLP scanning, and full audit logging.
## SDK Setup (Python)

```bash
pip install controlzero google-genai
```
```python
import os

from controlzero import Client
from controlzero.integrations.google import wrap_google
from google import genai

cz = Client(api_key="cz_live_your_api_key_here")
google_client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Wrap the client's `.models` surface (not a GenerativeModel instance)
models = wrap_google(google_client.models, cz, agent_id="my-agent")

# Use `models` exactly as before; all calls are now governed
response = models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain quantum computing in simple terms.",
)
print(response.text)
```
A few lines added. Zero changes to your application logic.
## SDK Setup (Node.js)

```bash
npm install @controlzero/sdk @google/genai
```
```typescript
import { Client } from '@controlzero/sdk';
import { wrapGoogle } from '@controlzero/sdk/integrations/google';
import { GoogleGenAI } from '@google/genai';

const cz = new Client({ policyFile: './controlzero.yaml' });
const googleClient = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Wrap the client's `.models` surface, not a single model instance
const models = wrapGoogle(googleClient.models, cz, { agentId: 'my-agent' });

const result = await models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Explain quantum computing.',
});
console.log(result.text);
```
## Streaming

Both `generate_content` and `generate_content_stream` are governed:
```python
# Non-streaming (governed)
response = models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize this report.",
)

# Streaming (also governed)
for chunk in models.generate_content_stream(
    model="gemini-2.0-flash",
    contents="Summarize this report.",
):
    print(chunk.text, end="")
```
## Gateway Proxy
Route Gemini traffic through the Governance Gateway for centralized policy enforcement and audit logging:
```bash
# Environment variables for the gateway
CZ_GATEWAY_GOOGLE_ENABLED=true
CZ_GATEWAY_GOOGLE_API_KEY=your_google_api_key
```
The gateway exposes Gemini endpoints at:
| Gateway Endpoint | Upstream Route |
|---|---|
| `POST /google/v1beta/models/{model}:generateContent` | `generativelanguage.googleapis.com` |
| `POST /google/v1beta/models/{model}:streamGenerateContent` | `generativelanguage.googleapis.com` (streaming) |
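To make the routing concrete, here is a small sketch of how a client builds the gateway route for a given model. The gateway host (`http://localhost:8080`) is a placeholder assumption; the request body follows the standard Gemini REST `contents[].parts[].text` format, which is also what the gateway's DLP scanner inspects.

```python
# Sketch: build a governed gateway URL for a Gemini call.
# GATEWAY_HOST is a placeholder; substitute your deployment's address.
GATEWAY_HOST = "http://localhost:8080"

def gateway_generate_url(model: str, stream: bool = False) -> str:
    """Return the gateway route that proxies to generativelanguage.googleapis.com."""
    method = "streamGenerateContent" if stream else "generateContent"
    return f"{GATEWAY_HOST}/google/v1beta/models/{model}:{method}"

# Standard Gemini REST request body shape
body = {"contents": [{"parts": [{"text": "Explain quantum computing."}]}]}

print(gateway_generate_url("gemini-2.0-flash"))
# http://localhost:8080/google/v1beta/models/gemini-2.0-flash:generateContent
```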
## What Gets Enforced Automatically

| API Method | Policy Action | Policy Resource |
|---|---|---|
| `generate_content(model=...)` | `llm:generate` | `model/gemini-pro` |
| `generate_content_stream(model=...)` | `llm:generate` | `model/gemini-pro` |
The wrapper extracts the model name, checks it against your policies, and blocks the call if denied.
## DLP Scanning

DLP scanning works on the Google Gemini request format. The gateway inspects `contents[].parts[].text` fields for sensitive data patterns before forwarding the request upstream.
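As an illustration of what that inspection looks like (a minimal sketch, not the gateway's actual implementation; real pattern sets are configured in the dashboard), here is a walk over `contents[].parts[].text` flagging a US SSN-shaped string:

```python
import re

# Hypothetical pattern; real deployments configure patterns centrally.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_request(request: dict) -> list[str]:
    """Return the text parts that match a sensitive-data pattern."""
    findings = []
    for content in request.get("contents", []):
        for part in content.get("parts", []):
            text = part.get("text", "")
            if SSN_PATTERN.search(text):
                findings.append(text)
    return findings

request = {"contents": [{"parts": [{"text": "My SSN is 123-45-6789."}]}]}
print(scan_request(request))  # ['My SSN is 123-45-6789.']
```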
## Tool Call Governance

When Gemini models use function calling, Control Zero enforces policies on `function_declarations`. Each tool invocation is checked against your tool policies before execution:
```json
{
  "name": "gemini-tool-restrictions",
  "rules": [
    {
      "effect": "allow",
      "action": "tool:call",
      "resource": "tool/search_products"
    },
    {
      "effect": "deny",
      "action": "tool:call",
      "resource": "tool/delete_records"
    }
  ]
}
```
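To illustrate how such a rule set resolves (a sketch only, assuming top-down first-match evaluation with a default deny; consult the policy reference for the authoritative semantics):

```python
# Rules mirror the JSON policy above.
RULES = [
    {"effect": "allow", "action": "tool:call", "resource": "tool/search_products"},
    {"effect": "deny", "action": "tool:call", "resource": "tool/delete_records"},
]

def evaluate(action: str, resource: str, default: str = "deny") -> str:
    """First matching rule wins; unmatched requests fall through to the default."""
    for rule in RULES:
        if rule["action"] == action and rule["resource"] == resource:
            return rule["effect"]
    return default

print(evaluate("tool:call", "tool/search_products"))  # allow
print(evaluate("tool:call", "tool/delete_records"))   # deny
print(evaluate("tool:call", "tool/unknown_tool"))     # deny (default)
```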
## Google ADK (Agent Development Kit)

For agents built with the Google Agent Development Kit, use `GovernedAgent` and `GovernedTool` from the dedicated `google_adk` integration module:
```python
from controlzero.integrations.google_adk import GovernedAgent, GovernedTool

# `search_web` and `summarize` are your own tool functions.
agent = GovernedAgent(
    name="research-agent",
    model="gemini-2.0-flash",
    tools=[GovernedTool(search_web), GovernedTool(summarize)],
    cz=cz,
    agent_id="research-agent",
)
```
Each tool call and model invocation within the ADK agent is governed by your dashboard policies.
## Example Policy

The Google integration tags each call with `provider: "google"` and the target model. You can match on either the provider condition (any Gemini model) or a specific resource (one model).
```yaml
version: '1'
rules:
  # Allow Gemini 2.0 Flash for any call tagged provider=google
  - effect: allow
    action: 'llm:generate'
    conditions:
      provider: 'google'
      model: 'gemini-2.0-flash*'

  # Deny expensive Pro models
  - effect: deny
    action: 'llm:generate'
    conditions:
      provider: 'google'
      model: 'gemini-*-pro*'
    reason: 'Gemini Pro tier not approved'

  # Default deny for any other Google call
  - effect: deny
    action: 'llm:generate'
    conditions:
      provider: 'google'
```
What happens at runtime:
```python
# ALLOWED: gemini-2.0-flash matches the first rule
models.generate_content(model="gemini-2.0-flash", contents="Hello")

# BLOCKED: gemini-1.5-pro matches the second (deny) rule
models.generate_content(model="gemini-1.5-pro", contents="Hello")
# Raises PolicyDeniedError
```
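The model patterns in the policy behave like shell-style globs. A quick way to check which rule a model name will hit (a sketch; the SDK's matcher may differ in edge cases) is Python's `fnmatch`:

```python
from fnmatch import fnmatch

print(fnmatch("gemini-2.0-flash", "gemini-2.0-flash*"))  # True  -> allow rule matches
print(fnmatch("gemini-1.5-pro", "gemini-*-pro*"))        # True  -> deny rule matches
print(fnmatch("gemini-1.5-pro", "gemini-2.0-flash*"))    # False -> skips the allow rule
```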
## Handling Denied Actions

```python
import controlzero

try:
    response = models.generate_content(
        model="gemini-1.5-pro",
        contents="Analyze this data.",
    )
except controlzero.PolicyDeniedError:
    # Fall back to an approved model
    response = models.generate_content(
        model="gemini-2.0-flash",
        contents="Analyze this data.",
    )
```
## Audit Logging

All Gemini requests (via SDK wrapper or gateway) are logged in the audit trail with the provider identified as `google`. View them in the dashboard under Audit Log with the provider filter.
## Related
- OpenAI Integration: Same wrapper pattern for OpenAI API calls.
- Anthropic Integration: Same wrapper pattern for Claude API calls.
- Gateway Guide: Full gateway configuration reference.
- API Reference: Full API documentation.