# Mistral AI Integration
Add governance to Mistral AI API calls. Control Zero intercepts requests through the Governance Gateway and enforces your dashboard policies on every interaction.
## Overview
Mistral AI provides high-performance language models via an OpenAI-compatible API. When routed through the Control Zero Governance Gateway, every request is governed by your policies and logged for audit.
Mistral uses the OpenAI-compatible wire format, so the integration reuses the same interceptors and policy enforcement as the OpenAI integration.
## Setup

### 1. Configure the Gateway

Enable the Mistral provider in your Governance Gateway configuration:

```bash
# Environment variables for the gateway
CZ_GATEWAY_MISTRAL_ENABLED=true
CZ_GATEWAY_MISTRAL_API_KEY=your_mistral_api_key
```
### 2. Route Requests Through the Gateway

Point your client at the gateway's Mistral endpoint:

```python
import openai

# Route through the Control Zero gateway instead of calling Mistral directly
client = openai.OpenAI(
    base_url="http://localhost:8000/mistral/v1",
    api_key="cz_live_your_key_here",
)

response = client.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Explain the difference between L1 and L2 regularization."}],
)
print(response.choices[0].message.content)
```
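When a request violates a policy (for example, the token cap defined in step 3 below), the gateway rejects it before it ever reaches Mistral. Assuming the gateway signals a denial with an HTTP 403 (an assumption — check your gateway's configured behavior), the OpenAI client surfaces it as `openai.PermissionDeniedError`, which you can handle explicitly:

```python
import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/mistral/v1",
    api_key="cz_live_your_key_here",
)

try:
    response = client.chat.completions.create(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Summarize this document."}],
        max_tokens=16384,  # would exceed an 8192-token policy cap
    )
    print(response.choices[0].message.content)
except openai.PermissionDeniedError as err:
    # The gateway denied the request; it never reached Mistral
    print(f"Denied by policy: {err}")
```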
### 3. Define Policies

Create policies in the dashboard to govern Mistral usage:

```json
{
  "name": "mistral-usage-policy",
  "rules": [
    {
      "effect": "allow",
      "action": "llm:chat.completions",
      "resource": "mistral/*",
      "conditions": { "model": ["mistral-large-latest", "mistral-medium-latest"] }
    },
    {
      "effect": "deny",
      "action": "llm:chat.completions",
      "resource": "mistral/*",
      "conditions": { "max_tokens_gt": 8192 }
    }
  ]
}
```
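One way to read these rules: a request is allowed if it matches an allow rule and no deny rule, with deny taking precedence. The sketch below models that semantics in plain Python purely for illustration — the evaluation order is an assumption, not the gateway's actual implementation, and `matches` and `evaluate` are hypothetical helpers:

```python
from fnmatch import fnmatch

POLICY = {
    "name": "mistral-usage-policy",
    "rules": [
        {"effect": "allow", "action": "llm:chat.completions", "resource": "mistral/*",
         "conditions": {"model": ["mistral-large-latest", "mistral-medium-latest"]}},
        {"effect": "deny", "action": "llm:chat.completions", "resource": "mistral/*",
         "conditions": {"max_tokens_gt": 8192}},
    ],
}

def matches(rule, action, resource, request):
    """Does this rule apply to the given action/resource/request? (illustrative)"""
    if rule["action"] != action or not fnmatch(resource, rule["resource"]):
        return False
    cond = rule.get("conditions", {})
    if "model" in cond and request.get("model") not in cond["model"]:
        return False
    if "max_tokens_gt" in cond and not request.get("max_tokens", 0) > cond["max_tokens_gt"]:
        return False
    return True

def evaluate(policy, action, resource, request):
    """Deny rules take precedence over allow rules (assumed semantics)."""
    allowed = False
    for rule in policy["rules"]:
        if matches(rule, action, resource, request):
            if rule["effect"] == "deny":
                return "deny"
            allowed = True
    return "allow" if allowed else "deny"

print(evaluate(POLICY, "llm:chat.completions", "mistral/chat",
               {"model": "mistral-large-latest", "max_tokens": 1024}))
```

Here `mistral/*` is treated as a glob over the resource path, matching the wildcard style shown in the policy above.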
## What Gets Governed

| API Endpoint | Policy Action | Description |
|---|---|---|
| `POST /mistral/v1/chat/completions` | `llm:chat.completions` | Chat completions (streaming and non-streaming) |
| `POST /mistral/v1/completions` | `llm:completions` | Text completions |
| `GET /mistral/v1/models` | `llm:models.list` | Model listing |
## SDK Wrapping

Since Mistral uses the OpenAI wire format, use `wrap_openai()` in the Python SDK or the standard OpenAI client wrapper in Node.js:

```python
from controlzero import Client
from controlzero.integrations.openai import wrap_openai
import openai

cz = Client(api_key="cz_live_your_key_here")

client = wrap_openai(
    openai.OpenAI(
        base_url="http://localhost:8000/mistral/v1",
        api_key="cz_live_your_key_here",
    ),
    cz,
)

response = client.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello"}],
)
```
## Streaming Support
The gateway fully supports streaming responses from Mistral. Server-sent events are forwarded transparently while policy enforcement and audit logging occur on the initial request.
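For example, the client from step 2 can request a streamed completion by passing `stream=True`; the gateway forwards each chunk as it arrives:

```python
import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/mistral/v1",
    api_key="cz_live_your_key_here",
)

# stream=True yields chunks from the server-sent events the gateway forwards
stream = client.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Write a haiku about governance."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="", flush=True)
print()
```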
## Tool Call Governance
Mistral models that support tool use (function calling) are fully governed. The gateway intercepts tool call responses and evaluates each tool invocation against your policies before forwarding to the client.
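A tool-enabled request uses the standard OpenAI function-calling schema. The payload below is illustrative — the `get_weather` tool is hypothetical — and is meant to be passed to the gateway client shown in Setup:

```python
# Illustrative chat.completions payload with a tool definition in the
# OpenAI function-calling schema. The gateway evaluates each tool
# invocation the model emits before the response reaches your client.
payload = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# With the gateway client from Setup:
#   response = client.chat.completions.create(**payload)
# response.choices[0].message.tool_calls then holds the governed invocations.
```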
## Audit Logging

All Mistral requests routed through the gateway are logged in the audit trail with the provider identified as `mistral`. View them in the dashboard under **Audit Log** using the provider filter.
## Related
- OpenAI Integration: Same wire format, same governance patterns.
- DeepSeek Integration: Another OpenAI-compatible provider.
- Gateway Guide: Full gateway configuration reference.
- API Reference: Full API documentation.