Customer Support Application with Control Zero

This guide shows how to build a customer support chatbot with governance policies that control what the bot can do, which tools it can use, and what data it can access.

What You Will Build

A customer support agent that:

  • Answers customer questions using an LLM
  • Can look up order status and account information
  • Has policy guardrails preventing unauthorized actions
  • Cannot modify orders, issue refunds, or access sensitive data without policy approval
  • Logs every action for compliance

Architecture

Customer Message
|
v
[Control Zero: enforce session policy]
|
v
LLM Processing (understand intent, select tool)
|
v
[Control Zero: enforce tool call policy]
|
v
Tool Execution (order lookup, account info, etc.)
|
v
[Control Zero: enforce response policy]
|
v
Response to Customer

Implementation

Setup

pip install controlzero openai

import controlzero
import openai

cz = controlzero.ControlZero()
cz.initialize()

openai_client = openai.OpenAI()

Define Support Tools

# Simulated support tools
def lookup_order(order_id: str) -> dict:
    """Look up order status."""
    return {"order_id": order_id, "status": "shipped", "eta": "2026-03-05"}

def lookup_account(customer_id: str) -> dict:
    """Look up customer account info."""
    return {"customer_id": customer_id, "name": "Jane Doe", "tier": "premium"}

def issue_refund(order_id: str, amount: float) -> dict:
    """Issue a refund for an order."""
    return {"order_id": order_id, "refund_amount": amount, "status": "processed"}

def escalate_to_human(reason: str) -> dict:
    """Escalate the conversation to a human agent."""
    return {"escalated": True, "reason": reason}

TOOLS = {
    "lookup_order": lookup_order,
    "lookup_account": lookup_account,
    "issue_refund": issue_refund,
    "escalate_to_human": escalate_to_human,
}
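The dispatch table is keyed by the tool name the model emits, and `execute_tool` below calls through it with keyword unpacking. The pattern on its own, as a self-contained sketch (the order ID is illustrative):

```python
def lookup_order(order_id: str) -> dict:
    # Same simulated tool as above.
    return {"order_id": order_id, "status": "shipped", "eta": "2026-03-05"}

TOOLS = {"lookup_order": lookup_order}

# Arguments arrive as a dict (parsed from the model's JSON tool call)
# and are unpacked as keyword arguments, so the keys must match the
# tool function's parameter names exactly.
arguments = {"order_id": "A-1001"}
result = TOOLS["lookup_order"](**arguments)
print(result["status"])  # shipped
```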

Enforce Tool Calls

def execute_tool(tool_name: str, arguments: dict, agent_id: str) -> dict:
    """Execute a support tool with policy enforcement."""

    # Enforce: is this agent allowed to call this tool?
    cz.enforce(
        action="tool.call",
        resource=f"tool/{tool_name}",
        context={
            "agent_id": agent_id,
            "arguments": str(arguments),
        },
    )

    # Tool call allowed -- execute it
    tool_fn = TOOLS.get(tool_name)
    if not tool_fn:
        raise ValueError(f"Unknown tool: {tool_name}")

    return tool_fn(**arguments)
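The control flow relies on one assumed behavior: `cz.enforce` raises `controlzero.PolicyViolationError` on a deny, which the agent loop below catches. A stand-alone stub with the same shape (`enforce`, `DENIED`, and the return values here are illustrative, not Control Zero's API) shows how a denied tool call surfaces:

```python
class PolicyViolationError(Exception):
    """Stand-in for controlzero.PolicyViolationError."""

# Resources the (stubbed) policy denies.
DENIED = {"tool/issue_refund"}

def enforce(action: str, resource: str, context: dict) -> None:
    # Stub of cz.enforce: return silently on allow, raise on deny.
    if resource in DENIED:
        raise PolicyViolationError(f"denied: {action} on {resource}")

def execute_tool(tool_name: str, arguments: dict) -> dict:
    enforce("tool.call", f"tool/{tool_name}", {"arguments": str(arguments)})
    return {"ok": True}

print(execute_tool("lookup_order", {"order_id": "A-1"}))  # {'ok': True}
try:
    execute_tool("issue_refund", {"order_id": "A-1", "amount": 20.0})
except PolicyViolationError as e:
    print("blocked:", e)
```

The caller never has to inspect a return code: an allowed call falls through, a denied call jumps straight to the `except` branch.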

Support Agent Loop

import json

def handle_customer_message(
    message: str,
    conversation_history: list[dict],
    agent_id: str = "support-bot",
) -> str:
    """Handle a customer message with full policy enforcement."""

    # Enforce: is this agent allowed to generate responses?
    cz.enforce(
        action="llm.generate",
        resource="model/gpt-4",
        context={"agent_id": agent_id},
    )

    # Build messages with conversation history
    messages = [
        {
            "role": "system",
            "content": (
                "You are a customer support assistant. "
                "You can look up orders, check account info, and escalate to humans. "
                "Be helpful and professional. Never share sensitive financial data."
            ),
        },
        *conversation_history,
        {"role": "user", "content": message},
    ]

    # Define available tools for function calling
    tools = [
        {
            "type": "function",
            "function": {
                "name": "lookup_order",
                "description": "Look up order status by order ID",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "order_id": {"type": "string"},
                    },
                    "required": ["order_id"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "lookup_account",
                "description": "Look up customer account information",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "customer_id": {"type": "string"},
                    },
                    "required": ["customer_id"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "issue_refund",
                "description": "Issue a refund for an order",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "order_id": {"type": "string"},
                        "amount": {"type": "number"},
                    },
                    "required": ["order_id", "amount"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "escalate_to_human",
                "description": "Escalate the conversation to a human agent",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "reason": {"type": "string"},
                    },
                    "required": ["reason"],
                },
            },
        },
    ]

    # Call the LLM
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools,
    )

    choice = response.choices[0]

    # Handle tool calls with policy enforcement
    if choice.message.tool_calls:
        for tool_call in choice.message.tool_calls:
            tool_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)

            try:
                result = execute_tool(tool_name, arguments, agent_id)
                return f"Tool result ({tool_name}): {result}"
            except controlzero.PolicyViolationError:
                return (
                    "I am not authorized to perform that action. "
                    "Let me connect you with a human agent who can help."
                )

    return choice.message.content
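The caller is responsible for threading `conversation_history` between turns. A minimal driver loop, with a hypothetical echo function standing in for the policy-enforced handler above so the sketch runs on its own:

```python
def handle_customer_message(message: str, history: list[dict]) -> str:
    # Stand-in for the real handler; only the history plumbing matters here.
    return f"Echo: {message}"

conversation_history: list[dict] = []
for user_msg in ["Where is order A-1001?", "Thanks!"]:
    reply = handle_customer_message(user_msg, conversation_history)
    # Append both sides of the turn so the next call sees full context.
    conversation_history.append({"role": "user", "content": user_msg})
    conversation_history.append({"role": "assistant", "content": reply})

print(len(conversation_history))  # 4
```

Appending the assistant turn as well as the user turn is what lets the model resolve follow-ups like "and when will *it* arrive?".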

Example Policy

Allow read-only operations but block refunds (require human approval):

{
  "name": "support-bot-policy",
  "rules": [
    {
      "effect": "allow",
      "action": "llm.generate",
      "resource": "model/gpt-4"
    },
    {
      "effect": "allow",
      "action": "tool.call",
      "resource": "tool/lookup_order"
    },
    {
      "effect": "allow",
      "action": "tool.call",
      "resource": "tool/lookup_account"
    },
    {
      "effect": "deny",
      "action": "tool.call",
      "resource": "tool/issue_refund"
    },
    {
      "effect": "allow",
      "action": "tool.call",
      "resource": "tool/escalate_to_human"
    }
  ]
}

With this policy, the support bot can look up orders and accounts, and escalate to humans, but cannot issue refunds. When a customer asks for a refund, the bot is automatically blocked and should escalate.
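To make the semantics concrete, here is a minimal evaluator for rules of this shape. It is not the Control Zero engine; first-match-wins and default-deny are assumptions chosen for the illustration:

```python
import json

POLICY = json.loads("""
{
  "name": "support-bot-policy",
  "rules": [
    {"effect": "allow", "action": "tool.call", "resource": "tool/lookup_order"},
    {"effect": "deny",  "action": "tool.call", "resource": "tool/issue_refund"}
  ]
}
""")

def is_allowed(action: str, resource: str) -> bool:
    # First matching rule wins; anything unmatched is denied by default.
    for rule in POLICY["rules"]:
        if rule["action"] == action and rule["resource"] == resource:
            return rule["effect"] == "allow"
    return False

print(is_allowed("tool.call", "tool/lookup_order"))   # True
print(is_allowed("tool.call", "tool/issue_refund"))   # False
```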

Why This Matters

Customer support bots handle sensitive operations and interact directly with customers. Without governance:

  • A bot could issue unauthorized refunds, costing the business money
  • A bot could access account data it should not have
  • A bot could use an expensive model when a cheaper one would suffice
  • There is no audit trail of what the bot did and why

Control Zero provides the enforcement layer that ensures the bot operates within its approved boundaries.
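The audit trail can be as simple as one structured record per enforcement decision. A stdlib-only sketch; the field names are illustrative, not Control Zero's actual log schema:

```python
import json
import time

audit_log: list[str] = []

def record_decision(action: str, resource: str, allowed: bool, agent_id: str) -> None:
    # One JSON line per decision: easy to ship to a log pipeline and query later.
    audit_log.append(json.dumps({
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    }))

record_decision("tool.call", "tool/issue_refund", False, "support-bot")
entry = json.loads(audit_log[0])
print(entry["allowed"])  # False
```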

Next Steps