# LangChain Integration
Add governance to your LangChain chains and agents with a single callback handler. Drop it into any LangChain component and every LLM call and tool invocation is automatically checked against your dashboard policies.
## Setup
```shell
pip install controlzero langchain langchain-openai
```
```python
import controlzero
from controlzero.integrations.langchain import ControlZeroCallbackHandler
from langchain_openai import ChatOpenAI

cz = controlzero.init()
handler = ControlZeroCallbackHandler(cz)

# Add the handler to any LangChain LLM -- all calls are now governed
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
```
One extra line: create the handler and pass it to your LLM or chain. Every LLM call and tool invocation is now checked against your dashboard policies automatically.
## What Gets Enforced Automatically
The callback handler intercepts two types of events:
| LangChain Event | Policy Action | Policy Resource |
|---|---|---|
| LLM call (`on_llm_start`) | `llm.generate` | `model/{model_name}` |
| Tool call (`on_tool_start`) | `tool.call` | `tool/{tool_name}` |
When LangChain triggers an LLM call, the handler extracts the model name and checks it against your policies. When a tool is invoked, it extracts the tool name and checks that too. If either action is denied, the handler raises `PolicyDeniedError` and the chain stops.
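Conceptually, the mapping above can be sketched in a few lines of plain Python. This is an illustrative mental model only; `policy_key`, `enforce`, and this `PolicyDeniedError` stand in for the real handler's internals, which live inside `controlzero.integrations.langchain`.

```python
# Illustrative sketch: how a callback event becomes a policy check.
# Names here are hypothetical, not the Control Zero SDK's internals.

class PolicyDeniedError(Exception):
    pass

def policy_key(event: str, name: str) -> tuple:
    """Map a LangChain callback event to a (action, resource) pair."""
    if event == "on_llm_start":
        return ("llm.generate", f"model/{name}")
    if event == "on_tool_start":
        return ("tool.call", f"tool/{name}")
    raise ValueError(f"unhandled event: {event}")

def enforce(event: str, name: str, allowed: set) -> None:
    """Raise before the underlying call runs unless the pair is allowed."""
    action, resource = policy_key(event, name)
    if (action, resource) not in allowed:
        raise PolicyDeniedError(f"denied: {action} on {resource}")

allowed = {("llm.generate", "model/gpt-4"), ("tool.call", "tool/search_web")}
enforce("on_llm_start", "gpt-4", allowed)        # passes silently
enforce("on_tool_start", "search_web", allowed)  # passes silently
```

The key point the sketch captures: the check happens when the event fires, before the model or tool executes.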
## Example: Research Agent with Policy Enforcement
```python
import controlzero
from controlzero.integrations.langchain import ControlZeroCallbackHandler
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

# --- Setup Control Zero ---
cz = controlzero.init()
handler = ControlZeroCallbackHandler(cz, agent_id="research-agent")

# --- Define tools ---
@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def read_database(query: str) -> str:
    """Query the internal database."""
    return f"DB results for: {query}"

# --- Create agent with governance ---
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a research assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, [search_web, read_database], prompt)
executor = AgentExecutor(
    agent=agent,
    tools=[search_web, read_database],
    callbacks=[handler],
)

# --- Run it ---
try:
    result = executor.invoke({"input": "Find recent sales data"})
    print(result["output"])
except controlzero.PolicyDeniedError as e:
    print(f"Blocked by policy: {e.message}")
```
## Example Policy for the Research Agent
Define this in the Control Zero dashboard:
```json
{
  "name": "research-agent-policy",
  "description": "Allow web search but block database access",
  "rules": [
    { "effect": "allow", "action": "llm.generate", "resource": "model/gpt-4" },
    { "effect": "allow", "action": "tool.call", "resource": "tool/search_web" },
    { "effect": "deny", "action": "tool.call", "resource": "tool/read_database" }
  ]
}
```
What happens at runtime:
- When the agent calls GPT-4 for reasoning: ALLOWED (matches rule 1).
- When the agent tries to use `search_web`: ALLOWED (matches rule 2).
- When the agent tries to use `read_database`: BLOCKED (matches rule 3). The chain stops and raises `PolicyDeniedError`.
The agent never sees the database. The policy is enforced before the tool runs.
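As a rough mental model of the walkthrough above, the example policy can be replayed with a toy evaluator. This assumes rules are matched top to bottom with a default deny for unmatched actions; it is a sketch, not Control Zero's actual evaluation engine (see the Policies guide for the real semantics).

```python
# Toy rule evaluator for the example policy above -- an assumption-laden
# sketch, not the Control Zero implementation.

RULES = [
    {"effect": "allow", "action": "llm.generate", "resource": "model/gpt-4"},
    {"effect": "allow", "action": "tool.call", "resource": "tool/search_web"},
    {"effect": "deny", "action": "tool.call", "resource": "tool/read_database"},
]

def evaluate(action: str, resource: str, rules=RULES) -> str:
    """Return the effect of the first rule matching (action, resource)."""
    for rule in rules:
        if rule["action"] == action and rule["resource"] == resource:
            return rule["effect"]
    return "deny"  # assumed default: anything unmatched is denied

print(evaluate("llm.generate", "model/gpt-4"))      # allow
print(evaluate("tool.call", "tool/search_web"))     # allow
print(evaluate("tool.call", "tool/read_database"))  # deny
```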
## Using with Chains
The handler works with any LangChain component that accepts callbacks:
```python
from langchain_core.output_parsers import StrOutputParser

chain = llm | StrOutputParser()

# Handler is already attached to the llm, so all calls in this chain are governed
result = chain.invoke("Summarize this report")
```
## Next Steps
- CrewAI Integration -- Govern multi-agent orchestration.
- RAG Guide -- Build a policy-enforced RAG pipeline with LangChain.
- Policies -- Learn how to construct dashboard policies.