Every tool call your autonomous Claude agent makes, policy-evaluated, human-approved when needed, and cryptographically signed in a tamper-proof audit trail. Three lines of code. Zero infrastructure.
```typescript
import { AgentLattice } from "@agentlattice/sdk";

const al = new AgentLattice({
  apiKey: process.env.AL_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
});

const result = await al.govern("agent_011Ca...", "Analyze this earnings report");
```

That's it. That's the entire integration.
Anthropic Managed Agents ship with always_ask permission policies. Set it on a tool, and the agent pauses before executing. Somebody has to approve or deny.
In the Anthropic console, that somebody is you, clicking buttons. For one agent, fine. For fifty agents running 24/7, clicking approve 200 times a day? Not fine.
You need a policy engine. You need org-wide rules that apply across every agent. You need a human-in-the-loop workflow that doesn't require a human to be in the loop for every low-risk action. You need an audit trail your CISO can actually read.
That's what govern() does.
Your Managed Agent wants to run bash, fetch a URL, or write a file. Anthropic's runtime pauses and emits a requires_action event.
govern() maps the tool to an action type (bash --> code.execute) and calls gate() against your workspace policies.
Low-risk tools auto-approve in milliseconds. High-risk tools escalate to your team via the AgentLattice dashboard. The agent waits until a human decides.
Every decision lands in a tamper-proof audit trail. ECDSA-signed. Chain-verified. Exportable. Your compliance team will buy you lunch.
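The flow above can be sketched in a few lines. Everything here is illustrative, not the actual AgentLattice SDK surface: `TOOL_TO_ACTION`, `gate`, `onRequiresAction`, and the `Decision` shape are stand-in names for this sketch.

```typescript
// Illustrative sketch of the govern() flow. All names here are assumed
// for the example; they are not the real SDK API.
type ActionType = "code.execute" | "file.write" | "file.read" | "web.read" | "web.query";

const TOOL_TO_ACTION: Record<string, ActionType> = {
  bash: "code.execute",
  write: "file.write",
  edit: "file.write",
  web_fetch: "web.read",
  web_search: "web.query",
  read: "file.read",
  glob: "file.read",
  grep: "file.read",
};

type Decision = { allowed: boolean; reviewer: "policy" | "human" };

// Stand-in for the workspace policy engine: low-risk reads auto-approve
// in-process; everything else escalates to a human reviewer.
async function gate(action: ActionType): Promise<Decision> {
  if (action === "file.read" || action === "web.read" || action === "web.query") {
    return { allowed: true, reviewer: "policy" };
  }
  return { allowed: false, reviewer: "human" }; // simulating a human denial
}

// Called when Anthropic's runtime emits a requires_action event.
async function onRequiresAction(toolName: string): Promise<Decision> {
  // Unknown tools are treated as high-risk rather than silently allowed.
  const action = TOOL_TO_ACTION[toolName] ?? "code.execute";
  return gate(action);
}
```

The key design choice is that unknown tool names fall through to the highest-risk action type, so a new tool never slips past policy by default.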
All 8 Anthropic built-in tools map to AgentLattice action types automatically. MCP server tools are covered too. Zero configuration.
```
bash        --> code.execute
write       --> file.write
edit        --> file.write
web_fetch   --> web.read
web_search  --> web.query
read        --> file.read
glob        --> file.read
grep        --> file.read
```

This is the output from a real governed session. One agent, five action types, policies enforced, bash denied by a human reviewer.
When a policy requires human approval, the agent pauses on that tool call. But here's the thing other governance tools get wrong: while the human reviews the bash command, every other tool call keeps flowing. Web searches auto-approve. File reads auto-approve. Only the dangerous action waits.
We handle this with concurrent approval via Promise.allSettled. Anthropic times out unconfirmed tool calls after 60-90 seconds. If your governance layer blocks everything while one human reviews one bash command, every other tool call dies. Ours doesn't.
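The concurrent-approval pattern can be sketched with `Promise.allSettled` and a stand-in gate function (`gateToolCall` below is hypothetical, not the SDK's API):

```typescript
// Sketch: gate every pending tool call concurrently so one slow human
// review never blocks the fast auto-approvals. gateToolCall is a
// stand-in for the real policy check.
type Gated = { tool: string; allowed: boolean };

function gateToolCall(tool: string): Promise<Gated> {
  if (tool === "bash") {
    // Simulate a human reviewer taking a moment, then denying.
    return new Promise((resolve) =>
      setTimeout(() => resolve({ tool, allowed: false }), 50),
    );
  }
  // Low-risk tools auto-approve immediately.
  return Promise.resolve({ tool, allowed: true });
}

async function gateAll(tools: string[]): Promise<Gated[]> {
  // allSettled: one slow or failed gate doesn't abort the others, so
  // auto-approvals resolve long before any timeout window closes.
  const results = await Promise.allSettled(tools.map(gateToolCall));
  return results.map((r, i) =>
    r.status === "fulfilled" ? r.value : { tool: tools[i], allowed: false },
  );
}
```

With `Promise.all`, a single rejected gate would reject the whole batch; `allSettled` lets each decision land independently, and a failed gate simply becomes a deny.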
If AgentLattice is unreachable, tool calls are denied. A security product that fails open is a liability, not a feature.
Set failOpen: true for availability-critical workloads. Your call, not ours.
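The fail-closed default reduces to a small wrapper. This is a sketch of the behavior described above, with an assumed helper name, not the SDK's internals:

```typescript
// Sketch of fail-closed gating: if the governance service is
// unreachable, deny unless the caller opted into failOpen.
// gateOrFail is a hypothetical name for this example.
async function gateOrFail(
  gate: () => Promise<boolean>,
  failOpen = false,
): Promise<boolean> {
  try {
    return await gate();
  } catch {
    // Governance layer unreachable: fail closed by default.
    return failOpen;
  }
}
```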
The agent never knows AgentLattice exists. Governance happens at the infrastructure layer, not in the system prompt. Can't be prompt-injected away.
Today it's Anthropic. Tomorrow it's OpenAI's agent runtime, Bedrock, or whatever ships next. The policy engine is vendor-agnostic. Only the SSE adapter changes.
Install the SDK. Add three lines. Every tool call is governed, audited, and signed. Your CISO gets an audit trail. Your engineers get a one-liner.
```bash
npm install @agentlattice/sdk @anthropic-ai/sdk
```

TypeScript and Python. Both SDKs. Same API. Same three lines.