NEW IN v0.10.0 · April 2026

Govern Anthropic Managed Agents
in One Line of Code

Every tool call your autonomous Claude agent makes, policy-evaluated, human-approved when needed, and cryptographically signed in a tamper-proof audit trail. Three lines of code. Zero infrastructure.

your-orchestrator.ts
import { AgentLattice } from "@agentlattice/sdk";

const al = new AgentLattice({
  apiKey: process.env.AL_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
});

const result = await al.govern("agent_011Ca...", "Analyze this earnings report");

That's it. That's the entire integration.

3 lines of code
8 tool types governed
0 infrastructure to manage
< 50ms policy evaluation

Anthropic built the agent runtime.
Nobody built the governance layer.

Anthropic Managed Agents ship with always_ask permission policies. Set it on a tool, and the agent pauses before executing. Somebody has to approve or deny.

In the Anthropic console, that somebody is you, clicking buttons. For one agent, fine. For fifty agents running 24/7, clicking approve 200 times a day? Not fine.

You need a policy engine. You need org-wide rules that apply across every agent. You need a human-in-the-loop workflow that doesn't require a human to be in the loop for every low-risk action. You need an audit trail your CISO can actually read.

That's what govern() does.

How it works

Step 1

Agent tries to use a tool

Your Managed Agent wants to run bash, fetch a URL, or write a file. Anthropic's runtime pauses and emits a requires_action event.
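That pause is the hook a governance layer consumes. As a hedged sketch (the event shape below is an assumption for illustration, not Anthropic's documented schema):

```typescript
// Illustrative shape of the pause event a governance layer would consume.
// Field names are assumptions for this sketch, not a documented schema.
interface RequiresActionEvent {
  type: "requires_action";
  agentId: string;
  toolCall: { id: string; name: string; input: unknown };
}

// Narrow a raw runtime event to the one kind we govern.
function isGovernable(e: { type: string }): e is RequiresActionEvent {
  return e.type === "requires_action";
}
```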

Step 2

AgentLattice evaluates the policy

govern() maps the tool to an action type (bash --> code.execute) and calls gate() against your workspace policies.

Step 3

Verdict returned instantly

Low-risk tools auto-approve in milliseconds. High-risk tools escalate to your team via the AgentLattice dashboard. The agent waits until a human decides.
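A minimal sketch of that verdict logic, assuming three risk tiers (the tier names and function signature are illustrative; the real policy engine is richer):

```typescript
type Risk = "LOW" | "MEDIUM" | "HIGH";
type Verdict = "ALLOW" | "DENY" | "PENDING_HUMAN";

// Low/medium risk auto-approves; high risk parks the tool call until a
// human decides in the dashboard. An unreachable policy service fails
// closed unless failOpen is set.
function evaluate(risk: Risk, reachable = true, failOpen = false): Verdict {
  if (!reachable) return failOpen ? "ALLOW" : "DENY";
  return risk === "HIGH" ? "PENDING_HUMAN" : "ALLOW";
}
```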

Step 4

Signed and sealed

Every decision lands in a tamper-proof audit trail. ECDSA-signed. Chain-verified. Exportable. Your compliance team will buy you lunch.
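The chain-verification idea can be sketched with a plain hash chain (the real trail is ECDSA-signed on top; this sketch shows only the tamper-evidence mechanism):

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  action: string;
  verdict: string;
  prevHash: string; // hash of the previous entry, linking the chain
  hash: string;     // hash over this entry's content plus prevHash
}

function appendEntry(chain: AuditEntry[], action: string, verdict: string): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(`${prevHash}|${action}|${verdict}`)
    .digest("hex");
  return [...chain, { action, verdict, prevHash, hash }];
}

// Recompute every link; editing any entry breaks every hash after it.
function verifyChain(chain: AuditEntry[]): boolean {
  return chain.every((entry, i) => {
    const prev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const expected = createHash("sha256")
      .update(`${prev}|${entry.action}|${entry.verdict}`)
      .digest("hex");
    return entry.prevHash === prev && entry.hash === expected;
  });
}
```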

Every tool. Mapped. Governed.

All 8 Anthropic built-in tools map to AgentLattice action types automatically. MCP server tools are covered too. Zero configuration.

bash --> code.execute (always_ask)
write --> file.write (always_ask)
edit --> file.write (always_ask)
web_fetch --> web.read (always_allow)
web_search --> web.query (always_allow)
read --> file.read (always_allow)
glob --> file.read (always_allow)
grep --> file.read (always_allow)

What governance actually looks like

This is the output from a real governed session. One agent, five action types, policies enforced, bash denied by a human reviewer.

$ govern("agent_011Ca...", "Run the analysis")
file.read LOW --> ALLOW
file.write MEDIUM --> ALLOW
web.read MEDIUM --> ALLOW
code.execute HIGH --> DENY (human reviewer)
web.query LOW --> ALLOW
5 actions governed. 4 auto-approved. 1 human-denied. Chain verified.

Human-in-the-loop that doesn't block the loop

When a policy requires human approval, the agent pauses on that tool call. But here's the thing other governance tools get wrong: while the human reviews the bash command, every other tool call keeps flowing. Web searches auto-approve. File reads auto-approve. Only the dangerous action waits.

We handle this with concurrent approval via Promise.allSettled. Anthropic times out unconfirmed tool calls after 60-90 seconds. If your governance layer blocks everything while one human reviews one bash command, every other tool call dies. Ours doesn't.
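A toy sketch of that concurrency shape, with timers standing in for the policy service and the human reviewer (delays and verdicts are illustrative):

```typescript
// Stand-in for gate(): bash simulates a slow human review, everything
// else resolves quickly.
async function gateOne(tool: string): Promise<"ALLOW" | "DENY"> {
  const delayMs = tool === "bash" ? 50 : 1;
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return tool === "bash" ? "DENY" : "ALLOW";
}

// All pending tool calls are evaluated concurrently: the fast ones resolve
// while the human is still reviewing bash. A rejected gate fails closed.
async function governAll(tools: string[]): Promise<Record<string, string>> {
  const settled = await Promise.allSettled(tools.map(gateOne));
  const verdicts: Record<string, string> = {};
  settled.forEach((result, i) => {
    verdicts[tools[i]] = result.status === "fulfilled" ? result.value : "DENY";
  });
  return verdicts;
}
```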

Built for production

Fail-closed by default

If AgentLattice is unreachable, tool calls are denied. A security product that fails open is a liability, not a feature.

Configurable for availability

Set failOpen: true for availability-critical workloads. Your call, not ours.

Agent-proof governance

The agent never knows AgentLattice exists. Governance happens at the infrastructure layer, not in the system prompt. Can't be prompt-injected away.

Cross-vendor ready

Today it's Anthropic. Tomorrow it's OpenAI's agent runtime, Bedrock, or whatever ships next. The policy engine is vendor-agnostic. Only the SSE adapter changes.
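One way to picture that boundary (interface names here are assumptions, not the shipped API): the vendor-specific adapter turns a runtime's event stream into a uniform queue of paused tool calls, and the policy engine never sees which vendor is underneath.

```typescript
// The vendor-specific part: expose paused tool calls uniformly and report
// decisions back. Only this layer changes per runtime.
interface PausedToolCall { id: string; tool: string; input: unknown; }

interface RuntimeAdapter {
  next(): Promise<PausedToolCall | null>;                        // pull next paused call
  resolve(id: string, verdict: "ALLOW" | "DENY"): Promise<void>; // report decision back
}

// Trivial in-memory adapter, enough to exercise the interface.
class InMemoryAdapter implements RuntimeAdapter {
  private decisions = new Map<string, string>();
  constructor(private queue: PausedToolCall[]) {}
  async next(): Promise<PausedToolCall | null> {
    return this.queue.shift() ?? null;
  }
  async resolve(id: string, verdict: "ALLOW" | "DENY"): Promise<void> {
    this.decisions.set(id, verdict);
  }
  decisionFor(id: string): string | undefined {
    return this.decisions.get(id);
  }
}
```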

Your agents are running.
Who decides what they're allowed to do?

Install the SDK. Add three lines. Every tool call is governed, audited, and signed. Your CISO gets an audit trail. Your engineers get a one-liner.

terminal
npm install @agentlattice/sdk @anthropic-ai/sdk

TypeScript and Python. Both SDKs. Same API. Same three lines.