EISBERG

Agent governance defined at creation, not after the incident.

The 2026 question is not 'can your agents query my data?' — it is 'can I let your agents run unattended in my regulated environment and survive an audit?' Our answer is six things working together, structurally.

Birth Certificates

Every autonomous agent receives a signed, immutable creation record at the moment of issuance. The certificate encodes the agent's mandated intent scope, allowed data classes, action ceiling in dollars, expiry, human owner, and the policy-bundle hash that was authoritative when the agent was created. The Job spine refuses to compile a step for an agent whose certificate fails verification. Guardrails are suggestions. This is law.

Risk-graded approval gates

Action value × risk classification × autonomy level produces an approval decision before a step ever runs. Low-risk reads pass; moderate-risk writes notify; high-value or irreversible actions block until a named human approves over an HMAC-signed webhook. No 'soft constraint' that an agent can argue around — the gate is enforced at compile time, not asked at run time.
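The gate described above can be sketched in a few lines. This is an illustrative sketch only: the function name, risk enum, and thresholds are assumptions for this page, not the platform's actual API.

```python
# Illustrative compile-time approval gate: action value, risk class,
# and autonomy level combine into one decision before a step is ever
# scheduled. Names and thresholds are assumptions, not the real API.
from enum import Enum

class Risk(Enum):
    LOW = 0
    MODERATE = 1
    HIGH = 2

def approve(value_usd: float, risk: Risk, unattended: bool, ceiling_usd: float) -> str:
    """Return ALLOW, NOTIFY, or BLOCK for a single proposed action."""
    if risk is Risk.HIGH or value_usd > ceiling_usd:
        return "BLOCK"    # waits for a named human over a signed webhook
    if risk is Risk.MODERATE and unattended:
        return "NOTIFY"   # proceeds, but the human owner is informed
    return "ALLOW"        # low-risk reads pass

print(approve(12_000, Risk.MODERATE, True, 5_000))  # a $12k action vs a $5k ceiling → BLOCK
```

Because the decision is a pure function of the certificate's bounds, it can run at compile time — there is nothing for the agent to negotiate with at run time.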

Durable execution

An agent that survives an infrastructure failure midway through a 50-step approval chain is the difference between trustable autonomy and a 24/7 ops war room. Every Job is durable, every step is replayable, every state transition is journalled. If the platform crashes, the agent resumes from the last committed step — not from the top, not abandoned.
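The resume-from-journal behaviour can be sketched minimally. The journal shape and function names below are illustrative assumptions; the real Job spine is not shown here.

```python
# Minimal sketch of journalled, replayable execution: on restart,
# committed steps are skipped, uncommitted steps run exactly once.
def resume(journal: list[dict], steps: list) -> None:
    """Re-run a Job after a crash: skip committed steps, run the rest."""
    committed = {entry["step"] for entry in journal if entry["status"] == "committed"}
    for i, step in enumerate(steps):
        if i in committed:
            continue                 # already durable; never re-executed
        step()                       # do the work...
        journal.append({"step": i, "status": "committed"})  # ...then journal it

# Simulated crash after steps 0 and 1 committed:
ran = []
journal = [{"step": 0, "status": "committed"}, {"step": 1, "status": "committed"}]
resume(journal, [lambda: ran.append("a"), lambda: ran.append("b"), lambda: ran.append("c")])
print(ran)  # only the uncommitted step runs: ['c']
```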

Identity-bound, scope-bound

Agents authenticate with workspace-scoped bearer keys, not with the credentials of the human who created them. An agent cannot impersonate a user. An agent cannot escalate its scope. The keys are short-lived, rotatable, and tied to the agent's Birth Certificate — revoke the certificate, every active session is invalid within seconds.
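A sketch of what "revoke the certificate, every active session is invalid" means mechanically. The data structures here are illustrative assumptions, not the platform's key store.

```python
# Certificate-bound sessions: a bearer key dies the moment its Birth
# Certificate is revoked or either expiry passes. Illustrative only.
import time

REVOKED: set[str] = set()   # revoked certificate ids

def session_valid(cert_id: str, cert_expires_at: float, key_expires_at: float) -> bool:
    """A key is only as alive as the certificate it is bound to."""
    now = time.time()
    return (cert_id not in REVOKED
            and now < cert_expires_at
            and now < key_expires_at)

cert_id = "agent-0f3a"                       # hypothetical agent
later = time.time() + 3600
print(session_valid(cert_id, later, later))  # True: live cert, live key
REVOKED.add(cert_id)                         # revoke the certificate...
print(session_valid(cert_id, later, later))  # False: every session dies with it
```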

Revoke globally in one call

If an agent misbehaves — or you simply want it gone — one POST revokes the Birth Certificate, terminates every in-flight job, and writes one audit-log entry capturing who revoked it, when, and why. The agent's actions before the revoke remain replayable; new actions are refused at the door.
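The three effects of that one POST — refuse, terminate, attribute — can be sketched as a single function. Field names and structures below are assumptions, not the real endpoint's schema.

```python
# Sketch of the single revocation call: refuse new actions, terminate
# in-flight jobs, write one attributed audit entry. Illustrative only.
from datetime import datetime, timezone

revoked: set[str] = set()
inflight: dict[str, list[dict]] = {"agent-0f3a": [{"job": "j1", "state": "running"}]}
audit_log: list[dict] = []

def revoke(cert_id: str, who: str, why: str) -> None:
    revoked.add(cert_id)                 # new actions refused at the door
    for job in inflight.pop(cert_id, []):
        job["state"] = "terminated"      # every in-flight job stops
    audit_log.append({                   # one entry: who, when, why
        "event": "certificate_revoked",
        "cert_id": cert_id,
        "by": who,
        "why": why,
        "at": datetime.now(timezone.utc).isoformat(),
    })

revoke("agent-0f3a", "alice@example.com", "scope drift observed")
```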

Replayable from the bill alone

Every agent action is a journalled transaction with the certificate id, the policy bundle hash, the approval chain, and the resulting state change pinned together. A regulator asking 'show me what authorised this action on October 17th' can replay the entire decision path from the audit log alone — no joins against a separate system, no '...we'll have to ask engineering'.
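Because certificate id, policy hash, approvals, and state change are pinned into the same journalled entry, answering the auditor is one lookup. The entry shape and values below are illustrative assumptions.

```python
# Sketch of answering "what authorised this action?" from the audit
# log alone — no joins against a separate system. Illustrative fields.
audit_log = [{
    "action_id": "act-2026-10-17-042",
    "cert_id": "agent-0f3a",
    "policy_bundle_hash": "sha256:9c1d",
    "approval_chain": ["alice@example.com"],
    "state_change": {"order_status": ["held", "released"]},
}]

def decision_path(log: list[dict], action_id: str) -> dict:
    """Everything that authorised an action, from the log alone."""
    entry = next(e for e in log if e["action_id"] == action_id)
    return {k: entry[k] for k in
            ("cert_id", "policy_bundle_hash", "approval_chain", "state_change")}

print(decision_path(audit_log, "act-2026-10-17-042"))
```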

The Birth Certificate

What gets signed at the moment an agent is born.

A signed, immutable creation record. Eleven fields. Every one of them a question a regulator will ask. Designed so a relying party can verify the certificate offline, replay any agent decision against the policy plane that was authoritative at issue time, and prove the absence of tampering with a single HMAC check.

Field                    What it bounds
agent_id                 Stable identifier for the autonomous agent.
workspace_id             Cross-tenant boundary. RLS-enforced everywhere.
allowed_intents          ASK, MONITOR, REORDER — the agent cannot run an intent not on this list.
allowed_data_classes     PUBLIC, INTERNAL, PII, PHI, PCI, FINANCIAL — touching any class outside this set blocks compile.
allowed_schemas          Namespace scope. Empty = no schema restriction; populated = strict allow-list.
max_action_value_usd     Single-action dollar ceiling. Exceeding it requires human approval, not a bigger budget.
max_actions_per_day      Daily quota. Runaway agents stop themselves before ops has to.
human_owner_id           Someone is accountable. Not 'the team'. A named person.
policy_bundle_hash       Pins the policy-plane bundle that was authoritative at issuance — so a regulator can replay against the right rules.
expires_at               Certificates are not eternal. Default 90 days, configurable per agent.
signature                HMAC-SHA256 over the canonical body. Tamper detection is one verify call.

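The one-call tamper check over the fields above can be sketched as follows. The canonicalisation chosen here (sorted-key compact JSON) is an assumption of this sketch; the real certificate spec may define it differently.

```python
# Sketch of HMAC-SHA256 over a canonical certificate body: sign at
# issuance, verify anywhere. Canonical form is an assumption.
import hashlib, hmac, json

def canonical(body: dict) -> bytes:
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def sign(body: dict, secret: bytes) -> str:
    return hmac.new(secret, canonical(body), hashlib.sha256).hexdigest()

def verify(cert: dict, secret: bytes) -> bool:
    body = {k: v for k, v in cert.items() if k != "signature"}
    return hmac.compare_digest(sign(body, secret), cert["signature"])

secret = b"issuer-secret"                 # held by the issuing platform
body = {"agent_id": "agent-0f3a",         # hypothetical certificate body
        "max_action_value_usd": 5000,
        "expires_at": "2026-01-15T00:00:00Z"}
cert = {**body, "signature": sign(body, secret)}
print(verify(cert, secret))               # True: untampered
cert["max_action_value_usd"] = 5_000_000  # raise your own ceiling...
print(verify(cert, secret))               # False: one verify call catches it
```

Note the constant-time `hmac.compare_digest` rather than `==` — a relying party verifying offline should not leak timing information about the expected signature.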
The frame shift

Why this is structural, not a checklist.

Compare the two ways of governing an autonomous agent. The difference is whether your audit story is reactive or anticipatory.

Guardrails

The 2024 model.

  • Bind a role to the agent at runtime, from the invoking user.
  • Log actions to a separate audit pipeline.
  • Reconcile after the incident — read logs, replay context.
  • Trust the agent within the role; investigate after harm.

Governance

The Eisberg model.

  • Bind scope, ceiling, expiry, and owner at the moment of creation.
  • Refuse the action at compile time if any bound is violated.
  • Replay any decision against the certificate that authorised it.
  • Trust the boundary, not the agent.

Where this matters most

The three industries this is non-negotiable for.

Financial services

SR 11-7, BCBS 239, MNPI handling. An agent that reclassifies data or unblocks access is the first thing a model-risk committee will ask to replay. The certificate is the artifact.

Healthcare

HIPAA minimum-necessary, 42 CFR Part 2, state privacy regimes. Agents touching PHI must be scoped at the data-class boundary, not the role boundary — exactly what allowed_data_classes enforces at compile time.

EU AI Act / NIST AI RMF

High-risk AI systems must produce an audit trail that survives independent verification. policy_bundle_hash pins the policy plane at issue time; the HMAC signature proves no tampering after the fact. Both replayable offline.

Agent governance — direct answers

The questions a CISO or auditor asks first.

Reading this with your CISO?

The full Birth Certificate specification, the verifying-party protocol, and a worked example of an agent refused at compile time are available under NDA. We send them in advance of every regulated-industry evaluation so your security team has the receipts before the demo.