An agent-friendly data platform has a chat layer. An agent-native data platform has identity, governance, audit, and a sub-100ms API surface designed for autonomous use. Here are the eight questions that separate them.
1. What is the API latency target on the read path? If the answer is 'depends' or 'usually under a second,' the platform was designed for humans. Agents need sub-100ms or they cannot fit a multi-step plan into one user-facing turn.
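The arithmetic behind that threshold can be sketched directly. A minimal sketch, assuming a roughly 2-second user-facing turn budget and some fixed planning overhead; `TURN_BUDGET_MS`, `max_read_steps`, and the overhead figure are illustrative assumptions, not platform numbers.

```python
# Assumed total budget for one user-facing turn, in milliseconds.
TURN_BUDGET_MS = 2_000

def max_read_steps(read_latency_ms: float, overhead_ms: float = 200) -> int:
    """Sequential read calls that fit before the turn budget runs out."""
    return int((TURN_BUDGET_MS - overhead_ms) // read_latency_ms)

print(max_read_steps(90))   # sub-100ms read path → 20 steps
print(max_read_steps(800))  # "usually under a second" → 2 steps
```

Twenty sequential reads is a real multi-step plan; two is not.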
2. Does each agent have its own identity? Shared identity means one compromised agent compromises every workflow. Per-agent identity is a prerequisite for trust.
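What per-agent identity means in practice can be shown in a few lines. A minimal sketch: `AgentIdentity` and its fields are hypothetical names, not a real platform API.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical per-agent credential: one identity per agent, never shared."""
    workflow: str
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")

# Two agents in the same workflow still carry distinct credentials,
# so compromising or revoking one leaves the other untouched.
a = AgentIdentity(workflow="billing")
b = AgentIdentity(workflow="billing")
print(a.agent_id != b.agent_id)  # → True
```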
3. Are agent actions audit-logged at the same fidelity as human actions? 'Agent took action' is not enough. The regulator wants 'agent X took action Y under policy Z because of input W.'
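That X/Y/Z/W sentence maps directly onto a log record. A minimal sketch of one structured audit line; `AuditRecord` and its field names are illustrative assumptions.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class AuditRecord:
    agent_id: str   # X: which agent acted
    action: str     # Y: what it did
    policy_id: str  # Z: the policy that authorized it
    input_ref: str  # W: the input that triggered the action
    ts: float       # when

rec = AuditRecord("agent-7", "export_table", "pii-policy-v3", "req-4411", time.time())
line = json.dumps(asdict(rec))  # one structured log line per action
print(line)
```

If any of the four fields is missing, the regulator's question cannot be answered from the log.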
4. Can you revoke an agent's permissions in one call? If the answer involves filing a ticket or restarting something, the platform was not designed for the operational reality of running agents.
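"One call" has a concrete shape. A minimal in-memory sketch, assuming a `PermissionStore` consulted on every action; the class and method names are hypothetical.

```python
class PermissionStore:
    """In-memory sketch: revocation is one call with immediate effect."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = {}

    def grant(self, agent_id: str, action: str) -> None:
        self._grants.setdefault(agent_id, set()).add(action)

    def allowed(self, agent_id: str, action: str) -> bool:
        return action in self._grants.get(agent_id, set())

    def revoke_agent(self, agent_id: str) -> None:
        # One call, no ticket, no restart: the next allowed() check fails.
        self._grants.pop(agent_id, None)

store = PermissionStore()
store.grant("agent-7", "read_orders")
store.revoke_agent("agent-7")
print(store.allowed("agent-7", "read_orders"))  # → False
```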
5. Does the platform meter per-action? A single user query can fan out into dozens of agent actions. Per-action metering is the only way to make agent cost legible.
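Per-action metering can be sketched in a few lines. A minimal sketch using integer credits per action kind; `Meter`, the price table, and the credit unit are illustrative assumptions.

```python
from collections import Counter

class Meter:
    """Per-action metering sketch: count every action, price each kind."""

    def __init__(self) -> None:
        self.counts: Counter[tuple[str, str]] = Counter()

    def record(self, agent_id: str, action: str) -> None:
        self.counts[(agent_id, action)] += 1

    def bill(self, agent_id: str, prices: dict[str, int]) -> int:
        """Total credits for one agent, summed per action."""
        return sum(n * prices.get(action, 0)
                   for (aid, action), n in self.counts.items()
                   if aid == agent_id)

m = Meter()
for _ in range(3):
    m.record("agent-7", "read")   # one user query, several metered actions
m.record("agent-7", "write")
print(m.bill("agent-7", {"read": 1, "write": 10}))  # → 13
```

The per-query roll-up is just a sum over these rows; without the rows, the sum is a guess.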
6. Are there risk-graded approval gates on high-stakes actions? Gates must block, not advise: an advisory means the agent will eventually do the thing the advisory said not to.
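The difference between blocking and advisory is the difference between raising and logging. A minimal sketch of a blocking gate; `Risk` and `execute` are hypothetical names.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

def execute(action: str, risk: Risk, approved: bool = False) -> str:
    """Blocking gate sketch: a high-risk action without an approval grant
    raises. It never degrades into a warning the agent can ignore."""
    if risk is Risk.HIGH and not approved:
        raise PermissionError(f"{action!r} needs an approval grant")
    return f"ran {action}"

print(execute("read_dashboard", Risk.LOW))              # low risk passes
print(execute("drop_table", Risk.HIGH, approved=True))  # high risk needs a grant
```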
7. Is governance enforced at the planner, or at the consumer? Planner-enforced means an agent that bypasses the recommended client cannot bypass the policy. Consumer-enforced is theater.
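Planner-enforced means the policy check sits inside the execution path itself. A minimal sketch, assuming every action, from any client, funnels through `Planner.execute`; the class, the policy callable, and the example rule are hypothetical.

```python
from typing import Callable

class Planner:
    """Policy check lives inside the execution path. A client that skips the
    recommended SDK still hits execute(), so it cannot skip the check."""

    def __init__(self, policy: Callable[[str, str], bool]) -> None:
        self.policy = policy  # (agent_id, action) -> allowed?

    def execute(self, agent_id: str, action: str) -> str:
        if not self.policy(agent_id, action):
            raise PermissionError(f"{agent_id} denied: {action}")
        return f"executed {action}"

# Hypothetical rule: only agent-7 may export.
planner = Planner(lambda aid, act: act != "export" or aid == "agent-7")
print(planner.execute("agent-7", "export"))  # → executed export
```

A consumer-enforced check is the same `if` statement moved into the client library, where a raw HTTP call simply walks around it.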
8. Does the audit trail support replay? If you cannot replay an agent's decision exactly — same input, same policy, same output — you cannot dispute it. Dispute is the only way trust survives a regulator's first hard question.
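Replay only works if the decision is deterministic given input and policy. A minimal sketch using a hash as a stand-in for the agent's decision function; `decide`, `replay_matches`, and the record fields are illustrative assumptions.

```python
import hashlib

def decide(input_data: bytes, policy_version: str) -> str:
    """Stand-in for an agent's decision function, deterministic by
    construction: same input + same policy version => same output."""
    return hashlib.sha256(policy_version.encode() + input_data).hexdigest()

def replay_matches(record: dict) -> bool:
    # Re-run the decision from the audit record and compare outputs.
    return decide(record["input"], record["policy_version"]) == record["output"]

rec = {"input": b"order-4411", "policy_version": "pii-v3"}
rec["output"] = decide(rec["input"], rec["policy_version"])
print(replay_matches(rec))  # → True
```

If the record replays, a disputed decision can be re-derived in front of the regulator; if it does not, the dispute is your word against the log.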
Most platforms claiming agent-readiness fail by question three. The honest test is whether the architecture was designed for autonomous use, or retrofitted to look like it was.