OpenTerms makes legal terms machine-readable, so every AI agent knows what it's allowed to do before it does it.
Billions of AI agents are being deployed to browse, buy, post, and transact across the web. None of them can read a terms of service agreement. They don't know what's allowed. They don't know what's not. And the companies deploying them are carrying all the liability.
Robots.txt told crawlers where they could go. OpenTerms tells agents what they can do.
A standardized protocol that translates terms of service, privacy policies, and usage restrictions into structured data any AI agent can parse.
Websites host an openterms.json file alongside their existing robots.txt and llms.txt. Structured, versioned, machine-readable.
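A minimal sketch of what such a file might contain. The field names and structure here are illustrative assumptions, not a published specification:

```json
{
  "version": "1.0",
  "last_updated": "2025-01-15",
  "terms_url": "https://example.com/terms",
  "permissions": {
    "automated_purchase": { "allowed": true, "conditions": ["rate_limit:10/day"] },
    "data_scraping": { "allowed": false },
    "account_creation": { "allowed": false }
  }
}
```

The key idea: each agent-relevant action maps to an explicit, versioned permission entry rather than a paragraph of legalese.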
Before an AI agent takes any action on a service, it checks the OpenTerms endpoint. Permissions come back as structured data, not legalese.
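In practice, that check could be a single fetch and lookup. A sketch in Python, assuming a hypothetical schema where a `permissions` object maps action names to `allowed` flags (the file location and field names are assumptions, not a published spec):

```python
import json
from urllib.request import urlopen

def fetch_openterms(base_url: str) -> dict:
    """Fetch and parse a site's openterms.json (hypothetical well-known location)."""
    with urlopen(f"{base_url}/openterms.json") as resp:
        return json.load(resp)

def is_action_allowed(terms: dict, action: str) -> bool:
    """Return True only if the action is explicitly permitted.

    Unknown actions default to False: an agent should not act
    without an affirmative grant on record.
    """
    entry = terms.get("permissions", {}).get(action)
    return bool(entry and entry.get("allowed"))

# Usage: gate every agent action behind the lookup.
terms = {"permissions": {"automated_purchase": {"allowed": True},
                         "data_scraping": {"allowed": False}}}
if is_action_allowed(terms, "automated_purchase"):
    pass  # proceed with the action
```

Defaulting unknown actions to "not allowed" is the conservative choice: the agent only acts where the terms affirmatively permit it.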
Audit trails, permission logs, and compliance documentation generate themselves. Every agent action has a legal basis on record.
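Because every action already passes through a structured permission lookup, the audit entry can be emitted as a byproduct of the check itself. A hypothetical sketch; the record fields are illustrative:

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, action: str, terms: dict) -> dict:
    """Build an audit-trail entry tying an agent action to the
    terms version it relied on at the moment of the decision."""
    entry = terms.get("permissions", {}).get(action, {})
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "terms_version": terms.get("version"),
        "allowed": bool(entry.get("allowed")),
    }

# Usage: append one JSON line per agent action to a compliance log.
terms = {"version": "1.0",
         "permissions": {"automated_purchase": {"allowed": True}}}
record = audit_record("agent-42", "automated_purchase", terms)
print(json.dumps(record))
```

Logging the terms version alongside the decision is what gives each action a legal basis on record, even if the site's terms change later.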
The infrastructure for AI agent governance doesn't exist yet. The window to define the standard is open.
Under the EU AI Act, compliance documentation is now mandatory for AI systems operating in regulated sectors across Europe. Agent developers need structured legal data.
Every autonomous agent that interacts with external services will need to understand legal boundaries. That's not optional, it's infrastructure.
llms.txt handles content discoverability. robots.txt handles crawling permissions. Nothing handles legal terms for agent actions. Until now.
Courts and regulators are actively debating who's liable when an AI agent violates terms of service. The companies with audit trails will win.
Built by a lawyer who builds technology. For the agents that are building everything else.