MCP, ai.txt, and agents.json: The State of Emerging AI Standards in 2026
Published 2026-04-20 · PROGEOLAB Research
Three emerging standards promise to extend AI visibility beyond robots.txt and llms.txt: Model Context Protocol (MCP), ai.txt, and agents.json. Each proposes a different mechanism for making a website's AI-facing capabilities discoverable and invokable. All three are at different points on the specification-to-production curve. None has Fortune 500 adoption.
When PROGEOLAB probed /ai.txt, /agents.json, and /mcp.json paths across the Fortune 500 in April 2026:
| Standard | HTTP 200 responses | Real implementations (body-validated) |
|---|---|---|
| ai.txt | 357 (71.4%) | 0 |
| agents.json | 329 (65.8%) | 0 |
| mcp.json | 298 (59.6%) | 0 |
Every HTTP 200 response was a soft 404: an HTML error template served with a 200 status code. No Fortune 500 company has a real implementation of any of these three standards. Tools that measure AI-standard adoption by status code alone therefore report phantom implementations at 60-70% adoption rates.
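The distinction between a 200 status and a real implementation can be made concrete. A minimal sketch of body validation, under assumed heuristics (HTML content type or an HTML body at a non-HTML path signals a soft 404; a `.json` path must parse as JSON) — this is illustrative, not PROGEOLAB's actual audit code:

```python
import json


def looks_like_real_manifest(status, content_type, body, path):
    """Body-validated check: True only if the response plausibly
    carries the standard's payload rather than an error page."""
    if status != 200:
        return False
    # Soft-404 signature: HTML served where a text or JSON manifest belongs.
    if "text/html" in content_type:
        return False
    if body.lstrip().lower().startswith(("<!doctype", "<html")):
        return False
    if path.endswith(".json"):
        try:
            json.loads(body)  # a JSON manifest must at least parse
        except ValueError:
            return False
    return True
```

A status-code-only tool would count all three probes below as "implemented"; body validation counts only the first.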
Model Context Protocol (MCP) — the most mature
MCP is an open specification from Anthropic, published November 2024. It defines how AI models and external tools exchange context through a standardized message format built on JSON-RPC 2.0. An MCP server exposes tools (functions the AI can call), resources (content the AI can read), and prompts (templates the AI can use).
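The message format is compact. Below is a sketch of a `tools/call` request in MCP's JSON-RPC 2.0 framing; the method name comes from the spec, while the tool name and arguments are hypothetical placeholders:

```python
import json

# An MCP tools/call request. "jsonrpc", "id", "method", and "params"
# are JSON-RPC 2.0 framing; "order_status" and its arguments are
# hypothetical, standing in for whatever tools a server exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "order_status",               # hypothetical tool name
        "arguments": {"order_id": "A-123"},   # hypothetical arguments
    },
}
wire = json.dumps(request)
```

A client discovers available tools first via a `tools/list` request with the same framing, then invokes them as above.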
Vendor adoption has been rapid on the client side: Anthropic's Claude, OpenAI's ChatGPT, Microsoft Copilot, Google's Gemini, and multiple IDE integrations (Cursor, Windsurf, Zed) support MCP as of Q1 2026. The server side is sparser — most production MCP servers today are either internal tooling for developer workflows (code search, database access) or community-built connectors to popular SaaS APIs.
What a Fortune 500 MCP server would look like: a discoverable endpoint at a known path (the spec currently debates the canonical location) that exposes company-specific tools — customer support lookup, product configurator, account status. An AI assistant could then call these tools directly when a user asks questions within the assistant's context. The value proposition is legitimate and the specification is stable enough; the missing pieces are the path convention and the authentication pattern for public MCP servers.
ai.txt — access policy
The ai.txt proposal extends robots.txt semantics to AI-specific categories (training vs retrieval vs agent). Multiple drafts exist; none has converged. The training-retrieval split that ai.txt aims to solve is currently handled inside robots.txt by naming individual bots; ai.txt would promote that to a first-class distinction.
Implementation is blocked on specification. Without a canonical format, publishing ai.txt produces a file that different AI crawlers may interpret differently — worse than not publishing at all.
agents.json — capability manifest
The agents.json proposal describes what AI agents can and cannot do on a site — which forms can be submitted, which APIs can be called, which user-facing actions are permitted for agent automation. It overlaps with both OpenAPI (for API descriptions) and Web App Manifest (for capability declarations).
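To make the concept concrete: there is no canonical agents.json schema, so the fragment below is purely a hypothetical illustration of the kind of declarations the proposal contemplates — every field name is invented for this sketch.

```json
{
  "agents": {
    "allowed_actions": ["read_catalog", "submit_contact_form"],
    "forbidden_actions": ["checkout", "modify_account"],
    "api_description": "/openapi.json"
  }
}
```

The overlap with OpenAPI is visible even in this sketch: the API half of the manifest is better described by an existing standard, which is part of why agents.json has struggled to converge.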
Adoption is blocked on the same specification-stability problem. The proposal is not mature enough for production deployment in April 2026.
What to do in 2026
For enterprises: watch, don't build. Implement robots.txt with the training-retrieval split template and llms.txt for content discovery today; these have Fortune 500 precedent and vendor support. Revisit ai.txt and agents.json in 12-18 months when specs stabilize.
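The training-retrieval split in robots.txt can be expressed today by naming real crawler tokens. One common template (whether blocking training crawlers while allowing retrieval crawlers matches your policy is a business decision, not a technical one):

```
# Training crawlers: blocked
User-agent: GPTBot
Disallow: /

# Google-Extended is Google's AI-training opt-out token
User-agent: Google-Extended
Disallow: /

# Retrieval / search crawlers: allowed
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else: default policy
User-agent: *
Allow: /
```

The bot tokens above (GPTBot, Google-Extended, OAI-SearchBot, PerplexityBot) are documented by their respective vendors; the allow/disallow assignments are a template, not a recommendation.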
For MCP specifically: if your company already publishes a well-documented public API (customer support, product catalog, account management), wrapping that API as an MCP server is a reasonable Q3 2026 project. The technical lift is small because MCP servers are thin translation layers; the business value depends on how many AI assistants actually connect to public MCP servers (vs only connecting to enterprise-internal ones).
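The "thin translation layer" point can be sketched: an MCP tool handler is often just a named, schema-described wrapper around an existing API call. The endpoint, tool name, and field shapes below are hypothetical illustrations (loosely following the spec's `tools/call` result format), not the official SDK:

```python
import json

# Hypothetical existing public API this tool wraps. In production this
# would be an HTTP call; stubbed here so the sketch is self-contained.
def fetch_order_status(order_id):
    return {"order_id": order_id, "status": "shipped"}

# The MCP-facing layer: declare the tool with a JSON Schema, then
# translate calls into the existing API and responses into MCP-style
# content blocks. Field names are illustrative.
TOOL_SPEC = {
    "name": "order_status",
    "description": "Look up the shipping status of an order.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def handle_tools_call(params):
    if params["name"] != TOOL_SPEC["name"]:
        return {"isError": True,
                "content": [{"type": "text", "text": "unknown tool"}]}
    result = fetch_order_status(params["arguments"]["order_id"])
    return {"content": [{"type": "text", "text": json.dumps(result)}]}
```

The wrapper adds no business logic of its own, which is why the technical lift is small: the schema declaration and the response translation are the entire server-side surface per tool.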
For teams running AI-visibility audits today: the correct measurement for ai.txt and agents.json is "not implemented anywhere" — ignore any tool that reports 60-70% adoption based on status code alone.