We scanned 16 popular websites across AI, payments, developer tools, and productivity to see how ready the web is for AI agents.
7 live scans · 9 estimated · WebMCP adoption: 0%
OpenAI scores 53/100. Anthropic scores 56/100. Both build tools for AI agents but haven't made their own sites agent-accessible. OpenAI's llms.txt returns 403: it actively blocks AI crawlers from reading its own AI documentation.
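You can reproduce the llms.txt check yourself. A minimal sketch, assuming Node 18+ with global fetch; the user-agent string is illustrative, not the exact one our scanner sends:

```ts
// Fetch a site's llms.txt and report the status code.
// 200 = readable, 403 = crawler blocked, 404 = missing.
async function checkLlmsTxt(origin: string): Promise<number> {
  const res = await fetch(new URL("/llms.txt", origin), {
    headers: { "user-agent": "Mozilla/5.0 (compatible; AgentScanner/1.0)" }, // example UA
  });
  return res.status;
}

checkLlmsTxt("https://openai.com").then(console.log);
```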
Stripe (63/100, Grade C) is the highest-scoring site we tested. It has llms.txt, proper semantic HTML, fast response times, and well-structured forms. This isn't accidental: Stripe has long prioritised developer experience, and agent-readiness is its natural extension.
Not a single site we tested has implemented WebMCP, the standard incubating at the W3C that shipped in Chrome 146 in January 2026. Sites that move first will have a significant advantage when AI agents start preferring agent-native interfaces over scraped HTML.
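For a sense of what adoption involves: the API is still in incubation and may change before it stabilises, but the explainer's shape is roughly a tool registration from page JavaScript. Every name below is provisional, not a stable contract:

```ts
// Provisional sketch only: modelContext, registerTool, and the result
// shape are assumptions modelled on the public explainer, and the
// shipped API may differ. Cast because TypeScript's DOM types don't
// include the incubating interface yet.
(navigator as any).modelContext?.registerTool({
  name: "search_docs",
  description: "Search this site's documentation.",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  async execute({ query }: { query: string }) {
    // /api/search is a hypothetical endpoint standing in for the
    // site's own search backend.
    const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    return { content: [{ type: "text", text: await res.text() }] };
  },
});
```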
ChatGPT's agent mode, Claude's computer use, and browser-automation agents are already in production. Sites that aren't ready won't be found, won't be usable, and will lose traffic as AI-mediated browsing grows. The window to get ahead is now.
Scores are based on six weighted categories totalling 100 points: Agent Usability (30 pts), WebMCP (25 pts), Semantic HTML (20 pts), Structured Data (15 pts), AI Discoverability (5 pts), Content Quality (5 pts).
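In code, the rubric is just a weighted sum. A minimal sketch, where the field names and the example breakdown are ours, not the scanner's internal schema:

```ts
// Each category contributes (earned fraction × weight); weights sum to 100.
const WEIGHTS = {
  agentUsability: 30,
  webMCP: 25,
  semanticHTML: 20,
  structuredData: 15,
  aiDiscoverability: 5,
  contentQuality: 5,
} as const;

type Category = keyof typeof WEIGHTS;

function totalScore(earned: Record<Category, number>): number {
  // earned[c] is the fraction of category c's points a site achieved (0..1)
  return (Object.keys(WEIGHTS) as Category[])
    .reduce((sum, c) => sum + earned[c] * WEIGHTS[c], 0);
}

// Hypothetical breakdown landing on a Stripe-like 63/100. Note that the
// 25-point WebMCP category is a hard zero for every site today.
console.log(totalScore({
  agentUsability: 0.8,    // 24
  webMCP: 0,              //  0
  semanticHTML: 0.9,      // 18
  structuredData: 0.8,    // 12
  aiDiscoverability: 0.8, //  4
  contentQuality: 1,      //  5
})); // 63
```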
Live scans (7 sites) were run via scanner.v1be.codes in February 2026. These fetch the live HTML and run all checks server-side.
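A stripped-down version of what a live scan does, assuming Node 18+; the real checks are more thorough, and these three are simplified stand-ins for the categories above:

```ts
// Fetch the page once, then run cheap checks against the raw HTML.
// Regexes here are simplifications of what a real scanner would do
// with a proper HTML parser.
async function scanSite(origin: string) {
  const started = Date.now();
  const res = await fetch(origin);
  const html = await res.text();
  return {
    responseMs: Date.now() - started,                           // agent usability
    semanticLandmarks: /<(main|nav|article)[\s>]/i.test(html),  // semantic HTML
    structuredData: /application\/ld\+json/i.test(html),        // JSON-LD present
  };
}

scanSite("https://stripe.com").then(console.log);
```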
Estimates (9 sites) are based on manual review of publicly known characteristics (robots.txt, llms.txt availability, framework type, observable structured data). Marked with "est." on each row.
All scores are point-in-time โ sites change continuously. Scan any site yourself at scanner.v1be.codes.
Free scan · No signup · Under 10 seconds