AI agents don't browse like humans. They read your DOM, call your forms as tools, and index your structured data. Most websites fail silently.
Scan any site. Get a score, a readiness level, and a precise fix list — in seconds.
Or scan a known site:
The web was built for human eyes. AI agents — tools like ChatGPT plugins, Claude's computer use, browser automation agents — read your HTML directly. They can't see images, they can't solve CAPTCHAs, and they navigate by semantic landmarks.
AI agents parse HTML, not pixels. A beautiful website without semantic structure is invisible to an agent. The raw DOM is what gets analysed.
WebMCP (Chrome 146+) lets agents call your forms as tools. llms.txt lets you curate what LLMs know about your site. These standards are being adopted now.
Without structured data, semantic HTML, and agent-friendly forms, agents either misunderstand your site or skip it entirely — no error, just silence.
Most websites are at Level 2. The top tier is within reach for any developer willing to spend a day on it.
AI agents can't meaningfully access or understand the site. Crawlers may be blocked, semantic structure is absent, content is inaccessible.
Agents can read raw text but can't understand page structure, purpose, or how to navigate. Like reading a book with no chapters.
Structured data and semantic HTML help agents understand content. But they still can't take actions — search, submit forms, or navigate flows.
Agents can discover, understand, and take actions. Forms are accessible, navigation is clear, content is machine-readable.
Fully optimised for AI agents. WebMCP exposes site capabilities as callable tools. llms.txt gives LLMs an accurate site overview. The top ~3% of the web.
Every check has a clear fix with a code example. No vague scores — just precise, actionable improvements.
Chrome 146+ · W3C standard
Detects mcp-tool, mcp-param, and mcp-description attributes that expose your site's forms and actions as callable tools for AI agents.
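As a rough sketch, a search form annotated with the attributes named above might look like this (attribute values are illustrative, not taken from the spec — check the WebMCP specification for exact semantics):

```html
<!-- Exposes this form to agents as a callable "search_products" tool -->
<form mcp-tool="search_products"
      mcp-description="Search the product catalogue by keyword"
      action="/search" method="get">
  <input type="search" name="q"
         mcp-param="query"
         mcp-description="Keywords to search for">
  <button type="submit">Search</button>
</form>
```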
View spec ↗
HTML5 landmarks · heading hierarchy
Checks for <main>, <header>, <nav>, <article>, language attributes, ARIA roles — the vocabulary agents use to navigate your pages.
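A minimal page skeleton using these landmarks — the structure an agent navigates by:

```html
<!DOCTYPE html>
<html lang="en">
  <body>
    <header>
      <nav aria-label="Main">Primary navigation links</nav>
    </header>
    <main>
      <h1>Page title</h1>
      <article>
        <h2>Section heading</h2>
        <p>Section content.</p>
      </article>
    </main>
    <footer>Site footer</footer>
  </body>
</html>
```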
View spec ↗
Schema.org · JSON-LD · Microdata
Validates JSON-LD blocks and checks for high-value types: WebSite, SearchAction, Product, FAQPage, BreadcrumbList.
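For example, a WebSite block with a SearchAction tells agents both what the site is and how to query it (the domain here is a placeholder):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Example Store",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```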
View spec ↗
Forms · buttons · CAPTCHA · pagination
Checks whether agents can actually interact with your site — label coverage, semantic buttons, CAPTCHA walls, and infinite scroll patterns.
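The difference is often small. A labelled input and a real `<button>` are machine-actionable; a clickable `<div>` with no label is not:

```html
<!-- Agent-friendly: the label is programmatically tied to the input,
     and <button type="submit"> is announced as an action. -->
<label for="email">Email address</label>
<input id="email" type="email" name="email" autocomplete="email">
<button type="submit">Subscribe</button>

<!-- Agent-hostile: no label association, no semantic action. -->
<div class="btn" onclick="subscribe()">Subscribe</div>
```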
robots.txt · sitemap · llms.txt · HTTPS
Checks whether AI crawlers can find and access your site, including the new llms.txt standard adopted by 844,000+ sites.
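An llms.txt file is plain Markdown at your site root: an H1 title, a one-line blockquote summary, then sections of annotated links (this example site and its pages are hypothetical):

```markdown
# Example Store

> Online shop for handmade ceramics, shipping worldwide.

## Docs
- [Shipping policy](https://example.com/shipping.md): delivery times and costs
- [Returns](https://example.com/returns.md): 30-day return process
```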
View spec ↗
Alt text · readable prose · link clarity
Agents are blind to images without alt text. Checks image alt coverage, link descriptiveness, and content readability for NLP models.
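Descriptive alt text and link text carry the meaning an agent would otherwise lose (filenames and figures here are illustrative):

```html
<!-- Good: the alt text states what the image communicates -->
<img src="chart-q3.png" alt="Bar chart: Q3 revenue up 14% over Q2">

<!-- Good: the link text says where it leads; avoid "click here" -->
<a href="/pricing">See pricing plans</a>
```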
Free, no login required. Results in under 10 seconds.
AI Agent Readiness Scanner · Free, open source · GitHub ↗