Free · No signup · Open source

Is your website ready
for AI agents?

AI agents don't browse like humans. They read your DOM, call your forms as tools, and index your structured data. Most websites fail silently.

Scan any site. Get a score, a readiness level, and a precise fix list — in seconds.


What is AI Agent Readiness?

The web was built for human eyes. AI agents — tools like ChatGPT plugins, Claude's computer use, browser automation agents — read your HTML directly. They can't see images, they can't solve CAPTCHAs, and they navigate by semantic landmarks.

👁️
Step 01

Agents read your DOM

AI agents parse HTML, not pixels. A beautiful website without semantic structure is invisible to an agent. Raw DOM is what gets analysed.

Step 02

Standards are emerging fast

WebMCP (Chrome 146+) lets agents call your forms as tools. llms.txt lets you curate what LLMs know about your site. These standards are being adopted now.

🔇
Step 03

Most sites fail silently

Without structured data, semantic HTML, and agent-friendly forms, agents either misunderstand your site or skip it entirely — no error, just silence.

The 5 Levels of AI Agent Readiness

Most websites are at Level 2. The top tier is within reach for any developer willing to spend a day on it.

1
🔴 Level 1: Invisible · 0–20 pts

AI agents can't meaningfully access or understand the site. Crawlers may be blocked, semantic structure is absent, content is inaccessible.

2
🟠 Level 2: Crawlable · 21–40 pts

Agents can read raw text but can't understand page structure, purpose, or how to navigate. Like reading a book with no chapters.

3
🟡 Level 3: Discoverable · 41–60 pts

Structured data and semantic HTML help agents understand content. But they still can't take actions — search, submit forms, or navigate flows.

4
🟢 Level 4: Operable · 61–80 pts

Agents can discover, understand, and take actions. Forms are accessible, navigation is clear, content is machine-readable.

5
🔵 Level 5: AI-Native · 81–100 pts

Fully optimised for AI agents. WebMCP exposes site capabilities as callable tools. llms.txt gives LLMs an accurate site overview. The top ~3% of the web.

What we check (100 points total)

Every check has a clear fix with a code example. No vague scores — just precise, actionable improvements.

🤖

WebMCP

/25 pts

Chrome 146+ · W3C proposal

Detects mcp-tool, mcp-param, and mcp-description attributes that expose your site's forms and actions as callable tools for AI agents.

View spec ↗
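As a sketch, a search form exposed as a callable tool might look like this — the attribute names are those listed above, but the exact syntax is still evolving with the spec, and the tool and parameter names here are illustrative:

```html
<!-- Illustrative only: exposes a search form as an agent-callable tool.
     Attribute syntax may differ from the final WebMCP specification. -->
<form action="/search" method="get"
      mcp-tool="site_search"
      mcp-description="Search the product catalogue by keyword">
  <input type="search" name="q"
         mcp-param="query"
         mcp-description="Keywords to search for">
  <button type="submit">Search</button>
</form>
```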
🏗️

Semantic HTML

/20 pts

HTML5 landmarks · heading hierarchy

Checks for <main>, <header>, <nav>, <article>, language attributes, ARIA roles — the vocabulary agents use to navigate your pages.

View spec ↗
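For instance, a minimal landmark skeleton gives agents exactly those anchors:

```html
<!-- Minimal landmark skeleton: one of each major region, one h1 -->
<html lang="en">
  <body>
    <header>
      <nav aria-label="Primary"><!-- site links --></nav>
    </header>
    <main>
      <article>
        <h1>Page title</h1>
        <!-- then h2/h3 in order, no skipped levels -->
      </article>
    </main>
    <footer><!-- contact, legal --></footer>
  </body>
</html>
```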
📋

Structured Data

/15 pts

Schema.org · JSON-LD · Microdata

Validates JSON-LD blocks and checks for high-value types: WebSite, SearchAction, Product, FAQPage, BreadcrumbList.

View spec ↗
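For example, a WebSite block with a SearchAction (URLs here are placeholders) tells agents how to run your site search directly:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Example Store",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```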

Agent Usability

/30 pts

Forms · buttons · CAPTCHA · pagination

Checks whether agents can actually interact with your site — label coverage, semantic buttons, CAPTCHA walls, and infinite scroll patterns.
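A quick illustration of the difference: a labelled input and a real button are machine-operable; an unlabelled field behind a clickable div is not.

```html
<!-- Agents can identify and fill this field -->
<label for="email">Email address</label>
<input id="email" type="email" name="email" autocomplete="email">

<!-- A semantic button agents recognise as an action... -->
<button type="submit">Subscribe</button>

<!-- ...unlike a div that only looks like one:
     <div class="btn" onclick="subscribe()">Subscribe</div> -->
```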

🔍

AI Discoverability

/5 pts

robots.txt · sitemap · llms.txt · HTTPS

Checks whether AI crawlers can find and access your site, including the new llms.txt standard adopted by 844,000+ sites.

View spec ↗
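A minimal llms.txt, served at your site root, follows the proposal's Markdown layout — an H1 name, a blockquote summary, then sections of annotated links. The content below is a placeholder example:

```markdown
# Example Store

> Online shop for hand-made ceramics. Ships worldwide from the EU.

## Docs

- [Product catalogue](https://example.com/products.md): full list with prices
- [Shipping policy](https://example.com/shipping.md): rates and delivery times
```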
📝

Content Quality

/5 pts

Alt text · readable prose · link clarity

Agents are blind to images without alt text. Checks image alt coverage, link descriptiveness, and content readability for NLP models.
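Two of the cheapest fixes, by way of illustration:

```html
<!-- Alt text that states what the image conveys, not just what it is -->
<img src="/img/chart-q3.png"
     alt="Bar chart: Q3 revenue up 12% year on year">

<!-- Descriptive link text beats "click here" -->
<a href="/pricing">See pricing plans</a>
```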

Find out where you stand

Free, no login required. Results in under 10 seconds.

AI Agent Readiness Scanner · Free, open source · GitHub ↗