canaisee

About this tool

AI systems like ChatGPT, Claude, and Perplexity increasingly answer questions about your site. This tool runs a battery of checks to show how readable your site is to them, and what you can do to improve that readability.

What we measure

Checks span four categories: crawler accessibility, content readability, semantic structure, and agent-native surface. Every check is transparent: you can see the evidence and the flags behind each score on the scorecard itself.
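
To make that concrete, here is one way a single check and a category score could be modeled. This is a hypothetical TypeScript sketch: the CheckResult shape and categoryScore function are illustrative, not canaisee's actual schema.

    // Hypothetical model of one check; names are illustrative, not canaisee's schema.
    type Category =
      | "crawler-accessibility"
      | "content-readability"
      | "semantic-structure"
      | "agent-native-surface";

    interface CheckResult {
      id: string;          // e.g. a made-up id like "robots-txt-allows-ai-crawlers"
      category: Category;
      passed: boolean;
      evidence: string[];  // raw observations behind the verdict
      flags: string[];     // problems surfaced alongside the score
    }

    // One plausible scoring rule: a category's score is the pass ratio of its checks.
    function categoryScore(results: CheckResult[], category: Category): number {
      const inCategory = results.filter((r) => r.category === category);
      if (inCategory.length === 0) return 0;
      return inCategory.filter((r) => r.passed).length / inCategory.length;
    }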

What we can't measure

We cannot directly measure whether a specific LLM has ingested your content, whether ChatGPT cites your site in answers, or whether a given model's training data includes it. We measure signals that should correlate with those outcomes. For the outcome-level question, see the companion "What does AI say about your site?" demo on the Evangent site.

This is an opinionated beta, not a settled standard

The grade is a transparent, opinionated read on a fast-moving surface. Some of what we measure (robots.txt, structured data, heading hierarchy, sitemaps) sits on stable specs with broad adoption. Some of it (Accept: text/markdown, /.well-known/mcp.json, WebMCP, Content-Signal) is actively emerging and not yet universal. We include the latter because agent-first publishers are adopting them and they're cheap to add. We publish the full rubric so the opinion is inspectable, arguable, and versioned as the web changes.
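
As a sketch of what probing the emerging surface can look like, the snippet below tests two of those signals: Markdown content negotiation and the MCP manifest. It's a minimal TypeScript example assuming a Node 18+ runtime with a global fetch; the function names are made up, and real checks would need timeouts and error handling.

    // Probe: does the origin answer content negotiation for Markdown?
    async function probeMarkdownNegotiation(origin: string): Promise<boolean> {
      const res = await fetch(origin, { headers: { Accept: "text/markdown" } });
      const type = res.headers.get("content-type") ?? "";
      return res.ok && type.includes("text/markdown");
    }

    // Probe: does the origin publish an MCP manifest at the well-known path?
    async function probeMcpManifest(origin: string): Promise<boolean> {
      const res = await fetch(new URL("/.well-known/mcp.json", origin));
      const type = res.headers.get("content-type") ?? "";
      return res.ok && type.includes("json");
    }

    // Example run against one site.
    const origin = "https://example.com";
    Promise.all([probeMarkdownNegotiation(origin), probeMcpManifest(origin)])
      .then(([markdown, mcp]) => console.log({ markdown, mcp }));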

Built by Evangent

canaisee is free, runs without signup, and its rubric is published and versioned so anyone can see how the grading works.