The AI Visibility Score is a 0-100 metric that measures how accurately and prominently AI engines describe and recommend your product. It's calculated by querying 6 AI engines with structured prompts and analyzing their responses across four dimensions.
Transparency matters. If you're going to make business decisions based on a score, you should understand exactly how it's calculated. This page documents our complete methodology.
The 6 Engines We Query
We chose these 6 AI engines because they represent the highest-traffic AI assistants used for product discovery and recommendation:
- ChatGPT (OpenAI) — The largest consumer AI assistant. Uses Bing for real-time search.
- Perplexity — AI-native search engine with real-time web access. Growing fast among researchers and professionals.
- Gemini (Google) — Google's AI assistant, integrated with Google Search.
- Claude (Anthropic) — Widely used by developers and analysts. Strong factual reasoning.
- DeepSeek — Leading Chinese AI model with strong English capability. Growing international user base.
- Mistral — European AI leader. Growing enterprise adoption.
The Four Scoring Dimensions
Brand Visibility (0-25 points)
Does the AI know your product exists? When asked directly ('What is [Product Name]?'), does it give an accurate description? Brand Visibility measures whether the AI has a correct mental model of your product — name, category, and core function.
Category Discovery (0-30 points)
Does the AI recommend your product when users ask about your category? This is the highest-weighted dimension because it directly maps to new customer acquisition. We query each engine with category-level questions ('What are the best [category] tools?') and check whether your product appears.
Citation Quality (0-20 points)
When the AI does mention your product, is the information accurate? Does it correctly state your features, pricing, and differentiators? Citation Quality catches cases where a product is mentioned but with outdated or incorrect information — which can be worse than not being mentioned at all.
On-Page Readiness (0-25 points)
Does your website provide the structured, machine-readable information that AI engines need? This includes Schema.org markup, answer-first content structure, FAQ pages, and freshness signals. On-Page Readiness is the dimension you have the most direct control over.
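The four dimension caps above (25 + 30 + 20 + 25) sum to the 0-100 total. A minimal sketch of that arithmetic, assuming each dimension has already been scored (the function and field names here are illustrative, not EurekaNav's actual API):

```python
# Illustrative sketch: combining the four dimension scores into the
# 0-100 AI Visibility Score. The caps match the weights on this page;
# everything else (names, structure) is a hypothetical example.

DIMENSION_CAPS = {
    "brand_visibility": 25,
    "category_discovery": 30,
    "citation_quality": 20,
    "on_page_readiness": 25,
}

def total_score(dimension_scores: dict[str, float]) -> float:
    """Clamp each dimension to its cap and sum to a 0-100 total."""
    total = 0.0
    for name, cap in DIMENSION_CAPS.items():
        value = dimension_scores.get(name, 0.0)
        total += max(0.0, min(value, cap))
    return total

# Example: strong on-page readiness but weak category discovery.
print(total_score({
    "brand_visibility": 20,
    "category_discovery": 8,
    "citation_quality": 15,
    "on_page_readiness": 22,
}))  # 65.0
```

Note how the clamping makes Category Discovery worth at most 30 of the 100 points, which is why it dominates the final score more than any other dimension.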
How Queries Work
For each product, we run a structured set of prompts across all 6 engines. The prompts fall into four categories that mirror buyer behavior:
- Brand queries: 'What is [Product Name]?' — Tests direct brand recognition.
- Category queries: 'What are the best [category] tools?' — Tests organic recommendation.
- Comparison queries: '[Product] vs [Competitor]' — Tests competitive positioning.
- Purchase intent queries: 'Should I use [Product] for [use case]?' — Tests recommendation confidence.
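The four query categories above can be sketched as templates. The exact wording EurekaNav sends to each engine may differ; these strings simply mirror the examples on this page:

```python
# Illustrative sketch of the four prompt templates described above.
# The function name and parameters are hypothetical examples.

def build_prompts(product: str, category: str,
                  competitor: str, use_case: str) -> dict[str, str]:
    """Return one prompt per query category, mirroring buyer behavior."""
    return {
        "brand": f"What is {product}?",
        "category": f"What are the best {category} tools?",
        "comparison": f"{product} vs {competitor}",
        "purchase_intent": f"Should I use {product} for {use_case}?",
    }

prompts = build_prompts("Acme Analytics", "product analytics",
                        "RivalCo", "funnel analysis")
print(prompts["category"])  # What are the best product analytics tools?
```

Each of these prompts is then sent to all 6 engines, so one audit produces a grid of engine x category responses to analyze.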
Scoring Methodology
Each engine's response is analyzed for: (1) whether the product appears, (2) accuracy of the description, (3) positioning relative to competitors, and (4) confidence of the recommendation. These factors are weighted and combined into the four dimension scores, which sum to the final 0-100 AI Visibility Score.
The score levels are: Critical (0-24), Low (25-49), Moderate (50-74), High (75-100). Most SaaS products score in the Low to Moderate range. Scoring High requires strong structured data, multi-source corroboration, and consistent AI-ready content.
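The level bands above map directly to score thresholds. A minimal sketch, using the exact ranges listed on this page:

```python
# Illustrative sketch mapping a 0-100 AI Visibility Score to the
# level bands listed above: Critical (0-24), Low (25-49),
# Moderate (50-74), High (75-100).

def score_level(score: float) -> str:
    if score < 25:
        return "Critical"
    if score < 50:
        return "Low"
    if score < 75:
        return "Moderate"
    return "High"

print(score_level(62))  # Moderate
```

Since most SaaS products land in the Low to Moderate bands, crossing the 75-point threshold into High typically requires gains across several dimensions at once, not just one.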
Why Variance Across Engines Matters
It's common for a product to score well on one engine and poorly on another. ChatGPT might recommend you while Perplexity doesn't, or vice versa. This variance reveals which engines have better data about your product — and which need targeted optimization.
The per-engine breakdown in EurekaNav's audit helps you prioritize: fix the engines where you're weakest first, since that's where the potential gains are largest.
Run Your Free Audit
See your AI Visibility Score across all 6 engines in 60 seconds. The free audit at eurekanav.com/aeo/free-audit shows your total score, per-engine breakdown, and specific recommendations for improvement.