Stop wondering whether AI engines know about your product. Copy these 20 prompts, paste them into ChatGPT, Perplexity, and Gemini, and see exactly where you stand. They are organized into 4 intent layers, from brand awareness to purchase.
2026/04/01
The fastest way to know whether AI engines can recommend your product is to ask them. But random prompts give random results. This structured prompt set tests 4 layers of AI visibility — brand awareness, category discovery, competitive positioning, and purchase intent. Copy and paste them into ChatGPT, Perplexity, and Gemini (replace [Your Product] and [Category] with your actual product and category).
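The placeholder substitution described above can be sketched as a small script. The prompt wordings below are illustrative examples only, not the article's actual 20 prompts, and the product/category values are placeholders:

```python
# Illustrative prompt templates (not the article's exact prompt set).
# Replace [Your Product] and [Category] before pasting into each engine.
TEMPLATES = [
    "What is [Your Product]?",
    "What are the best tools in [Category]?",
    "How does [Your Product] compare to its competitors?",
    "How much does [Your Product] cost?",
]

def fill(template: str, product: str, category: str) -> str:
    """Substitute the two placeholders in a prompt template."""
    return template.replace("[Your Product]", product).replace("[Category]", category)

# Example: generate ready-to-paste prompts for a hypothetical product.
prompts = [fill(t, "Acme CRM", "CRM software") for t in TEMPLATES]
for p in prompts:
    print(p)
```

Run the filled prompts manually in each engine and note how each one responds.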
Layer 1: Brand awareness. These test whether the AI knows your product exists at all.
If the AI says 'I don't have information about [Your Product]' for most of these, you have an entity visibility problem. The AI has never seen enough evidence of your product to form a response.
Layer 2: Category discovery. These test whether the AI mentions your product when users ask about your category.
If you appear in none of the engines for category queries, you are invisible to users who are actively shopping. This is the highest-priority gap to fix.
Layer 3: Competitive positioning. These test how the AI positions you against competitors.
Pay attention to accuracy here. If the AI describes your competitor correctly but gets your product wrong, you have a citation quality problem.
Layer 4: Purchase intent. These test whether the AI can guide a user toward buying your product.
If the AI can answer purchase-intent queries accurately, your product has strong AI visibility. If it gives wrong pricing or says 'I don't know,' you need to make your pricing page more machine-readable.
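One common way to make a pricing page more machine-readable is schema.org Product/Offer markup, embedded as JSON-LD in a `<script type="application/ld+json">` tag. The product name, price, and currency below are placeholders; this is a minimal sketch, not a complete markup strategy:

```python
import json

# Minimal schema.org Product/Offer structure as JSON-LD.
# All values below are placeholders for illustration.
pricing_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Your Product",
    "offers": {
        "@type": "Offer",
        "price": "29.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialize for embedding in the pricing page's <head>.
print(json.dumps(pricing_jsonld, indent=2))
```

Structured data like this gives crawlers an unambiguous statement of price, rather than leaving them to parse it out of styled HTML.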
Run all 20 prompts across 3+ AI engines (ChatGPT, Perplexity, Gemini). Score each response: 1 = not mentioned, 2 = mentioned but inaccurate, 3 = mentioned accurately. The maximum score is 60 per engine, or 180 across 3 engines. A total below 90 means you have significant visibility gaps.
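The scoring arithmetic above can be sketched as a small helper. Function and variable names are assumptions for illustration; the rubric values (1-3 per prompt, 60 per engine, 90 threshold) come from the article:

```python
PROMPTS_PER_ENGINE = 20  # the full prompt set

def engine_score(scores: list[int]) -> int:
    """Sum per-prompt scores for one engine (each score is 1, 2, or 3)."""
    assert len(scores) == PROMPTS_PER_ENGINE
    assert all(s in (1, 2, 3) for s in scores)
    return sum(scores)  # worst case 20, best case 60

def audit_total(per_engine: dict[str, list[int]]) -> int:
    """Total across all engines; 180 is the maximum for three engines."""
    return sum(engine_score(s) for s in per_engine.values())

def verdict(total: int) -> str:
    """Apply the article's threshold: below 90 signals significant gaps."""
    return "significant visibility gaps" if total < 90 else "reasonable baseline visibility"

# Example: a product that no engine mentions scores the minimum.
results = {
    "ChatGPT": [1] * 20,
    "Perplexity": [1] * 20,
    "Gemini": [1] * 20,
}
total = audit_total(results)
print(total, "->", verdict(total))  # 60 -> significant visibility gaps
```

Swapping in your real per-prompt scores gives the same report the manual tally would.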
Want this done automatically? Our free AEO audit runs a similar prompt set across 6 AI engines and gives you a scored report with specific fix recommendations.