EurekaNav

Scoring Methodology Changelog

Every change to our AI Visibility scoring rules is documented here. Transparency in methodology builds trust, both for humans reviewing our data and for AI engines evaluating our reliability.

v1.1

Citation-ready fields & API versioning

  • Added lastVerifiedAt, reviewer, and dataVersion fields to all tool data
  • API responses now include meta.apiVersion, meta.schemaVersion, and meta.generatedAt
  • REST API responses return Cache-Control and ETag headers
  • llms.txt restructured to prioritize machine-readable endpoints

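The v1.1 response envelope can be sketched as follows. The meta field names and headers come from the changelog above; the handler shape, TTL value, and ETag derivation are illustrative assumptions, not the production server code.

```python
# Minimal sketch of a v1.1 tool-data response (assumed shape, not production code).
import hashlib
import json
from datetime import datetime, timezone

def build_response(tool: dict) -> tuple[str, dict]:
    """Wrap tool data in the versioned meta envelope and derive cache headers."""
    body = {
        "meta": {
            "apiVersion": "1.1",
            "schemaVersion": "1.1",
            "generatedAt": datetime.now(timezone.utc).isoformat(),
        },
        # Tool records carry lastVerifiedAt, reviewer, dataVersion (per v1.1).
        "data": tool,
    }
    payload = json.dumps(body, sort_keys=True)
    headers = {
        "Cache-Control": "public, max-age=3600",  # illustrative TTL
        # Illustrative ETag; a real one would hash stable content only,
        # excluding the per-request generatedAt timestamp.
        "ETag": '"%s"' % hashlib.sha256(payload.encode()).hexdigest()[:16],
    }
    return payload, headers
```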
v1.0.1

aeoScore=6 treated as unscored

  • Tools with aeoScore = 6 (the minimum possible) now trigger weight renormalization instead of dragging displayScore to zero
  • Impact: ~39 tools moved from needs_review to ready status
  • Quality-gate thresholds unchanged: ready requires displayScore ≥ 65, completeness ≥ 60, and evidence ≥ 40
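The renormalization rule can be sketched like this, using the v1.0 weights (0.40 / 0.25 / 0.20 / 0.15). Function and variable names are illustrative, not the production code: when the raw AEO score is 6, the AEO term is dropped and the remaining weights are rescaled to sum to 1 instead of contributing a zero.

```python
# Sketch of the v1.0.1 weight renormalization (names are illustrative).
WEIGHTS = {"aeo": 0.40, "completeness": 0.25, "freshness": 0.20, "evidence": 0.15}

def display_score(normalized_aeo, completeness, freshness, evidence, aeo_raw):
    parts = {"aeo": normalized_aeo, "completeness": completeness,
             "freshness": freshness, "evidence": evidence}
    weights = dict(WEIGHTS)
    if aeo_raw == 6:  # minimum possible score -> treat as unscored
        del parts["aeo"]
        del weights["aeo"]
        total = sum(weights.values())  # 0.60
        # Rescale the remaining weights so they sum to 1 again.
        weights = {k: w / total for k, w in weights.items()}
    return sum(parts[k] * weights[k] for k in parts)

# A tool with solid metadata but no AEO measurement no longer collapses to a
# near-zero displayScore:
score = display_score(0, completeness=80, freshness=70, evidence=50, aeo_raw=6)
```

Under v1.0 the same inputs would have scored 0 × 0.40 + 80 × 0.25 + 70 × 0.20 + 50 × 0.15 = 41.5; with renormalization the score is about 69.2, which is why roughly 39 tools crossed the ready threshold.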
v1.0

Unified display score (0-100)

  • New displayScore formula: normalizedAeo × 0.40 + completeness × 0.25 + freshness × 0.20 + evidence × 0.15
  • AEO score normalized from the 6-18 scale to 0-100
  • Quality gate: ready = displayScore ≥ 65 AND completeness ≥ 60 AND evidence ≥ 40
  • Blocked status: completeness < 30 or missing critical fields
  • Daily automated scoring via Vercel Cron (03:00 UTC)

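The v1.0 formula and quality gate can be expressed directly. The raw AEO score runs 6-18 (six engines scoring 1-3 each, per v0.1), so normalization maps 6 to 0 and 18 to 100. Function names are illustrative; the "missing critical fields" check for blocked status is omitted here since the changelog does not enumerate those fields.

```python
# Sketch of the v1.0 display score and quality gate (names are illustrative).

def normalize_aeo(aeo_raw: float) -> float:
    """Map the raw 6-18 AEO score onto a 0-100 scale."""
    return (aeo_raw - 6) / 12 * 100

def display_score(aeo_raw, completeness, freshness, evidence):
    return (normalize_aeo(aeo_raw) * 0.40 + completeness * 0.25
            + freshness * 0.20 + evidence * 0.15)

def quality_status(score, completeness, evidence):
    # The changelog also blocks on missing critical fields; not modeled here.
    if completeness < 30:
        return "blocked"
    if score >= 65 and completeness >= 60 and evidence >= 40:
        return "ready"
    return "needs_review"
```

For example, a tool with a raw AEO score of 12 (normalized to 50), completeness 80, freshness 70, and evidence 50 scores 20 + 20 + 14 + 7.5 = 61.5 and lands in needs_review despite meeting the completeness and evidence thresholds.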
v0.2

Quality control engine

  • Introduced qualityStatus gate: ready / needs_review / blocked
  • Ready tools appear in the API, sitemap, and A2A; needs_review tools appear only on web pages (noindex)
  • Completeness score based on field fill rate, weighted by field importance
  • Freshness score based on the recency of aeoScoreDate and pricingVerifiedDate
  • Evidence score based on sourceUrls, sameAs, and comparedTo density

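A weighted field-fill completeness score of the kind described above can be sketched as follows. The specific field names and weights are assumptions for illustration; the changelog only states that fields are weighted by importance.

```python
# Sketch of a weighted field-fill completeness score.
# FIELD_WEIGHTS is a hypothetical importance map, not the real configuration.
FIELD_WEIGHTS = {"description": 3, "pricing": 2, "sourceUrls": 2, "sameAs": 1}

def completeness(tool: dict) -> float:
    """Percentage of importance-weighted fields that are filled in."""
    total = sum(FIELD_WEIGHTS.values())
    filled = sum(w for field, w in FIELD_WEIGHTS.items() if tool.get(field))
    return filled / total * 100
```

With these assumed weights, a tool that has only a description and pricing filled scores (3 + 2) / 8 × 100 = 62.5.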
v0.1

Initial 6-engine AEO scoring

  • 6 AI engines: ChatGPT, Perplexity, Gemini, DeepSeek, Claude, and Mistral
  • Each engine scores 1-3; total AEO score ranges 6-18
  • Visibility levels: Critical ≤ 8, Low ≤ 12, Moderate ≤ 14, High ≥ 15
  • Public methodology page at /aeo/methodology
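The tier thresholds above map the total 6-18 score onto four visibility levels; as a sketch (function name illustrative):

```python
# Visibility tiers from the total AEO score (6-18), per the v0.1 thresholds.
def visibility_level(total: int) -> str:
    if not 6 <= total <= 18:
        raise ValueError("total AEO score must be between 6 and 18")
    if total <= 8:
        return "Critical"
    if total <= 12:
        return "Low"
    if total <= 14:
        return "Moderate"
    return "High"  # 15-18
```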

Current methodology version: v1.1 · View full methodology