TL;DR — Each AI provider uses different training data and retrieval methods, so divergent scores reveal which content channels are working and which are not. Go to Overview > GEO Matrix with all providers visible to spot prompt-level gaps, then check Strategy > Brand Perception per provider to see if they even agree on what your brand does. Pro tip: optimize for the provider your target audience actually uses most, not the one where you already score highest.
The Question
“Why does my brand score differently across Perplexity, Gemini, and Claude?”

You check Qwairy and notice your brand scores 74 on Claude, 51 on Gemini, and 31 on Perplexity — on the exact same set of prompts. It looks inconsistent, but it isn’t arbitrary. Each provider has a fundamentally different architecture: Claude emphasizes long-form document understanding, Gemini integrates with Google’s knowledge graph and Search signals, and Perplexity performs live web retrieval with explicit citation. A gap between their scores tells you something specific about where your brand’s credibility lives — and where it doesn’t. Understanding this divergence is more actionable than chasing a single average score: it tells you which channels need reinforcement and which are already working. You might also be wondering:
- “Should I optimize for the provider where I score worst, or double down on where I score best?”
- “Does my score on Gemini reflect how I appear in Google Search?”
- “Is there a provider where all brands in my category score low?”
Where to Go in Qwairy
Start here: Overview > GEO Matrix
Navigate to Overview > GEO Matrix — leave the provider filter set to All Providers so every column is visible.
Scan each row (prompt) horizontally. A prompt where you score high on Claude but low on Perplexity indicates that the long-form web content supporting you is strong, but your real-time web presence (the pages Perplexity retrieves today) is weaker.
Go deeper: Overview > Performance
Navigate to Overview > Performance and use the Provider breakdown view.
This aggregates your scores across all prompts per provider, showing trend lines over time. Use the Period filter to check whether the divergence is stable or whether one provider’s score is moving while others stay flat — a moving score often indicates a recent content or algorithm change.
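If you prefer to check "moving vs. flat" numerically, a minimal sketch like the following computes the least-squares slope of a daily score series. It assumes you have exported per-day scores for each provider (Qwairy's export format is not specified here; the series below are fabricated for illustration):

```python
def score_slope(scores):
    """Least-squares slope of a daily score series, in points per day.

    A slope near zero means the provider's score is flat; a clearly
    positive or negative slope means it is moving.
    """
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

# Hypothetical 90-day series: Gemini climbing, Perplexity flat.
gemini = [50 + 0.2 * d for d in range(90)]
perplexity = [31.0] * 90

print(round(score_slope(gemini), 2))      # climbing: 0.2 points/day
print(round(score_slope(perplexity), 2))  # flat: 0.0
```

A persistent positive slope on one provider while the others sit near zero is the "one score is moving" pattern described above.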
Analyze narrative: Strategy > Brand Perception
Navigate to Strategy > Brand Perception and switch between providers using the provider toggle.
This view shows how each provider describes your brand: which attributes it emphasizes, which it ignores, and where sentiment differs. A high Claude score paired with a low Perplexity score often shows up here as Claude attributing premium positioning to your brand while Perplexity describes you neutrally or omits you from the category narrative.
What to Look For
GEO Matrix — Multi-Provider Column Comparison
The matrix is the fastest way to see cross-provider divergence at the prompt level. Do not average your scores — look at the distribution pattern.

| Pattern | What it tells you |
|---|---|
| High Claude, low Perplexity | Your long-form content and documentation are strong, but your live web footprint (indexed pages, recent articles, live citations) is weak |
| High Gemini, low Claude | Google’s knowledge graph signals (structured data, Google Business Profile, authoritative backlinks) favor you, but document-level training data is thinner |
| Low across all providers | Structural authority gap — no single channel is driving recognition |
| High Perplexity, low Gemini | Recent web content and active citation networks are strong, but Google’s historical data trails behind |
| Consistent across all providers | Your brand signal is stable and multi-channel — this is the goal |
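The patterns in that table can be turned into a rough automated check. This is a sketch, not Qwairy functionality: the field names and the cutoffs (15-point spread for "consistent", 40 for "low", matching the thresholds discussed later in this article) are illustrative assumptions:

```python
def classify_prompt(scores, low=40, tight=15):
    """Label the cross-provider pattern for one prompt.

    `scores` maps provider name -> visibility score (0-100).
    The `low` and `tight` cutoffs are illustrative, not Qwairy-defined.
    """
    spread = max(scores.values()) - min(scores.values())
    if all(s < low for s in scores.values()):
        return "low everywhere (structural authority gap)"
    if spread <= tight:
        return "consistent (multi-channel signal)"
    strongest = max(scores, key=scores.get)
    weakest = min(scores, key=scores.get)
    return f"divergent: strong on {strongest}, weak on {weakest} (spread {spread})"

print(classify_prompt({"claude": 74, "gemini": 51, "perplexity": 31}))
# divergent: strong on claude, weak on perplexity (spread 43)
```

Running a classifier like this over every row of the matrix surfaces which prompts drive the divergence instead of relying on a visual scan.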
Performance Dashboard — Provider Trend Lines
The dashboard reveals whether gaps are widening or narrowing over time. If your Gemini score has been climbing for three months while Perplexity stays flat, that’s a signal that Google-ecosystem content improvements (structured data, Search Console-friendly pages) are working, but web retrieval-facing content hasn’t caught up.

Pro Tip: Combine the Performance Dashboard trend view with a Topic filter to isolate whether the divergence is category-wide or specific to a product area. A brand that scores well on Claude for “enterprise security” but poorly on Perplexity for the same topic likely lacks recent press, reviews, or case studies on that topic.
Brand Perception — Convergence vs. Divergence
The most important qualitative signal is whether providers agree on what your brand is. If Claude describes you as “an enterprise-grade solution” and Perplexity describes you as “a small business tool,” you have a positioning divergence that content strategy needs to resolve — not just a scoring gap.

| Perception signal | Interpretation |
|---|---|
| Providers agree on category and positioning | Strong, consistent brand signal |
| Providers agree on category, disagree on tier | Authority gap in specific channels |
| Providers disagree on what your product does | Messaging or content architecture problem |
| One provider mentions competitors, another doesn’t | That provider’s training data is more competitive in your category |
Filters That Help
| Filter | How to use it for this question |
|---|---|
| Provider | Compare two providers side-by-side by toggling between them in the Matrix or Performance views |
| Period | Use a 90-day window to see trends, not just a snapshot — divergence often closes gradually |
| Topic / Tag | Identify whether divergence is universal or confined to a specific product area or use case |
How to Interpret the Results
Good result
Your scores fall within 15 points of each other across all major providers on the same prompt set, with consistent positive sentiment in Brand Perception across providers. Any remaining gap is explainable by a specific channel (e.g., you have no Google Business Profile, which suppresses Gemini but not Claude).

Needs attention
A spread of more than 30 points between your highest and lowest provider score on the same prompts, especially if Perplexity is the low scorer, suggests your real-time web presence is significantly weaker than your historical content presence. This is urgent because Perplexity and similar retrieval-augmented providers are gaining query share rapidly.

Example
Scenario: A wealth management platform scores 71 on Claude, 55 on Gemini, and 24 on Perplexity across prompts related to “best digital tools for portfolio rebalancing.”
- Open GEO Matrix with all providers visible. The Claude and Gemini columns show solid scores; the Perplexity column is consistently low. The team suspects Perplexity’s live retrieval is pulling pages that don’t mention them.
- Open Strategy > Brand Perception, toggle to Perplexity. The perception summary shows the brand is either absent from answers or mentioned only as an aside after three competitors. Claude and Gemini describe the brand as “a trusted wealth management platform for independent advisors.”
- Open Visibility > Citation Sources filtered to Perplexity. The cited URLs are a Kitces.com comparison article, a Barron’s “best of” list for advisor tools, and a NerdWallet review — none of which mention the company.
- Action: publish a detailed “portfolio rebalancing for financial advisors” use case page, pursue coverage in industry publications like Kitces or InvestmentNews, and update directory profiles on Barron’s and NerdWallet to explicitly address rebalancing and portfolio management workflows.
Go Further
Export per-provider score breakdown
Export your visibility scores broken down by AI provider as CSV or XLSX for cross-platform analysis
Cross-provider comparison chart
Build a cross-provider score comparison chart in Looker Studio using the performance-overview data source
Understand score calculation
Read the Performance Dashboard documentation to understand how visibility scores are calculated per provider
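If you take the CSV-export route, a sketch like the following rebuilds the provider-comparison matrix locally with pandas. The column names `prompt`, `provider`, and `score` are assumptions for illustration, not a documented Qwairy export schema — adjust them to match the actual file:

```python
import pandas as pd

# Stand-in for pd.read_csv("qwairy_export.csv"): one row per prompt/provider pair.
# Column names are hypothetical; map them to the real export before use.
rows = [
    {"prompt": "portfolio rebalancing tools", "provider": "claude", "score": 71},
    {"prompt": "portfolio rebalancing tools", "provider": "gemini", "score": 55},
    {"prompt": "portfolio rebalancing tools", "provider": "perplexity", "score": 24},
]
df = pd.DataFrame(rows)

# One row per prompt, one column per provider, plus the high-low spread.
matrix = df.pivot(index="prompt", columns="provider", values="score")
matrix["spread"] = matrix.max(axis=1) - matrix.min(axis=1)
print(matrix)
```

Sorting the result by `spread` descending gives you the prompts with the widest cross-provider gaps, which is the same view the GEO Matrix provides but in a form you can feed into Looker Studio or a spreadsheet.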
Related Questions
What is my overall AI visibility score and what drives it?
Understand the components behind your score before diagnosing cross-provider gaps.
Which AI provider gives my brand the best sentiment and positioning?
Go beyond scores to understand which provider frames your brand most favorably.
How does Google AI Overview treat my brand compared to traditional Google Search?
Understand the specific Gemini and AI Overview divergence in detail.

