TL;DR — Each AI provider uses different training data and retrieval methods, so divergent scores reveal which content channels are working and which are not. Go to Overview > GEO Matrix with all providers visible to spot prompt-level gaps, then check Strategy > Brand Perception per provider to see if they even agree on what your brand does. Pro tip: optimize for the provider your target audience actually uses most, not the one where you already score highest.

The Question

“Why does my brand score differently across Perplexity, Gemini, and Claude?”
You check Qwairy and notice your brand scores 74 on Claude, 51 on Gemini, and 31 on Perplexity — on the exact same set of prompts. It looks inconsistent, but it isn’t arbitrary. Each provider has a fundamentally different architecture: Claude emphasizes long-form document understanding, Gemini integrates with Google’s knowledge graph and Search signals, and Perplexity performs live web retrieval with explicit citation. A gap between their scores tells you something specific about where your brand’s credibility lives — and where it doesn’t. Understanding this divergence is more actionable than chasing a single average score. It tells you which channels need reinforcement and which are already working. You might also be wondering:
  • “Should I optimize for the provider where I score worst, or double down on where I score best?”
  • “Does my score on Gemini reflect how I appear in Google Search?”
  • “Is there a provider where all brands in my category score low?”

Where to Go in Qwairy

1. Start here: Overview > GEO Matrix

Navigate to Overview > GEO Matrix — leave the provider filter set to All Providers so every column is visible. Scan each row (prompt) horizontally. A prompt where you score high on Claude but low on Perplexity indicates that the long-form web content supporting you is strong, but your real-time web presence (the pages Perplexity retrieves today) is weaker.
2. Go deeper: Overview > Performance

Navigate to Overview > Performance and use the Provider breakdown view. This aggregates your scores across all prompts per provider, showing trend lines over time. Use the Period filter to check whether the divergence is stable or whether one provider’s score is moving while others stay flat — a moving score often indicates a recent content or algorithm change.
3. Analyze narrative: Strategy > Brand Perception

Navigate to Strategy > Brand Perception and switch between providers using the provider toggle. This view shows how each provider describes your brand: which attributes it emphasizes, which it ignores, and where sentiment differs. A high Claude score paired with a low Perplexity score often manifests here as Claude attributing premium positioning to your brand while Perplexity describes you neutrally or omits you from the category narrative.
4. Share findings: Workspace > Shares

Use Workspace > Shares to create a locked shared view filtered to the specific provider comparison, then send it to leadership or your content team. Recipients do not need a Qwairy account.

What to Look For

GEO Matrix — Multi-Provider Column Comparison

The matrix is the fastest way to see cross-provider divergence at the prompt level. Do not average your scores — look at the distribution pattern.
Pattern | What it tells you
High Claude, low Perplexity | Your long-form content and documentation are strong, but your live web footprint (indexed pages, recent articles, live citations) is weak
High Gemini, low Claude | Google's knowledge graph signals (structured data, Google Business Profile, authoritative backlinks) favor you, but document-level training data is thinner
Low across all providers | Structural authority gap — no single channel is driving recognition
High Perplexity, low Gemini | Recent web content and active citation networks are strong, but Google's historical data trails behind
Consistent across all providers | Your brand signal is stable and multi-channel — this is the goal
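The patterns above amount to a simple rule of thumb. Here is a minimal sketch of that logic in Python — the `classify_pattern` helper, the score dict shape, and the 60/40/15-point thresholds are all illustrative assumptions, not part of Qwairy's product or API:

```python
def classify_pattern(scores: dict[str, int], high: int = 60, low: int = 40) -> str:
    """Map one prompt's per-provider scores to a divergence pattern.

    `scores` is a hypothetical dict such as {"claude": 74, "gemini": 51, "perplexity": 31};
    the 60 (high), 40 (low), and 15-point (consistent) cutoffs are illustrative only.
    """
    vals = scores.values()
    c, g, p = scores["claude"], scores["gemini"], scores["perplexity"]
    if max(vals) - min(vals) <= 15:
        if max(vals) <= low:
            return "low across all providers: structural authority gap"
        return "consistent across providers: stable multi-channel signal"
    if c >= high and p <= low:
        return "high Claude, low Perplexity: weak live web footprint"
    if g >= high and c <= low:
        return "high Gemini, low Claude: knowledge-graph strength, thinner documents"
    if p >= high and g <= low:
        return "high Perplexity, low Gemini: fresh web content, lagging Google signals"
    return "mixed: inspect the per-prompt rows in the GEO Matrix"

# The example scores from this article fall into the first divergence bucket:
print(classify_pattern({"claude": 74, "gemini": 51, "perplexity": 31}))
```

Adjust the thresholds to match your own category's score range before relying on the labels.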

Performance Dashboard — Provider Trend Lines

The dashboard reveals whether gaps are widening or narrowing over time. If your Gemini score has been climbing for three months while Perplexity stays flat, that’s a signal that Google-ecosystem content improvements (structured data, Search Console-friendly pages) are working, but web retrieval-facing content hasn’t caught up.
Pro Tip: Combine the Performance Dashboard trend view with a Topic filter to isolate whether the divergence is category-wide or specific to a product area. A brand that scores well on Claude for “enterprise security” but poorly on Perplexity for the same topic likely lacks recent press, reviews, or case studies on that topic.
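If you export or jot down weekly scores, a least-squares slope per provider makes "widening vs. narrowing" concrete. A minimal sketch — the weekly series below are made-up numbers standing in for roughly three months of check-ins, not Qwairy data:

```python
def slope(series: list[float]) -> float:
    """Least-squares slope of scores over equally spaced check-ins (points per period)."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical weekly scores: Gemini climbing, Perplexity flat.
gemini = [48, 50, 53, 55, 58, 60, 62, 64, 66, 68, 70, 72]
perplexity = [30, 31, 30, 32, 31, 30, 31, 32, 31, 30, 31, 31]

gap_trend = slope(gemini) - slope(perplexity)
print(f"gap is {'widening' if gap_trend > 0 else 'narrowing'} by {gap_trend:.1f} pts/week")
```

A positive `gap_trend` here would mean your Google-ecosystem work is compounding while retrieval-facing content stands still, which matches the scenario described above.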

Brand Perception — Convergence vs. Divergence

The most important qualitative signal is whether providers agree on what your brand is. If Claude describes you as “an enterprise-grade solution” and Perplexity describes you as “a small business tool,” you have a positioning divergence that content strategy needs to resolve — not just a scoring gap.
Perception signal | Interpretation
Providers agree on category and positioning | Strong, consistent brand signal
Providers agree on category, disagree on tier | Authority gap in specific channels
Providers disagree on what your product does | Messaging or content architecture problem
One provider mentions competitors, another doesn't | That provider's training data is more competitive in your category

Filters That Help

Filter | How to use it for this question
Provider | Compare two providers side-by-side by toggling between them in the Matrix or Performance views
Period | Use a 90-day window to see trends, not just a snapshot — divergence often closes gradually
Topic / Tag | Identify whether divergence is universal or confined to a specific product area or use case

How to Interpret the Results

Good result

Your brand scores within 15 points of each other across all major providers on the same prompt set, with consistent positive sentiment in Brand Perception across providers. Any remaining gap is explainable by a specific channel (e.g., you have no Google Business Profile, which suppresses Gemini but not Claude).

Needs attention

A spread of more than 30 points between your highest and lowest provider score on the same prompts, especially if Perplexity is the low-scorer, suggests your real-time web presence is significantly weaker than your historical content presence. This is urgent because Perplexity and similar retrieval-augmented providers are gaining query share rapidly.
Do not optimize exclusively for the provider where you already score highest. A 74 on Claude may feel good, but if Perplexity is capturing the majority of your target audience’s AI queries, a 31 on Perplexity is the number that matters most for actual traffic and leads. Score distribution should follow your audience’s provider distribution.
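"Score distribution should follow your audience's provider distribution" can be made concrete with a weighted average. A minimal sketch — the provider shares below are invented placeholders you would replace with your own audience research, and the 30-point spread threshold comes from this guide:

```python
def audience_weighted_score(scores: dict[str, float], shares: dict[str, float]) -> float:
    """Weight per-provider GEO scores by the share of your audience on each provider."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(scores[p] * shares[p] for p in scores)

scores = {"claude": 74, "gemini": 51, "perplexity": 31}

spread = max(scores.values()) - min(scores.values())
print(f"spread = {spread} points ({'needs attention' if spread > 30 else 'acceptable'})")

# Hypothetical audience mostly on Perplexity: the low score dominates.
print(audience_weighted_score(scores, {"claude": 0.2, "gemini": 0.2, "perplexity": 0.6}))  # 43.6
# Same raw scores, audience mostly on Claude: looks much healthier.
print(audience_weighted_score(scores, {"claude": 0.6, "gemini": 0.2, "perplexity": 0.2}))  # 60.8
```

The same three raw scores produce very different effective visibility depending on where your audience actually asks its questions, which is the point of the paragraph above.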

Example

Scenario: A wealth management platform scores 71 on Claude, 55 on Gemini, and 24 on Perplexity across prompts related to “best digital tools for portfolio rebalancing.”
  1. Open GEO Matrix with all providers visible. The Claude and Gemini columns show solid scores; the Perplexity column is consistently low. The team suspects Perplexity’s live retrieval is pulling pages that don’t mention them.
  2. Open Strategy > Brand Perception, toggle to Perplexity. The perception summary shows the brand is either absent from answers or mentioned only as an aside after three competitors. Claude and Gemini describe the brand as “a trusted wealth management platform for independent advisors.”
  3. Open Visibility > Citation Sources filtered to Perplexity. The cited URLs are a Kitces.com comparison article, a Barron’s “best of” list for advisor tools, and a NerdWallet review — none of which mention the company.
  4. Action: publish a detailed “portfolio rebalancing for financial advisors” use case page, pursue coverage in industry publications like Kitces or InvestmentNews, and update directory profiles on Barron’s and NerdWallet to explicitly address rebalancing and portfolio management workflows.

Go Further