TL;DR — Different AI providers describe your brand with different sentiment because they draw from different training data and retrieval sources. Go to Overview > Performance with the provider breakdown view to rank providers by sentiment score, then dive into Insights > Sentiment Analysis per provider to see which specific attributes are praised or criticized. Pro tip: export your Brand Perception data for each provider and compare them side by side — the overlap is your stable brand signal, and the divergence is your content gap to close.

The Question

“Which AI provider gives my brand the best sentiment and positioning?”
Not all AI providers describe your brand the same way. Claude might frame you as a premium, enterprise-grade solution. Perplexity might describe you neutrally as “one option among several.” ChatGPT might position you primarily as a cost-effective alternative to a larger competitor. Gemini might rarely mention you at all. These differences are not random — they reflect where and how each provider has encountered information about your brand. Knowing which provider treats you most favorably tells you two things: where to direct traffic from AI channels, and which content signals are working so you can replicate them on providers where you underperform. You might also be wondering:
  • “Can I influence how a specific AI provider describes my brand?”
  • “Is positive sentiment from one provider more valuable than another based on market share?”
  • “Why does one provider call me ‘enterprise-grade’ while another calls me ‘affordable’?”

Where to Go in Qwairy

1. Start here: Overview > Performance

Navigate to Overview > Performance and activate the provider breakdown view. This gives you a ranked comparison of your average visibility score and sentiment indicator per provider across all tracked prompts. Sort providers by sentiment score (positive/neutral/negative ratio) rather than visibility score alone — a provider where you score 65 but sentiment is mostly negative is worse than a provider where you score 50 with consistently positive sentiment.
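The ranking logic described above can be sketched in a few lines. The data structure and scores below are invented for illustration and are not Qwairy's actual export schema:

```python
# Hypothetical per-provider metrics; field names are illustrative,
# not Qwairy's real export format.
providers = {
    "Claude":     {"visibility": 65, "positive": 0.30, "neutral": 0.25, "negative": 0.45},
    "Perplexity": {"visibility": 50, "positive": 0.70, "neutral": 0.25, "negative": 0.05},
}

def sentiment_score(metrics):
    # Net sentiment: share of positive mentions minus share of negative ones.
    return metrics["positive"] - metrics["negative"]

# Sort by sentiment, not visibility alone.
ranked = sorted(providers, key=lambda p: sentiment_score(providers[p]), reverse=True)
print(ranked)  # ['Perplexity', 'Claude']
```

Note how Perplexity (score 50, mostly positive) outranks Claude (score 65, mostly negative), matching the guidance above.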

2. Go deep on sentiment: Insights > Sentiment Analysis

Navigate to Insights > Sentiment Analysis and toggle between providers. This view breaks down positive, neutral, and negative sentiment ratios per provider, identifies which specific attributes drive sentiment in each direction, and surfaces the exact phrases AI providers use when describing your brand. Look for asymmetry: the attribute that Claude praises (“robust API documentation”) may be something Perplexity never mentions.

3. Understand narrative: Strategy > Brand Perception

Navigate to Strategy > Brand Perception and compare provider-by-provider. Brand Perception shows the full narrative each provider constructs around your brand: positioning tier (premium / mid-market / budget), primary use case attribution, competitive framing (standalone recommendation vs. “also consider”), and which competitors appear alongside you in each provider’s answers. The provider where you have the best perception score is the one whose training data and retrieval sources most closely match the content signals you’ve built.

4. Validate with raw data: Visibility > Responses

Navigate to Visibility > Responses and sample responses from the provider that scores highest for sentiment. Read the actual text. Confirm that the positive sentiment in the dashboard reflects genuine favorable positioning — not just the absence of negative language. There is a meaningful difference between “Brand X is an excellent choice for teams that prioritize security” and “Brand X can be used for this use case.”

5. Complete the picture: MCP + Looker Studio

Connect the Qwairy MCP server for:
  • Automated provider sentiment alerts when a provider’s sentiment drops below a threshold
  • Programmatic access to per-provider sentiment scores for integration into marketing dashboards
Connect Looker Studio for:
  • Long-term provider sentiment trend tracking across quarters
  • Cross-referencing with marketing campaign timelines to measure content influence on sentiment
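The threshold-alert idea above can be sketched as a simple check, assuming you can already fetch per-provider sentiment ratios (for example via the Qwairy MCP server). The data and the `check_alerts` helper are hypothetical stand-ins, not real Qwairy functions:

```python
THRESHOLD = 0.50  # minimum acceptable positive-sentiment ratio (illustrative)

def check_alerts(scores, threshold=THRESHOLD):
    """Return providers whose positive-sentiment ratio fell below the threshold."""
    return [provider for provider, ratio in scores.items() if ratio < threshold]

# Invented latest readings, one positive-sentiment ratio per provider.
latest = {"Claude": 0.68, "ChatGPT": 0.35, "Perplexity": 0.30}
print(check_alerts(latest))  # ['ChatGPT', 'Perplexity']
```

In practice you would run this on a schedule and route the returned list to whatever alerting channel your dashboards already use.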

What to Look For

Performance Dashboard — Provider Sentiment Ranking

The dashboard gives you the comparative view across all providers simultaneously. The key is to look at sentiment in combination with visibility, not in isolation.
Each provider pattern and what it tells you:
  • High visibility + positive sentiment: this provider is your best current AI channel — prioritize driving traffic from it
  • High visibility + neutral sentiment: you’re present but not differentiated — refine your positioning content
  • High visibility + negative sentiment: damaging association — this is urgent; investigate which claims are negative and why
  • Low visibility + positive sentiment: untapped opportunity — the provider is favorable but rarely surfaces you; increase topical coverage
  • Low visibility + negative or neutral sentiment: lowest priority unless this provider has high market share in your target audience
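The patterns above amount to a small decision table. A minimal sketch, with an illustrative visibility threshold of 50 that you would tune to your own score distribution:

```python
def classify(visibility, sentiment_label, vis_threshold=50):
    """Map a provider's visibility score and dominant sentiment to a priority.

    vis_threshold is an assumption for illustration, not a Qwairy default.
    """
    high = visibility >= vis_threshold
    if high and sentiment_label == "positive":
        return "prioritize: best current AI channel"
    if high and sentiment_label == "neutral":
        return "refine positioning content"
    if high and sentiment_label == "negative":
        return "urgent: investigate negative claims"
    if not high and sentiment_label == "positive":
        return "untapped opportunity: increase topical coverage"
    return "lowest priority"

print(classify(65, "negative"))  # urgent: investigate negative claims
```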

Sentiment Analysis — Attribute-Level Breakdown

The attribute breakdown is where the real strategic insight lives. Different providers often praise or ignore entirely different aspects of your brand.
Each attribute pattern and its interpretation:
  • Praised by Claude, ignored by Perplexity: that attribute comes from long-form content or documentation; Perplexity’s retrieval isn’t surfacing it
  • Praised by Gemini, criticized by ChatGPT: Google’s ecosystem signals (reviews, structured data) support this attribute; ChatGPT’s training data tells a different story
  • Negative sentiment concentrated in one provider: investigate what sources that provider is drawing from — often a critical review or forum discussion
  • Consistent attribute praised across providers: your core differentiator — lead with it in content and messaging

Brand Perception — Positioning Tier Comparison

One of the most strategically significant outputs of Brand Perception is the positioning tier each provider assigns to your brand (implicitly or explicitly). A brand described as “enterprise-grade” on one provider and “a good option for small teams” on another has a positioning consistency problem.
Pro Tip: Export your Brand Perception data for each provider and place them side by side. The overlap (attributes all providers agree on) represents your stable brand signal. The divergence represents your content gap — areas where the story you want to tell hasn’t reached one or more providers’ training data or retrieval corpus.
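The overlap/divergence analysis in the tip above is plain set arithmetic once you have each provider's attribute list. The attribute lists below are invented for illustration:

```python
# Hypothetical attribute lists pulled from each provider's Brand Perception export.
attributes = {
    "Claude":     {"robust API documentation", "enterprise-grade", "strong support"},
    "Gemini":     {"enterprise-grade", "strong support"},
    "Perplexity": {"enterprise-grade", "affordable"},
}

# Stable brand signal: attributes every provider agrees on.
stable_signal = set.intersection(*attributes.values())

# Content gap per provider: attributes surfaced elsewhere but missing here.
all_mentioned = set.union(*attributes.values())
gaps = {provider: all_mentioned - attrs for provider, attrs in attributes.items()}

print(stable_signal)       # the attribute(s) all three providers share
print(gaps["Perplexity"])  # what Perplexity's retrieval isn't surfacing
```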

Filters That Help

How to use each filter for this question:
  • Provider: compare providers individually in Sentiment Analysis and Brand Perception — do not average them
  • Period (90 days): sentiment shifts are gradual; a 90-day window reveals trends that a 30-day snapshot misses
  • Topic / Tag: a provider might be positive about your security features but neutral about your integrations — topic-level analysis reveals this
  • Competitor: compare your sentiment vs. a competitor’s sentiment on the same provider to understand relative positioning

How to Interpret the Results

Good result

At least two major providers (e.g., Claude and Gemini) show predominantly positive sentiment for your brand with a clear and consistent positioning narrative. Your best-performing provider has a positive sentiment ratio above 65% and describes you using the attributes you’ve intentionally built content around. The gap between your best and worst provider is less than 25 sentiment points, suggesting a broadly coherent brand signal.
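The 25-point coherence check above is a one-liner. Using the sentiment figures from the example scenario later in this article:

```python
# Positive-sentiment percentages per provider (from the example scenario below
# in this article; the 25-point threshold is the rule of thumb stated above).
scores = {"Claude": 68, "Gemini": 56, "ChatGPT": 35, "Perplexity": 30, "Mistral": 44}

spread = max(scores.values()) - min(scores.values())
coherent = spread < 25
print(spread, coherent)  # 38 False — the brand signal diverges too much across providers
```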

Needs attention

Only one provider shows positive sentiment, and it’s not the provider with the highest market share in your category. Two or more providers show neutral or negative sentiment, with different attributes triggering the negativity — suggesting multiple disconnected content problems rather than one fixable issue. Your best-sentiment provider describes you with attributes you’d prefer not to lead with (e.g., “cheapest option” when you’re trying to move upmarket).
A single provider with very positive sentiment can create false confidence. If that provider represents a small fraction of AI query volume in your target market, optimizing exclusively for it has limited commercial impact. Always weight sentiment results by the estimated query share each provider holds for your audience. A neutral score on Perplexity may matter more than a perfect score on a niche provider if Perplexity is where your buyers ask their questions.
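Weighting sentiment by query share, as recommended above, is a simple product per provider. The shares and scores below are invented to mirror the Perplexity-vs-niche-provider example:

```python
# Estimated share of your audience's AI queries per provider (illustrative
# guesses, not measured values) and each provider's positive-sentiment ratio.
query_share = {"Perplexity": 0.40, "NicheBot": 0.05}
sentiment   = {"Perplexity": 0.50, "NicheBot": 0.95}

# Weighted impact: how much each provider's sentiment actually reaches buyers.
impact = {p: sentiment[p] * query_share[p] for p in query_share}
# Perplexity lands around 0.20 weighted impact vs. roughly 0.05 for NicheBot:
# the merely neutral score on the high-share provider matters more.
```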

Example

Scenario: A clean energy company tracks sentiment across five providers. Performance Dashboard shows: Claude (70 score, 68% positive sentiment), Gemini (61 score, 56% positive), ChatGPT (45 score, 35% positive), Perplexity (38 score, 30% positive), and Mistral (31 score, 44% positive).
  1. Open Insights > Sentiment Analysis, toggle to Claude. Positive attributes: “industry-leading solar panel efficiency,” “transparent carbon offset reporting,” “strong utility partnerships.” Negative mentions: rare, mostly about installation lead times.
  2. Toggle to ChatGPT. Positive attributes: almost none surfaced. Neutral phrasing dominates: “one option for commercial solar,” “also offers battery storage.” One negative cluster: references to a 2024 Glassdoor thread about rapid scaling challenges.
  3. Toggle to Perplexity. Neutral with a negative skew. Perplexity is surfacing a GreenTech Media article comparing clean energy providers where the company ranked fourth behind three incumbents. Perplexity cites this article in multiple responses.
  4. Action plan: Claude’s positive signal comes from detailed technical documentation and sustainability reports — replicate this structure on landing pages that ChatGPT and Perplexity are more likely to retrieve. Address the Glassdoor thread with a public company response. Push for updated coverage in GreenTech Media or Canary Media to dilute the outdated comparison in Perplexity’s citation pool.

Go Further