TL;DR — AI sentiment can drop sharply after a crisis and take weeks or months to recover, even after the issue is resolved. Go to Insights > Sentiment Analysis with a custom date range bracketing the event to see the drop and recovery curve, then compare before/after snapshots in Strategy > Brand Perception to identify which themes the crisis introduced. Check the Provider breakdown to find which AI models are still propagating the old narrative. Pro tip: export a Brand Perception snapshot before any major launch or expected PR moment so you have a clean baseline to measure against.

The Question

“How has AI sentiment changed after a PR crisis or product launch?”
Major brand events (a public controversy, a viral product release, a funding announcement, a data breach, a CEO departure) leave traces in AI-generated content. Unlike search rankings, which update within days, AI perception can lag by weeks and persist for months after the event is resolved. Understanding whether sentiment shifted, when it shifted, and whether it has recovered is essential for crisis communications teams, PR agencies, and brand managers.

This question is also relevant for positive events: a successful launch, a major press feature, or an award. You want to confirm that the AI narrative has absorbed the positive signal and is reflecting it back to users asking about your brand. You might also be wondering:
  • “Has AI sentiment recovered since our data breach was resolved?”
  • “Did our product launch create a measurable shift in how AI describes us?”
  • “Are specific AI providers still referencing the crisis narrative after others have moved on?”

Where to Go in Qwairy

1. Start here: Insights > Sentiment Analysis

Navigate to Insights > Sentiment Analysis — your primary view for tracking sentiment over time. Set the Period selector to span the event: start 4 weeks before the event date and extend to today. The time-series chart will show whether sentiment shifted at, immediately after, or gradually following the event.
2. Compare snapshots: Strategy > Brand Perception

Cross-reference with Strategy > Brand Perception to compare attribute themes before and after the event. Use the Period filter to create two snapshots: one ending on the event date and one starting from it. Compare the dominant theme clusters — a crisis will typically introduce new negative themes; a launch will introduce new product attributes.
3. Trace evolution: Overview > Performance

Open Overview > Performance to overlay sentiment with visibility and position metrics. A crisis often causes visibility to spike (your brand is mentioned more) while sentiment drops — the performance view makes that combination visible.
4. Export for reporting: Workspace > Exports

Use Workspace > Exports to pull a CSV of sentiment scores by date and provider for the relevant period. This is the data you need for a post-crisis report, a board update, or agency reporting.
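If you analyze the export outside Qwairy, a few lines of Python can compute the before/after shift. This is a minimal sketch: the column names (`date`, `provider`, `sentiment`) and the sample rows are assumptions standing in for the real export, not Qwairy's documented schema.

```python
import csv
import io
from datetime import date
from statistics import mean

# Inline sample standing in for a Workspace > Exports CSV.
# Columns and values are illustrative assumptions.
raw = """date,provider,sentiment
2024-03-01,ChatGPT,70
2024-03-01,Claude,68
2024-03-20,ChatGPT,45
2024-03-20,Claude,50
"""

EVENT = date(2024, 3, 10)  # the crisis / launch date

# Split scores into pre-event and post-event buckets
pre, post = [], []
for row in csv.DictReader(io.StringIO(raw)):
    bucket = pre if date.fromisoformat(row["date"]) < EVENT else post
    bucket.append(float(row["sentiment"]))

print(f"pre-event avg:  {mean(pre):.1f}")           # 69.0
print(f"post-event avg: {mean(post):.1f}")          # 47.5
print(f"delta: {mean(post) - mean(pre):+.1f}")      # -21.5
```

The same split generalizes to per-provider averages by keying the buckets on `row["provider"]`.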

What to Look For

Sentiment Analysis — Time-Series View

The time-series chart in Sentiment Analysis plots your average sentiment score (0–100) across all collected responses, broken down by date. Each data point represents the average sentiment of all responses collected on that day or in that week.
Element | What it tells you
Sentiment trend line | The direction of travel: recovering, stable, or declining
Drop date | When the negative shift first appeared in AI-generated content
Recovery curve | How quickly (or slowly) sentiment is returning to the pre-event baseline
Provider breakdown | Whether the shift is uniform across ChatGPT, Claude, and Perplexity, or localized to one
Score floor | The lowest point reached; useful for calibrating severity
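If you want to pin down the drop date and score floor programmatically from exported daily averages, a sketch like the following works. The dates, scores, and the 10-point drop threshold are illustrative assumptions, not Qwairy defaults.

```python
# Daily sentiment averages (illustrative values)
series = {
    "2024-03-05": 70, "2024-03-06": 69, "2024-03-07": 71,
    "2024-03-08": 52, "2024-03-09": 41, "2024-03-10": 44,
    "2024-03-11": 55, "2024-03-12": 63,
}
baseline = 70        # pre-event average
DROP_THRESHOLD = 10  # points below baseline that count as a shift

# Drop date: first day the score falls at least DROP_THRESHOLD below baseline
drop_date = next(d for d, s in series.items() if s <= baseline - DROP_THRESHOLD)

# Score floor: the lowest point reached, and when
floor_date, floor = min(series.items(), key=lambda kv: kv[1])

print(drop_date)           # 2024-03-08
print(floor_date, floor)   # 2024-03-09 41
```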

Brand Perception — Comparison Snapshots

Brand Perception is not natively a time-series tool, but using the period filter to create before/after snapshots gives you a structured theme comparison.
Element | What it tells you
New themes post-event | Attributes that appeared after the event (e.g., “data privacy concerns”, “rapid growth”)
Lost themes post-event | Positive attributes that disappeared from AI descriptions following a crisis
Attribute strength delta | How much an attribute’s prominence changed between the two periods
Pro Tip: Before a major launch or expected PR moment, take a manual Brand Perception snapshot by exporting the current attribute data. This gives you a clean “before” baseline to compare against post-event data — the time filter alone may not perfectly isolate the window you want.
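If you export the two snapshots, the before/after theme comparison reduces to a dictionary diff. A minimal sketch, assuming each snapshot is a simple theme-to-strength mapping; the theme names and strengths below are illustrative, not real Qwairy output.

```python
# Two exported Brand Perception snapshots (illustrative data)
before = {"reliability": 74, "ease of use": 62, "pricing": 40}
after  = {"reliability": 38, "ease of use": 60, "outage": 51}

# New themes: present after the event but not before
new_themes = sorted(set(after) - set(before))

# Lost themes: present before but gone after
lost_themes = sorted(set(before) - set(after))

# Strength delta for themes present in both snapshots
deltas = {t: after[t] - before[t] for t in set(before) & set(after)}

print(new_themes)             # ['outage']
print(lost_themes)            # ['pricing']
print(deltas["reliability"])  # -36
```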

Filters That Help

Filter | How to use it for this question
Period | Use a custom date range that brackets the event; avoid preset periods like “Last 30 days”, which may straddle the event window
Provider | Identify whether recovery is uniform or whether specific AI models are still propagating the crisis narrative
Topic / Tag | If the crisis was topic-specific (e.g., a product recall, a security issue), isolate those topics to avoid diluting the signal with unrelated responses

How to Interpret the Results

Good result

Sentiment shows a clear pre-event baseline (e.g., average 72), a drop at or near the event date (to 48), and a recovery curve that reaches or exceeds the pre-event level within 4–8 weeks. Brand Perception themes introduced by the crisis (negative attributes) fade from the top clusters as the period progresses. All providers converge on the recovered narrative within the same timeframe.
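One way to operationalize “recovered within 4–8 weeks” is to scan weekly averages for the first week at or near the pre-event baseline. A sketch under assumed inputs; the 2-point tolerance and the weekly numbers are illustrative choices, not product defaults.

```python
def has_recovered(weekly, baseline, tolerance=2):
    """Return the first week number at or above (baseline - tolerance),
    or None if sentiment never gets back within tolerance."""
    for week, score in weekly:
        if score >= baseline - tolerance:
            return week
    return None

# Weekly sentiment averages after the event (illustrative)
weekly = [(1, 48), (2, 55), (3, 63), (4, 71)]

print(has_recovered(weekly, baseline=72))       # 4
print(has_recovered(weekly[:3], baseline=72))   # None
```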

Needs attention

Sentiment dropped and has not recovered after 8+ weeks. The crisis-related themes (e.g., “lawsuit”, “controversy”, “security breach”) remain in the top Brand Perception clusters. One or more AI providers continue to describe the crisis in the present tense rather than the past tense. The Performance Dashboard shows that visibility spiked during the crisis but has not converted back into positive sentiment now that visibility has normalized.
AI sentiment scores are averages across many responses and prompts. A single viral negative article that dominates retrieval can suppress your overall score even if most responses remain neutral. Drill into the Responses view and read the specific answers contributing to the negative score before drawing conclusions about the breadth of the problem.

Example

Scenario: A fintech company experienced a payment processing outage that generated significant press coverage for 72 hours. Three weeks later, they want to know whether AI sentiment has recovered and which providers are still referencing the outage.
  1. Open Insights > Sentiment Analysis, set Period = custom range from 6 weeks before the outage to today. The time-series shows a sentiment drop from 68 to 41 during the outage week, followed by a gradual recovery to 59 — still 9 points below the pre-event baseline.
  2. Apply the Provider filter to compare providers. Perplexity (which uses live web retrieval) has largely recovered to 65. ChatGPT and Claude remain at 54–57, suggesting their training or cached data still includes the outage narrative.
  3. Open Visibility > Responses and filter by Provider = “ChatGPT” and Period = “Last 7 days”. Read responses mentioning the brand. Several responses include a sentence such as “the company faced a significant outage in [month], raising questions about reliability” — confirming that the outage is still present in recent ChatGPT outputs.
  4. Open Strategy > Brand Perception and compare the Period = “Pre-outage” snapshot against Period = “Current”. The theme “reliability” has dropped from strength 74 to strength 38. The theme “outage” has appeared with strength 51.
  5. Export this data via Workspace > Exports and include it in the monthly communications report with a recovery plan targeting the content and citation strategies needed to rebuild the “reliability” attribute.
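Step 2's provider comparison can be reduced to a quick gap calculation against the pre-outage baseline. A sketch reusing the example's numbers; the 5-point “lagging” threshold is an assumption you would tune to your own tolerance.

```python
baseline = 68  # pre-outage average from step 1

# Current per-provider averages from the Provider breakdown (step 2)
current = {"Perplexity": 65, "ChatGPT": 54, "Claude": 57}

# Gap vs. baseline, and which providers are still lagging
gaps = {p: s - baseline for p, s in current.items()}
lagging = sorted(p for p, g in gaps.items() if g < -5)

print(gaps["Perplexity"])  # -3
print(lagging)             # ['ChatGPT', 'Claude']
```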

Go Further