TL;DR — AI sentiment can drop sharply after a crisis and take weeks or months to recover, even after the issue is resolved. Go to Insights > Sentiment Analysis with a custom date range bracketing the event to see the drop and recovery curve, then compare before/after snapshots in Strategy > Brand Perception to identify which themes the crisis introduced. Check the Provider breakdown to find which AI models are still propagating the old narrative. Pro tip: export a Brand Perception snapshot before any major launch or expected PR moment so you have a clean baseline to measure against.
The Question
“How has AI sentiment changed after a PR crisis or product launch?”

Major brand events — a public controversy, a viral product release, a funding announcement, a data breach, a CEO departure — leave traces in AI-generated content. Unlike search rankings, which update within days, AI perception can lag by weeks and persist for months after the event itself is resolved. Understanding whether sentiment shifted, when it shifted, and whether it has recovered is essential for crisis communications teams, PR agencies, and brand managers. This question is also relevant for positive events: a successful launch, a major press feature, or an award. You want to confirm that the AI narrative has absorbed the positive signal and is reflecting it back to users asking about your brand. You might also be wondering:
- “Has AI sentiment recovered since our data breach was resolved?”
- “Did our product launch create a measurable shift in how AI describes us?”
- “Are specific AI providers still referencing the crisis narrative after others have moved on?”
Where to Go in Qwairy
Start here: Insights > Sentiment Analysis
Navigate to Insights > Sentiment Analysis — your primary view for tracking sentiment over time.
Set the Period selector to span the event: start 4 weeks before the event date and extend to today. The time-series chart will show whether sentiment shifted at, immediately after, or gradually following the event.
Compare snapshots: Strategy > Brand Perception
Cross-reference with Strategy > Brand Perception to compare attribute themes before and after the event.
Use the Period filter to create two snapshots: one ending on the event date and one starting from it. Compare the dominant theme clusters — a crisis will typically introduce new negative themes; a launch will introduce new product attributes.
Trace evolution: Overview > Performance
Open Overview > Performance to overlay sentiment with visibility and position metrics.
A crisis often causes visibility to spike (your brand is mentioned more) while sentiment drops — the performance view makes that combination visible.
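That visibility-up / sentiment-down signature can also be checked programmatically against exported metrics. The sketch below is illustrative only: the weekly values, field names (`visibility` as the share of responses mentioning the brand, `sentiment` as the average score), and thresholds are assumptions, not Qwairy's actual export schema.

```python
# Illustrative weekly metrics (assumed export shape, not a real Qwairy export):
# visibility = share of responses mentioning the brand (%),
# sentiment  = average sentiment score (0-100).
weeks = [
    {"week": "W1", "visibility": 22, "sentiment": 70},
    {"week": "W2", "visibility": 41, "sentiment": 46},  # crisis: more mentions, worse tone
    {"week": "W3", "visibility": 27, "sentiment": 55},
]

def crisis_weeks(rows, vis_jump=10, sent_drop=10):
    """Flag weeks where visibility rose AND sentiment fell versus the prior week."""
    flagged = []
    for prev, cur in zip(rows, rows[1:]):
        if (cur["visibility"] - prev["visibility"] >= vis_jump
                and prev["sentiment"] - cur["sentiment"] >= sent_drop):
            flagged.append(cur["week"])
    return flagged

print(crisis_weeks(weeks))  # ['W2']
```

The thresholds (a 10-point visibility jump paired with a 10-point sentiment drop) are arbitrary starting values; tune them to your brand's normal week-to-week variance.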
What to Look For
Sentiment Analysis — Time-Series View
The time-series chart in Sentiment Analysis plots your average sentiment score (0–100) across all collected responses, broken down by date. Each data point represents the average sentiment of all responses collected on that day or in that week.

| Element | What it tells you |
|---|---|
| Sentiment trend line | The direction of travel — recovering, stable, declining |
| Drop date | When the negative shift first appeared in AI-generated content |
| Recovery curve | How quickly (or slowly) sentiment is returning to the pre-event baseline |
| Provider breakdown | Whether the shift is uniform across ChatGPT, Claude, Perplexity, or localized to one |
| Score floor | The lowest point reached — useful for calibrating severity |
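The baseline, drop date, score floor, and recovery point in the table above can be derived from an export of the daily averages. A minimal stdlib sketch, assuming a hypothetical `(date, score)` export — dates, scores, and the 2-point recovery tolerance are all illustrative:

```python
from statistics import mean

# Hypothetical daily average sentiment scores (0-100) exported from
# Insights > Sentiment Analysis; dates and values are illustrative.
daily = [
    ("2024-03-01", 71), ("2024-03-08", 73), ("2024-03-15", 72),  # baseline
    ("2024-03-22", 49), ("2024-03-29", 52),                      # event week + aftermath
    ("2024-04-05", 58), ("2024-04-12", 64), ("2024-04-19", 70),  # recovery
]

EVENT_DATE = "2024-03-22"  # assumed event date (ISO strings compare chronologically)

# Pre-event baseline: average of all points strictly before the event.
baseline = mean(score for d, score in daily if d < EVENT_DATE)

# Score floor: the lowest point reached from the event onward.
floor = min(score for d, score in daily if d >= EVENT_DATE)

# Recovery: first post-event date at which sentiment is back within
# 2 points of the baseline (None if it never recovers in the window).
recovered_on = next(
    (d for d, score in daily if d >= EVENT_DATE and score >= baseline - 2),
    None,
)

print(f"baseline={baseline:.0f}, floor={floor}, recovered_on={recovered_on}")
```

Comparing `baseline - floor` across events gives you a consistent severity measure, and `recovered_on` minus the event date gives the recovery time the chart shows visually.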
Brand Perception — Comparison Snapshots
Brand Perception is not natively a time-series tool, but using the period filter to create before/after snapshots gives you a structured theme comparison.

| Element | What it tells you |
|---|---|
| New themes post-event | Attributes that appeared after the event (e.g., “data privacy concerns”, “rapid growth”) |
| Lost themes post-event | Positive attributes that disappeared from AI descriptions following a crisis |
| Attribute strength delta | How much an attribute’s prominence changed between the two periods |
Pro Tip: Before a major launch or expected PR moment, take a manual Brand Perception snapshot by exporting the current attribute data. This gives you a clean “before” baseline to compare against post-event data — the time filter alone may not perfectly isolate the window you want.
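Once you have a "before" export and a current snapshot, the three comparisons in the table above (new themes, lost themes, strength deltas) reduce to set and dictionary operations. A sketch under assumed data shapes — the attribute names and strength values are invented for illustration, not real export fields:

```python
# Hypothetical before/after Brand Perception snapshots: attribute -> strength
# (0-100). Names and values are illustrative, not actual Qwairy export fields.
before = {"reliability": 74, "ease of use": 61, "pricing": 45}
after  = {"reliability": 38, "ease of use": 58, "outage": 51}

new_themes  = sorted(set(after) - set(before))   # attributes that appeared post-event
lost_themes = sorted(set(before) - set(after))   # attributes that disappeared post-event

# Attribute strength delta for themes present in both snapshots.
deltas = {a: after[a] - before[a] for a in before.keys() & after.keys()}

print("new:", new_themes)
print("lost:", lost_themes)
print("deltas:", deltas)
```

A large negative delta on a core attribute (here, "reliability") is the quantitative version of the theme-cluster shift the UI shows.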
Filters That Help
| Filter | How to use it for this question |
|---|---|
| Period | Use a custom date range that brackets the event — avoid preset periods like “Last 30 days”, which may straddle the event window and blur the before/after contrast |
| Provider | Identify whether recovery is uniform or whether specific AI models are still propagating the crisis narrative |
| Topic / Tag | If the crisis was topic-specific (e.g., a product recall, a security issue), isolate those topics to avoid diluting the signal with unrelated responses |
How to Interpret the Results
Good result
Sentiment shows a clear pre-event baseline (e.g., average 72), a drop at or near the event date (to 48), and a recovery curve that reaches or exceeds the pre-event level within 4–8 weeks. Brand Perception themes introduced by the crisis (negative attributes) fade from the top clusters as the period progresses. All providers converge on the recovered narrative within the same timeframe.

Needs attention
Sentiment dropped and has not recovered after 8+ weeks. The crisis-related themes (e.g., “lawsuit”, “controversy”, “security breach”) remain in the top Brand Perception clusters. One or more AI providers continue to describe the crisis in the present tense rather than the past tense. The Performance Dashboard shows that visibility increased during the crisis but has not converted back to positive sentiment now that visibility has normalized.

Example
Scenario: A fintech company experienced a payment processing outage that generated significant press coverage for 72 hours. Three weeks later, they want to know whether AI sentiment has recovered and which providers are still referencing the outage.
- Open Insights > Sentiment Analysis, set Period = custom range from 6 weeks before the outage to today. The time-series shows a sentiment drop from 68 to 41 during the outage week, followed by a gradual recovery to 59 — still 9 points below the pre-event baseline.
- Apply the Provider filter to compare providers. Perplexity (which uses live web retrieval) has largely recovered to 65. ChatGPT and Claude remain at 54–57, suggesting their training or cached data still includes the outage narrative.
- Open Visibility > Responses and filter by Provider = “ChatGPT” and Period = “Last 7 days”. Read responses mentioning the brand. Several responses include a sentence such as “the company faced a significant outage in [month], raising questions about reliability” — confirming that the outage is still present in recent ChatGPT outputs.
- Open Strategy > Brand Perception and compare the Period = “Pre-outage” snapshot against Period = “Current”. The theme “reliability” has dropped from strength 74 to strength 38. The theme “outage” has appeared with strength 51.
- Export this data via Workspace > Exports and include it in the monthly communications report with a recovery plan targeting the content and citation strategies needed to rebuild the “reliability” attribute.
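The provider comparison in the scenario above can be summarized as a per-provider "recovery gap": how many points each model still sits below the pre-event baseline. A sketch using the scenario's illustrative numbers (the 5-point lag threshold is an assumption, not a product default):

```python
# Per-provider current sentiment from the scenario above (illustrative numbers).
PRE_EVENT_BASELINE = 68

current = {"Perplexity": 65, "ChatGPT": 54, "Claude": 57}

# Recovery gap: points still below the pre-event baseline, per provider.
gaps = {provider: PRE_EVENT_BASELINE - score for provider, score in current.items()}

# Providers lagging by more than 5 points, worst first -- these are the models
# most likely still propagating the outage narrative.
lagging = sorted((p for p, g in gaps.items() if g > 5), key=gaps.get, reverse=True)

print(gaps)     # {'Perplexity': 3, 'ChatGPT': 14, 'Claude': 11}
print(lagging)  # ['ChatGPT', 'Claude']
```

Ranking providers by gap tells you where to focus the content and citation work: retrieval-backed models (like Perplexity in this scenario) tend to close the gap first, while models leaning on training or cached data lag.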
Go Further
Export pre/post sentiment comparison
Export sentiment data with date range filters to compare pre-crisis and post-crisis (or pre/post-launch) periods
Real-time sentiment monitoring
Build a real-time sentiment monitoring dashboard in Looker Studio using the answer-details data source
Share the crisis/launch report with leadership
Create a shared view of the sentiment timeline for your crisis response team or leadership
Related Questions
How is my overall AI visibility trending?
Overlay sentiment recovery with visibility changes to get the full impact picture.
How do I correct misinformation that AI is spreading about my brand?
If the crisis narrative has become factually incorrect, use this workflow to address it.
Is AI generating negative sentiment about my brand on specific topics?
Identify the specific topics driving the negative signal post-crisis.
What does AI actually say about my brand?
Read the full response text to understand exactly how the crisis is being described.

