TL;DR — Set a custom date range on Overview > Performance spanning before and after your campaign to spot a step change in the evolution chart. Go to Compare > By Topic to verify the lift is concentrated on campaign-relevant topics, not seasonal noise. Check Analytics > Referrer Analytics for the same window to see if AI-referred traffic spiked alongside visibility. Pro tip: export the Compare evolution data showing your brand separating from the competitor pack during the campaign window — it is your most compelling proof-of-impact slide.

The Question

“Did my latest marketing campaign improve my AI visibility?”
Campaigns drive content, press, backlinks, and social proof — all inputs that AI models weigh when deciding what to recommend. But the lag between a campaign going live and AI models updating their outputs can range from days to weeks depending on the provider. This page shows you how to isolate a campaign window in Qwairy and read the before/after signal with precision. You might also be wondering:
  • “How long does it take for AI models to reflect my campaign’s content?”
  • “Did my competitors also gain visibility during my campaign period, or did I pull ahead?”
  • “Did sentiment improve alongside visibility, or just raw mention count?”

Where to Go in Qwairy

1. Start here: Overview > Performance

Navigate to Overview > Performance — your primary before/after view. Set the Period filter to a custom range: start one full period before the campaign launch (e.g., 30 days pre-launch) and end at today. The evolution chart plots the trend lines for Brand Mention Visibility, Source Citation Visibility, and Average Sentiment; look for an inflection at the campaign launch date. Focus on the evolution sparkline on each metric card — a genuine campaign lift shows a visible step change in the curve, not just noise.
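Once you export the series (Workspace > Exports), the step change can be sanity-checked numerically. This is a minimal sketch, assuming daily Brand Mention Visibility values as a plain Python list and a known launch-date index; the function name and the 3-point default threshold are illustrative, not part of Qwairy.

```python
from statistics import mean

def step_change(series, launch_idx, min_lift_pp=3.0):
    """Compare mean visibility before vs. after the campaign launch.

    series: daily Brand Mention Visibility values in percentage points.
    launch_idx: index of the launch date within the series.
    Returns (lift in points, whether it clears the threshold).
    """
    pre, post = series[:launch_idx], series[launch_idx:]
    lift = mean(post) - mean(pre)
    return round(lift, 1), lift >= min_lift_pp

# A flat 30-day baseline around 18%, then a step to 27% post-launch
lift, is_step = step_change([18.0] * 30 + [27.0] * 20, launch_idx=30)
# lift == 9.0, is_step == True
```

Comparing window means rather than single data points is the whole trick: one high day is noise, a higher post-launch mean is a step.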
2. Go deeper: Overview > Compare

Cross-reference with Overview > Compare (Competitor Compare). Use the By Provider and By Topic breakdowns to verify whether your visibility gain was broad-based or concentrated. A campaign about a specific product feature should show a lift specifically on topic tags related to that feature — not a uniform lift across all topics, which would suggest the change is seasonal rather than campaign-driven. Use the Evolution tab in Compare to overlay your brand’s trend against competitors: if competitors also moved up during the same window, external factors (seasonality, industry news) are likely driving the change. If only you moved, your campaign earned it.
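The "rising tide vs. your campaign" test can be expressed as a quick check on exported Compare evolution data. Assumptions here: each brand is a list of daily visibility values on the campaign topic, and a 2-point margin separates a genuine pull-ahead from shared drift; the helper names and the margin are illustrative, not Qwairy defaults.

```python
from statistics import mean

def window_lift(series, launch_idx):
    """Percentage-point change from the pre-launch mean to the post-launch mean."""
    return mean(series[launch_idx:]) - mean(series[:launch_idx])

def campaign_specific(brand, competitors, launch_idx, margin_pp=2.0):
    """True only if the brand's lift beats every competitor's lift by
    at least margin_pp, i.e. the change is not a category-wide tide."""
    brand_lift = window_lift(brand, launch_idx)
    return all(brand_lift - window_lift(c, launch_idx) >= margin_pp
               for c in competitors)

you     = [12.0] * 14 + [33.0] * 14   # lifted on the campaign topic
rival_a = [26.0] * 28                 # flat
rival_b = [29.0] * 14 + [30.0] * 14   # mild drift, not a step
result = campaign_specific(you, [rival_a, rival_b], launch_idx=14)
# result is True: the separation is yours, not the category's
```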
3. Complete the picture: GA4 Traffic Correlation

Connect Google Analytics 4 referrer data by opening Analytics > Referrer Analytics for the same campaign window. Look for a concurrent spike in AI-referred traffic (sessions from chatgpt.com, perplexity.ai, claude.ai, gemini.google.com) that coincides with the visibility lift in Qwairy. A visibility gain without a traffic signal means AI models are mentioning you but users are not clicking through — valuable brand-building, but a different type of win. A visibility gain with a matching traffic spike confirms the campaign is driving bottom-of-funnel AI impact.
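If you pull the GA4 referrer sessions for the month before and the month after launch, flagging which AI referrers actually spiked is a few lines. The session counts and the 40% spike threshold below are hypothetical illustrations, not Qwairy or GA4 defaults.

```python
def pct_change(before, after):
    """Month-over-month percentage change in sessions."""
    return (after - before) / before * 100

# Hypothetical GA4 session counts per AI referrer: (pre-month, post-month)
sessions = {
    "chatgpt.com":       (200, 290),
    "perplexity.ai":     (50, 190),
    "claude.ai":         (30, 32),
    "gemini.google.com": (40, 44),
}

# Keep only referrers whose traffic moved meaningfully (40%+ is illustrative)
spikes = {ref: round(pct_change(before, after))
          for ref, (before, after) in sessions.items()
          if pct_change(before, after) >= 40}
# spikes == {'chatgpt.com': 45, 'perplexity.ai': 280}
```

A referrer that gained visibility in Qwairy but does not appear in `spikes` is the "mentioned but not clicked" case described above.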

What to Look For

Evolution Chart — Performance Dashboard

The evolution chart is your primary instrument for reading campaign impact. It plots each tracked metric as a time series across the selected period. Set a custom date range that spans at least two weeks before the campaign launch and the full duration after, so the baseline is clearly established before the step change.
Element | What it tells you
Brand Mention Visibility trend | Whether AI models are citing your brand more often after the campaign
Source Citation Visibility trend | Whether new content or press from the campaign earned incoming AI citations
Average Sentiment trend | Whether campaign messaging improved how AI describes your brand
Provider-level sparklines | Which AI platforms responded fastest to your campaign content

Competitor Compare — Evolution Tab

The Compare evolution view adds competitive context. Without it, you cannot distinguish a true campaign win from a rising tide that lifted all boats in your category.
Pro Tip: Export the Compare evolution data via Workspace > Exports and paste it into a slide. A chart showing your brand’s curve separating from the competitor pack during the exact campaign window is your most compelling proof-of-impact asset for a marketing review.

Filters That Help

Filter | How to use it for this question
Period (custom range) | Isolate exactly the campaign window — 4 weeks pre-launch through 4 weeks post-launch is a reliable baseline-to-impact window
Provider | Check which AI platforms responded to the campaign — campaign PR and content often move Perplexity and Google AI first, then ChatGPT with a lag
Topic / Tag | Verify that visibility lifted on the topics your campaign targeted, not just overall — this confirms the right signal is moving

How to Interpret the Results

Good result

Brand Mention Visibility increased by 5 percentage points or more within three weeks of campaign launch, with the lift sustained (not a single spike that reverted). Source Citation Visibility also moved — at least a 2-point increase — indicating that new content or press coverage from the campaign is being indexed by AI models as a trusted source. Sentiment remained stable or improved. The Compare evolution tab shows your brand outpacing competitors over the same window.
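These thresholds can be encoded as a checklist against exported data. A sketch, assuming window means in percentage points; "sustained" is approximated by requiring the final week of the series to still hold the lift, which is one reasonable reading of "not a single spike that reverted".

```python
from statistics import mean

def is_good_result(brand_series, launch_idx, cite_lift_pp, sent_delta):
    """Checklist for the 'good result' bar: a sustained 5-point brand
    visibility lift, a 2-point citation lift, and sentiment that held
    steady or improved (sent_delta >= 0)."""
    pre = mean(brand_series[:launch_idx])
    post = mean(brand_series[launch_idx:])
    final_week = mean(brand_series[-7:])   # is the lift still there at the end?
    return (post - pre >= 5.0 and final_week - pre >= 5.0
            and cite_lift_pp >= 2.0 and sent_delta >= 0)

series = [18.0] * 21 + [27.0] * 21   # a sustained step, not a one-day spike
ok = is_good_result(series, launch_idx=21, cite_lift_pp=2.5, sent_delta=0.1)
# ok == True
```

A single-spike series would fail the `final_week` check even if its post-launch mean cleared 5 points, which is exactly the "needs attention" pattern described below.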

Needs attention

Brand Mention Visibility spiked briefly (one data point) and then returned to baseline, suggesting a temporary crawl rather than lasting incorporation into AI training signals. Or: visibility lifted on providers that send no measurable referrer traffic, so the brand uplift has no revenue correlation. Or: competitors moved by the same magnitude in the same window, indicating seasonal lift rather than campaign impact.
A short campaign window (under 2 weeks) may not be long enough for all AI providers to reflect new content. Perplexity and Google AI Mode update fastest (days). ChatGPT and Claude can take 2–6 weeks to reflect new web content depending on crawl cycles. Do not conclude a campaign failed if you only check results 5 days after launch.

Example

Scenario: A D2C skincare brand launches a new anti-aging serum in October with a 3-week product launch campaign — publishing clinical trial results on their blog, earning 15 beauty editor reviews, and running influencer partnerships across Instagram and TikTok.
  1. Open Overview > Performance and set the period to September 1 – November 15. The evolution chart shows Brand Mention Visibility at a stable 18% through September, then a visible step to 27% starting in week 2 of October — aligning with the beauty editor reviews going live and being crawled by AI models.
  2. Navigate to Overview > Compare > By Topic and filter to the topic tag “anti-aging skincare.” Your visibility on that tag rose from 12% to 33%, while the two main competitors stayed flat at 26% and 29% respectively. This confirms the lift is campaign-specific and tied to the product launch, not a seasonal trend.
  3. Open Analytics > Referrer Analytics for the same window and filter to AI referrers. Sessions from Perplexity increased 280% month-over-month — beauty publications that reviewed the serum are being cited as sources. ChatGPT referrers show a 45% increase with a 2-week lag, consistent with its slower crawl cycle for new product content.

Go Further