Insights

What Drives Brand Mentions in AI Answers?

Why It Matters

AI is changing how your customers search and rewriting the rules of brand visibility. 

As marketers, we need to understand what makes large language models (LLMs) mention some brands and not others when users ask relevant questions.

To do that, we need to start unpacking what factors may influence visibility in these answers.

Why We Built This Study

We’ve seen some great small-scale tests on LLM outputs, but nothing that tackles the big picture.

Why? Because scaling question tracking and analysis is incredibly hard. That’s where our proprietary LLM monitoring tool came in.

GenAI Share of Voice

Not only do we track priority questions and answers for our clients (much as most SEOs track priority keywords to monitor the SERPs), we can also run large-scale tests and join data from other sources to uncover patterns between LLM mentions and external factors.

We’re focused on finding actionable insights to determine which factors actually make an impact on LLM visibility.

The Approach

Our ultimate goal is to uncover the factors that correlate with strong LLM visibility and put them to the test. This means tracking specific questions over time, making strategic adjustments based on the factors we’ve identified, and measuring the impact on individual brands. It’s all about figuring out what works and what doesn’t in a real-world context.

This post is just the beginning. For now, we’re focused on phase one—spotting correlations between LLM mentions and search-related factors.

Next, we’ll dig into areas like on-page content, PR features, ratings, reviews, and more to understand how they shape LLM responses. 

The framework we’re building will help us run smarter, targeted tests and refine our strategies for helping clients maximize visibility in AI answers.

What We Wanted to Find Out

Our mission? To answer the big questions about AI answers:

  • Do top Google or Bing rankings boost LLM mentions?
  • Does diverse content (images, videos, ads) make brands more visible?
  • Are backlinks and domain rank the secret sauce behind mentions?

How We Did It

We went big with our process:

  1. Gathered Keywords: We pulled 300K+ keywords in finance and SaaS from paid and organic sources.
  2. Checked Rankings: We ran these keywords through Google and Bing to get rankings and search engine results page (SERP) data.
  3. Extracted Questions: These keywords triggered nearly 600K People Also Ask (PAA) questions across Google and Bing. We used PAA questions as proxies for real user queries and narrowed them down to 10,000 relevant ones.
  4. Queried the LLM: We ran those 10,000 questions through OpenAI’s GPT-4o API to see which brands came up. We specifically focused on questions that would naturally trigger brand mentions (e.g., “What’s the best CRM for small businesses?” and not “How do I open a bank account?”).
  5. Counted Mentions: We extracted brand names from the answers to measure how often each appeared across the dataset (see the sketch after this list).
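Under the hood, steps 4 and 5 boil down to sending each question to the model and matching brand names in the answers. Our production pipeline is more involved, but here’s a minimal Python sketch, assuming a hypothetical list of candidate brands to match against:

```python
# Minimal sketch of steps 4-5: query GPT-4o, then count brand mentions.
# CANDIDATE_BRANDS is hypothetical; a real run needs a vetted brand list.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CANDIDATE_BRANDS = ["Salesforce", "HubSpot", "Zoho"]

def count_brand_mentions(questions: list[str]) -> Counter:
    mentions = Counter()
    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content.lower()
        for brand in CANDIDATE_BRANDS:
            if brand.lower() in answer:
                mentions[brand] += 1
    return mentions

print(count_brand_mentions(["What's the best CRM for small businesses?"]))
```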

The Join Key Challenge

Here’s where things got tricky:

To analyze everything, we had to match brand mentions with their domains, creating a join key that properly blends LLM and SEO data. Sounds simple, right? Not so much. Because LLMs answer specific questions, they do a great job naming specific products but rarely name the overarching brand itself.
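To make the problem concrete, here’s a toy version of that mapping in Python. The product-to-domain table is entirely hypothetical; the real lookup has to handle thousands of product names, aliases, and misspellings:

```python
# Hypothetical lookup: LLMs tend to name products, not parent brands,
# so the join key has to roll product mentions up to a brand's domain.
PRODUCT_TO_DOMAIN = {
    "chase freedom unlimited": "chase.com",
    "chase sapphire preferred": "chase.com",
    "quickbooks online": "intuit.com",
}

def to_join_key(mention: str) -> str | None:
    """Normalize a raw LLM mention and map it to its brand's domain."""
    return PRODUCT_TO_DOMAIN.get(mention.strip().lower())

print(to_join_key("Chase Freedom Unlimited"))  # -> chase.com
```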

This step was a headache but has inspired several new features we plan on integrating into our LLM monitoring tool to enable future cross-channel analyses with AI answers.

Bringing It All Together

Once we had the join key, we connected:

  • LLM data, including brand mentions and responses
  • Our brand/domain join key
  • SERP data from Google and Bing, filtered to the keywords that triggered the specific PAA questions we sent to GPT-4o (sketched below)
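As a rough sketch of the blend (column names here are illustrative, not our actual schema), it’s a straightforward join on the domain key:

```python
import pandas as pd

# Toy LLM-side data: mention counts per domain, via the join key.
llm = pd.DataFrame({
    "domain": ["chase.com", "intuit.com", "nerdwallet.com"],
    "llm_mentions": [42, 17, 3],
})

# Toy SERP-side data: page-1 appearances for the triggering keywords.
serp = pd.DataFrame({
    "domain": ["chase.com", "intuit.com", "nerdwallet.com"],
    "google_page1_count": [120, 65, 210],
    "bing_page1_count": [90, 40, 150],
})

blended = llm.merge(serp, on="domain", how="inner")  # domain is the join key
print(blended)
```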

This data blend lets us dig deep into trends and correlations to see what’s really driving LLM visibility.
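Once the data is blended, checking a factor is a one-liner per column. Here’s a self-contained toy example with made-up numbers (not our results) to show the mechanics:

```python
import pandas as pd

# Illustrative data only: page-1 appearances vs. LLM mention counts.
df = pd.DataFrame({
    "google_page1_count": [12, 8, 3, 0, 5],
    "llm_mentions": [30, 22, 6, 1, 14],
})

# Pearson by default; Spearman is a reasonable choice for rank-like data.
print(df["google_page1_count"].corr(df["llm_mentions"]))
print(df["google_page1_count"].corr(df["llm_mentions"], method="spearman"))
```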

Correlation of LLM Mentions by SERP Factor

What Surprised Us

Google Rankings Matter:

Brands ranking on page 1 of Google showed a strong correlation (~0.65) with LLM mentions. Bing rankings also mattered but less so (~0.5–0.6).

Backlinks Don’t Mean Much:

We expected backlinks to play a big role, but their impact was weak or even neutral.

Content Variety Is Overrated:

Multi-modal SERP content (images, videos, ads) didn’t move the needle as much as we expected.


Filtering the Noise

Not all websites provide solution-oriented content. Forums, social media, and aggregators are places where people ask questions, not where they find the solutions.

For example, if you ask an LLM “What’s the best credit card for students?”, you’re not going to get “Reddit” as the answer. So we needed to remove websites that might rank organically for search terms related to “student credit cards” but would never show up in LLM answers to related questions.

We used our SeerSignals website categorization feature to separate solution-focused websites (like SaaS providers or services sites) from noise. When we filtered out forums, aggregators, and similar sites, the correlations between rankings and mentions became even stronger.
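In code terms, the filter is simple once every domain carries a category label. The `category` column below is hypothetical; SeerSignals uses its own taxonomy:

```python
import pandas as pd

# Toy data with a hypothetical category label per domain.
df = pd.DataFrame({
    "domain": ["chase.com", "reddit.com", "nerdwallet.com", "intuit.com"],
    "category": ["solution", "forum", "aggregator", "solution"],
    "google_page1_count": [12, 40, 25, 8],
    "llm_mentions": [30, 0, 2, 22],
})

NOISE = {"forum", "social", "aggregator"}  # site types LLMs rarely name
solutions = df[~df["category"].isin(NOISE)]
print(solutions["google_page1_count"].corr(solutions["llm_mentions"]))
```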

Correlation of LLM Mentions by Website Category

The Results: By removing noise (e.g., forums, social media, and aggregators), we saw stronger correlations for solution-oriented websites, which were more likely to appear in LLM answers.


Key Takeaways

Search rankings appear to play some role in influencing LLM mentions, but they’re not the whole story. PR, partnerships, and on-page strategies are areas we’ll be digging into next. 

Filtering noise from the data to focus on solution-oriented websites shows even stronger correlations, underscoring the importance of high-quality, relevant content.

What’s Next?

Stay tuned, because this test is just the first step in a larger study. We have many more factors to test, including:

  • Exploring how PR efforts and OpenAI partnerships (like Hearst’s) influence mentions.
  • Looking at the role of citation policies and specific content strategies.
  • Seeing how real-time updates impact brand mentions in AI answers.

Once we identify some positive correlations, we’ll dive deeper—tracking a set of questions on a recurring basis, making targeted changes based on those factors, and measuring whether they influence LLM-generated answers for specific brands.

We’re excited to keep learning and pushing the boundaries of AI visibility.

What should we test next?

Have ideas on what factors might matter? Leave a comment; we’d love to hear your thoughts!


We love helping marketers like you.

Sign up for our newsletter for forward-thinking digital marketers.