How to Check If Your Brand Is Recommended by ChatGPT (and Other AI Assistants)

AI brand visibility checking is the process of testing whether AI assistants mention, cite, or recommend your brand when users ask questions about your industry. This article covers the complete process: 15 ready-to-use prompts, a tracking spreadsheet template, free and paid monitoring tools compared, and how to interpret your results.


Understand what AI brand visibility means

AI brand visibility is whether and how AI assistants mention your brand when users ask questions related to your industry, products, or services. It covers three distinct outcomes:

  • Recommendation — the AI names your brand as a solution ("Try [Brand] for this use case")
  • Citation — the AI quotes or links to your content as a source
  • Omission — the AI answers the question without mentioning you at all

This is different from SEO in a specific way. In Google, you compete for 10 positions on a results page, and users choose which link to click. In ChatGPT, the AI typically names 3-5 brands per response and builds the answer itself. The user never sees the other options the AI considered and discarded.

Those responses also change based on how the question is phrased, the user's conversation history, and whether web search is enabled. A brand that appears in one session may be absent in the next. In our analysis of over 1,000 AI sessions at Far & Wide, we found that brand recommendations can vary by 30-40% between identical prompts run minutes apart — which is why a single check tells you almost nothing.

Why this matters now

If your brand is invisible in AI responses, potential customers are finding your competitors instead — through a channel that nearly a billion people already use every week.

Run your first manual brand check in ChatGPT

Open ChatGPT in incognito mode (or log out) to avoid personalized results. When you're logged in, ChatGPT tailors responses based on your conversation history, so you'll see a skewed picture of your actual visibility. Incognito mode simulates what a first-time user sees.

Step 1: Start a fresh session without prior context

Use incognito or private browser mode. If you're logged in, start a new chat. This gives you the baseline visibility that all brands compete for — no personalization, no prior context.

Step 2: Ask a category-level question without mentioning your brand

Type a question your potential customer would ask, but leave your brand name out. The goal is to see if the AI recommends you unprompted.

Examples:

  • "What are the best project management tools for remote teams?"
  • "Which CRM works best for a 20-person sales team?"
  • "How do I choose an email marketing platform for e-commerce?"

Step 3: Document what appears

For each response, write down:

  • Whether your brand is mentioned (yes/no)
  • Position in the list (1st, 3rd, not listed)
  • Context: recommended as a solution, mentioned neutrally, or mentioned with caveats
  • Which competitors appear and in what positions
  • Whether sources or links are cited

Step 4: Run the same prompt 3-5 times

AI responses are non-deterministic. The same question produces different brand lists each time. Running multiple iterations shows whether your brand appears consistently (strong visibility) or only occasionally (weak signal that could disappear any time).
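The repeated-runs logic above reduces to a simple mention rate. A minimal sketch (the run results are illustrative, not real data):

```python
# Sketch: compute a mention rate from repeated runs of one prompt.
def mention_rate(runs: list[bool]) -> float:
    """Fraction of runs in which the brand was mentioned."""
    return sum(runs) / len(runs) if runs else 0.0

# Five runs of the same prompt: True = brand appeared in the response.
runs = [True, False, True, True, False]
print(f"Mention rate: {mention_rate(runs):.0%}")  # 3 of 5 runs -> 60%
```

A brand at 4-5 out of 5 runs has consistent visibility; 1-2 out of 5 is the borderline signal described above.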

Step 5: Test with web search on and off

ChatGPT with web search enabled pulls fresh data from the web. Without web search, it relies on training data only. Test both modes — they reveal different things:

  • Web search off = how well-known your brand is in the AI's training data (parametric knowledge)
  • Web search on = how well your web presence supports real-time retrieval

A brand that appears with web search off has strong parametric presence. A brand that appears only with web search on depends entirely on its current web footprint — which can change when the AI's retrieval algorithm updates. This distinction between parametric knowledge and retrieval visibility is at the core of Far & Wide's three-layer AI visibility model — and it matters because each layer requires a different optimization strategy.
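The two-mode test above yields four possible outcomes. A minimal sketch of the classification (the labels are this article's terminology, not an industry standard):

```python
# Sketch: classify a brand's visibility from two manual checks,
# one with web search off and one with web search on.
def classify_visibility(mentioned_web_off: bool, mentioned_web_on: bool) -> str:
    if mentioned_web_off and mentioned_web_on:
        return "parametric + retrieval (strongest)"
    if mentioned_web_off:
        return "parametric only (known from training data)"
    if mentioned_web_on:
        return "retrieval only (depends on current web footprint)"
    return "absent in both modes"

print(classify_visibility(mentioned_web_off=False, mentioned_web_on=True))
```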

Test your brand across all major AI platforms

ChatGPT is the largest AI assistant, but each platform retrieves and cites sources differently. A brand visible in ChatGPT may be absent in Perplexity, and the reverse is also true.

Run the same 3-5 prompts across all five platforms:

| Platform | How to access | How it works |
|---|---|---|
| ChatGPT | chat.openai.com | Names 3-5 brands per response; results vary between sessions; web search is optional |
| Perplexity | perplexity.ai | Always searches the web; shows inline source links; tends to cite recent content |
| Gemini | gemini.google.com | Integrated with Google's index; may pull from Google Business Profile data |
| Claude | claude.ai | Conservative with recommendations; tends to cite fewer brands with higher confidence |
| Copilot | copilot.microsoft.com | Powered by Bing's index; shows clickable source links |

Cross-platform consistency is the strongest signal. Appearing in 4-5 out of 5 platforms means your brand has broad AI visibility. Appearing in only one suggests your visibility depends on a single retrieval path that could break with the next model update.

Use these 15 prompts to audit your brand's AI presence

A single prompt gives you a single data point. You need 15-20 prompts across different intent types to understand your real visibility. Use these prompts as a starting template, replacing the brackets with your actual brand, category, and use case.

Category discovery prompts (unbranded)

These test whether AI recommends your brand when users look for solutions in your space. They are the most important prompts in this list because they simulate how new customers find you.

  1. "What are the best [your category] tools in 2026?"
  2. "Which [your category] platform do professionals recommend?"
  3. "I need a [your category] solution for [your target audience]. What are my options?"
  4. "Compare the top [your category] tools for [specific use case]"
  5. "What should I look for when choosing a [your category] platform?"

Branded awareness prompts

These test what the AI already knows about your brand specifically:

  6. "What is [your brand] and what does it do?"
  7. "Is [your brand] good? What are the pros and cons?"
  8. "How does [your brand] compare to [top competitor]?"
  9. "What do people say about [your brand]?"
  10. "Who are [your brand]'s main competitors?"

Problem-solution prompts

These test whether the AI recommends you when users describe a problem you solve:

  11. "How do I [problem your product solves]?"
  12. "What's the best way to [job your customer hires you for]?"
  13. "I'm struggling with [pain point]. What tools can help?"

Industry expertise prompts

These test whether your content gets cited as a source (not just your brand recommended):

  14. "What is [topic you write about]? Explain with examples"
  15. "What are the best practices for [your area of expertise]?"

Start with prompts 1-5 (category discovery). These tell you the most about real-world visibility because they match the queries your potential customers actually type.
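The bracketed templates above can be expanded programmatically before a testing session. A minimal sketch; the brand, category, and audience values are placeholders:

```python
# Sketch: expand bracketed prompt templates for one brand.
# A subset of the 15 prompts above, rewritten as format strings.
TEMPLATES = [
    "What are the best {category} tools in 2026?",
    "Which {category} platform do professionals recommend?",
    "I need a {category} solution for {audience}. What are my options?",
    "How does {brand} compare to {competitor}?",
]

def build_prompts(brand: str, category: str,
                  audience: str, competitor: str) -> list[str]:
    return [t.format(brand=brand, category=category,
                     audience=audience, competitor=competitor)
            for t in TEMPLATES]

for p in build_prompts("Acme CRM", "CRM", "20-person sales teams", "Salesforce"):
    print(p)
```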

Record results in a brand visibility tracking sheet

Consistent tracking turns random spot checks into data you can act on. Use this format to record every prompt test:

| Date | Platform | Prompt | Mentioned? | Position | Context | Competitors | Notes |
|---|---|---|---|---|---|---|---|
| 2026-03-30 | ChatGPT | "Best CRM for sales teams" | Yes | 3rd of 5 | Recommended for small teams | Salesforce (1st), HubSpot (2nd) | Appeared in 3/5 runs |
| 2026-03-30 | Perplexity | "Best CRM for sales teams" | No | Not mentioned | n/a | Salesforce, HubSpot, Zoho | Competitor dominates |
| 2026-03-30 | ChatGPT (no web) | "Best CRM for sales teams" | No | Not mentioned | n/a | Salesforce, HubSpot, Pipedrive | Absent without web search |
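Results like the rows above can be appended to a CSV file instead of a spreadsheet. A minimal sketch using Python's csv module; the file name and field names are arbitrary choices:

```python
import csv
from pathlib import Path

# Columns matching the tracking-sheet format described in this section.
FIELDS = ["date", "platform", "prompt", "mentioned", "position",
          "context", "competitors", "notes"]

def log_result(path: str, row: dict) -> None:
    """Append one test result, writing a header row on first use."""
    new_file = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_result("visibility_log.csv", {
    "date": "2026-03-30", "platform": "ChatGPT",
    "prompt": "Best CRM for sales teams", "mentioned": "Yes",
    "position": "3rd of 5", "context": "Recommended for small teams",
    "competitors": "Salesforce (1st), HubSpot (2nd)",
    "notes": "Appeared in 3/5 runs",
})
```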

Key metrics to calculate from your tracking sheet

| Metric | What it tells you |
|---|---|
| Mention rate | Percentage of prompts where your brand appears. Below 30% on category prompts means AI mostly ignores you. Above 60% means consistent visibility. |
| Average position | Where you typically appear in the list. 1st-2nd is strong. 3rd-5th is present but not dominant. |
| Share of voice | Your mentions as a percentage of total brand mentions. If the AI names 5 brands per prompt and you appear in 3 of 10 runs, your share of voice is roughly 6%. |
| Cross-platform consistency | How many platforms mention you for the same prompt. 3+ out of 5 is solid. |
| Context quality | Ratio of positive recommendations to neutral mentions to caveats. A mention with "users report issues with..." is worse than absence. |

Track these monthly. AI visibility shifts faster than organic search rankings, so quarterly checks miss important changes.
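The first three metrics can be computed directly from recorded runs. A minimal sketch; the records below are illustrative:

```python
# Sketch: compute mention rate, average position, and share of voice.
# Each record: (mentioned, position_if_mentioned, total_brands_named).
records = [
    (True, 3, 5),    # mentioned, 3rd of 5 brands named
    (False, None, 5),
    (True, 2, 4),
    (False, None, 5),
    (True, 1, 5),
]

mention_rate = sum(1 for m, _, _ in records if m) / len(records)
positions = [p for m, p, _ in records if m]
avg_position = sum(positions) / len(positions)
# Share of voice: your mentions as a share of all brand mentions seen.
share_of_voice = sum(1 for m, _, _ in records if m) / sum(t for _, _, t in records)

print(f"Mention rate:   {mention_rate:.0%}")   # 3 of 5 runs = 60%
print(f"Avg position:   {avg_position:.1f}")   # (3 + 2 + 1) / 3 = 2.0
print(f"Share of voice: 3/24 = {share_of_voice:.3f}")
```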

Interpret your results and identify gaps

Data without interpretation leads to either false confidence or unnecessary panic. These are the five patterns we see most often when analyzing brand visibility across Far & Wide audits and our dataset of 1,000+ AI sessions, and what each one means.

Pattern 1: Strong branded, weak unbranded

The AI knows your brand and describes it correctly when asked directly. But it doesn't recommend you when users ask for solutions in your category.

Your brand has parametric presence (it's in the training data) but lacks the web signals that trigger recommendations in real-time retrieval. Review platforms, comparison pages, and third-party mentions are likely missing.

To fix this, create comparison pages ("[Your Brand] vs [Competitor]"), "best [category]" content, and get listed on review aggregators — G2, Capterra, Trustpilot, Product Hunt, TrustRadius. These are sources AI platforms pull recommendation data from.

Pattern 2: Present in Perplexity, absent in ChatGPT

Perplexity always searches the web in real time. ChatGPT often answers from training data alone, especially for familiar topics.

Your web content is discoverable by search-based retrieval, but your brand hasn't been absorbed into ChatGPT's parametric memory. This is common for brands that launched or grew after ChatGPT's training data cutoff.

To fix this, build presence in sources that feed AI training data: Wikipedia (if notable enough), major publications, industry reports, open-source repositories, and conference proceedings.

Pattern 3: Mentioned but with wrong positioning

The AI recommends you, but for the wrong use case, audience, or price point. It might call you "budget-friendly" when you're premium, or "best for enterprises" when you target small teams.

Your online presence sends mixed signals. Your website says one thing, G2 reviews describe another, and a 2023 blog post references your old positioning.

To fix this, audit all brand mentions across review platforms, your website, guest posts, and press coverage. Align messaging everywhere. Create one definitive "What is [Your Brand]" page that states your positioning exactly as you want AI to represent it.

Pattern 4: Sporadic mentions (appears in 1 of 5 runs)

Your brand shows up sometimes but disappears in the next session. You're on the visibility threshold — the AI considers you a borderline candidate.

Your brand signal exists but isn't strong enough to consistently beat competitors. A small improvement in mention density could push you from occasional to consistent.

To fix this, increase third-party mentions: more reviews, more backlinks from authoritative sites, more structured data on your website, more content covering your key use cases.

Pattern 5: Complete absence

The AI doesn't mention your brand in any prompt on any platform — not in branded queries, not in category queries, not anywhere.

This is typical for newer brands, niche players, or companies that have relied on paid advertising and direct sales without building an organic web presence.

Start with the fundamentals. Check that your robots.txt isn't blocking AI crawlers (GPTBot, ClaudeBot, PerplexityBot). Publish content that answers the questions in your category. Get listed on review platforms. Build third-party mentions through PR, guest articles, and industry participation. Also worth doing: ask your existing clients how they found you and whether they used AI assistants during their research — this reveals how much business you may already be losing.

Choose a monitoring tool that fits your budget

Manual checking works for an initial audit, but it doesn't scale. AI responses change weekly, sometimes daily. To track trends, catch drops, and benchmark against competitors over time, you need a monitoring tool or a periodic professional audit.

Free methods

| Method | What you get | Limitation |
|---|---|---|
| Manual prompt testing | Direct observation of actual AI responses | Time-consuming; no trend data; limited to 15-20 prompts per session |
| Google Search Console | AI referral traffic data (filtered by source) | Shows clicks only, not mentions; doesn't cover ChatGPT directly |

Paid tools

| Tool | Starting price | Standout feature | Best for |
|---|---|---|---|
| Far & Wide Brand Visibility Report | €80 one-time | Expert analysis on your actual queries with interpretation and action plan | Brands that want specific recommendations, not just dashboards |
| Ahrefs Brand Radar | From $129/mo (Lite plan) | 13.5M prompt database; competitor benchmarking across ChatGPT | SEO teams already using Ahrefs |
| SE Ranking | From $119/mo (Pro plan) | AI tracking across 6 platforms integrated into a full SEO suite | Agencies managing multiple clients |
| Semrush AI Toolkit | ~$139/mo (included in plan) | AI visibility metrics alongside organic and paid data | SEO teams already using Semrush |
| Peec AI | From €85/mo (Starter) | Prompt-level analysis, sentiment tracking, 115+ languages, 7 AI models | European brands or multilingual markets |

How to decide

  • Budget under €100/month: start with the manual testing method from this article. It's free and gives you a real baseline.
  • Budget €100-150/month: if you already use Ahrefs or Semrush, activate their AI visibility features. No need for a separate tool.
  • Budget €150+/month: combine your SEO suite's AI features with a dedicated platform like Peec AI for prompt-level analysis across 7 AI models.
  • Want more than monitoring — want to know what specifically to fix? A Far & Wide Brand Visibility Report (€80, one-time) analyzes your ChatGPT presence on your actual target queries and delivers a concrete action plan.

Avoid these 7 brand checking mistakes

Most brands that run AI visibility checks make errors that produce misleading results. These are the seven we encounter most often in our audit practice — and each one can make you either overconfident or unnecessarily alarmed.

  1. Testing only branded queries. Asking "What is [your brand]?" and getting a correct answer feels reassuring, but tells you almost nothing about whether the AI recommends you to people who don't know you yet. Unbranded category queries are what drive new customer discovery through AI.
  2. Checking once and drawing conclusions. AI responses are non-deterministic — the same question produces different brand lists across sessions. Run each prompt 3-5 times and calculate mention rate.
  3. Using your logged-in account. ChatGPT personalizes responses based on your history. Always test in incognito mode or a fresh session to see what strangers see.
  4. Ignoring the context of mentions. "Mentioned" and "recommended" are different things. If the AI says "[Your brand] is known for X, but users have reported issues with Y" — that's a negative mention that may be worse than absence.
  5. Testing only on ChatGPT. Your potential customers use multiple AI assistants. Test across at least three platforms.
  6. Tracking vanity prompts instead of real customer queries. Build your prompt library from real search data, sales call transcripts, and customer support questions — not from generic category terms.
  7. Confusing retrieval visibility with parametric knowledge. When ChatGPT searches the web and cites your article, that's retrieval visibility — temporary, depends on your current web presence. When ChatGPT knows your brand without searching, that's parametric knowledge — more durable, built from training data. Test with web search on AND off to distinguish them.

Act on your findings

Checking your brand's AI visibility is only useful when followed by specific changes. Based on the pattern you identified in the interpretation section, here are the priority actions.

If your brand is completely absent

  1. Check your robots.txt. Confirm that GPTBot, ClaudeBot, and PerplexityBot are not blocked from crawling your site.
  2. Get listed on review platforms. G2, Capterra, Trustpilot, Product Hunt, and TrustRadius are common sources that AI platforms pull recommendation data from.
  3. Publish authoritative content in your category. Comparison pages, how-to guides, and best-practices articles that directly answer the prompts your customers would type.
  4. Build third-party mentions. Guest posts on industry publications, PR coverage, podcast appearances, and conference talks create mention density across the web.
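The robots.txt check in step 1 can be automated with Python's standard library. A minimal sketch; the robots.txt content below is an illustrative example that deliberately blocks one crawler:

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user agents to check for.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

# Example robots.txt: GPTBot is fully blocked, everyone else is not.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for bot in AI_CRAWLERS:
    allowed = parser.can_fetch(bot, "https://example.com/blog/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

In practice you would fetch your live robots.txt (e.g. with `RobotFileParser.set_url` plus `read`) rather than paste its contents into the script.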

If your brand is mentioned but not recommended

  1. Create answer-ready content. Pages that directly answer "what is the best [category] for [use case]?" with your brand positioned as a solution. Use comparison tables, not prose.
  2. Align brand information across all platforms. Keep your brand name, description, and positioning identical everywhere.
  3. Add structured data. Organization, Product, and FAQ schema markup helps AI systems understand what you do and who you serve.
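The schema markup in step 3 is a JSON-LD block embedded in your pages. A minimal sketch that generates an Organization snippet; all field values are placeholders for your own brand:

```python
import json

# Sketch: build Organization schema markup (JSON-LD) for a page.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme CRM",
    "url": "https://www.example.com",
    "description": "CRM platform for small sales teams.",
    # sameAs links tie your site to your profiles on other platforms.
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.g2.com/products/example",
    ],
}

snippet = ('<script type="application/ld+json">\n'
           + json.dumps(organization, indent=2)
           + "\n</script>")
print(snippet)
```

Paste the resulting `<script>` block into the `<head>` of the page; Product and FAQ types follow the same pattern with their own schema.org properties.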

If your brand is recommended with wrong positioning

  1. Audit all online mentions. Use Google search and brand monitoring to find outdated articles, old product descriptions, and incorrect profiles.
  2. Create a definitive brand page. One authoritative "What is [Your Brand]" page that defines your brand exactly as you want AI to present it.
  3. Update and respond to reviews. Review content on G2, Capterra, and similar platforms directly shapes how AI describes your brand.

How to track whether your changes are working

Set up a monthly re-check using the same prompts and tracking spreadsheet. Compare month-over-month changes in mention rate, position, and context quality. Also ask your sales team and clients directly: "Did you use ChatGPT or another AI assistant while researching solutions?" — this connects AI visibility to actual business outcomes.
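The month-over-month comparison above can be reduced to deltas on mention rate. A minimal sketch; the monthly figures are illustrative:

```python
# Sketch: compare month-over-month mention rates from monthly re-checks.
monthly_mention_rate = {"2026-03": 0.20, "2026-04": 0.35, "2026-05": 0.30}

months = sorted(monthly_mention_rate)
for prev, curr in zip(months, months[1:]):
    delta = monthly_mention_rate[curr] - monthly_mention_rate[prev]
    trend = "up" if delta > 0 else "down" if delta < 0 else "flat"
    print(f"{prev} -> {curr}: {delta:+.0%} ({trend})")
```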

AI brand visibility checklist

Use this checklist to run a complete brand visibility audit.

Preparation

  • Open incognito or private browsing mode
  • Prepare 15+ test prompts across four categories (category discovery, branded, problem-solution, expertise)
  • Set up tracking spreadsheet with columns: Date, Platform, Prompt, Mentioned, Position, Context, Competitors, Sources, Notes
  • Identify your top 3 competitors for benchmarking

Manual testing

  • Test all prompts in ChatGPT with web search enabled
  • Test all prompts in ChatGPT with web search disabled
  • Test all prompts in Perplexity
  • Test all prompts in Gemini
  • Test all prompts in Claude
  • Run each prompt 3-5 times per platform
  • Record every result in the tracking sheet

Analysis

  • Calculate mention rate per platform
  • Calculate share of voice vs. top 3 competitors
  • Identify which gap pattern matches your results
  • Classify mentions by context quality (recommended, neutral, negative)
  • Compare web-search-on vs. web-search-off results

Action

  • Check robots.txt for AI crawler access (GPTBot, ClaudeBot, PerplexityBot)
  • Verify brand consistency across website, review platforms, and directory listings
  • Create or update comparison pages and category content
  • Add or update schema markup (Organization, Product)
  • Set up monthly re-testing schedule or select a monitoring tool
  • Ask existing clients whether they used AI assistants during their research

Don't want to do this manually?

A Far & Wide Brand Visibility Report (€80) analyzes your ChatGPT presence on your actual target queries and delivers a prioritized action plan — including the parametric vs. retrieval distinction that SaaS monitoring tools don't test. One report, one fee, no subscription.

Get your report