This guide covers the distinction between base ChatGPT and ChatGPT Search, how the Bing index decides who gets cited, the three-bot robots.txt model, the ranking factors measured by the April 2026 large-scale citation studies, the retrieval-vs-citation gap, and why a page that wins in Perplexity often loses in ChatGPT.
ChatGPT Search is not the same product as ChatGPT
Most articles on this topic blur the two. They are different surfaces.
Base ChatGPT answers from training data, without touching the web. ChatGPT Search is the web-grounded retrieval surface inside the same app: when ChatGPT decides a query needs fresh information, it runs Bing-powered searches, retrieves a handful of pages, and synthesizes an answer with inline citations. As of February 2026, ChatGPT Search fires on roughly 34.5% of prompts, down from 46% in late 2024. The remaining ~65% are answered from training data alone.
This article is about being cited in the retrieval surface. To check whether your brand currently appears in any ChatGPT answer, see how to check your brand in ChatGPT. Getting into the training data itself is a slower, parametric play covered in how AI chooses brands to recommend. For the foundation, see what is AEO.
Optimize for Bing, because ChatGPT Search runs on Bing
ChatGPT Search uses the Bing index as its retrieval source. An independent 2026 study found that 87% of SearchGPT citations match Bing's top results, and only ~12% match Google's top 10. Roughly 88% of ChatGPT-cited URLs do not rank in Google's top 10 at all.
The Baccarat Hotel case study (Search Engine Land, Apr 6, 2026) makes the consequence concrete. Baccarat NYC, founded 2015, with 1,300+ reviews and a Forbes Travel Guide #4 placement, appeared once across 68 ChatGPT prompts about the best NYC hotels. The Fifth Avenue Hotel, founded 2023 with 213 reviews, appeared 13 times. The only meaningful delta was Bing SERP visibility.
Concrete steps:
- Verify your domain in Bing Webmaster Tools and submit your XML sitemap
- Connect the IndexNow API to your CMS so each publish pings the index (a minimal ping sketch follows this list)
- Run your top 20 target queries in Bing and audit positions 1 to 10. That is your real ranking proxy, not Google
- Fix any URL that Bing's URL Inspection tool reports as canonicalized to another URL, blocked, or 404'd
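If your CMS has no IndexNow plugin, the ping itself is a single HTTP POST. A minimal sketch in Python, assuming a key file hosted at the root of your domain; the domain, key, and URL below are placeholders:

```python
import requests

# IndexNow accepts a JSON POST listing freshly published or updated URLs.
# The key must also be served as a plain-text file at keyLocation so Bing
# can verify you control the host. All values below are placeholders.
payload = {
    "host": "www.example.com",
    "key": "8b1d2c3e4f5a6b7c8d9e0f1a2b3c4d5e",
    "keyLocation": "https://www.example.com/8b1d2c3e4f5a6b7c8d9e0f1a2b3c4d5e.txt",
    "urlList": ["https://www.example.com/blog/new-post"],
}

resp = requests.post(
    "https://api.indexnow.org/indexnow",
    json=payload,
    headers={"Content-Type": "application/json; charset=utf-8"},
    timeout=10,
)
# 200 or 202 means the ping was accepted; it does not guarantee indexing.
print(resp.status_code)
```

Wire this into your publish hook so every new or updated URL goes out the moment it ships.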
Bing's search market share is irrelevant here. Bing's index is the substrate ChatGPT Search retrieves from.
Configure your robots.txt for the three-bot model
OpenAI ships three independent crawlers, each with a different job. CMS defaults from 2023-2024 quietly added several of them to disallow lists, leaving many sites Bing-indexed but ChatGPT-Search-invisible.
| Bot | User-Agent | What it does | Default stance |
|---|---|---|---|
| OAI-SearchBot | OAI-SearchBot/1.0 | Indexes pages for the ChatGPT Search retrieval surface | Allow |
| GPTBot | GPTBot | Crawls for future model training (parametric layer) | Allow, unless training opt-out is required |
| ChatGPT-User | ChatGPT-User | Fetches a URL on demand when a user pastes a link | Allow |
A clean baseline robots.txt block, verified against the OpenAI bots reference:
```
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Applebot-Extended
Allow: /
```

Verify the bot is real by reverse-DNS lookup against openai.com before reacting to suspicious traffic. Blocking OAI-SearchBot is the single most common cause of “Bing-indexed but ChatGPT-Search-invisible” diagnoses in our audits.
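A minimal sketch of that reverse-and-forward DNS check in Python. The expected hostname suffix is an assumption for illustration; confirm the authoritative values (OpenAI also publishes IP ranges for its bots) before acting on the result. The sample IP is a placeholder:

```python
import socket

def verify_crawler_ip(ip: str, expected_suffix: str = ".openai.com") -> bool:
    """Reverse-DNS the claimed crawler IP, then forward-confirm the result."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no PTR record at all: treat as unverified
    if not hostname.endswith(expected_suffix):
        return False
    # Forward-confirm: the hostname must resolve back to the same IP,
    # otherwise the PTR record could be spoofed.
    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False
    return ip in forward_ips

# Placeholder IP pulled from your access logs:
print(verify_crawler_ip("203.0.113.42"))
```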
Optimize the seven ranking factors that actually move citations
An April 2026 industry analysis drew on the largest public dataset on this question: 50,553 ChatGPT responses across 16,851 queries (Search Engine Land, Apr 16, 2026). Several of its findings cut against a decade of pillar-page advice.
Top Bing position dominates. Position 1 was cited in 58.4% of responses; position 10 in 14.2%. Positions 1 to 3 hold most of the citation budget.
Narrower beats broader. Pages that answered the main query directly outperformed comprehensive pillar guides. Sweet spot: 500 to 2,000 words. Pages over 5,000 words were cited less than pages under 500.
Fresh, but not freshest. Pages 30 to 89 days old beat pages younger than 30 days. Give your page time to settle into the Bing index before measuring.
Schema lifts citation rate by 6.5 percentage points in some studies, but other independent experiments find no direct multiplier. JSON-LD pages were cited in 38.5% of responses vs 32.0% without in the April 2026 dataset; an earlier 3-month controlled experiment found no direct citation lift from schema alone. The reconciling read: schema is necessary scaffolding for entity disambiguation and retrieval matching, not a citation multiplier on its own. Implement it; don't expect it to do the work alone.
Subhead density matters. Sweet spot: 4 to 10 H2/H3 subheadings per page. Too few and the page reads as a wall; too many and chunks shrink below the threshold ChatGPT extracts cleanly.
Listicles dominate. Listicle pages account for 43.8% of all ChatGPT-cited page types (Ahrefs, 1.4M prompts, Search Engine Journal Apr 16, 2026). The “best X for Y” format wins, not the pillar guide.
URL slugs need to be readable. Ahrefs' same dataset: pages with clear, human-readable slugs were cited 89.78% of the time vs 81.11% for opaque ones. Keep slugs short and descriptive; stop stuffing them with keywords.
For a deeper dive on schema, see schema markup for AEO.
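On the schema point above, here is one way to emit the markup from a CMS template: a minimal Article JSON-LD payload rendered server-side in Python. The properties are standard schema.org Article fields; every value shown is a placeholder:

```python
import json

# A minimal Article JSON-LD object; all values are placeholders. Emit the
# result inside a <script type="application/ld+json"> tag in the page <head>.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Best CRM tools for small agencies",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2026-01-15",
    "dateModified": "2026-02-20",
    "publisher": {"@type": "Organization", "name": "Example Co"},
}

script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(article_schema)
    + "</script>"
)
print(script_tag)
```

Rendering it at publish time keeps dateModified honest, which matters given the freshness window above.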
Close the retrieval-vs-citation gap
The most counter-intuitive finding in the Apr 2026 data is the gap between being read and being cited. Ahrefs analyzed 1.4 million ChatGPT prompts and found Reddit among the retrieved-but-uncited sources in 67.8% of results, while it earned a citation only 1.93% of the time. ChatGPT uses Reddit extensively to gauge consensus, but almost never gives it credit. Reddit is a grounding channel, not a citation channel.
The implication for your strategy:
- Use Reddit and forum presence to shape what ChatGPT understands about your category, but do not expect citations
- Citation-grade content has to live on your own domain: clearly attributed, schema-marked, Bing-indexed
- A page with a clear author, publication date, and primary data gets cited; a Wikipedia-style summary gets read
Third-party mention density still matters for the parametric layer, but the citation surface in ChatGPT Search is your own URLs.
Adapt to citation consolidation post-GPT-5.3
GPT-5.3 Instant became default in early March 2026. Resoneo and Oncrawl analyzed 27,000 responses before and after the switch (Search Engine Journal, Apr 6, 2026) and found measurable consolidation: domains per response dropped from 19 to 15; URLs per response from 24 to 19.
ChatGPT is not visiting as many sites per response, but it is going just as deep into each one. The pool of cited sites is shrinking; the bar to enter it is rising.
What to do about it:
- Treat each cited slot as scarce. Aim to be the single best answer for a tightly scoped query
- Audit your top 10 cited URLs and double down on what works: listicles, comparison tables, narrow how-to articles
- Expect some erosion in citation share post-March 2026 even on pages that were stable before
Compare ChatGPT Search to base ChatGPT, Perplexity, and Gemini
A page that ranks in Perplexity often loses inside ChatGPT Search. The reasons are mechanical. The table below compares citation behavior across four surfaces (Gemini included for context, not part of F&W's audit scope).
| Behavior | Base ChatGPT | ChatGPT Search | Perplexity | Gemini |
|---|---|---|---|---|
| Web search per query | Off by default; ~34.5% trigger rate | On when triggered, Bing-indexed | Always on | Always on, Google-indexed |
| Primary index | Training data | Bing | Own crawler + partners | Google |
| Reddit citation share | Moderate (in training) | Low (~5%, declining) | Very high (~46.7%) | Very low (~0.1%) |
| Citation links in UI | Rare in default mode | Inline when Search fires | Numbered, every query | Variable, “learn more” links |
| Freshness sweet spot | Training cycle (months+) | 30 to 89 days post-publish | Last 30 to 90 days | Tied to Google freshness signals |
| Page-length sweet spot | n/a | 500 to 2,000 words | Answer-first short passages | Answer-first short passages |
A Reddit-heavy strategy that wins in Perplexity does almost nothing in ChatGPT Search and is invisible in Gemini. A Bing-indexed listicle that wins in ChatGPT Search may be ignored by Perplexity if it is older than 90 days. Same content, three different outcomes.
For platform-by-platform deep dives, see how to rank in Perplexity, Google AI Mode optimization, and the Perplexity vs ChatGPT vs Gemini comparison.
Avoid these common mistakes
A handful of patterns block ChatGPT Search visibility specifically. Most come from copy-pasting Google SEO instinct into a different surface.
1. Treating ChatGPT and ChatGPT Search as the same product. Only ~34.5% of prompts trigger Search; the rest are training-data answers.
2. Optimizing only for Google rankings. 88% of ChatGPT-cited URLs do not rank in Google's top 10. Bing visibility is the relevant proxy.
3. Writing pillar guides for ChatGPT. Pages over 5,000 words are cited less than pages under 500. Narrower beats broader.
4. Publishing and measuring on day 7. Pages younger than 30 days underperform; pages 30 to 89 days old win.
5. Skipping Bing Webmaster Tools because Bing is small. Bing's market share is irrelevant; ChatGPT's index dependency is the reason to be there.
6. Blocking GPTBot reflexively. The three OpenAI bots are independently controlled. Blocking GPTBot suppresses your parametric layer but does not affect ChatGPT Search citation.
7. Treating Reddit posts as a citation channel. Reddit is a grounding channel with a 1.93% citation rate. The recommendation surface lives on your own domain.
8. Counting referral traffic as the success metric. An industry citation analysis circulated in 2026 showed referral traffic down 52% even where citation share grew. CTR is roughly 1.3% vs Google's ~29%. Track Share of Recommendation instead.
Switch the metric from referral traffic to Share of Recommendation
If you are still measuring AEO success in clicks, the data has moved past you. Aleyda Solís's 2026 reframing, Share of Recommendation, is the right scoreboard: the percentage of relevant prompts in which an AI system recommends your brand as a solution, weighted by recommendation prominence. The full eight-point framework is in her AI Search Content Optimization Checklist.
Mike King (iPullRank) introduced the umbrella term Relevance Engineering at SEO Week NYC in April 2026: structuring content, entities, and signals so retrieval systems pick your content as the answer. AEO is the practitioner-level slice; Relevance Engineering is the broader frame.
Practical measurement, in the order we test inside the F&W audit:
- Mention rate is the share of target prompts where ChatGPT names your brand
- Average citation position tracks whether you are first-cited, second-cited, or lower
- Cross-engine variance is the same prompt run in ChatGPT Search vs Perplexity vs Claude
- Fan-out coverage maps the sub-queries ChatGPT runs internally
- Source-URL distribution shows which of your pages ChatGPT actually picks
A baseline of 20 to 30 prompts run weekly in fresh sessions gets you a defensible Share-of-Recommendation curve.
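A minimal sketch of the weekly tally, assuming you log each run to a CSV; the column names (week, prompt, response, cited_urls) and the brand and domain strings are hypothetical placeholders, not a prescribed format:

```python
import csv
from collections import defaultdict

BRAND = "Far & Wide"           # placeholder brand string
DOMAIN = "farandwide.example"  # placeholder domain

def weekly_share_of_recommendation(path: str) -> dict:
    """Tally mention rate and own-URL citation rate per logged week.

    Assumes a CSV with hypothetical columns: week, prompt, response,
    cited_urls (pipe-separated list of cited URLs).
    """
    totals = defaultdict(lambda: {"runs": 0, "mentions": 0, "own": 0})
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            stats = totals[row["week"]]
            stats["runs"] += 1
            if BRAND.lower() in row["response"].lower():
                stats["mentions"] += 1
            if DOMAIN in row.get("cited_urls", ""):
                stats["own"] += 1
    return {
        week: {
            "mention_rate": s["mentions"] / s["runs"],
            "own_citation_rate": s["own"] / s["runs"],
        }
        for week, s in totals.items()
    }

print(weekly_share_of_recommendation("chatgpt_runs.csv"))
```

Plot the two rates week over week and you have the Share-of-Recommendation curve described above.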
ChatGPT Search ranking checklist
Twelve actions in priority order. The first four are days of work; the rest run on a quarterly rhythm.
- Verify the site in Bing Webmaster Tools and submit the XML sitemap
- Connect IndexNow so every publish pings Bing immediately
- Audit robots.txt and explicitly Allow OAI-SearchBot, ChatGPT-User, GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended (see the audit sketch after this checklist)
- Run your top 20 target queries in Bing and record positions 1 to 10 as your baseline
- Add or fix Organization, Article, and FAQPage JSON-LD on priority pages
- Restructure your three top commercial pages with answer-first paragraphs and 4 to 10 subheads
- Cut any page above 5,000 words into narrower 500-to-2,000-word pages with clear URL slugs
- Update high-traffic pages quarterly, then wait 30 days before measuring
- Convert pillar comparisons into listicles or tables for “best X for Y” queries
- Track Share of Recommendation for 20 to 30 prompts weekly in fresh sessions
- Run the same prompts through Perplexity and Claude to capture cross-engine variance
- Audit which third-party listicles, review platforms, and forums cite you
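For the robots.txt item above, a quick audit sketch using Python's standard-library robot parser; the domain is a placeholder:

```python
from urllib.robotparser import RobotFileParser

BOTS = [
    "OAI-SearchBot", "ChatGPT-User", "GPTBot", "ClaudeBot",
    "PerplexityBot", "Google-Extended", "Applebot-Extended",
]

# Placeholder domain: point this at your own robots.txt.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # a missing robots.txt reads as allow-all

for bot in BOTS:
    allowed = rp.can_fetch(bot, "https://www.example.com/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

Run it after every CMS or plugin update; the disallow lines that break ChatGPT Search visibility usually arrive silently.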
The contrarian take: small, narrow, slightly old pages beat big, fresh, comprehensive ones
Most AEO advice tells you to publish ultimate guides, refresh weekly, and chase comprehensive coverage. The 2026 data says the opposite. Multiple April 2026 large-dataset analyses all point in the same direction: ChatGPT Search rewards the page that answers one specific query, lives on its own URL with a clean slug, has been around long enough to settle into the Bing index, and carries enough JSON-LD for the model to know what it is looking at.
That is closer to a 1990s how-to article than a 2020s pillar page. If you have been writing 5,000-word ultimate guides for two years, the cheapest move this quarter is to break one of them into ten narrow articles and let the next 89 days do the rest.
Be the #1 response in ChatGPT Search
Want to know if ChatGPT Search currently cites your brand — and how to win the top citation slot? Get a Far & Wide AEO Enterprise Audit (from €750). We benchmark you across ChatGPT (including ChatGPT Search), Claude, and Perplexity in three retrieval scenarios (parametric, with web tool, fresh session). You get screenshots, a 15+ document remediation roadmap, the three-bot robots.txt config, and a 1.5-hour live strategy session focused on Bing indexability and ChatGPT Search citation patterns.
Get your audit