This guide covers Claude's two answer modes (parametric vs grounded), the three-bot crawler framework with the exact robots.txt block to publish, the Opus 4.7 regression event of April 16, 2026, the B2B vs consumer source mix, the "long, honest content beats short promotional" rule, the Constitutional AI / AUP refusal pattern, and a manual workflow for tracking visibility when no public crawl-log exists.
Thesis up front: Opus 4.7 made Claude harder to game and harder to measure simultaneously. Source attribution dropped, paraphrasing rose, refusals on benign branded queries spiked — but the underlying retrieval (Brave Search backbone, three-bot crawler framework, conservative-citation philosophy) didn't change. Winners optimize both the parametric layer (training-data presence) and the grounded layer (Brave rank + Claude-SearchBot allow + structured citable content) at once.
Understand what "ranking in Claude" actually means
Claude is Anthropic's general-purpose AI assistant — used in Claude.ai chat, Claude Code, the Anthropic API, and a long tail of agentic tools. There is no ranking algorithm in the Google sense; Claude reasons over pages rather than ranking them. You rank in Claude by becoming a source Claude either remembers (parametric knowledge from training) or retrieves and cites (the web tool, which industry analyses suggest is fired primarily against the Brave-Search-aligned retrieval layer).
Two distinctions matter:
- Parametric vs grounded. Claude answers many questions from training data without firing the web tool. Knowledge cutoff for Opus 4.7 is January 2026. If your brand isn't in the training corpus and the user hasn't triggered the tool, you're invisible.
- Conservative citation. Even when the web tool fires, Claude drops sources it doesn't trust — undated pages, promotional copy, content without a named author. This is Constitutional AI training, not a knob you can negotiate with.
Trace what changed with Opus 4.7 — and what didn't
Claude Opus 4.7 launched April 16, 2026 as the new default on Max and Team Premium tiers. Within 24 hours, AEO-relevant complaints landed in three places: r/ClaudeAI, GitHub issues, and trade press. Most Claude-optimization guides on the public internet are still pre-4.7.
What regressed in 4.7, and what didn't (industry-observed magnitudes — exact figures contested across sources):
| What changed | Direction | Source |
|---|---|---|
| BrowseComp benchmark | -4.4 points vs Opus 4.6 | MindStudio review, late April 2026 |
| MRCR v2 8-needle (1M context) | 78.3% → 32.2% | Wentuo regression analysis |
| Citation specificity | Lower; "fewer direct quotes, more paraphrasing that drifts" | GitHub #50235 |
| AUP refusals on benign branded queries | Spiked | The Register, April 23, 2026 |
| Brave Search retrieval backbone | Unchanged | Anthropic web-search documentation (Vertex AI) |
| Three-bot crawler framework | Unchanged | Search Engine Journal, Feb 25 2026 |
| Knowledge cutoff | January 2026 (unchanged) | Anthropic model card |
The community headline was r/ClaudeAI's thread "Claude Opus 4.7 is a serious regression, not an upgrade", which cleared roughly 2,300 upvotes within 48 hours. The complaint cluster: more paraphrasing, less precise source attribution, claims attributed to the wrong document when synthesizing across multiple inputs.
The signature failure mode is what GitHub issue #50235 calls "confident-prose fabrication." Verbatim: "Confident-sounding internal output gets emitted as factual claim without tool-verification… The model does not treat its own prose as requiring source attribution the way it treats structured data." The thread documents fabricated git commit hashes presented as real.
The AUP spillover hit the same week. The Register's April 23 piece ("Claude Opus 4.7 has turned into an overzealous query cop") collected reports of refusals on innocuous branded and technical queries. For regulated categories, the same filter that refuses "best supplement for X" affects whether Claude will name your firm.
The frame. Retrieval didn't change. Generation did: Claude cites less precisely and refuses more often. Your brand can inform a Claude answer without appearing as a clickable URL — visibility-without-citation is a real failure mode, and it's why measurement got harder at the same moment optimization got more rewarding.
Tell Claude's web-tool mode apart from its parametric mode
The single most-confused topic in Claude AEO is when the web tool actually fires. Claude doesn't search by default — it searches when the user explicitly asks, when the model self-elects, or when an agentic flow has tools enabled. In standard chat with no triggers, Claude answers parametrically from training data.
The two modes have different optimization strategies. Parametric visibility comes from being in the training corpus (Wikipedia, large publications, analyst sites — crawled before the January 2026 cutoff). Grounded visibility comes from ranking in Brave Search and being indexable by Claude-SearchBot. Optimize only the grounded layer and you lose half the surface.
When does the web tool fire?
| Condition | Web tool fires? | Implication |
|---|---|---|
| Free tier, web tool toggled OFF | No — parametric only | If you're not in training data, you're invisible |
| User types "search the web for…" or pastes a URL | Yes | Brave rank decides whether you appear |
| Current event, recent news, or post-cutoff topic | Usually yes — model self-elects | Brave rank decides whether you appear |
| Generic "what's the best X for Y" with no time signal | Sometimes; probability rises with category specificity | Both layers matter |
| API call with no web tool enabled | No | Only training-data presence matters |
| Max / Team Premium, default tools on | More often (default-on for many flows post-4.7) | Brave rank dominates |
| Claude Code / agentic workflow with tool access | Frequently — tool use is default | Brave + structured data dominate |
Practical implication. You cannot pick one layer. Brands that win in Claude treat parametric (Wikipedia, G2, Gartner, LinkedIn, press) and grounded (Brave + Claude-SearchBot + structured citable content) as a single program with two tracks. See How AI Chooses Brands to Recommend.
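If you want to see the split mechanically, here is a minimal sketch using the Anthropic Python SDK: one call with no tools (parametric only) and one with the web-search tool enabled. The model ID and the web-search tool type string are assumptions; use the identifiers listed in Anthropic's current tool-use docs for your account.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

QUESTION = "What is the best CRM for healthcare?"

# Parametric only: no tools enabled, so the answer comes from training data.
parametric = client.messages.create(
    model="claude-opus-4-20250514",  # placeholder: substitute your target model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": QUESTION}],
)

# Grounded: the web-search tool is enabled and the model may elect to fire it,
# at which point Brave rank decides which sources enter the context window.
grounded = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": QUESTION}],
)
```

Running the same query both ways against your own brand terms is the fastest way to confirm which layer you're actually visible in.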
Configure the three-bot crawler framework correctly
Anthropic uses three separate crawlers, each with a different job. The boilerplate "block all AI bots" template that spread across CMS platforms in 2023-2024 now actively suppresses Claude visibility for sites that copy-pasted without revisiting.
| Bot | Function | Effect of blocking |
|---|---|---|
| ClaudeBot | Retrieves content for training future Claude models | Excludes you from training data; future Claudes don't "know" your brand parametrically |
| Claude-User | Real-time page fetch when a user / model requests it during a Claude conversation | Prevents Claude from reading your URL when users paste it into a session |
| Claude-SearchBot | Indexes content for Claude's web-search results when the tool fires | Removes your site from Claude's search-result citations entirely — the visibility killer |
The minimum publishable configuration — copy this into robots.txt if you want Claude visibility:

```
User-agent: ClaudeBot
Allow: /

User-agent: Claude-User
Allow: /

User-agent: Claude-SearchBot
Allow: /
```

If you have a strong reason to opt out of training (proprietary research, paywalled content, IP concerns), block ClaudeBot only — keeping in-conversation and search-index access intact:
```
User-agent: ClaudeBot
Disallow: /

User-agent: Claude-User
Allow: /

User-agent: Claude-SearchBot
Allow: /
```

Audit step before you read another line. Many sites running WordPress, Shopify, or older CMS templates inherited a wildcard rule that quietly catches Claude-SearchBot. Anthropic's own warning: blocking Claude-SearchBot "may reduce your site's visibility and accuracy in user search results." That's the polite way of saying it removes you from the citation set entirely.
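A quick way to run that audit, sketched with only the Python standard library; the domain is a placeholder. Because the parser applies wildcard groups the same way crawlers do, this catches the legacy "block all AI" boilerplate problem directly.

```python
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder: your domain

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

# Check each Anthropic crawler against your homepage (or any priority URL).
for bot in ("ClaudeBot", "Claude-User", "Claude-SearchBot"):
    allowed = parser.can_fetch(bot, f"{SITE}/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```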
Align with Claude's web-search backbone: Brave
Claude's web search tool retrieves through Brave Search. This is Anthropic-confirmed in the official Claude web search documentation, surfaced via the Vertex AI integration docs (“Claude uses Brave Search as its web search provider”). The practical implication: optimize for Brave indexability, not just Google. Your Brave rank matters more than your Google rank when Claude's web tool fires.
Brave runs its own independent web index — not a re-skin of Bing or Google. Three implications: your Google rank doesn't transfer (a page ranked #3 on Google may rank #28 on Brave or not be indexed); Brave Search Webmaster Tools offers sitemap submission and crawl reports that most marketing teams have never opened; Brave's signals favor independent publishers, technical documentation, and Reddit threads over mass-media domains.
Concrete tactics:
- Submit your sitemap to Brave Search Webmaster Tools. Free, 15 minutes.
- Test 10–15 customer-intent queries on search.brave.com in incognito. The delta vs Google is your opportunity gap.
- Confirm Bravebot crawls your site — check server logs for the `Bravebot` user-agent (a log-scan sketch follows this list).
- Earn links from independent publishers and community sources — outsized impact on Brave.
- Maintain canonical URLs and clean redirects. Brave's smaller crawl budget penalizes redirect chains harder than Google.
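For the crawl-verification bullet above, a minimal log-scan sketch that counts hits per crawler in a standard access log. The log path is a placeholder; the user-agent tokens are the ones named in this guide.

```python
from collections import Counter

BOTS = ("Bravebot", "ClaudeBot", "Claude-User", "Claude-SearchBot")
hits = Counter()

# Placeholder path: adjust for your web server's access log location.
with open("/var/log/nginx/access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for bot in BOTS:
            if bot in line:
                hits[bot] += 1
                break  # count each request once

for bot in BOTS:
    print(f"{bot}: {hits[bot]} requests")
```

Zero Bravebot hits over a multi-week window is a crawl problem to solve before any content work.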
Honesty caveat: Opus 4.7's paraphrasing tendency means the same Brave rank now produces a less attributable Claude mention than pre-April. You may rank #1 in Brave, get pulled into Claude's context window, and still not see a clickable citation.
Structure content the way Claude prefers to cite it
The content rule that most surprises B2B marketers: long, dense, honest content beats short, punchy, promotional content on Claude. This contradicts the usual SEO and ChatGPT priors. Claude reasons across the entire document, and its conservative-citation default actively penalizes promotional tone.
Long beats short
A 2,000-word guide covering a topic comprehensively performs better in Claude than the same material split into five 400-word posts. Claude reasons across the document; ChatGPT and Perplexity extract individual chunks. For B2B AEO, target 1,800–3,500 words per page on any topic you want Claude to treat as authoritative.
Honest limitations boost citation rate
Pages that include phrases like "we don't know," "this is a limitation," "your mileage may vary" get cited more than pages claiming certainty. Directional figure circulating in industry analyses through April 2026: roughly 1.7× citation lift for content with explicit limitations vs equivalent content in promotional voice. Constitutional AI training rewards hedging.
This is the contrarian rule. Most marketing teams write to project authority and certainty — exactly the voice Claude downweights. Pages that win look like good engineering documentation, not ad copy. Working test: count explicit uncertainty statements on your top-priority page. If zero, you're writing to lose Claude.
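The working test above is easy to script. A rough sketch; the hedge-phrase list is illustrative, not exhaustive, and the file name is a placeholder for a plain-text export of your page.

```python
HEDGES = [
    "we don't know", "this is a limitation", "your mileage may vary",
    "may not", "depends on", "we haven't tested",
]

def count_hedges(text: str) -> int:
    """Count occurrences of explicit uncertainty phrases in a page."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in HEDGES)

with open("top_priority_page.txt", encoding="utf-8") as f:  # placeholder file
    page_text = f.read()

print(f"Explicit uncertainty statements: {count_hedges(page_text)}")
# Zero means you're writing to lose Claude.
```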
Dated, named-author content compounds
Pages with no visible publish date, no last-modified date, and no by-line get filtered out. The fix: visible publish date and last-modified date at the top; named author with a bio link (not "Team [Brand]"); author credentials (LinkedIn, professional title). For schema, use `Article` with `datePublished`, `dateModified`, and `author` as `Person`. See Schema Markup for AEO.
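A minimal sketch of that markup, generated as JSON-LD; the headline, dates, name, and URL are placeholders. Embed the output in a `<script type="application/ld+json">` tag on the page.

```python
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Evaluate a Tax Advisor",   # placeholder
    "datePublished": "2026-03-02",                  # placeholder
    "dateModified": "2026-04-28",                   # placeholder
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                         # a named author, not "Team [Brand]"
        "url": "https://www.linkedin.com/in/janedoe",  # bio / credentials link
    },
}

print(json.dumps(article, indent=2))
```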
Formats that extract well
| Format | Claude extractability | Use for |
|---|---|---|
| Long-form prose with clear H2/H3 | High | Comprehensive guides, frameworks |
| Tables with labeled rows | High | Comparisons, decision matrices |
| Numbered lists, FAQ blocks | Medium-high | Procedures, long-tail questions |
| Pull quotes from named sources | High | Original research, expert commentary |
| Marketing-flavored prose | Low (actively downweighted) | Avoid |
Earn the third-party validation Claude trusts for B2B
For B2B and professional-services queries, Claude's source preference tilts hard toward analyst, review, and entity sources — not Reddit. Opposite of the Perplexity playbook, and where most cross-platform AEO advice falls apart.
The B2B sources Claude favors disproportionately: Gartner reports and Magic Quadrant references; LinkedIn long-form posts and Pulse articles; G2 / Capterra / TrustRadius reviews; Wikipedia entity articles (Wikipedia eligibility is itself a strong entity signal); industry analyst blogs (Forrester, IDC, sector-specific firms); and company blogs with structured data, named authors, and dated publication.
Reddit's role is real but category-conditional. Industry-observed figures put Claude's Reddit-citation share at roughly 6.6% on aggregate, but the average is misleading. Consumer queries ("best running shoes") pull Reddit far more often. B2B queries ("best CRM for healthcare") pull Gartner, G2, LinkedIn, Wikipedia disproportionately. "Claude loves Reddit" generalizes from consumer data; "Claude ignores Reddit" generalizes from B2B data.
Concrete moves for B2B brands:
- Get listed on G2 and Capterra with 15+ reviews and every field complete. Sparse profiles get filtered out.
- Establish a Wikipedia entity if you qualify under notability guidelines. Even one article changes Claude's parametric treatment.
- Earn analyst mentions — guest commentary, vendor landscape reports, Gartner Peer Insights presence.
- Publish on LinkedIn under named authors (founder, senior practitioners) — Pulse articles, not status updates.
- Build your knowledge graph entity with `sameAs` schema linking your domain to Wikipedia, LinkedIn, Crunchbase (sketched after this list). See Brand Entity Optimization for AI.
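The `sameAs` bullet above as a minimal JSON-LD sketch; all names and URLs are placeholders for your own entity's profiles.

```python
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",                                      # placeholder
    "url": "https://www.example.com",                            # placeholder
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Corp",            # placeholder
        "https://www.linkedin.com/company/example-corp",         # placeholder
        "https://www.crunchbase.com/organization/example-corp",  # placeholder
    ],
}

print(json.dumps(organization, indent=2))
```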
For consumer brands, invert: lean into Reddit, YouTube, and community platforms while keeping the structural signals.
Plan around the Constitutional AI refusal pattern
Claude's safety training causes it to refuse direct recommendations in regulated categories. Queries like "best supplement for sleep," "best lawyer in [city]," or "best stock pick for retirement" often produce a category-level explanation rather than a named recommendation in Opus 4.7. The AUP filter tightened around the 4.7 launch.
For regulated verticals (legal, health, financial services, supplements, tax advisory), this is structural — the play is different from non-regulated B2B AEO:
- Don't optimize for direct-recommendation queries. Claude won't name a specific lawyer or supplement.
- Optimize for category-authority queries. "What to look for in a [category] firm," "questions to ask a [practitioner]," "how to evaluate [product type]" are not refused. Claude will happily cite long, well-structured guides on these.
- Build trust signals for the category, not the brand. A guide on "how to evaluate a tax advisor" gets cited; a "we are the best tax advisor" page gets refused.
- Use educational content as the entry point. Brands that rank educationally get pulled into the rarer cases where Claude does name examples.
See AEO for Law Firms and Negative AI Memory for Supplement Brands.
Track whether Claude actually cites you
No public dataset of "what Claude-SearchBot crawled this week," no Search Console equivalent, no Anthropic-maintained webmaster dashboard. Third-party tools fill the gap with directional, not ground-truth data. To measure Claude visibility, you build a manual workflow.
The minimum process — 60–90 minutes per week:
- Build 10–15 customer-intent queries from customer questions, sales-call transcripts, keyword research. See Find AI Prompts Customers Use.
- Test each in three modes: Claude default chat (parametric, logged out); Claude with web tool explicitly invoked; Claude Code or agentic flow if your buyers use them.
- Record four signals per result: brand named in the prose (Y/N); brand cited with a clickable URL (Y/N — these now diverge in 4.7); brand paraphrased without attribution (Y/N — the new failure mode); position vs competitors.
- Screenshot everything in a dated folder. One screenshot is anecdote; ten are pattern.
- Cross-reference server logs. Did Claude-User or Claude-SearchBot hit your URL in the last 7 days? If not, you have a crawler issue, not a content issue.
- Repeat monthly. Track week-over-week share of voice across the same query set.
Laborious — but the only methodology that captures Opus 4.7's split-mode behavior, where a brand can be invisible in clickable citations but visible in unattributed prose. See AI Share of Voice: How to Measure.
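To keep the weekly recordings comparable, a minimal logging sketch for the four signals; the file name, field names, and example values are assumptions, not a prescribed format.

```python
import csv
import os
from datetime import date

FIELDS = [
    "date", "query", "mode",        # mode: parametric | web_tool | agentic
    "brand_named", "url_cited",     # Y/N: these diverge post-4.7
    "paraphrased_unattributed",     # Y/N: the new failure mode
    "position_vs_competitors",      # e.g. "2 of 5" named brands
]

def log_result(row: dict, path: str = "claude_visibility_log.csv") -> None:
    """Append one observation; write the header if the file is new."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

log_result({
    "date": date.today().isoformat(),
    "query": "best CRM for healthcare",
    "mode": "web_tool",
    "brand_named": "Y",
    "url_cited": "N",
    "paraphrased_unattributed": "Y",
    "position_vs_competitors": "2 of 5",
})
```

One row per query per mode per week gives you a share-of-voice series that survives the 4.7 citation/paraphrase split.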
If 60–90 minutes a week doesn't fit your team, the Far & Wide AEO Enterprise Audit (from €750) tests Claude alongside ChatGPT and Perplexity across all three retrieval scenarios — parametric, clean session with web search, and session with ideal customer profile — and delivers a 15+ document remediation roadmap with screenshots and per-product visibility analysis. Start at farandwide.io.
Decide where Claude fits in your AI visibility strategy
Claude isn't a first-priority platform for every brand, but for B2B and professional-services brands whose buyers use Claude Pro, it has the highest visibility-to-effort ratio of any AI engine right now — precisely because it's still under-optimized by competitors.
Citation behavior: Claude (web tool) vs Claude (parametric) vs ChatGPT vs Perplexity
| Factor | Claude (web tool) | Claude (parametric) | ChatGPT | Perplexity |
|---|---|---|---|---|
| Default behavior | Fires when needed | Default in standard chat | Fires on ~31% of prompts | Always searches |
| Retrieval index | Brave Search | Training data (Jan 2026 cutoff) | Bing-grounded | Own crawler + partners |
| Citation format | Inline; URLs paraphrased post-4.7 | Brand names, no URLs | Names; links rare | Numbered inline [1][2][3] |
| Reddit weight | Low for B2B, moderate for consumer | Moderate | Moderate | Very high |
| Analyst / review weight | Very high for B2B | High | Moderate-high | Moderate |
| Freshness sensitivity | High when tool fires | Low (anchored to cutoff) | Moderate | Very high |
| Optimize first | Brave + Claude-SearchBot | Wikipedia + analyst | Bing + entity | Reddit + freshness |
| Hardest to measure | Yes (paraphrasing 4.7) | Yes (no endpoint) | Moderate | Easiest |
For a fuller cross-platform comparison, see Perplexity vs ChatGPT vs Gemini and How to Rank in Perplexity.
When Claude should be a first priority
- B2B SaaS, professional services, agencies, consultancies, advisors where buyers use Claude Pro for research.
- Brands with strong analyst, review-platform, or Wikipedia presence — those signals over-index on Claude.
- Brands already writing long-form, dated, named-author content — you're partway optimized; remaining work is robots.txt and Brave.
When Claude is a second priority
- Pure consumer brands — ChatGPT and Perplexity reach more buyers faster.
- Brands that publish only short-form content — Claude downweights short promotional pages.
- Highly regulated direct-recommendation categories — Claude visibility is structurally capped until you've built category-authority content.
Realistic payoff window
For a B2B brand starting from scratch — robots.txt fix, Brave indexing, schema, three to five long-form pages, G2/Capterra cleanup, Wikipedia eligibility — the realistic compounding-payoff window is 6 to 8 weeks before consistent week-over-week Claude mentions appear. Crawler unblocks land in days. Brave indexing in 1–3 weeks. Content compounding in 4–8. Wikipedia and analyst placements in 3–6 months. Don't expect ChatGPT-style "Reddit comment cited within 24 hours" velocity.
Avoid these 5 Claude optimization mistakes
These patterns kill Claude visibility specifically. Some overlap with general AEO mistakes, but several are unique to how Claude works.
1. Blocking Claude-SearchBot via legacy robots.txt boilerplate. The 2023-2024 "block all AI" templates catch Claude-SearchBot through wildcard rules and quietly remove your site from Claude's citation set. Audit before any other Claude work.
2. Treating Claude as Google AI. Claude doesn't search through Google's index. Your Google rank doesn't transfer. If you've never opened Brave Search Webmaster Tools, you've never optimized for the index that feeds Claude's web tool.
3. Writing in promotional voice. "Industry-leading," "transform your business," "the smarter way to…" — Constitutional AI training treats these as downweight signals. Pages that win read like documentation or analyst reports, not ad copy.
4. Splitting deep topics into thin posts. The standard SEO advice inverts on Claude. A 2,500-word guide beats five 500-word posts on subtopics, because Claude reasons across the document.
5. Optimizing the grounded layer only. Brave rank and Claude-SearchBot allowance get you into the web-tool answer set — but the web tool only fires sometimes. The other half is parametric, served from training data. If your brand isn't in Wikipedia, on G2, in analyst commentary, or in long-form content from before the January 2026 cutoff, you're invisible in standard chat regardless of Brave rank.
Final checklist
Foundation (Week 1)
- Audit `robots.txt`: explicitly allow `Claude-User` and `Claude-SearchBot`; decide on `ClaudeBot`
- Submit sitemap to Brave Search Webmaster Tools
- Test 10–15 customer-intent queries on Brave and on Claude (parametric + with web tool); record baseline
- Confirm visible publish date, last-modified date, and named author on every priority page
- Add `Article` and `Person` schema with `datePublished`, `dateModified`, full author markup
- Consolidate top 10 priority pages to 1,800+ words per topic
Content + signals (Weeks 2–4)
- Rewrite top 5 pages with explicit limitations, hedging, uncertainty statements (where genuinely true)
- Convert prose comparisons into tables; add named entities, specific numbers, dates
- Complete G2 / Capterra / TrustRadius profiles — minimum 15 reviews, every field filled
- Publish 2–3 named-author pieces on LinkedIn from founder / senior team
- Pursue Wikipedia eligibility (or refresh existing article)
- Build `sameAs` schema linking domain to Wikipedia, LinkedIn, Crunchbase
Tracking (monthly)
- Re-run target queries in Claude (all 3 modes); log brand-named, URL-cited, paraphrased, position
- Cross-reference server logs for Claude-User and Claude-SearchBot activity
- Compare results to ChatGPT and Perplexity on the same query set
- Refresh top pages with new dates, data, examples
FAQ
Do I need to allow both ClaudeBot and Claude-SearchBot?
Both matter. Claude-SearchBot governs what appears when the web tool fires. ClaudeBot governs what Claude knows in standard parametric chat. Allowing only Claude-SearchBot makes you visible in grounded answers and invisible in parametric ones.
Be the #1 response in Claude
Want to know if Claude actually recommends your brand — and how to get cited as the first option? Get a Far & Wide AEO Enterprise Audit (from €750). Claude is one of the three engines we benchmark, alongside ChatGPT and Perplexity — same query set, all three retrieval scenarios (parametric, with web tool, fresh session), screenshots, a 15+ document remediation roadmap, and a 1.5-hour live strategy session. We test Claude with the web tool fired and in parametric mode separately, so you see both halves of the surface.
Note: the €80 AI Visibility Report covers ChatGPT only — Claude visibility is benchmarked in the Audit. Start with the Audit if Claude is your priority engine.
Get your audit