AI hides your brand on the queries customers actually ask. Here are the 5 prompt types that return zero brands — and how to win them anyway.

We tested 5 AI engines (ChatGPT, Claude, Gemini, Perplexity, plus a search-engine baseline) on 47 supplement queries this month. Of the 235 (prompt × platform) combinations, 51 returned zero brand recommendations: 22%, roughly one in five. Those combinations cannot be won at the brand level, no matter how much budget a brand pours into them. The pattern is not random. It clusters by prompt type, and the prompt types that return zero brands are also the ones that sound most natural to a real human asking a real health question.

For content marketers and AEO strategists, this changes which prompts are worth optimizing for, and what content has to do when direct brand recommendation is structurally off the table.


Why AI returns zero brands

AI reads the question literally. "Best supplements for perimenopause" asks about a category and a persona, so the four LLMs we tested (ChatGPT, Claude, Gemini, Perplexity) deliver category education: ingredient discussion, study summaries, named experts. "Best magnesium brand for sleep" asks about brands, so the same four LLMs deliver brand names. The search-engine baseline behaves differently because it surfaces consumer-review articles where brand names already live in the title; the LLMs, by contrast, treat persona and safety phrasings as a signal to refuse direct recommendation.

The implication is that prompt phrasing, not category competitiveness, is the dominant variable in whether a brand can be surfaced at all.

The prompt-type breakdown

Across the 47 prompts, we grouped queries into 15 types and measured two metrics per type: the average number of brands surfaced per (prompt × platform) combo, and the rate at which a combo returned zero brands. Ten of the 15 types are shown below.

| Prompt type | Avg brands per combo | Zero-brand rate | Example |
| --- | --- | --- | --- |
| trust-cert | 14.6 | 0% | "third-party tested vitamin brands" |
| worst | 8.3 | 24% | "dangerous supplements to avoid" |
| channel | 8.3 | 0% | "best supplements at Costco" |
| brand-first | 7.1 | 3% | "best melatonin brand" |
| trending | 5.4 | 16% | "best NMN supplement" |
| comparison | 4.2 | 0% | "AG1 vs Bloom vs Huel" |
| persona-athlete | 4.1 | 40% | "best supplements for marathon training" |
| symptom-led | 3.6 | 30% | "best supplements for brain fog" |
| safety-question | 1.4 | 60% | "is creatine safe for women" |
| persona-perimenopause | 0.9 | 67% | "best supplements for perimenopause" |
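Both metrics fall out of a simple aggregation over the raw (prompt × platform) results. A minimal sketch in Python, assuming each response has been logged as a row with its prompt type and the list of brands it named (the sample rows here are illustrative, not taken from the study data):

```python
from collections import defaultdict

# Hypothetical log: one row per (prompt x platform) combination.
rows = [
    {"prompt_type": "trust-cert", "platform": "chatgpt", "brands": ["A", "B", "C"]},
    {"prompt_type": "trust-cert", "platform": "claude", "brands": ["A", "D"]},
    {"prompt_type": "safety-question", "platform": "chatgpt", "brands": []},
    {"prompt_type": "safety-question", "platform": "gemini", "brands": ["E"]},
]

# Accumulate combo count, total brands, and zero-brand count per type.
stats = defaultdict(lambda: {"combos": 0, "brands": 0, "zero": 0})
for row in rows:
    s = stats[row["prompt_type"]]
    s["combos"] += 1
    s["brands"] += len(row["brands"])
    s["zero"] += 1 if not row["brands"] else 0

# Report the two metrics used in the table above.
for ptype, s in stats.items():
    avg = s["brands"] / s["combos"]
    zero_rate = s["zero"] / s["combos"]
    print(f"{ptype}: avg {avg:.1f} brands/combo, {zero_rate:.0%} zero-brand")
```

The same aggregation scales to any niche; only the logged rows change.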

The data inverts intuition. The prompt phrasings that read as most consumer-friendly (persona questions, symptom questions, safety questions) produce the worst brand visibility. The prompt phrasings that read as boring or commercial (trust-cert, channel, brand-first, comparison) produce the most brand names.

The 5 most damaging prompt types

Ranked by zero-brand rate, the prompt types where direct brand optimization breaks down hardest are:

  1. Persona-perimenopause — 67% zero-brand rate. “Best supplements for perimenopause” returns zero brands two-thirds of the time. The substitute is ingredient education and named-expert quotes.
  2. Persona-bloodsugar and safety-question — 60% zero-brand rate each. “Natural alternatives to Ozempic” and “is creatine safe for women” both trip the same refusal pattern. AI cites studies, names ingredients, occasionally names a clinician, almost never names a brand.
  3. Persona-athlete — 40% zero-brand rate. “Best supplements for marathon training” performs better than perimenopause, but still returns no brand 4 in 10 times. The athlete category is structurally fragmented (46 unique brands across 3 prompts in our data), which compounds the visibility problem.
  4. Symptom-led — 30% zero-brand rate. “Best supplements for brain fog” and similar symptom phrasings sit in the middle of the range. AI sometimes names brands; more often it names mechanisms.
  5. Persona-pregnancy and persona-biohacker — 20% zero-brand rate each. Both perform much better than perimenopause but still leave one combo in five empty.

These five types account for most of the structural visibility loss in the supplement niche. A brand that builds its content strategy around these phrasings is, by the data, spending the majority of its budget on prompts where AI structurally refuses to surface a brand.

What AI substitutes when it returns zero brands

When the four LLMs decline to name a brand, they do not return an empty answer. They substitute one of three things, and across the metadata on the 51 zero-brand combos the pattern is consistent.

Ingredient discussion is the most common substitute. Instead of "Brand X magnesium is best for sleep", AI explains forms: magnesium glycinate versus magnesium citrate versus magnesium L-threonate, dosing windows, absorption considerations. The reader gets a chemistry lesson, not a shopping list.

Peer-reviewed study summaries are the second. ChatGPT on creatine safety, for example, cites 5 separate studies and names zero brands. Claude on adaptogens often follows the same pattern. The content is technically authoritative and commercially neutral.

Expert authority quotes are the third. Dr Mary Claire Haver on perimenopause, Dr Jolene Brighten on women’s hormones, and similar named clinicians appear in AI responses without products attached. The expert provides the credibility AI is unwilling to grant a brand directly.

For an AEO strategy, the substitute matters as much as the refusal. The question stops being “how do we get our brand named” and becomes “how do we get our brand into the substitute content AI is willing to cite”.

What this means for content strategy

Three concrete moves come out of the data. Each addresses a different layer of the substitution problem.

Move 1: Build ingredient-authority content on a brand-owned domain. When AI substitutes “Brand X melatonin” with “magnesium glycinate is the best form for sleep”, the next question is which authority AI cites for that ingredient claim. In the 47-prompt dataset, two brand-owned domains rank in the top 25 source domains for citation share — livemomentous.com (28 citations across 4 platforms) and omre.co (11 citations across all 5 platforms). Both publish dense, ingredient-science-led content with named studies and dosing detail. AI cites them inline as authority sources alongside neutral publishers. The brand name appears as a source, not as a product recommendation, but it appears. That is a different kind of visibility, and it works on prompts where direct brand recommendation is structurally off the table.

Move 2: Reframe the optimization target as brand-first prompts. The data shows a 64-percentage-point gap between persona-perimenopause (67% zero-brand rate) and brand-first phrasing (3% zero-brand rate). A perimenopause supplement brand should not optimize content for “best supplements for perimenopause”. It should optimize for “best magnesium brand for menopause”, “best Estroven alternatives”, or head-to-head comparisons such as “Thorne Meta-Balance vs HUM Fan Club”. Brand-first phrasing is what AI answers with brand names. Build the content around the queries AI will actually surface, then earn the placement on those queries through depth, citation, and on-domain credibility.

Move 3: Target the cited expert and publisher network instead of the brand directly. When ChatGPT cites Dr Mary Claire Haver on perimenopause without products, the goal becomes “appear in a Dr Haver context” — research collaboration, podcast feature, peer-reviewed study co-authorship, sponsored editorial that puts the brand in proximity to the expert AI already trusts. The brand does not go through ChatGPT directly; it goes through the source ChatGPT cites. The same logic applies to publisher contexts: Healthline, Fortune, ConsumerLab, FDA.gov, Innerbody, and the next 20 mid-tier publishers gatekeep most of the brand citations in our supplement dataset. Editorial placement on those domains is a more reliable lever than direct prompt optimization.

These three moves work because they treat the substitution as a routing problem, not as a refusal. The brand still appears — just not where the original prompt asked.

What you can do now

  • Map your target queries by prompt type, then check the zero-brand rate per type. Persona, symptom, and safety phrasings need a different content strategy than brand-first or comparison phrasings.
  • Identify which of your queries are brand-extractable today and which are ingredient-only today. The split determines where direct optimization works versus where authority-network strategy works.
  • Audit the publisher and expert sources AI cites on your zero-brand queries. Those are the addressable surfaces for indirect visibility.
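The audit step above reduces to counting cited domains across zero-brand responses. A minimal sketch, assuming the cited URLs have been logged per response (the sample data is hypothetical, not from the study):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical zero-brand responses with the source URLs the AI cited.
zero_brand_responses = [
    {"prompt": "best supplements for perimenopause",
     "citations": ["https://www.healthline.com/a", "https://livemomentous.com/b"]},
    {"prompt": "is creatine safe for women",
     "citations": ["https://www.healthline.com/c", "https://www.fda.gov/d"]},
]

# Tally cited domains; the most-cited ones are the addressable
# surfaces for indirect visibility.
domain_counts = Counter(
    urlparse(url).netloc.removeprefix("www.")
    for resp in zero_brand_responses
    for url in resp["citations"]
)

for domain, count in domain_counts.most_common():
    print(domain, count)
```

The top of that list is the editorial-placement target set for queries where direct brand optimization is off the table.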

Want this same level of clarity for your category?

The answers differ by niche: which prompt types return zero brands, which phrasings extract brand names, and which content patterns AI substitutes for brand recommendations. Far & Wide runs an AEO Enterprise Audit that maps your brand across ChatGPT, Claude, and Perplexity, identifies which of your target queries are brand-extractable versus ingredient-only, and delivers a prioritized content-strategy roadmap your team can execute.

Request an AEO Audit

For the full dataset behind this article (47 prompts × 5 AI engines, 1,329 brand mentions, 791 unique brands, 275 source domains), see the anchor research piece: 25 domains drive half of all AI brand recommendations in supplements.