Citation freshness across 5 AI engines: what 939 mentions in supplements reveal

Citation freshness is how recently the content cited by an AI engine was published or last updated. We measured it across five AI engines (ChatGPT, Claude, Gemini, Perplexity, Google AI Search) on 939 citations from 501 unique URLs in our April 2026 supplements snapshot. This article reports what we observed and, after each finding, what it means for AEO practice today.

Why we ran this

A practitioner asked it plainly on r/AIRankingStrategy: "I read somewhere that pages updated within the last 2-3 months are twice as likely to appear in AI answers compared to older content, and that AI visibility can drop by over a third in just five weeks without any active maintenance. I'm not sure how accurate these numbers are."

That uncertainty is the entry point for this study. The numbers floating around the AEO community come from a small handful of primary sources, recombined and repackaged across dozens of agency blogs:

  • Lily Ray's research presented at Tech SEO Connect 2025: 50% of AI citations are less than 13 weeks old.
  • Ahrefs' analysis of 17 million citations (July 2025): AI-cited content is 25.7% fresher on average than traditionally ranked organic content.
  • 5WPR AI Citation Source Index 2026 (May 2026): ChatGPT's top-cited pages are 76.4% under 30 days old; meta-analysis of 680 million citations.
  • arXiv 2509.19376, "Solving Freshness in RAG" (September 2025): formal academic backing that retrieval systems require explicit recency weighting beyond semantic similarity.

Each of those is a real finding. But each was measured under different conditions, with different niches, different platforms, and different sample frames. Our goal was to run our own measurement on a defined niche (supplements), document the methodology, and report what we found without recycling someone else's headline.

How we measured

We used the second phase of Far & Wide's Brand Visibility Index (BVI), the same internal methodology we run for client snapshots. The protocol:

  1. Brand mention collection. 47 prompts representing real consumer questions about supplements, run against five AI engines (ChatGPT, Claude, Gemini, Perplexity, Google AI Search) via official APIs. Each engine returned an answer with cited sources.
  2. Mention parsing. Every brand recommendation was parsed into a structured row with platform, position, mention type, and source URLs. Total: 1,329 brand mentions, 1,277 of which carried at least one cited URL.
  3. URL date enrichment. For each unique cited URL (n=501 after deduplication and removal of Gemini grounding redirects), we fetched the page and extracted publish or modified date from JSON-LD schema, OpenGraph meta, schema.org itemprop attributes, HTML5 time tags, and standard meta names. For pages without any date metadata (typically e-commerce product or category pages), we fell back to the Internet Archive Wayback Machine first-snapshot date as a coarse first-seen proxy.
  4. Age computation. Citation age, in days, equals snapshot reference date minus the most recent of publish_date or modified_date for each URL. Snapshot reference date is 2026-04-28, the most recent date in our raw collection data. A condensed code sketch of steps 3 and 4 follows this list.
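
The sketch below is illustrative rather than our production pipeline: the selector list is abbreviated, and the helper names (extract_page_dates, citation_age_days) are invented for this example.

```python
# Sketch of steps 3-4: extract publish/modified dates from a cited URL,
# then compute citation age against the snapshot reference date.
# Abbreviated selectors; helper names are illustrative, not our pipeline.
import json
from datetime import date, datetime

import requests
from bs4 import BeautifulSoup

SNAPSHOT_DATE = date(2026, 4, 28)  # most recent date in the raw collection


def _parse(value):
    try:
        return datetime.fromisoformat(str(value).replace("Z", "+00:00")).date()
    except ValueError:
        return None


def extract_page_dates(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
    dates = {"published": None, "modified": None}

    # 1. JSON-LD schema: datePublished / dateModified
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        for node in data if isinstance(data, list) else [data]:
            if isinstance(node, dict):
                dates["published"] = dates["published"] or _parse(node.get("datePublished", ""))
                dates["modified"] = dates["modified"] or _parse(node.get("dateModified", ""))

    # 2. OpenGraph meta: article:published_time / article:modified_time
    for prop, key in [("article:published_time", "published"),
                      ("article:modified_time", "modified")]:
        meta = soup.find("meta", property=prop)
        if meta and not dates[key]:
            dates[key] = _parse(meta.get("content", ""))

    # Production also checks itemprop attributes, <time datetime=...>,
    # standard meta names, and falls back to the Wayback first snapshot.
    return dates


def citation_age_days(dates: dict):
    # Age = snapshot date minus the MOST RECENT of publish/modified.
    known = [d for d in dates.values() if d]
    return (SNAPSHOT_DATE - max(known)).days if known else None
```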

939 of the 1,277 mentions had usable date metadata. That is 73% coverage. The 27% gap is not random: it is concentrated in e-commerce product pages, vendor pages, and listicles that omit publish_time meta entirely. We discuss the bias this introduces in the limitations section.

Per-platform freshness profile

| Platform         | n   | < 30 days | < 13 weeks | < 6 months | < 1 year | Median age |
|------------------|-----|-----------|------------|------------|----------|------------|
| Claude           | 205 | 43%       | 72%        | 87%        | 91%      | 44 days    |
| ChatGPT          | 124 | 34%       | 64%        | 86%        | 92%      | 55 days    |
| Gemini           | 262 | 30%       | 70%        | 87%        | 92%      | 62 days    |
| Perplexity       | 183 | 32%       | 63%        | 85%        | 93%      | 62 days    |
| Google AI Search | 165 | 28%       | 52%        | 76%        | 84%      | 88 days    |
| All platforms    | 939 | 33%       | 65%        | 84%        | 90%      | 56 days    |

Median citation age varied by platform from 44 days (Claude) to 88 days (Google AI Search). The engines clustered into a tight middle group of four (Claude, ChatGPT, Gemini, and Perplexity, all between 44 and 62 days median) and one outlier (Google AI Search at 88 days). Sixty-five percent of citations across all platforms came from content less than 13 weeks old, higher than Lily Ray's general 50% benchmark but consistent with her direction of effect.

What this means for AEO practice. If your supplements brand has content older than three months, expect lower citation rates on every engine, though least sharply on Google AI Search, where a larger share of cited content sits in the older range. Conversely, content under 30 days old makes up 43% of Claude's citations but only 28% of Google AI Search's. The freshness pressure is highest on the conversational engines and lowest on the engine most closely tied to traditional Google ranking signals.
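
The per-platform table above reduces to a single aggregation pass once every mention carries a platform label and a computed age. A minimal sketch, assuming a pandas DataFrame with platform and age_days columns (the column and function names are ours, not a fixed schema):

```python
# Sketch: per-platform freshness profile from a mentions table.
# Assumes columns 'platform' and 'age_days'; names are illustrative.
import pandas as pd


def freshness_profile(df: pd.DataFrame) -> pd.DataFrame:
    thresholds = {"< 30 days": 30, "< 13 weeks": 91,
                  "< 6 months": 183, "< 1 year": 365}
    rows = {}
    for platform, group in df.groupby("platform"):
        row = {"n": len(group)}
        for label, days in thresholds.items():
            row[label] = f"{(group['age_days'] < days).mean():.0%}"
        row["median age"] = f"{group['age_days'].median():.0f} days"
        rows[platform] = row
    return pd.DataFrame.from_dict(rows, orient="index")
```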

How content age distributes

Cumulative percentage of citations under each threshold:

  • 7 days: ~18%
  • 30 days: 33%
  • 13 weeks (91 days): 65%
  • 6 months (183 days): 84.5%
  • 1 year (365 days): 90.5%
  • Older than 1 year: 9.5%

The distribution is strongly front-loaded with a long tail, not a cliff. The steepest gradient is between 30 days and 13 weeks: 32 percentage points of citations land in those two months. Once content crosses six months, additional decay is gradual: only six percentage points separate the 6-month and 12-month cohorts. After one year, only 9.5% of citations remain.

What this means for AEO practice. If you operate on a quarterly content update cycle (every 13 weeks), you are aligned with where roughly two thirds of citations sit by age. A monthly cycle puts you in the top third of cited content. Anything beyond six months without a refresh leaves you in a gradually shrinking pool that converges on the 9.5% "long tail" survivors.

Modified versus published date

We split URLs into three subsets to compare what type of date information predicts citation (a classification sketch follows the list):

  • Subset P (publish only). URLs with a publish_date but no modified_date in their meta. n=59. Median citation age: 98 days.
  • Subset M (real modification). URLs with both publish_date and modified_date, with the modification at least 30 days after publish. n=125. Median citation age: 70 days.
  • Subset N (excluded). URLs where publish and modified dates are within 30 days of each other (no clear modification event). n=131. Median citation age: 63 days.
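
The classification rule is small enough to state exactly. A sketch with the 30-day window (the function name is illustrative):

```python
# Sketch: classify a URL into Subset P, M, or N from its extracted dates.
# The 30-day window separates a real modification event from publish-time noise.
from datetime import date


def date_subset(published, modified):
    if published and modified:
        delta = (modified - published).days
        return "M" if delta >= 30 else "N"  # real modification vs. no clear event
    if published:
        return "P"  # publish date only
    return None     # no usable date metadata; excluded from all subsets
```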

The 28-day median gap between Subsets P and M is real in our data. But there is a structural caveat: we compute citation age as snapshot_date minus the most recent of publish_date or modified_date. Pages in Subset M, by definition, have a more recent modified_date and therefore a younger computed age. Some of the gap between P and M is the algebra of how we measure age, not evidence that modification independently lifts citation rate.

The honest claim available from this data: in our citation pool, pages with documented modification events appear younger than pages with only publish dates. Whether modification causes AI engines to cite a page more often than they otherwise would requires a different study design (paired before-and-after measurement on the same URLs, ideally on owned content where we control the modification).

What this means for AEO practice. Adding a documented modified_date to your meta (article:modified_time, schema.org dateModified) at minimum makes your pages eligible for the freshness signal AI engines extract. Pages without any date meta are at the floor of date-based ranking signals.
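
To check whether your own pages expose these two signals, a quick probe over the rendered HTML is enough. A sketch assuming requests and BeautifulSoup (the helper name is ours):

```python
# Sketch: does a page expose the two freshness signals named above?
import requests
from bs4 import BeautifulSoup


def freshness_meta_audit(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
    has_og_modified = soup.find("meta", property="article:modified_time") is not None
    # schema.org dateModified can live in JSON-LD or as an itemprop attribute
    has_schema_modified = (
        any("dateModified" in (tag.string or "")
            for tag in soup.find_all("script", type="application/ld+json"))
        or soup.find(attrs={"itemprop": "dateModified"}) is not None
    )
    return {"article:modified_time": has_og_modified,
            "schema.org dateModified": has_schema_modified}
```

A page failing both checks is in the same position as the 27% of our sample we could not date: citable, but invisible to any date-based signal.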

Top-cited domains and their freshness

We filtered to domains with at least 5 citations to keep medians meaningful (single-citation domains have no usable median). Top 10 by citation count:

| Domain                 | Citations | Engines | Median age | Under 30 days |
|------------------------|-----------|---------|------------|---------------|
| fortune.com            | 43        | 5/5     | 5 days     | 100%          |
| innerbody.com          | 42        | 5/5     | 182 days   | 0%            |
| fda.gov                | 42        | 5/5     | 45 days    | 21%           |
| consumerlab.com        | 41        | 4/5     | 8 days     | 68%           |
| bbcgoodfood.com        | 35        | 4/5     | 20 days    | 60%           |
| store.eunatural.com    | 34        | 4/5     | 47 days    | 0%            |
| naturproscientific.com | 33        | 4/5     | 62 days    | 0%            |
| livemomentous.com      | 23        | 3/5     | 63 days    | 0%            |
| medicalnewstoday.com   | 22        | 4/5     | 28 days    | 64%           |
| thegoodtrade.com       | 22        | 4/5     | 76 days    | 27%           |

Nine of the top ten domains have a median citation age under 90 days. Three sit in clearly distinct patterns: fortune.com is exceptionally fresh (median 5 days, every cited page under 30 days), fda.gov sits in a regulatory mid-fresh band (median 45 days), and innerbody.com is the lone exception to the 90-day band, with a 182-day median yet pickup on all five engines.

We deliberately do not draw the conclusion "authority compensates for staleness" from one outlier domain. We do not have external authority data joined to these domains. Without that, "authority" inferred from cross-platform pickup is circular: a domain cited on five engines is by definition the result we would predict from "authority". The honest reading is that one specific domain in our top-10 displays a stale-but-cited profile, and the rest are fresh.

What this means for AEO practice. Most of the domains AI engines actually cite for supplements are publishing or refreshing within a 90-day window. Earning placement on these specific top domains, where editorial cadence is high, gives you both a citation-frequency lift and consistent freshness exposure. The list is your gatekeeper map, scoped to supplements.

What survives a year

Eighty-nine citations in our snapshot were over 365 days old. The dominant patterns:

| Domain               | Stale citations | Median age | Max age    | Type                      |
|----------------------|-----------------|------------|------------|---------------------------|
| supplysidesj.com     | 12              | 396 days   | 983 days   | Regulatory and trade news |
| sleepadvisor.org     | 8               | 566 days   | 651 days   | Evergreen guide           |
| medicalnewstoday.com | 7               | 1,069 days | 1,069 days | Medical reference         |
| perfectketo.com      | 6               | 505 days   | 505 days   | Evergreen guide           |
| healthline.com       | 6               | 5,961 days | 5,961 days | Homepage outlier          |
| us.my-best.com       | 5               | 440 days   | 440 days   | Review aggregator         |
| eatthis.com          | 4               | 1,189 days | 1,189 days | Listicle                  |
| glam-vegan.com       | 4               | 842 days   | 842 days   | Niche guide               |

Three qualitative content types dominate the 1-year tail: regulatory and trade content (supplysidesj.com), evergreen niche guides on stable topics (sleep, keto, vegan), and one homepage outlier (healthline.com, cited by Gemini as a generic reference at an age of 16 years; a routing artifact rather than a content-quality signal).

What this means for AEO practice. Content on topics that are scientifically settled and brand-agnostic (supplement safety, mechanism-of-action explainers, regulatory primers) can survive a full year of citation visibility without aggressive refresh. Content tied to product recommendations, "best of" rankings, or trend-sensitive consumer questions decays much faster.

Cross-platform pickup and age

If freshness is the strongest signal driving AI citation, content cited by multiple engines should be systematically younger than content cited by one. We tested by grouping URLs by how many engines cited them.

| URLs cited by | n unique URLs | Median age | < 30 days | < 13 weeks |
|---------------|---------------|------------|-----------|------------|
| 1 platform    | 204           | 79 days    | 25%       | 55%        |
| 2 platforms   | 67            | 61 days    | 37%       | 67%        |
| 3 platforms   | 31            | 63 days    | 39%       | 55%        |
| 4 platforms   | 10            | 47 days    | 30%       | 80%        |
| 5 platforms   | 3             | 121 days   | 33%       | 33%        |

The trend from 1-platform to 4-platform pickup is roughly monotonic: median age drops from 79 days to 47 days, percentage under 13 weeks rises from 55% to 80%. The 5-platform bucket breaks the trend (median 121 days, 33% under 13 weeks), but with only three URLs, this bucket carries no statistical weight. The three URLs in the 5-platform bucket include innerbody.com (182 days), which alone pulls the median up.

The honest takeaway: URLs picked up by multiple engines tend to be slightly fresher than single-engine URLs, but the effect is small relative to sample noise, and it confounds with domain-level patterns we cannot separate without external authority data. Calling this "freshness drives consensus" overstates a 32-day median delta on n=10.

What this means for AEO practice. The intuition that fresh content wins multi-engine pickup is directionally supported in our data, but the lift is modest. Content strategy aimed at multi-engine visibility should not over-invest in pure freshness at the expense of substance: the 4-platform tier still has a 47-day median, not a 5-day median.
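
Mechanically, the grouping behind this section is two aggregation passes: count distinct citing engines per URL, then profile each pickup tier. A sketch, reusing the hypothetical mentions DataFrame from above with url, platform, and age_days columns:

```python
# Sketch: cross-platform pickup tiers from the same mentions table.
import pandas as pd


def pickup_tiers(df: pd.DataFrame) -> pd.DataFrame:
    # One row per unique URL: distinct citing platforms, plus its age.
    per_url = df.groupby("url").agg(
        platforms=("platform", "nunique"),
        age_days=("age_days", "first"),  # age is a URL-level property
    )
    return per_url.groupby("platforms").agg(
        n_urls=("age_days", "size"),
        median_age=("age_days", "median"),
        under_30d=("age_days", lambda s: (s < 30).mean()),
        under_13w=("age_days", lambda s: (s < 91).mean()),
    )
```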

Position and age within an engine

Inside a single AI response, sources cited in the first position tend to be different in age from sources cited later. We compared median age at position 1 versus position 5+ within each engine:

| Platform         | Position 1 median age | Position 5+ median age | Gap     |
|------------------|-----------------------|------------------------|---------|
| Claude           | 41 days               | 60 days                | 19 days |
| ChatGPT          | 51 days               | 76 days                | 25 days |
| Gemini           | 56 days               | 60 days                | 4 days  |
| Perplexity       | 53 days               | 79 days                | 26 days |
| Google AI Search | 71 days               | 113 days               | 42 days |

In four of five engines, position-1 sources are at least 19 days fresher than position-5+ sources. Gemini is the exception with effectively no gap. The largest effect is on Google AI Search (42 days), the smallest on Gemini (4 days).

Note that "position" in AI responses is not directly comparable across engines: Claude tends toward inline narrative citation, Perplexity toward strict numbered ranking, Google AI Search toward link snippet ranking. We compare position within engine, not across.

What this means for AEO practice. Inside a given engine's output, fresher content is more likely to land in the lead position. For a brand competing for citation visibility, the difference between landing at position 1 and position 5+ is real, and a fraction of that difference appears to be content age. Recently published or refreshed content is over-represented at the top of engine answers.
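
The within-engine comparison is a filtered median per platform. A sketch, again on the hypothetical mentions DataFrame, now assuming a position column:

```python
# Sketch: median age at position 1 vs. position 5+ within each engine.
import pandas as pd


def position_age_gap(df: pd.DataFrame) -> pd.DataFrame:
    lead = df[df["position"] == 1].groupby("platform")["age_days"].median()
    late = df[df["position"] >= 5].groupby("platform")["age_days"].median()
    out = pd.DataFrame({"pos_1_median": lead, "pos_5plus_median": late})
    out["gap_days"] = out["pos_5plus_median"] - out["pos_1_median"]
    return out  # compare within an engine only; positions differ across engines
```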

Three-Layer Visibility Model context

This study measures Layer 3 of the Three-Layer Visibility Model, the layer where AI engines retrieve fresh content from the open web during a user session, with no prior context. Layer 1 (parametric knowledge baked into model weights at training time) updates only when models retrain, on a 3- to 12-month cycle. Layer 2 (contextual retrieval that incorporates user signals) sits between Layer 1 stability and Layer 3 volatility. Citation freshness is most operative at Layer 3, and the numbers in this article describe behavior at that layer specifically.

This matters because brands that focus optimization energy on Layer 1 (waiting for retraining cycles to embed brand awareness) miss the layer where freshness pressure is most active. Conversely, optimizing only for Layer 3 freshness without building Layer 1 entity recognition produces unstable visibility that swings session to session.

What we cannot claim

The honest scope of this study, in plain terms:

  1. Single niche. Everything in this article describes supplements. We have no comparable measurement yet for SaaS, finance, legal, healthcare, or other verticals. Cross-niche generalizations are not supported by this data alone.
  2. Single snapshot. Numbers reflect a single point in time (April 2026 collection). Run-to-run variation is unmeasured.
  3. Selection bias on dates. 27% of citations were excluded because their URLs lacked parseable date metadata. The excluded set skews toward e-commerce product pages, vendor sites, and certain listicle formats. Our distribution likely under-represents commercial-product citations.
  4. No causal claims. All findings here are descriptive and correlational. We did not refresh-and-measure on owned content, and we have no counterfactual for "what if this page had not been modified". Statements about modification "working" are not supported without that experimental arm.
  5. No external authority data. Domain Rating, Domain Authority, and traffic estimates are not joined to this data. Statements about "authority compensating for staleness" are not testable in this snapshot.
  6. Small sample in some buckets. The cross-platform 5-platform bucket has n=3. Cross-platform 4-platform has n=10. Patterns in those buckets are observations, not statistically powered findings.
  7. Position is not directly comparable across engines. Inside-engine position comparison is valid; cross-engine position comparison is not.

Future BVI snapshots will close some of these gaps. Multi-snapshot longitudinal data, multi-niche replication, and external authority enrichment are on the Phase 3 roadmap.

Frequently asked questions

How often should we update content for AI visibility?

In our supplements data, 65% of citations are under 13 weeks old, and 33% are under 30 days. A quarterly refresh cycle aligns with the median signal; a monthly cycle puts you in the top third. Beyond six months without refresh, citation share decays to a slowly-shrinking long tail.

Is the 13-week rule real?

Lily Ray's research showing 50% of AI citations under 13 weeks is real and replicates directionally in our supplements data (we observed 65% under 13 weeks, 15 points higher). The exact percentage varies by niche and snapshot conditions. The directional finding holds: most cited content is recent. The precise threshold is not a universal constant.

Does updating an old page work?

In our snapshot, pages with documented modification events at least 30 days after publish appear in the citation pool with a younger median age (70 days) than pages with only a publish date (98 days). This is partially a measurement artifact (we compute age from the more recent date), and it does not prove modification independently lifts citation rate. A controlled test on owned content is the cleaner answer; we have not run that yet.

Why does my Google ranking not translate to AI visibility?

Two patterns in our data bear on this. First, the domains AI engines cite most often are not always the domains ranking highest on Google for the same query: domains like consumerlab.com and innerbody.com lead AI citations for supplements regardless of their organic ranking position. Second, AI engines appear to weight content age more heavily than Google's traditional algorithm: a strong-ranking page that has not been refreshed can be replaced in AI citations by a fresher page from a less-authoritative domain.

Which AI engine prefers the freshest content?

In our April 2026 supplements snapshot, Claude had the lowest median citation age (44 days) and the highest share of citations under 30 days (43%). Google AI Search had the highest median citation age (88 days) and the lowest share under 30 days (28%). Whether this reflects engine-level architecture or single-snapshot variation is open until we replicate.

What kind of content survives more than a year of citation visibility?

In our data, three patterns dominate the 1-year tail: regulatory and trade content, evergreen niche guides on scientifically settled topics (sleep, keto, vegan), and homepage citations as generic references. Product-specific and trend-sensitive content decays out of the citation pool faster.

What is "active maintenance" exactly?

The vocabulary comes from practitioners: it means an ongoing refresh cadence with documented modification events, not occasional one-off edits. Operationally, the floor is to set article:modified_time and schema.org dateModified on every meaningful update. The ceiling is to update content substance on a regular cycle aligned with engine freshness signals.

Can I trust the floating numbers in AEO articles?

Some are primary research (Lily Ray's 13-week finding, Ahrefs' 17M-citation analysis, the 5WPR Citation Source Index meta-analysis). Others are secondary aggregations of those primary numbers. When you encounter a stat without a clear methodology link, treat it as directional rather than precise. Our methodology disclosure here is meant to be the kind of source you can verify before quoting.

Curious whether your brand shows up in AI answers? The AI Visibility Report (€80) maps how ChatGPT recommends in your category and gives you a 10-action roadmap to improve your visibility. PDF in your inbox in 20 minutes, no subscription. For larger sites and multi-product catalogs, the AEO Enterprise Audit (from €750) covers 3 AI engines, 15+ deliverables, and a 1.5-hour strategy call.