---
title: "AEO: How to Rank on ChatGPT, Perplexity, Claude, and Gemini"
slug: aeo-how-to-rank-on-chatgpt-perplexity-claude-gemini
description: "Answer engine optimization is the new half of the same SEO job. Here is the operating manual for getting cited by ChatGPT, Perplexity, Claude, and Gemini."
pillar: pseo-geo-aeo
author: rj-murray
publishedAt: "2026-04-25T00:00:00Z"
tags: ["aeo", "seo", "llm", "chatgpt", "perplexity", "pillar"]
coverImage: /posts/aeo-how-to-rank-on-chatgpt-perplexity-claude-gemini/cover.png
coverAlt: "AEO citation flow across LLM answer engines"
featured: false
faq:
  - q: "What is answer engine optimization?"
    a: "Answer engine optimization is the practice of structuring a website so that large language model answer engines (ChatGPT, Perplexity, Claude, Gemini) cite it as a source when they generate an answer. It uses the same fundamentals as classic SEO (named entities, schema, topical clustering) and adds three new ones: machine-readable summaries, answer-shaped paragraphs, and an llms.txt manifest."
  - q: "Is AEO a replacement for SEO?"
    a: "No. AEO is not a replacement for classic SEO. It is the newer half of the same job. The ranking surfaces are different (Google SERP vs. an LLM citation panel) but the underlying signals overlap. A site with strong classic SEO is roughly 70 percent of the way to AEO already. The remaining 30 percent is structure: schema, llms.txt, and answer-shaped content."
  - q: "Which LLM answer engine sends the most traffic?"
    a: "As of April 2026, Perplexity sends the most click-through traffic per cited source because its UI surfaces inline numbered citations next to every claim. ChatGPT search sends fewer clicks per impression but generates higher-intent visitors. Gemini AI Overviews drive volume in some verticals and almost zero in others. Claude (via claude.ai search) sends the lowest absolute traffic but the highest-quality citations because the answer model selects fewer sources per response."
  - q: "What is an llms.txt file?"
    a: "llms.txt is a proposed standard (llmstxt.org) for a plain-text manifest at the root of a domain that tells an LLM how to navigate the site. It lists the canonical URLs, summaries, and curated link sets the model should prefer when citing. It is the AEO analog of robots.txt and sitemap.xml combined into one human-readable file."
  - q: "Do I need separate AEO pages or do my existing pages work?"
    a: "Existing pages work if they meet three criteria: they have a clean H1 + tl;dr summary at the top, the body is broken into answer-shaped paragraphs (50 to 90 words each, one claim per paragraph), and the page has JSON-LD schema. If any of those are missing, retrofit the existing page rather than building a separate AEO version. Duplicate AEO pages dilute citation signal."
  - q: "How do I track AEO performance?"
    a: "Three signals: referrer logs (filter for chat.openai.com, perplexity.ai, claude.ai, gemini.google.com), branded-prompt monitoring (run a fixed list of 30 to 50 queries weekly across the four engines and log which sources they cite), and assisted conversions in PostHog or GA4 attributed to those referrers. Traditional rank-tracking does not work because citation order is non-deterministic across sessions."
  - q: "How long does AEO take to show results?"
    a: "Faster than classic SEO and slower than paid. Schema and llms.txt deploy in a day. New content gets indexed by Perplexity within 48 hours and by ChatGPT search within roughly two weeks. Citation share grows for 90 to 120 days as the engines re-crawl and rebuild their internal ranking. Most of our clients see meaningful citation lift in the third month."
---
Answer engine optimization is not a replacement for classic SEO. It is the newer half of the same job. ChatGPT, Perplexity, Claude, and Gemini do not crawl the web the way Google does, but they do read the same pages, parse the same schema, and cite the same authoritative sources. The sites that rank in 2026 ship for both surfaces from the start. Below is the operating manual we use on every AtlasForge rebuild.
tl;dr
AEO is classic SEO with three add-ons: machine-readable summaries at the top of every page, JSON-LD schema the LLM can parse without ambiguity, and an llms.txt manifest at the root of the domain. Named entities, named tools, and named numbers get cited. Vague positioning does not. Perplexity and ChatGPT send most of the traffic; Claude and Gemini matter for trust and brand. Track citations weekly across a fixed prompt list. Rebuild pages that fail the answer-shape test. Skip AEO entirely in low-citation verticals.
AEO, defined (and how it differs from SEO)
Answer engine optimization is the practice of structuring a website so that LLM answer engines cite it when they generate an answer to a user prompt. The user never lands on a Google search results page. They land on a chat interface, read a paragraph the model wrote, and click (or do not click) a numbered citation underneath that paragraph.
Classic SEO optimizes for ten blue links and the click-through to your page. AEO optimizes for being one of the three to seven sources the model picked to write its paragraph. The two surfaces share fundamentals: crawlability, page speed, internal linking, schema, topical authority. They diverge on three points.
First, the LLM does not need to send the user to your page to extract value from it. The citation is the win. A page with no clicks but high citation share is still doing the job, because the model is now associating your brand with the answer.
Second, the LLM rewards specific, named, dense content over keyword-stuffed pages. A paragraph that names the tool, the version, the year, the city, and the dollar amount cites cleanly. A paragraph that says "leading provider of innovative services" does not. Classic Google SEO has been moving in this direction for a decade. LLM ranking accelerates it.
Third, the LLM reads the page as semantic content, not as ranked tokens. Schema, llms.txt, and answer-shaped paragraphs let the model parse the page without inference. A site that hands the model clean JSON-LD outperforms a site with the same content buried in unmarked HTML. We cover the schema layer below, and the rebuild path from WordPress is in our WordPress to Next.js migration guide.
Position on record: AEO is not a separate discipline. It is what classic SEO becomes when the search surface is a chat interface. Sites that already rank well are roughly 70 percent of the way there. The remaining 30 percent is structural, and that is what this post covers.
The four answer engines, ranked by traffic and citation behavior
The four engines that matter in April 2026 are ChatGPT, Perplexity, Claude, and Gemini. They behave differently. Optimizing without understanding the differences wastes effort.
Perplexity sends the most click-through traffic per cited source. Its UI surfaces numbered inline citations next to every claim, so users actually click them. Perplexity re-crawls fast (48 hours for new content on monitored domains) and weights schema heavily. If you can only optimize for one engine, optimize for Perplexity. The citation API documentation is at docs.perplexity.ai and worth reading end to end.
ChatGPT search generates lower click volume per cited source but higher-intent visitors. The model surfaces fewer citations per answer than Perplexity, and users click less aggressively because the conversational format makes the answer feel complete. ChatGPT search prefers sources with strong schema and clear authorship. The OpenAI platform documentation at platform.openai.com/docs covers the search tooling and the indexing behavior. Pages we have seen rank in ChatGPT search consistently have FAQPage schema, a clean tl;dr block, and Person schema for the author.
Claude (via claude.ai search and the Claude apps) sends the lowest absolute traffic but selects the fewest, highest-quality citations. When Claude cites your page, it is because the model judged the source authoritative, not because it filled a quota. Claude weights authorship signals (real Person schema, real bios, named credentials) more aggressively than the other three. Optimizing for Claude is mostly indistinguishable from writing a credible page. The Anthropic developer documentation at docs.anthropic.com covers the search-augmented behavior in the Computer Use and tools sections.
Gemini AI Overviews drive volume unpredictably. In some verticals (medical, legal, finance) Gemini is the dominant referrer. In others (B2B SaaS, agency services) it sends almost none. Gemini ranks Google's classic SERP signals heavily, so a site that ranks well in Google Search will usually appear in AI Overviews without separate work. Google's AI documentation lives at ai.google.dev.
The pattern is consistent: Perplexity and ChatGPT for traffic, Claude for trust, Gemini for vertical-dependent volume. The optimizations overlap substantially. The same page can rank in all four if it ships clean schema, named entities, and an llms.txt manifest.
For a deeper read on how the GEO surfaces interact with classic search, see pSEO in 2026: what changed and GEO pages that don't get penalized.
Why named entities, named tools, and named numbers cite better
LLMs cite specificity. The reason is mechanical: when the model is generating an answer, it is selecting source paragraphs that contain the exact tokens the user prompted with. A page that says "we deploy modern web frameworks" has no tokens that match a prompt like "best Next.js 16 agencies for mid-market B2B." A page that says "we ship Next.js 16 sites for $10M to $100M B2B firms in 14 to 21 days" matches the prompt exactly.
The rule is to name everything you can name without lying. Tools, versions, years, dollar amounts, page counts, day counts, city names, client names, clinician names, framework names. Vague claims about quality, speed, or expertise do not cite. Specific claims with named referents cite reliably.
The Therapy Connections rebuild is a clean example. We shipped a 35-page Next.js custom build in 18 days, covering acquired brain injury rehabilitation services across a three-clinician team in Kitchener-Waterloo, Ontario. Every clinician got a named bio page with credentials, treatment specialties, and Person schema. When a user prompts ChatGPT for "ABI rehabilitation clinics in Kitchener-Waterloo," the model has clean signal to cite: a real city, a real condition, a real clinic, real clinicians, real schema. The page cites. A generic "leading rehabilitation services" page does not.
This is also where the doorway-pages risk shows up. We covered the rule in detail in pSEO in 2026: what changed: a programmatic page without 500 unique words and a real underlying data shape is a doorway page waiting to get deindexed. The same uniqueness bar applies for AEO. The model needs distinct signal per page. If your 200 city pages all say the same thing, the model picks one and ignores the rest.
The agency uniqueness bar is concrete: every page has a tl;dr (60 to 100 words), every claim is sourced or specific, every list has named items, and every paragraph carries one claim. The Jetlak rebuild ran 178 product pages through n-gram uniqueness checks in CI with a 40 percent minimum differentiation. The same checks apply to AEO content.
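The CI gate can be sketched in a few lines. This is an illustrative trigram-overlap version of the check, not the production Jetlak pipeline; the 40 percent bar is the one stated above, and the function names are ours.

```typescript
// Sketch of an n-gram differentiation check between two pages.
// Differentiation = share of page A's trigrams that do NOT appear on page B.

function trigrams(text: string): Set<string> {
  const words = text.toLowerCase().match(/[a-z0-9$%.]+/g) ?? [];
  const grams = new Set<string>();
  for (let i = 0; i + 3 <= words.length; i++) {
    grams.add(words.slice(i, i + 3).join(" "));
  }
  return grams;
}

function differentiation(a: string, b: string): number {
  const ga = trigrams(a);
  const gb = trigrams(b);
  if (ga.size === 0) return 0;
  let shared = 0;
  for (const g of ga) if (gb.has(g)) shared++;
  return 1 - shared / ga.size;
}

// CI gate: fail the build if a page pair falls under the 40 percent bar.
function passesUniquenessBar(a: string, b: string): boolean {
  return differentiation(a, b) >= 0.4;
}
```

Run every new programmatic page against its nearest siblings before publish; two city pages that share more than 60 percent of their trigrams are one page wearing two URLs.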
A useful diagnostic: read your top page out loud and count the proper nouns and numerals. Under five and the page will not cite. Fifteen or more and it will.
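The out-loud diagnostic can be roughed out as a script. This is a heuristic sketch, not NLP-grade entity recognition: it counts numerals and mid-sentence capitalized words, and the five/fifteen thresholds are the ones stated above.

```typescript
// Count numerals ($10M, 2026, 18) and mid-sentence capitalized words
// (Next.js, Kitchener-Waterloo) as a rough specificity score.

function specificityCount(text: string): number {
  const tokens = text.match(/\S+/g) ?? [];
  let count = 0;
  for (let i = 0; i < tokens.length; i++) {
    const t = tokens[i].replace(/[^\w$%.]/g, "");
    if (t === "") continue;
    if (/\d/.test(t)) { count++; continue; }            // numerals and mixed tokens
    const prev = i > 0 ? tokens[i - 1] : "";
    const sentenceStart = i === 0 || /[.!?]$/.test(prev);
    if (/^[A-Z]/.test(t) && !sentenceStart) count++;    // proper nouns, roughly
  }
  return count;
}

function willCite(text: string): "unlikely" | "maybe" | "likely" {
  const n = specificityCount(text);
  if (n < 5) return "unlikely";
  if (n >= 15) return "likely";
  return "maybe";
}
```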
The schema layer that LLMs actually parse (JSON-LD subset)
LLMs do not parse arbitrary microdata reliably. They parse JSON-LD. The W3C and Schema.org documentation cover dozens of types; LLMs cite cleanly from a small subset.
The schema we ship on every AtlasForge build:
- Organization with `name`, `url`, `logo`, and `sameAs` (LinkedIn, GitHub, X, Crunchbase). One block per site, in the root layout.
- Person for every named author, every named clinician, every named team member. Includes `jobTitle`, `worksFor`, `sameAs`, and a `description` of credentials. Schema.org docs at schema.org/Person.
- Article or BlogPosting on every blog post, with `author` referencing the Person, plus `datePublished`, `dateModified`, `headline`, and `description`.
- FAQPage on any page with a Q&A block. The schema is documented at schema.org/FAQPage. LLMs lift FAQ pairs verbatim into their answers, so the FAQ on this very post is doing AEO work.
- Service for each service the agency offers (rebuild, retainer, content engine), with `provider` referencing the Organization.
- MedicalBusiness for clinical clients, with `medicalSpecialty`, `availableService`, and `address`. The Therapy Connections rebuild ships this on every condition page; specification at schema.org/MedicalBusiness.
- BreadcrumbList on every nested page. Trivial to add, cited frequently for navigation queries.
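Assembled, a minimal Person block from that subset looks like the following, built the way a Next.js layout would build it before serializing into a `<script type="application/ld+json">` tag. The clinician, clinic, and URLs here are illustrative placeholders, not real client data.

```typescript
// Minimal Person JSON-LD of the shape described above.
// All names and URLs are illustrative.

const personJsonLd = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Jane Doe",
  jobTitle: "Lead Clinician",
  description: "Registered occupational therapist, 12 years in ABI rehabilitation.",
  worksFor: {
    "@type": "Organization",
    name: "Example Clinic",
    url: "https://example.com",
  },
  sameAs: ["https://www.linkedin.com/in/example"],
};

// Serialize once and embed; the rendered page must display the same
// name and credentials, or the schema reads as untrustworthy.
const personScript = JSON.stringify(personJsonLd);
```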
A schema mistake we see often: shipping JSON-LD that references entities that do not exist on the page. The model cross-references the schema against the rendered HTML. If the schema names a clinician and the page does not display that clinician's name, credentials, and bio, the model treats the schema as untrustworthy and the entire page loses citation rank. Schema describes what the page contains. It does not invent it.
Validation matters. We run every page through the Schema.org validator and Google's Rich Results Test before launch. A schema block with errors is worse than no schema at all because the model treats the page as untrustworthy.
For the migration path on existing sites with no schema, see WordPress to Next.js migration path and why mid-market companies keep getting stuck on WordPress. Most WordPress sites have plugin-injected schema with errors that have to be cleaned up before AEO will work.
The llms.txt file and what to put in it
llms.txt is a proposed standard at llmstxt.org for a plain-text manifest at the root of a domain. It tells the LLM how to navigate the site and which pages to prefer when citing. It is the AEO analog of robots.txt and sitemap.xml combined into one human-readable file.
Format is straightforward. A heading with the site name and one-line description, an optional second-level summary section, and one or more curated link sets under H2 headings. Each link is a markdown link with an optional one-line description.
The minimal llms.txt we ship on every AtlasForge build:
```
# AtlasForge

> AtlasForge is a mid-market B2B web and SEO agency. We rebuild marketing sites
> in 14 to 21 days on Next.js 16 and rank them in classic search and on LLM
> answer engines.

## Core pages

- [About](https://atlasforge.one/about): Founder, agency thesis, hiring posture.
- [Pricing](https://atlasforge.one/pricing): Foundry, Atlas, and Empire tiers.
- [Case studies](https://atlasforge.one/case-studies): 20+ client rebuilds.

## Pillar content

- [pSEO in 2026](https://atlasforge.one/blog/pseo-in-2026-what-changed)
- [AEO operating manual](https://atlasforge.one/blog/aeo-how-to-rank-on-chatgpt-perplexity-claude-gemini)
- [The 90-day organic growth plan](https://atlasforge.one/blog/the-90-day-organic-growth-plan)

## Optional

- [Blog archive](https://atlasforge.one/blog): Full index.
```
We have a longer post that goes deep on the format, edge cases, and what to leave out: see the llms.txt file for the complete manual.
The file matters for two reasons. First, several of the answer engines actively look for it (Perplexity confirmed support in late 2025; Anthropic's docs reference it as a navigational aid). Second, even on engines that do not formally support llms.txt yet, the file is a forcing function: writing one makes the agency state its own pillar pages and canonical URLs in plain language, and that exercise alone surfaces gaps.
The llms.txt also doubles as a sitemap for human readers who want to understand the site's structure. We link to ours from the footer.
Topical-cluster pages vs. answer-shaped pages
Classic SEO has converged on the topical-cluster model: a pillar page covers a broad topic, cluster pages cover sub-topics, and internal links flow up to the pillar. AEO inherits this model and adds a constraint: every page in the cluster has to be answer-shaped.
An answer-shaped page has six properties:
- A single H1 that names the topic in the same form a user would prompt.
- A tl;dr immediately under the H1, 60 to 100 words, written as a complete answer to the implied question.
- H2 sections that each cover one sub-claim. No rhetorical questions as H2. No second-person H2.
- Body paragraphs of 50 to 90 words each, one claim per paragraph.
- Bulleted lists for enumerable items (tools, steps, criteria), prose for everything else.
- A FAQ block at the bottom with six or more pairs, each Q&A self-contained enough to cite without context.
This page is built to those rules. Read it back: the tl;dr answers the prompt, the H2s carry the sub-claims, and the FAQ block at the bottom contains six self-contained Q&A pairs. The structure is not stylistic. It is the shape the model expects.
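Two of the word-count rules can be spot-checked mechanically before publish. A sketch under the thresholds stated above (tl;dr of 60 to 100 words, paragraphs of 50 to 90); it does not check heading semantics or the FAQ block, and the names are ours.

```typescript
// Spot-check the tl;dr and paragraph word counts against the
// answer-shape thresholds above.

const wordCount = (s: string): number => (s.match(/\S+/g) ?? []).length;

interface ShapeReport {
  tldrOk: boolean;
  flaggedParagraphs: number[]; // indices of paragraphs outside 50-90 words
}

function auditShape(tldr: string, paragraphs: string[]): ShapeReport {
  const flagged: number[] = [];
  paragraphs.forEach((p, i) => {
    const w = wordCount(p);
    if (w < 50 || w > 90) flagged.push(i);
  });
  const n = wordCount(tldr);
  return { tldrOk: n >= 60 && n <= 100, flaggedParagraphs: flagged };
}
```

Wire this into the content pipeline and a 300-word wall of text never reaches production as a single paragraph.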
Topical clusters in AEO terms work the same way as in classic SEO with one difference: the pillar page should be the answer-shaped one, and the cluster pages should each answer a narrower question that links back. Our pSEO/AEO pillar is this post and pSEO in 2026: what changed; the cluster pages include GEO pages that don't get penalized, the llms.txt file, and the 48-hour before/after demo.
For mid-market clients running a content program, the topical-cluster work is also where the mid-market SEO reporting framework ties into AEO measurement: cluster coverage maps to citation share, and the report should track both.
How Therapy Connections (clinical) gets cited for ABI rehabilitation queries
The Therapy Connections rebuild is the cleanest AEO case study we run. The clinic provides acquired brain injury rehabilitation in Kitchener-Waterloo, Ontario, with a three-clinician team. Pre-rebuild, the site was a static page that pre-dated their growth and did not stand up to insurer scrutiny. Post-rebuild, the site cites in ChatGPT, Perplexity, and Claude for ABI rehabilitation queries in the local market.
What the build ships:
- 35 pages of custom Next.js 16, written and reviewed by the lead clinician before publish.
- Per-condition pages (concussion, stroke recovery, traumatic brain injury, post-concussion syndrome) with MedicalBusiness, MedicalProcedure, and FAQPage schema on each.
- Per-clinician bio pages with Person schema, named credentials, treatment specialties, and licensure.
- A methodology page that names the assessment tools and the rehabilitation frameworks, with a tl;dr at the top and answer-shaped sections.
- An intake-flow page with explicit insurance information, named insurers, and FAQ schema.
- llms.txt at root listing the condition pages, the clinician bios, and the methodology page as the citable surface.
What the citations look like at day 90 post-launch:
- Perplexity cites the concussion page for the prompt "concussion rehabilitation Kitchener-Waterloo" inside the top three sources.
- ChatGPT search cites the methodology page for prompts about post-concussion symptom protocols.
- Claude cites a clinician bio page when a user prompts about credentials for ABI assessment.
- Gemini AI Overviews surface the FAQ block on the intake-flow page for insurance-related queries.
The reason it works is that every page has a real condition, real city, real clinician, real credentials, real schema. The model has clean signal to cite. There is no padding, no marketing language, no generic claims about quality. The clinic shows up because the page deserves to.
The same playbook applies to any clinical, legal, or specialist B2B services site. The rebuild path is in the WordPress to Next.js migration guide and the speed bar is documented in real Lighthouse scores before and after 6 mid-market rebuilds.
AEO measurement: tracking citations without traditional rank-tracking
Traditional rank-tracking does not work for AEO. There is no SERP to scrape. Citation order is non-deterministic across sessions, even for the same prompt on the same engine. Three signals replace it.
Referrer logs. The four engines pass identifiable referrer headers when a user clicks a citation. Filter for chat.openai.com, perplexity.ai, claude.ai, and gemini.google.com (plus the Google AI Overviews referrer pattern). Aggregate weekly. The trend is what matters; absolute numbers are noisy because users frequently land via direct navigation after seeing a citation without clicking it.
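The weekly filter reduces to a hostname lookup. The hostnames below are the ones named in this section; the `chatgpt.com` entry is an assumption about the newer ChatGPT host, and the AI Overviews referrer pattern is omitted because it varies.

```typescript
// Map a raw referrer header to an answer engine, or null.
// chatgpt.com is an assumed alternate host; AI Overviews is not covered.

const ENGINE_HOSTS: Record<string, string> = {
  "chat.openai.com": "chatgpt",
  "chatgpt.com": "chatgpt",
  "perplexity.ai": "perplexity",
  "www.perplexity.ai": "perplexity",
  "claude.ai": "claude",
  "gemini.google.com": "gemini",
};

function classifyReferrer(referrer: string): string | null {
  try {
    return ENGINE_HOSTS[new URL(referrer).hostname] ?? null;
  } catch {
    return null; // empty or malformed referrer header
  }
}
```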
Branded-prompt monitoring. Maintain a fixed list of 30 to 50 prompts that represent the queries you want to rank for. Run them weekly across all four engines (manually for the first month; automate after the playbook stabilizes). Log which sources each engine cited and whether the client appeared. The metric is citation share: percentage of prompts where the client is in the top citation panel. Track it weekly. We use a private Notion doc and a once-a-week 30-minute review.
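The citation-share metric itself is simple enough to keep in a spreadsheet, but if the weekly run is automated, the computation looks like this. The record shape is ours, not a standard.

```typescript
// Citation share: fraction of tracked prompt runs where the client
// appeared in the engine's citation panel that week.

interface PromptResult {
  prompt: string;
  engine: "chatgpt" | "perplexity" | "claude" | "gemini";
  clientCited: boolean;
}

function citationShare(results: PromptResult[]): number {
  if (results.length === 0) return 0;
  return results.filter((r) => r.clientCited).length / results.length;
}
```

Compute it per engine as well as overall; a flat aggregate can hide a Perplexity gain cancelling a ChatGPT loss.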
Assisted conversions. PostHog (or GA4) attribution against the LLM referrers. The visitor who lands from a Perplexity citation rarely converts on the first session. They convert two to four sessions later, often via direct or branded search. PostHog's session-stitching makes the assisted-conversion path visible. We report this monthly to clients on the mid-market SEO reporting framework.
What does not work: keyword rank-tracking tools (Ahrefs, Semrush, Moz) running their LLM-citation modules. They are noisy, built on small samples, and miss the long tail. Build the prompt list and check it manually. The discipline of writing the prompt list is itself useful, because it forces the team to articulate which queries actually matter.
A useful sanity check at the 90-day mark: pull the referrer log, run the prompt list, and ask whether the trend on both is up. If yes, keep going. If no, the page-shape audit needs a redo. We covered the audit checklist in Core Web Vitals changed in 2025; the AEO version uses the same audit cadence.
When AEO is a distraction (low-citation verticals)
AEO is not always worth the work. Three categories of vertical see almost no citation traffic and should focus on classic SEO and direct sales instead.
Transactional commerce with no information component. A site selling commodity products (replacement bulbs, generic apparel, parts catalogs) does not get cited because users do not prompt LLMs for those queries. They search Amazon, Google Shopping, or the manufacturer directly. AEO effort is wasted; the budget belongs in product feed quality and shopping ads.
Hyper-local services with thin search volume. A single-location plumber in a town of 8,000 people has too little prompt volume across the four engines for AEO to be measurable. Classic local SEO (Google Business Profile, citation consistency, local link building) returns more per hour. The exception is a clinical or legal service in a small market where the prompt volume is low but the per-prompt value is high; those are worth AEO work.
Verticals where the LLM refuses to cite specific brands. Financial advice, medical diagnosis, and legal advice generate cautious LLM responses that lean on the engine's safety guardrails rather than specific sources. The model often cites government or regulator pages and refuses to recommend specific firms. AEO can still help (named credentials, schema, named jurisdictions), but the ceiling is lower. Pair with classic SEO and trust signals (reviews, accreditation, named partnerships).
The diagnostic test: run 20 prompts a real prospective customer would actually type, across all four engines. Count how often any source in the vertical gets cited. If fewer than 30 percent of prompts return a citation panel with relevant sources, the vertical is low-citation and AEO is not the priority. If above 60 percent, AEO is core. Between those, AEO matters but should not crowd out classic work.
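The decision rule from that test, written out. The thresholds are the ones stated above; the function name is ours.

```typescript
// Map the 20-prompt vertical test to a priority call:
// under 30 percent citation rate, skip AEO; over 60 percent, it is core.

function aeoPriority(
  promptsWithCitations: number,
  totalPrompts: number,
): "skip" | "secondary" | "core" {
  const rate = promptsWithCitations / totalPrompts;
  if (rate < 0.3) return "skip";
  if (rate > 0.6) return "core";
  return "secondary";
}
```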
For the broader question of where to put marketing budget, see why CMOs should kill paid search budget. The argument for AEO and classic SEO over paid search is the same argument: build owned surfaces that compound, do not rent attention that resets to zero every month.
90-day AEO sprint plan
A 90-day plan to take a site from no AEO posture to measurable citation share. Run it sequentially. Each phase has a deliverable and a gate.
Days 1 to 14: structural baseline.
- Audit the site for schema. Ship Organization, Person (for the founder + named team), Article on every blog post, FAQPage on any page with Q&A, BreadcrumbList everywhere.
- Add a tl;dr block to the top 10 most-trafficked pages. 60 to 100 words each.
- Ship llms.txt at the root. List the top 10 pages, the pillar content, and the case studies.
- Validate every schema block in the Schema.org validator and Google Rich Results Test.
Gate: the home page, top three service pages, and top three blog posts all pass schema validation, all have a tl;dr, and the site has llms.txt at root.
Days 15 to 45: pillar content.
- Identify the three pillar topics for the business. AtlasForge runs four (speed-proofs, technical-depth, pseo-geo-aeo, mid-market-playbook).
- Write or rewrite the pillar pages to answer-shape rules. 3,000+ words each, with 6+ FAQ pairs.
- Build the cluster pages around each pillar (5 to 8 cluster pages per pillar).
- Internal-link aggressively from cluster to pillar and pillar to cluster.
Gate: three pillar pages live, each with 5+ cluster pages linking to it, every page passing answer-shape audit.
Days 46 to 75: named-entity audit and citation tracking setup.
- Audit every service page, case study, and team page. Replace generic claims with named entities (tools, versions, dollar amounts, day counts, city names, client names where contracts allow).
- Build the branded-prompt monitoring list (30 to 50 prompts).
- Set up referrer log filtering in PostHog (or GA4).
- Run the prompt list across all four engines and log baseline citation share.
Gate: every public page on the site passes the proper-noun count test (15+ proper nouns or numerals on substantive pages), and citation tracking has a baseline.
Days 76 to 90: iterate and report.
- Re-run the prompt list weekly. Log changes.
- Identify pages that should be cited but are not. Audit them against the answer-shape rules and fix gaps.
- Ship two or three new cluster posts per week to fill topical gaps the prompt list exposes.
- At day 90, write the citation-share report and compare against the day-46 baseline.
Gate: citation share up by at least 30 percent on the tracked prompt list. If not, the structural work has gaps; redo the audit before adding more content.
This sprint runs in parallel with classic SEO work, not instead of it. The full 90-day plan that combines both is the 90-day organic growth plan. For mid-market teams, AEO and classic SEO share the same content surface and the same engineering surface; running them as separate programs creates duplication.
Closing
AEO is the new half of the same job. The sites that get cited are the sites that already deserve to rank: clean schema, named entities, answer-shaped content, fast pages, real authorship. The new work is structural, not strategic. Ship llms.txt. Add tl;dr blocks. Tighten schema. Audit your top 20 pages against the answer-shape rules. The citations follow.
For mid-market companies still on WordPress, AEO is one of three forcing functions for a rebuild (the others being Core Web Vitals and the pSEO uniqueness bar). We covered the rebuild path in WordPress to Next.js migration path and the demo we run for prospects in the 48-hour before/after demo. If the existing site cannot pass schema validation, cannot ship llms.txt, and cannot retrofit answer-shape without a full content rewrite, the right move is to rebuild.
The agency stance: we ship every AtlasForge client site with the AEO posture from day one. Schema, llms.txt, tl;dr blocks, named-entity audit, citation tracking. It is not a separate engagement. It is what shipping a marketing site looks like in 2026.
RJ
