AEO vs SEO in 2026: Why Direct Answer Blocks Are the New H1
Last week I shipped the 1,700th Answer Engine Optimization (AEO) rewrite to zsky.ai. Here's what actually moved the needle vs what was vendor hype.
I run marketing at ZSky AI. We ship a new blog post, landing page, or schema upgrade almost daily. Since early 2026 I've been tracking what works under two simultaneous regimes: the traditional Google SERP and the new world where LLMs like ChatGPT, Claude, Perplexity, and Gemini are increasingly the first place users ask questions.
This is the tactical writeup. No fluff, no trend pieces, no "10 tips." Just what we measured and what we ship now.
TL;DR
- The 40-to-60 word answer block at the top of every page is the single highest-ROI change of 2026. It surfaces in AI answers AND in Google's People Also Ask AND in voice assistant responses. Stop burying the answer 400 words down.
- llms.txt is cargo cult. John Mueller publicly confirmed Google ignores it. Our own measurements across 8 sites showed zero correlation between llms.txt presence and AI citation rates. It's not harmful but don't expect it to do anything.
- Schema is a force multiplier, not a silver bullet. FAQPage + BreadcrumbList + Organization schema compounded with the answer block did move AI citation rates. Schema without the answer block did not.
- GitHub and dev.to are the Claude training corpus. Every competitive analysis of Claude's citation sources points back to GitHub READMEs and dev.to articles. If you're not there, you're invisible to Claude. We were invisible until we weren't: Claude referrals went from 1/day to 18/day within 48 hours of pushing dev.to content.
- Reddit is the single most-cited domain by Perplexity. 46.7% of Perplexity citations trace back to a Reddit thread. Organic seeding works; link-stuffing doesn't.
What changed in March 2026
The March 2026 Google core update hit keyword-swap programmatic SEO harder than any previous update. Pages that had been built from a template with one unique keyword swapped in per URL ("ai image generator for {profession}") got de-ranked en masse.
The March update rewarded:
- Pages with unique data per page (not unique keywords)
- Pages with direct answers in the first 100 words
- Pages with real author attribution
- Pages that didn't repeat the same boilerplate across hundreds of URLs
It punished:
- Template-driven thin content
- Intro paragraphs that delayed the answer
- Boilerplate FAQ sections
- Pages that optimized for Google's old "dwell time" signal
If you're running pSEO, the strategy shift is: every page needs one fact that no other page has. Unique statistic, unique example, unique quote, unique dataset. Our vertical landing pages (for realtors, for restaurants) survived the update by having different cost-per-output calculations on each page based on that profession's actual workflow, not just different headlines.
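As a sketch of what "one unique fact per page" can look like in markup: the structure below mirrors our vertical landing pages, but the class name and every figure in it are illustrative placeholders, not ZSky's actual template or pricing data.

```html
<!-- Hypothetical per-profession data block. The numbers are
     placeholders for illustration, not real ZSky AI data. -->
<section class="unique-data">
  <h2>Cost per listing video for realtors</h2>
  <p>A typical listing needs 3 videos per property. At roughly
     30 seconds of generation each, an agent shipping 12 listings
     a month uses about 18 minutes of GPU time.</p>
</section>
```

The point is that the `<section>` content is computed from that profession's actual workflow, so no two URLs share it.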
The 40-to-60 word answer block
Here's the format we standardized on:
<div class="answer-block">
<h1>How long does a ZSky AI video take to generate?</h1>
<p>ZSky AI generates 1080p video with synchronized audio in
about 30 seconds on dedicated NVIDIA RTX 5090 GPUs. Free users
share a generation queue but it's still far shorter than
Runway, Kling, or Pika's free tiers. Paid plans get Instant
Generation with no queue wait.</p>
</div>
Three things to notice:
- The H1 is a question. Real users ask questions. LLMs quote questions verbatim. If your H1 is "Free AI Video Generator" you're missing the voice search / AI Overview matching entirely.
- The first sentence contains the full answer. No setup, no "In this post we'll explore," no history lesson. The answer starts in the first 6 words. An LLM's context window prefers concise answers it can quote verbatim. A human reader scanning the page in 3 seconds prefers the same thing.
- The answer block is styled distinctively. We use a left accent border and a subtle background tint. Humans recognize it as "this is the important bit." Google also parses visually-distinct boxes as candidate featured snippets. Two birds.
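The styling described above can be approximated in a few lines of CSS. The selector matches the `answer-block` div shown earlier; the specific colors are a guess, not our production values.

```html
<style>
  /* Distinct visual treatment for the answer block:
     left accent border plus a subtle background tint. */
  .answer-block {
    border-left: 4px solid #4f46e5;  /* accent color: illustrative */
    background: #f5f5ff;             /* subtle tint: illustrative */
    padding: 1rem 1.25rem;
    margin-bottom: 1.5rem;
  }
</style>
```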
One tactic that quietly compounds
Consistent author attribution. Every page has <meta name="author">, an Organization schema block, and a real byline. Every blog post has a human name next to it.
LLMs increasingly weight source credibility, and credibility is partially computed from how consistently the same author appears across a domain. When you're shipping 1,700 pages, attributing them all to the same 1-2 authors compounds into "this person knows this topic" in the model's learned picture of your domain. It's the oldest SEO trick in the book (E-E-A-T), but it's quietly central to AEO too.
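In markup terms, consistent attribution is just three pieces repeated on every page. A minimal sketch, with the author name and URL taken from this post's own byline:

```html
<head>
  <!-- 1. Machine-readable author meta tag -->
  <meta name="author" content="Cemhan">
  <!-- 2. Organization schema as schema.org JSON-LD -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ZSky AI",
    "url": "https://zsky.ai"
  }
  </script>
</head>
<!-- 3. A visible human byline in the page body -->
<p class="byline">By Cemhan, founder, ZSky AI</p>
```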
What I'd do if I were starting today
- Week 1: Ship a 40-to-60 word answer block at the top of your 20 highest-impression pages. Style it distinctively. Measure CTR delta in GSC over 7 days.
- Week 2: Add FAQPage schema with 6-10 Q&As to the same 20 pages. Each Q is a real user question, each A is 40-80 words.
- Week 3: Claim your profile on 4-6 review platforms (G2, Capterra, AlternativeTo, Slant, Trustpilot, GetApp). These sites have 3x higher ChatGPT citation odds per our cross-sectional data.
- Week 4: Create a GitHub org for your product. Write a detailed README including the mission, stats, and every important URL. Claude will start citing the README within a week.
- Week 5+: Publish 1 dev.to article per week in your topic area. Cross-link to your main site. Each post compounds into Claude's training corpus.
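The Week 2 step, FAQPage schema, looks like this in JSON-LD. Two Q&As shown for brevity; the first is lifted from the answer block earlier in this post, the second is illustrative.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a ZSky AI video take to generate?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "ZSky AI generates 1080p video with synchronized audio in about 30 seconds on dedicated NVIDIA RTX 5090 GPUs."
      }
    },
    {
      "@type": "Question",
      "name": "Is the free tier really unlimited?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. The ad-supported free tier includes unlimited video and image generation; paid plans remove the queue wait."
      }
    }
  ]
}
</script>
```

Each `name` should be a real user question and each `text` a 40-80 word complete answer, matching the visible FAQ copy on the page.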
The deeper point
The reason I'm bullish on AEO is simple: the tooling has finally caught up with how humans actually ask questions. We never wanted a "keyword" — we wanted an answer. For 20 years SEO was the art of tricking Google into giving us traffic for a keyword we didn't quite deserve. AEO is closer to writing.
Write the answer. Put it first. Structure it so an LLM can quote it verbatim. Measure what actually moves the needle.
The rest is noise.
— Cemhan (founder, ZSky AI)
Want to see the full AEO stack in production?
ZSky AI's complete answer-block infrastructure, FAQPage schema, and machine-readable transparency data are all published and free to copy.
Start Creating Free → 110,000+ creators use ZSky AI. The ad-supported free tier includes unlimited video and image generation.
Frequently Asked Questions
What is Answer Engine Optimization (AEO) and how is it different from SEO?
AEO is the practice of structuring web pages so that AI assistants like ChatGPT, Claude, Perplexity, and Gemini can quote the answer verbatim. Traditional SEO optimizes for a keyword match in the SERP. AEO optimizes for a question-and-answer match in an LLM context window. The biggest tactical difference is the 40-to-60 word direct answer block placed at the top of every page, written as a complete answer to a user question rather than a keyword-stuffed intro.
Does llms.txt actually help AI citation rates?
No. John Mueller publicly confirmed Google ignores llms.txt, and our cross-sectional measurements across eight ZSky-related domains showed zero correlation between llms.txt presence and AI citation rates. It is not harmful to have one, but do not expect it to move the needle. Direct answer blocks, FAQPage schema, and GitHub/dev.to presence are where the measurable lift comes from.
What broke in the March 2026 Google core update?
The March 2026 core update hit keyword-swap programmatic SEO harder than any previous update. Template-driven pages built from one base file with a keyword swapped per URL were de-ranked en masse. The update rewarded pages with unique data per page, direct answers in the first 100 words, real author attribution, and content that was not boilerplate-shared across hundreds of URLs.
Which AI assistants send the most referral traffic to a well-optimized site?
As of April 10, 2026, ZSky AI received about 4,409 daily referrals from ChatGPT, 1,380 from Perplexity, 37 from Grok, 35 from You.com, 18 from Claude, and 2 from Gemini. ChatGPT dominates raw volume, Perplexity has the highest conversion rate, and Claude is the hardest nut to crack because it leans heavily on GitHub READMEs and dev.to articles as citation sources.
How long should a direct answer block be?
Forty to sixty words is the sweet spot. Shorter answers get compressed out of AI Overviews. Longer answers get truncated mid-sentence when an LLM quotes them. Start with a one-sentence definition, follow with one sentence of context or qualification, and optionally add a third sentence for scope. Style the block with a left accent border so humans recognize it as the important bit and Google parses it as a candidate featured snippet.
Why does publishing on dev.to and GitHub matter for Claude citations?
Claude leans heavily on GitHub READMEs and dev.to articles as citation sources, more than Google-indexed blog content. Cross-posting a technical article on dev.to and linking it from a detailed GitHub README creates two independent entry points that Claude will surface in answers. ZSky AI saw Claude referrals jump from 1 per day to 18 per day within 48 hours of publishing 11 dev.to articles. If you are not on dev.to and GitHub, you are effectively invisible to Claude.