The AI Search Manual

CHAPTER 3

From Keywords to Questions to Conversations – and Beyond to Intent Orchestration

Search has never stood still. Each stage of its evolution has been defined by how well systems could interpret what people meant, not just what they typed. We’ve gone from counting word sequences to anticipating actions you didn’t even articulate.

Query Evolution: From N-Grams to Intent-Orchestrated Actions

A search progression snapshot:

  • N-gram matching → basic lexical search
  • Intent recognition → goal-oriented retrieval
  • Natural language queries → full-sentence, context-rich input
  • Conversational queries → multi-turn context retention
  • Orchestrated actions → AI infers and acts on next steps or prompts you for more context

Early Search: Literal Matchmaking

Search began as pattern matching.

Early engines looked for exact keyword strings in documents. If you typed “best pizza NYC,” the system broke it into individual terms — an n-gram model — and matched them against indexed pages. No context, no nuance, just literal matching.
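To make that concrete, here is a minimal sketch of lexical matching against a toy inverted index. The pages, their text, and the scoring are invented for illustration; real engines layered on stemming, phrase matching, and link signals.

```python
from collections import defaultdict

# Toy corpus -- page names and text are invented for illustration.
pages = {
    "pizza-guide": "the best pizza in nyc ranked by neighborhood",
    "apple-pie": "best apple pie recipe for fall",
    "nyc-travel": "nyc travel tips and the best time to visit",
}

# Inverted index: term -> set of pages containing that term.
index = defaultdict(set)
for page, text in pages.items():
    for term in text.split():
        index[term].add(page)

def lexical_search(query):
    """Score pages by raw query-term overlap -- no context, no intent."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for page in index.get(term, set()):
            scores[page] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(lexical_search("best pizza NYC"))
# [('pizza-guide', 3), ('nyc-travel', 2), ('apple-pie', 1)]
```

Nothing in that loop knows the query implies a local dining intent; it only counts overlapping terms, which is exactly the limitation the next paragraph describes.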

As the web exploded, that blunt method collapsed under the weight of ambiguity. “Apple” could mean a fruit, a tech company, or a record label, and without context the system couldn’t tell which; relevance was a guessing game. That’s where intent classification came into play — not just matching words, but mapping them to what the user wanted to achieve.

Andrei Broder’s early 2000s framework became the default mental model for SEOs:

  • Informational – The searcher wants to learn something (“What is schema markup?”).
  • Navigational – The searcher wants to reach a specific site or page (“iPullRank blog”).
  • Transactional – The searcher wants to take an action (“buy running shoes online”).

Broder’s taxonomy wasn’t perfect, but it gave search teams a way to think beyond the string of characters in the search bar. 

Google’s engineers expanded it over the years, introducing commercial investigation and other nuanced sub-intents, but the three-type model still shapes how many marketers approach keyword research today.
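As a back-of-napkin illustration of what “mapping queries to intent” meant in practice, here is a rule-based classifier in the spirit of Broder’s taxonomy. The trigger words and brand list are placeholders, and production classifiers are trained models rather than lookup tables.

```python
# Illustrative surface cues for Broder's three intent types.
TRANSACTIONAL = {"buy", "order", "price", "cheap", "coupon", "book", "download"}
INFORMATIONAL = {"what", "how", "why", "who", "guide", "definition", "is"}
KNOWN_BRANDS = {"ipullrank", "hubspot", "wikipedia"}  # placeholder brand lexicon

def classify_intent(query):
    """Very rough Broder-style classification from surface cues alone."""
    terms = set(query.lower().split())
    if terms & KNOWN_BRANDS:
        return "navigational"
    if terms & TRANSACTIONAL:
        return "transactional"
    if terms & INFORMATIONAL:
        return "informational"
    return "informational"  # default bucket when no cue fires

print(classify_intent("buy running shoes online"))  # transactional
print(classify_intent("what is schema markup"))     # informational
print(classify_intent("iPullRank blog"))            # navigational
```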

For SEOs, this meant a clear pivot:

  • Stop chasing raw keyword counts.
  • Start aligning content with the purpose of the search.
  • Build pages that satisfy the whole intent, not just match the query.

Example: In the early 2000s, “cheap flights to Chicago” might have returned an old blog post with that phrase buried in the text. Once intent classification matured, booking engines with live fare data pushed that blog post off page one.

Natural language processing expanded this scope:

  • Longer, more descriptive queries.
  • Semantic models mapping synonyms and related concepts.
  • Early question-answering systems (“Who is the CEO of Google?”).

The Rise of Questions

Once search engines got better at parsing meaning, users started changing how they asked for information. The “head terms” era — short, two- or three-word phrases — began giving way to full-sentence queries. Instead of “SUV safety ratings,” you’d see “What’s the safest SUV for families in 2024?”

Two major shifts drove this:

  1. Natural Language Processing (NLP) breakthroughs — Google’s Hummingbird update in 2013 and subsequent machine learning models like RankBrain and BERT improved the system’s ability to map long-tail, natural language queries to relevant results.
  2. Trust in the system — Users realized they didn’t have to “talk like a search engine” anymore. The system could interpret nuance and intent.

From an SEO perspective, this meant moving from optimizing for keywords to optimizing for answers.

  • Richer context per query — Questions often reveal intent stage and constraints.
  • Answer-focused optimization — Structured data, FAQs, and concise expert summaries became critical.
  • Zero-click exposure — Knowledge Graph and featured snippets pulled answers directly into the SERP.

Example: Pre-Hummingbird, “Who is the president of France?” would bring a set of web pages where you’d have to click for the answer. Post-Hummingbird, the answer “Emmanuel Macron” appeared instantly in a knowledge panel, pulling from structured sources like Wikipedia.
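The “answer-focused optimization” point above is easiest to see in markup. Here is a minimal sketch of FAQPage structured data, generated from Python for readability; the question and answer text are placeholders, and the output would be embedded in a <script type="application/ld+json"> tag.

```python
import json

# Placeholder Q&A pair -- the markup pattern is the point, not the values.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is schema markup?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Schema markup is structured data that helps search "
                        "engines and AI systems understand what a page is about.",
            },
        }
    ],
}

print(json.dumps(faq_schema, indent=2))
```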

Conversations Take Over

The real break from traditional search came with multi-turn interactions. AI-driven interfaces could remember what you just asked and carry that context forward, eliminating the need to re-specify parameters.

A typical sequence:

  1. “What’s the best CRM for mid-sized B2B companies?”
  2. “Which of those integrates with HubSpot?”
  3. “Can you compare the pricing for me?”

Traditional search treated each query independently. AI Mode and ChatGPT remember the thread, carrying your constraints forward automatically. This is context retention — a core capability in multi-turn interactions.

For SEOs and GEOs, this means:

  • Follow-ups drive deeper discovery — Later questions may pull from different sections of your site than the first query did.
  • Content has to be interconnected — A single landing page isn’t enough. The AI might pull pieces from different assets to synthesize an answer.
  • Branching needs coverage — The AI may explore tangents you didn’t anticipate, but that still relate to the original query.

Example: In Google AI Mode, starting with “Best SEO tools for enterprise” and following up with “Which ones have AI features?” doesn’t restart the search. The system filters its earlier synthesis and returns an updated set of recommendations, often blending sources.
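Mechanically, context retention comes down to merging constraints across turns before retrieval runs again. A minimal sketch of the idea, with hand-written constraints standing in for what the model actually infers from each message:

```python
# Running conversation state: constraints accumulate instead of resetting per query.
conversation_state = {}

def handle_turn(new_constraints):
    """Merge this turn's constraints into the thread and return the full picture."""
    conversation_state.update(new_constraints)
    return dict(conversation_state)

# Turn 1: "What's the best CRM for mid-sized B2B companies?"
print(handle_turn({"category": "CRM", "company_size": "mid-sized", "market": "B2B"}))

# Turn 2: "Which of those integrates with HubSpot?" -- earlier constraints carry over.
print(handle_turn({"integration": "HubSpot"}))

# Turn 3: "Can you compare the pricing for me?" -- still scoped to the same shortlist.
print(handle_turn({"comparison_attribute": "pricing"}))
```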

In practice, this means designing content ecosystems that span an entire topic or a set of related topics. You want the AI to see you as a consistently relevant, multi-turn, authoritative contributor. That’s why we’re seeing Google lean into penalizing websites that try to cover topics outside their core competency.

Intent Orchestration

We’ve now entered a stage where AI orchestrates multiple intents at once. Intent isn’t static, and these systems don’t treat it that way: they use the literal query, your past interactions, your profile, and real-time data to predict your next steps.

As we examined in the last chapter, Google AI Mode might respond to “Plan a trip to Lisbon in October” by:

  • Pulling flight and hotel data.
  • Filtering based on your past booking behavior.
  • Suggesting local events aligned with your interests.

ChatGPT-5 with integrated tools could:

  • Draft a custom itinerary.
  • Book the reservations.
  • Add reminders to your calendar.

For GEO, the challenge is ensuring your brand stays in the mix as the AI orchestrates these transitions. That means:

  • Mapping content to multiple intent states.
  • Using structured data to make it easier for the AI to “jump” between your assets.
  • Anticipating adjacent topics the AI might pivot toward.
  • Giving products and services clear parameters and structured data.
  • Keeping how-tos modular so the AI can reformat them into checklists.
  • Keeping local or niche data current and precise so it can be recommended in real time.

Beyond: Proactive Agents and Prompt Inversion

The next wave of search and AI interaction goes in two directions at once — agents that act on your behalf without waiting for a prompt, and systems that pause to ask you better questions before delivering results.

Proactive Agents

Proactive agents detect latent needs based on patterns in your behavior, your context, and external data streams. They don’t just respond; they initiate.

Examples:

  • An enterprise AI notices your brand is missing from the top three results in AI Mode for a high-value product query and alerts your marketing team with suggested optimizations.
  • A travel assistant sees you booked a conference flight and automatically checks hotel options near the venue, filtered by your loyalty programs.

For GEO and relevance engineering, this demands:

  • Structured, accessible data that’s ready for integration into automated workflows.
  • Timely updates so recommendations are trustworthy.
  • Clear action endpoints (booking, purchasing, registering) so agents can execute, not just suggest.

Prompt Inversion

Prompt inversion is the AI asking you for context it knows it needs to provide a better result. Instead of forcing the user to anticipate the right phrasing, the system drives the refinement.

Examples:

  • Google AI Mode replying to “Plan a trip to Lisbon in October” with “Are you traveling solo or with a group?” before presenting results.
  • ChatGPT-5 responding to “Help me choose a CRM” with “Do you prioritize integrations, cost, or scalability?” to narrow the output.

This has SEO and GEO implications:

  • Content must be adaptable to different follow-up scenarios so it stays eligible no matter how the AI shapes the flow.
  • Coverage depth matters — the AI may surface your content in the second or third turn, not the first.
  • Topic granularity — breaking concepts into atomic, linkable ideas that can serve as direct answers to specific follow-up questions.

Together, proactive agents and prompt inversion signal a shift from “pull” search models toward continuous, adaptive assistance — where relevance isn’t just about matching a query, but about staying useful as the AI steers the interaction.

Platforms like Perplexity and Microsoft Copilot are already experimenting here. Once Google and OpenAI normalize it for consumers, relevance engineering will be about being in the AI’s knowledge graph before the question even exists.

Expanding Intent Typologies for AI Search

The informational/navigational/transactional model served its time, but conversational search demands a broader lens. Even before AI search arrived, SEOs were already exploring the nuanced sub-intents behind people’s searches.

Many interactions in AI platforms are exploratory, iterative, or even ambient — with no clear “search” moment at all.

In AI contexts, intents can include:

  • Informational — Seeking knowledge or clarification.
  • Navigational — Locating a specific site, app, or profile.
  • Transactional — Completing a purchase or booking.
  • Comparative — Evaluating options side-by-side.
  • Exploratory — Open-ended discovery.
  • Clarifying — Narrowing or reframing based on feedback.
  • Orchestrated — Initiating a chain of related actions.
  • Ambient — Receiving proactive, context-triggered updates.

 

The last three are particularly relevant for GEO. In orchestrated and ambient modes, the search step may disappear from the user’s perspective — the AI retrieves, evaluates, and acts invisibly.

Why this matters for SEO/GEO:

Brands that only optimize for visible search queries will miss visibility in these “hidden” interactions. Content needs to be discoverable and usable at the action orchestration level.

 

Search-Oriented

  • Informational – Seeking knowledge or clarification. Example: “What is generative engine optimization?”
  • Definition – Asks for the meaning of a term or concept. Example: Define “latent semantic indexing”.
  • How-To – Requests step-by-step instructions or a procedure. Example: “How do I set up AI Mode in Google Search?”
  • Why – Asks for reasons, causes, or explanations. Example: “Why is my site not ranking for branded keywords?”
  • Fact-Check – Seeks to verify a specific claim or data point. Example: “Did Google remove cache links from search results?”
  • Comparison – Directly compares two or more options. Example: “Gemini vs. ChatGPT for enterprise research.”
  • Review – Requests an opinion or qualitative evaluation. Example: “Is Perplexity better than AI Mode?”
  • Purchase Recommendation – Asks what product or service to buy. Example: “Best CRM for a 500-person SaaS company?”
  • Usage Recommendation – Seeks advice on using something already owned. Example: “How do I optimize HubSpot for SEO tracking?”
  • Location – Asks where something is (physical or digital). Example: “Where is the settings menu in AI Mode?”
  • Brand Navigation – Requests to open or reach a specific site, app, or tool. Example: “Open iPullRank’s AI Search Manual.”

Transactional

  • Booking – Requests to reserve or schedule. Example: “Book a meeting with iPullRank next week.”
  • Signup – Requests to register, subscribe, or create an account. Example: “Sign me up for the AI Mode webinar.”
  • Download – Requests a file, asset, or application. Example: “Download the AI Search Manual PDF.”
  • Purchase – Direct request to buy. Example: “Order the SEO Week tickets.”

Exploratory & Context-Building

  • Exploratory – Open-ended discovery without a fixed goal. Example: “Show me interesting AI patents from 2024.”
  • Clarifying – Narrows or reframes based on feedback. Example: “I meant organic rankings, not paid.”
  • Orchestrated – Initiates a chain of related actions. Example: “Create a content plan and send me the draft.”
  • Ambient – Receives proactive, context-triggered updates. Example: “Notify me when Google updates AI Mode.”
  • Proactive Agent – AI initiates assistance without a query. Example: “I noticed you searched for ‘AI Mode’ yesterday—want an update?”
  • Prompt Inversion – AI asks clarifying questions to refine results. Example: “Do you want enterprise or SMB solutions?”

Generative & Creative

  • Creative Generation – Requests original creative content. Example: “Write a LinkedIn post about GEO.”
  • Document Drafting – Requests formal or structured writing. Example: “Draft a proposal for an AI-driven SEO strategy.”
  • Visualization – Requests a chart, diagram, or other visual. Example: “Create a graph of AI Mode adoption trends.”
  • Rewrite – Requests to rephrase without changing meaning. Example: “Rewrite this blog post to sound more conversational.”

Utility & Troubleshooting

  • Troubleshooting – Reports a problem and seeks a fix. Example: “My AI Mode isn’t loading—what’s wrong?”
  • Action Request – Asks to perform a utility task. Example: “Count how many times ‘AI Mode’ appears in this doc.”
  • Null Intent – Unclear, gibberish, or mixed beyond repair. Example: “asdfg1234? help??”

Mixed & Multi-Intent

  • Multi-Turn Exploration – Evolves across turns from one intent to another. Example: “What is GEO?” → “How do I apply it to eCommerce?”

The progression from Broder’s early three-part model to today’s expanded taxonomy mirrors how interactions with search and conversational platforms have grown in complexity. A single exchange can now blend multiple goals, shift direction without warning, or spark entirely new lines of inquiry.

Why this matters:

  • Intents are rarely fixed; they can shift mid-conversation as the user’s focus changes.
  • Multiple intents can coexist in the same exchange, influencing how systems interpret the request.
  • Non-search intents still generate meaningful data and responses that shape the overall interaction.
  • Context from earlier turns can carry forward, affecting later results without the user repeating themselves.

Modern conversational systems adapt to these patterns by reworking the user’s input before retrieval ever begins. Instead of processing a single raw query, they:

  • Break it into subqueries that target specific aspects of the request.
  • Use passage retrieval to pull focused, relevant segments from source material.
  • Apply query rewriting to clarify ambiguous language, expand on implied meaning, or align with the system’s knowledge structure.

These steps happen invisibly, but they define the quality and accuracy of the final answer. The next section will unpack these processes in detail. Later in the book, we’ll look at how one query can branch into multiple retrieval paths through query fan-out, creating a network of related results from a single starting point.

How AI Breaks Down Complex Queries

When humans talk to humans, we skip steps. We leave out details, we change direction mid-sentence, we use pronouns instead of repeating ourselves. Large language models and Conversational Search platforms have to close those gaps on the fly, and that’s where subqueries, passage retrieval, and query rewriting come in.

These processes sit under the hood of AI Mode, ChatGPT, Claude, and similar systems. You may enter one sentence, but the machine breaks it apart into multiple structured requests, finds relevant fragments, and recombines them into an answer.

Subqueries

A single complex query often gets split into subqueries — discrete search requests targeting specific aspects of your input.

Example:

“Compare Trek FX 3 vs. Specialized Sirrus for commuting, and tell me which is better for rainy climates.”

An AI system may internally run:

  • “Trek FX 3 specs”
  • “Specialized Sirrus specs”
  • “Best commuter bike for rainy climates”
  • “Trek FX 3 performance in rain”
  • “Specialized Sirrus performance in rain”

For SEO and GEO, this means your content can contribute to the final answer even if it never ranks for the full original query. It only needs to satisfy one of the subqueries.

  • Subqueries target specific facets of the question (“best running shoes” → “best running shoes for flat feet,” “top-rated brands,” “current 2025 models”).
  • They allow parallel retrieval from multiple knowledge sources.
  • They capture secondary intents the user might not have explicitly stated.
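A minimal sketch of that fan-out, assuming a placeholder retrieve() function standing in for a search index or vector store; the hand-written decomposition below is what an LLM would normally generate:

```python
def decompose(query):
    """Hand-written subqueries for illustration -- real systems generate these with an LLM."""
    return [
        "Trek FX 3 specs",
        "Specialized Sirrus specs",
        "best commuter bike for rainy climates",
        "Trek FX 3 performance in rain",
        "Specialized Sirrus performance in rain",
    ]

def retrieve(subquery):
    """Placeholder retriever: in practice this queries a search index or vector store."""
    return [f"top passage for '{subquery}'"]

def gather_evidence(query):
    # Each subquery is retrieved independently; the results are pooled for synthesis.
    return {sq: retrieve(sq) for sq in decompose(query)}

evidence = gather_evidence(
    "Compare Trek FX 3 vs. Specialized Sirrus for commuting in rainy climates"
)
for subquery, passages in evidence.items():
    print(subquery, "->", passages)
```

A page that only covers “Trek FX 3 performance in rain” still lands in that evidence pool, even though it would never rank for the full comparison query.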

What to prioritize:

  • Topic depth: Cover related angles that could be isolated into subqueries.
  • Content granularity: Use clear headings and sections to make passage-level extraction easier.

Passage Retrieval

Instead of evaluating entire pages, modern search agents look for the most relevant passages — compact, self-contained sections that directly address a need.

Google research papers on passage ranking (e.g., BERT-based Passage Ranking Models, 2020) describe how context windows are used to score segments. In AI Mode, these passages can be stitched together from multiple sites to form a synthesized response.

Example:

  • Your 3,000-word blog post on hybrid bikes might only have two paragraphs about rain resistance.
  • If that section is cleanly written and well-structured, the AI can lift it directly into an answer without reading the rest of the article.
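A minimal sketch of passage-level selection, with raw term overlap standing in for a learned relevance model like BERT; the passages and query are invented:

```python
import re

def terms(text):
    """Lowercase and strip punctuation so overlap isn't thrown off by commas."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(passage, query):
    """Crude stand-in for a learned passage-ranking model: query-term overlap."""
    return len(terms(passage) & terms(query))

# A long article split at its headings into self-contained passages.
passages = [
    "Hybrid bikes balance speed and comfort, making them popular for daily commuting.",
    "Frame geometry matters: flat bars keep you upright and visible in traffic.",
    "For rain, look for full fenders, disc brakes, and sealed hubs so the bike stays reliable in wet weather.",
]

query = "which hybrid bike is better for rain"
best = max(passages, key=lambda p: score(p, query))
print(best)  # only the rain passage gets lifted; the rest of the article is ignored
```

Clear headings and self-contained sections make that split trivial, which is exactly why content granularity matters for retrieval.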

Query Rewriting

AI platforms don’t always take your words at face value. They’ll reformulate them to improve clarity and retrieval quality.

This is query rewriting — a key bridge between UX for humans and AX (Agent Experience) for AI systems.

In Google AI Mode, this often happens silently. A request like:

“Where should I stay in Lisbon for a conference in October?”

…may be internally rewritten as:

  • “Lisbon hotels near conference centers”
  • “Lisbon hotels with October availability”
  • “Lisbon hotels with good reviews for business travelers”

OpenAI papers on query decomposition show similar behavior — refining queries to match retrieval indexes better, sometimes expanding them to include synonyms or related terms.
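A minimal sketch of template-based rewriting along those lines; the trigger-to-expansion map is invented, and production systems do this with learned rewriters or the LLM itself:

```python
# Invented expansion map: conversational cues -> retrieval-friendly phrasings.
EXPANSIONS = {
    "stay": [
        "hotels near conference centers",
        "hotels with good reviews for business travelers",
    ],
    "october": ["hotels with October availability"],
}

def rewrite(query, location="Lisbon"):
    """Turn one conversational request into several index-friendly variants."""
    lowered = query.lower()
    variants = []
    for cue, phrasings in EXPANSIONS.items():
        if cue in lowered:
            variants.extend(f"{location} {phrase}" for phrase in phrasings)
    return variants

print(rewrite("Where should I stay in Lisbon for a conference in October?"))
# ['Lisbon hotels near conference centers',
#  'Lisbon hotels with good reviews for business travelers',
#  'Lisbon hotels with October availability']
```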

Implications for AI Search Strategy

Understanding these mechanics lets you design content that’s AI-friendly without pandering, which we’ll cover in Chapter 9. In practice, that means:

  • Building entity-rich content so query rewriting can still find you.
  • Structuring articles with clear sections that work as standalone passages.
  • Covering adjacent subtopics that may spin out into subqueries.

UX for Humans → AX for Agents

Search used to be about designing for human consumption: clear page titles, intuitive layouts, and content hierarchy that matched how people scan. But conversational AI platforms aren’t “reading” your content like a human at all. They’re parsing it, segmenting it, and slotting it into a framework that supports synthesis and action. That shift changes the audience for your work — now you’re building for two very different interpreters: humans and agents.

UX for Humans focuses on:

  • Visual hierarchy: headings, subheadings, and scannable chunks.
  • Emotional cues: copy tone, imagery, and storytelling to engage people.
  • Interaction design: buttons, menus, and flows that guide manual navigation.

AX (Agent Experience) for Agents requires:

  • Explicit entity definition: named entities, clear relationships, and consistent terminology so the agent can resolve meaning.
  • Structural clarity: content broken into well-labeled, semantically consistent sections for parsing.
  • Action-ready formatting: instructions, parameters, and conditions stated unambiguously.
  • Disambiguation: context embedded directly in text to avoid multiple interpretations.

In practice, an AI Mode result might never expose your original visual design. It might extract just a paragraph, blend it with other sources, and reframe it in a way that suits the answer synthesis. ChatGPT might go further — taking your structured content and executing tasks on top of it, like summarizing, comparing, or generating next-step actions.

This raises the core question: should content be created in two versions — one optimized for human experience, one for agent parsing — or can a single artifact serve both well enough?

  • Separate versions may provide maximum control, but double the production workload.
  • A hybrid approach focuses on structural duality: presentational cues for humans layered over machine-readable scaffolding for agents.

Real-world example:

  • For humans: A product comparison page with photos, pros/cons lists, and call-to-action buttons.
  • For agents: The same page also includes structured product data, explicit brand mentions, consistent units, and clearly stated comparative factors that can be parsed without visual context.
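Here is a minimal sketch of the machine-readable layer behind that comparison page: Product structured data emitted as JSON-LD from Python. The product, price, and weight values are placeholders; the pattern of explicit attributes and consistent units is the point.

```python
import json

# Placeholder values -- explicit attributes and consistent units are what agents need.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trek FX 3",
    "brand": {"@type": "Brand", "name": "Trek"},
    "weight": {"@type": "QuantitativeValue", "value": 11.6, "unitCode": "KGM"},
    "offers": {
        "@type": "Offer",
        "price": "899.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in a <script type="application/ld+json"> tag alongside the visual page.
print(json.dumps(product_schema, indent=2))
```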

As agents take on more proactive behaviors, surfacing information before it’s asked for and executing sequences of actions, this dual optimization becomes even more critical. The same content might need to both drive a compelling human experience and fuel an invisible API call in the background of an AI-driven platform.

Human UX vs. AX for Agents: Design Element Mapping

  • Headings & Subheadings – Human purpose: breaks content into readable chunks; signals topic hierarchy. AX equivalent: semantic section labels (H-tags, schema headline), which help agents segment and classify content into logical parts. Example: an H2 “Best Running Shoes for Flat Feet” tied to schema markup for the product category.
  • Introductory Paragraphs – Human purpose: set context and engage curiosity. AX equivalent: context-rich entity definitions that establish topic scope and relationships early for accurate retrieval. Example: the first two sentences define “flat feet” and its impact on running biomechanics.
  • Visual Hierarchy – Human purpose: directs the user’s eye flow and priority. AX equivalent: metadata hierarchy that guides agent parsing order and importance weighting. Example: using aria-label and sectioning tags to indicate hierarchy.
  • Images & Captions – Human purpose: adds visual context and emotional appeal. AX equivalent: alt text with descriptive entities, giving agents a non-visual interpretation of image meaning. Example: image “Blue Nike Pegasus 41” with alt text “Nike Pegasus 41 men’s running shoe, blue, size 10.”
  • Lists & Bullet Points – Human purpose: increases scannability. AX equivalent: delimited structured lists that let agents extract discrete, atomic facts. Example: pros and cons each in their own <li> with clear descriptors.
  • Navigation Menus – Human purpose: helps users move between sections. AX equivalent: an internal link graph with anchor context that helps agents understand site structure and topical relationships. Example: internal link “/running-shoes/flat-feet” with anchor text “Flat Feet Running Shoes.”
  • Calls to Action (CTAs) – Human purpose: guides human decision-making. AX equivalent: explicit action parameters that agents can translate into actionable commands. Example: a “Book Now” CTA paired with a structured data offers object for a booking API.
  • Microcopy & Tooltips – Human purpose: clarifies specific interactions. AX equivalent: inline definitions or metadata annotations that give agents the missing nuance to resolve ambiguous terms. Example: a tooltip clarifying “PR” as “Public Relations,” not “PageRank.”
  • Comparative Tables – Human purpose: makes side-by-side evaluation easier. AX equivalent: structured comparison datasets that allow agents to compare attributes directly. Example: an HTML table tagged with product attributes and consistent units.
  • Storytelling/Case Studies – Human purpose: builds trust and emotional connection. AX equivalent: timestamped, entity-tagged narrative blocks that let agents surface examples when contextually relevant. Example: case study text with structured mentions of the company, results, and date.

The move from designing for human UX to designing for agent AX sets the stage for understanding who actually controls discovery in this new search environment. These agents aren’t just abstract software—they operate within ecosystems owned and maintained by gatekeepers, each with their own rules for what gets surfaced, summarized, or left invisible.

Google remains the dominant gatekeeper, but its role has shifted. AI Mode, AI Overviews, and conversational interfaces mean the company isn’t just ranking pages—it’s orchestrating the entire information flow, from retrieval to synthesis. How you design for agents is directly shaped by how Google’s systems break down your content, interpret its relevance, and decide whether it fits the answer space at all.

That’s why the next chapter turns squarely toward Google as the central filter between your work and the user. Before we talk about query fan-out or multi-source expansion later, we’ll map out how Google’s position as the primary gatekeeper is evolving, how its AI-driven decisions are made, and what that means for anyone competing for visibility inside its walls.

