The AI Search Manual

CHAPTER 3

From Keywords to Questions to Conversations – and Beyond to Intent Orchestration


Search has never stood still. Each stage of its evolution has been defined by how well systems could interpret what people meant, not just what they typed. We’ve gone from counting word sequences to anticipating actions users didn’t even articulate.

Query Evolution: From N-Grams to Intent-Orchestrated Actions

A search progression snapshot:

  • N-gram matching → basic lexical search
  • Intent recognition → goal-oriented retrieval
  • Natural language queries → full-sentence, context-rich input
  • Conversational queries → multi-turn context retention
  • Orchestrated actions → AI infers and acts on next steps or prompts you for more context

Early Search: Literal Matchmaking

Search began as pattern matching. 

Early engines looked for exact keyword strings in documents. If you typed “best pizza NYC,” the system broke it into individual terms — an n-gram model — and matched them against indexed pages. No context, no nuance, just literal matching.
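That literal matching can be sketched in a few lines. The toy scorer below (the documents and weighting are illustrative, not any engine's real algorithm) counts which query unigrams and bigrams appear verbatim in a page:

```python
def ngrams(tokens, n):
    """All contiguous n-word sequences in a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def lexical_score(query, document, max_n=2):
    """Count how many query n-grams appear verbatim in the document.
    No synonyms, no context -- just literal string overlap."""
    q_tokens = query.lower().split()
    d_text = document.lower()
    score = 0
    for n in range(1, max_n + 1):
        for gram in ngrams(q_tokens, n):
            if gram in d_text:
                score += n  # longer literal matches weigh more
    return score

docs = [
    "Our guide to the best pizza in NYC, ranked by crust.",
    "Apple releases a new phone.",
]
scores = [lexical_score("best pizza NYC", d) for d in docs]
```

The first document wins on literal overlap; a page about the same topic that happened to use different words would score zero. That brittleness is exactly what intent classification later addressed.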

As the web exploded, that blunt method collapsed under the weight of ambiguity. The same word could mean multiple things, and without context, relevance was a guessing game: “Apple” could mean the fruit, a tech company, or a record label, and the system couldn’t tell which. That’s where intent classification came into play — not just matching words, but mapping them to what the user wanted to achieve.

Andrei Broder’s early-2000s framework became the default mental model for SEOs:

  • Informational: The searcher wants to learn something. (“What is schema markup?”)
  • Navigational: The searcher wants to reach a specific site or page. (“iPullRank blog”)
  • Transactional: The searcher wants to take an action. (“Buy running shoes online”)

Broder’s taxonomy wasn’t perfect, but it gave search teams a way to think beyond the string of characters in the search bar. And while Google’s engineers expanded it over the years, introducing commercial investigation and other nuanced sub-intents, this three-type model still shapes how many marketers approach keyword research today.
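To make the taxonomy concrete, here is a deliberately naive rule-based classifier in the spirit of Broder's three types. The trigger-word lists are illustrative assumptions, not how any real search engine classifies intent:

```python
# Toy classifier in the spirit of Broder's three intent types.
# The cue-word sets are illustrative assumptions, not real ranking rules.
TRANSACTIONAL_CUES = {"buy", "order", "purchase", "book", "download", "subscribe"}
INFORMATIONAL_CUES = {"what", "how", "why", "who", "guide", "definition"}

def classify_intent(query: str) -> str:
    tokens = set(query.lower().split())
    if tokens & TRANSACTIONAL_CUES:
        return "transactional"
    if tokens & INFORMATIONAL_CUES:
        return "informational"
    # Short queries with no cue words often name a specific site or brand.
    return "navigational"
```

Real systems learn these boundaries from behavior data rather than keyword lists, but the three buckets are the same ones SEOs still reason with.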

For SEOs, this meant a clear pivot:

  • Stop chasing raw keyword counts.
  • Start aligning content with the purpose of the search.
  • Build pages that satisfy the whole intent instead of just matching the query.

For example, in the early 2000s, “cheap flights to Chicago” might have returned an old blog post with that phrase buried in the text. Once intent classification matured, booking engines with live fare data pushed that blog post off page one.

Natural language processing (NLP) expanded this scope:

  • Longer, more descriptive queries
  • Semantic models mapping synonyms and related concepts
  • Early question-answering systems (“Who is the CEO of Google?”)

The Rise of Questions

Once search engines got better at parsing meaning, users started changing how they asked for information. The “head terms” era — short two- or three-word phrases — began giving way to full-sentence queries. Instead of “SUV safety ratings,” you’d see “What’s the safest SUV for families in 2024?”

Two major shifts drove this:

  1. NLP breakthroughs: Google’s Hummingbird update in 2013, and subsequent machine-learning models like RankBrain and BERT, improved the system’s ability to map long-tail natural-language queries to relevant results.
  2. Trust in the system: Users realized they didn’t have to “talk like a search engine” anymore. The system could interpret nuance and intent.

From an SEO perspective, this meant moving from optimizing for keywords to optimizing for answers.

  • Richer context per query: Questions often revealed intent stage and constraints.
  • Answer-focused optimization: Structured data, FAQs, and concise expert summaries became critical.
  • Zero-click exposure: Knowledge Graph and featured snippets pulled answers directly into the SERP.

So, pre-Hummingbird, “Who is the president of France?” would return a set of web pages you’d have to click through for the answer. Post-Hummingbird, the answer “Emmanuel Macron” would appear instantly in a knowledge panel, pulling from structured sources like Wikipedia.

Conversations Take Over

The real break from traditional search came with multi-turn interactions. AI-driven interfaces could remember what you had just asked and carry that context forward, eliminating the need to respecify parameters.

A typical sequence:

  1. “What’s the best CRM for mid-sized B2B companies?”
  2. “Which of those integrates with HubSpot?”
  3. “Can you compare the pricing for me?”

Traditional search treated each of these queries independently. AI Mode and ChatGPT remember the thread, carrying your constraints forward automatically. This is context retention — a core capability in multi-turn interactions.
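Context retention can be sketched as a running constraint store that each turn merges into, rather than a fresh query. The keys and values below are hypothetical, not any platform's real session schema:

```python
# Sketch of multi-turn context retention: constraints from each turn merge
# into a running state instead of replacing it. Keys/values are hypothetical.
def merge_context(state: dict, new_constraints: dict) -> dict:
    merged = dict(state)
    merged.update(new_constraints)
    return merged

turns = [
    {"category": "CRM", "company_size": "mid-sized", "market": "B2B"},
    {"integration": "HubSpot"},  # "Which of those integrates with HubSpot?"
    {"compare": "pricing"},      # "Can you compare the pricing for me?"
]

state = {}
for constraints in turns:
    state = merge_context(state, constraints)
# By turn three, the CRM / mid-sized / B2B constraints still apply automatically.
```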

For SEOs and GEOs, this means:

  • Follow-ups drive deeper discovery: Later questions may pull from different sections of your site than the first query did.
  • Content has to be interconnected: A single landing page isn’t enough. The AI might pull pieces from various assets to synthesize an answer.
  • Branching needs coverage: The AI may explore tangents you didn’t anticipate but that still relate to the original query.

In Google AI Mode, starting with “Best SEO tools for enterprise” and following up with “Which ones have AI features?” doesn’t restart the search. The system filters its earlier synthesis and returns an updated set of recommendations, often blending sources.

In practice, this means designing content ecosystems that span an entire topic or a set of related topics. You want the AI to see you as a consistently relevant, authoritative contributor across multiple turns. It’s also why Google has leaned toward penalizing websites that try to cover topics outside their core competency.

Intent Orchestration

We’ve now entered a stage where AI orchestrates multiple intents at once. Because intent isn’t static, the system uses the literal query along with your past interactions, your profile, and real-time data to predict your next steps.

As we examined in the previous chapter, Google AI Mode might respond to a query like “Plan a trip to Lisbon in October” by:

  • Pulling flight and hotel data
  • Filtering based on your past booking behavior
  • Suggesting local events aligned with your interests

ChatGPT-5 with integrated tools could go further:

  • Draft a custom itinerary
  • Book the reservations
  • Add reminders to your calendar

For GEO, the challenge is ensuring your brand stays in the mix as the AI orchestrates these transitions. That means:

  • Mapping content to multiple intent states
  • Using structured data to make it easier for the AI to “jump” between your assets
  • Anticipating adjacent topics the AI might pivot toward
  • Giving products and services clear parameters and structured data
  • Making how-tos modular, so AI can reformat them into checklists
  • Keeping local or niche data current and precise, so it can be recommended in real time

Beyond: Proactive Agents and Prompt Inversion

The next wave of search and AI interaction goes in two directions at once — agents that act on your behalf without waiting for a prompt, and systems that pause to ask you better questions before delivering results.

Proactive Agents

Proactive agents detect latent needs based on patterns in your behavior, your context, and external data streams. They don’t just respond; they initiate.

Examples:

  • An enterprise AI notices your brand is missing from the top three results in AI Mode for a high-value product query, and alerts your marketing team with suggested optimizations.
  • A travel assistant sees that you booked a conference flight and automatically checks hotel options near the venue, filtered by your loyalty programs.

For GEO and Relevance Engineering, this demands:

  • Structured, accessible data that’s ready for integration into automated workflows
  • Timely updates so recommendations are trustworthy
  • Clear action endpoints (booking, purchasing, registering) so agents can execute, not just suggest

Prompt Inversion

Prompt inversion is the AI asking you for the context it needs to deliver a better result. Instead of forcing the user to anticipate the right phrasing, the system drives the refinement itself.

Examples:

  • Google AI Mode replies to “Plan a trip to Lisbon in October” with “Are you traveling solo or with a group?” before presenting results.
  • ChatGPT-5 responds to “Help me choose a CRM” with “Do you prioritize integrations, cost, or scalability?” to narrow the output.

This too has SEO and GEO implications:

  • Content must be adaptable to different follow-up scenarios, so it stays eligible no matter how the AI shapes the flow.
  • Coverage depth matters — the AI may surface your content in the second or third turn, not the first.
  • Topics should be granular, breaking concepts into atomic, linkable ideas that can serve as direct answers to specific follow-up questions.

Together, proactive agents and prompt inversion signal a shift from “pull” search models toward continuous, adaptive assistance — where relevance isn’t just about matching a query, but about staying useful as the AI steers the interaction.

Expanding Intent Typologies for AI Search

The informational/navigational/transactional model served its time, but conversational search demands a broader lens. Even before AI Search emerged, SEOs were already exploring the nuanced sub-intents behind people’s searches.

Many interactions in AI platforms are exploratory, iterative, or even ambient — with no clear “search” moment at all.

In AI contexts, intents can be:

  • Informational: Seeking knowledge or clarification
  • Navigational: Locating a specific site, app, or profile
  • Transactional: Completing a purchase or booking
  • Comparative: Evaluating options side-by-side
  • Exploratory: Open-ended discovery
  • Clarifying: Narrowing or reframing based on feedback
  • Orchestrated: Initiating a chain of related actions
  • Ambient: Receiving proactive, context-triggered updates

The last three are particularly relevant for GEO. In orchestrated and ambient modes, the search step may disappear entirely from the user’s perspective — the AI retrieves, evaluates, and acts invisibly.

Profound compiles data on the various AI Search intents triggered by particular queries.

Why this matters for SEO/GEO:

Brands that only optimize for visible search queries will miss visibility in these “hidden” interactions. Content needs to be discoverable and usable at the action orchestration level.

Category | Intent Type | Description | Example
Search-Oriented | Information | Seeks knowledge or clarification | What is generative engine optimization?
Search-Oriented | Definition | Asks for the meaning of a term or concept | Define “latent semantic indexing.”
Search-Oriented | How-To | Requests step-by-step instructions or a procedure | How do I set up AI Mode in Google Search?
Search-Oriented | Why | Asks for reasons, causes, or explanations | Why is my site not ranking for branded keywords?
Search-Oriented | Fact-Check | Seeks to verify a specific claim or data point | Did Google remove cache links from search results?
Search-Oriented | Comparison | Directly compares two or more options | Gemini vs. ChatGPT for enterprise research.
Search-Oriented | Review | Requests an opinion or qualitative evaluation | Is Perplexity better than AI Mode?
Search-Oriented | Purchase Recommendation | Asks what product or service to buy | Best CRM for a 500-person SaaS company?
Search-Oriented | Usage Recommendation | Seeks advice on using something already owned | How do I optimize HubSpot for SEO tracking?
Search-Oriented | Location | Asks where something is (physical or digital) | Where is the settings menu in AI Mode?
Search-Oriented | Brand Navigation | Requests to open or reach a specific site, app, or tool | Open iPullRank’s AI Search Manual.
Transactional | Booking | Requests to reserve or schedule | Book a meeting with iPullRank next week.
Transactional | Signup | Requests to register, subscribe, or create an account | Sign me up for the AI Mode webinar.
Transactional | Download | Requests a file, asset, or application | Download the AI Search Manual PDF.
Transactional | Purchase | Makes a direct request to buy | Order the SEO Week tickets.
Exploratory & Context-Building | Exploratory | Open-ended discovery without a fixed goal | Show me interesting AI patents from 2025.
Exploratory & Context-Building | Clarifying | Narrows or reframes based on feedback | I meant organic rankings, not paid.
Exploratory & Context-Building | Orchestrated | Initiates a chain of related actions | Create a content plan and send me the draft.
Exploratory & Context-Building | Ambient | Receives proactive, context-triggered updates | Notify me when Google updates AI Mode.
Exploratory & Context-Building | Proactive Agent | AI initiates assistance without a query | “I noticed you searched for ‘AI Mode’ yesterday — want an update?”
Exploratory & Context-Building | Prompt Inversion | AI asks clarifying questions to refine results | “Do you want enterprise or SMB solutions?”
Generative & Creative | Creative Generation | Requests original creative content | Write a LinkedIn post about GEO.
Generative & Creative | Document Drafting | Requests formal or structured writing | Draft a proposal for an AI-driven SEO strategy.
Generative & Creative | Visualization | Requests a chart, diagram, or other visual | Create a graph of AI Mode adoption trends.
Generative & Creative | Rewrite | Requests to rephrase without changing meaning | Rewrite this blog post to sound more conversational.
Utility & Troubleshooting | Troubleshooting | Reports a problem and seeks a fix | My AI Mode isn’t loading — what’s wrong?
Utility & Troubleshooting | Action Request | Asks to perform a utility task | Count how many times “AI Mode” appears in this doc.
Utility & Troubleshooting | Null Intent | Unclear, gibberish, or mixed beyond repair | asdfg1234? help??
Mixed & Multi-Intent | Multi-Turn Exploration | Evolves across turns from one intent to another | What is GEO? → How do I apply it to ecommerce?

The progression from Broder’s early three-part model to today’s expanded taxonomy mirrors how interactions with search and conversational platforms have grown in complexity. A single exchange can now blend multiple goals, shift direction without warning, or spark entirely new lines of inquiry.

This leads to some important takeaways:

  • Intents are rarely fixed; they can shift mid-conversation as the user’s focus changes.
  • Multiple intents can coexist in the same exchange, influencing how systems interpret the request.
  • Non-search intents still generate meaningful data and responses that shape the overall interaction.
  • Context from earlier turns can carry forward, affecting later results without the user repeating themselves.

Modern conversational systems adapt to these patterns by reworking the user’s input before retrieval ever begins. Instead of processing a single raw query, they:

  • Break it into subqueries that target specific aspects of the request
  • Use passage retrieval to pull focused, relevant segments from source material
  • Apply query rewriting to clarify ambiguous language, expand on implied meaning, or align with the system’s knowledge structure

These steps happen invisibly, but they define the quality and accuracy of the final answer. The next section will unpack these processes in detail. Later in the book, we’ll look at how one query can branch into multiple retrieval paths through query fan-out, creating a network of related results from a single starting point.

How AI Breaks Down Complex Queries

When humans talk to humans, we skip steps: We leave out details, we change direction mid-sentence, we use pronouns instead of repeating ourselves. Large language models and conversational search platforms have to close those gaps on the fly — and that’s where subqueries, passage retrieval, and query rewriting come in.

These processes sit under the hood of AI Mode, ChatGPT, Claude, and similar systems. You may enter a single sentence, but the machine breaks it apart into multiple structured requests, finds relevant fragments, and recombines them into an answer.

Subqueries

A single complex query often gets split into discrete search requests targeting specific aspects of your input, known as subqueries.

So given the query:

“Compare Trek FX 3 vs. Specialized Sirrus for commuting, and tell me which is better for rainy climates.”

An AI system may internally run:

  • “Trek FX 3 specs”
  • “Specialized Sirrus specs”
  • “Best commuter bike for rainy climates”
  • “Trek FX 3 performance in rain”
  • “Specialized Sirrus performance in rain”

For SEO and GEO, this means your content can contribute to the final answer even if it never ranks for the full original query. It only needs to satisfy one of the subqueries.

  • Subqueries target specific facets of the question (“best running shoes” → “best running shoes for flat feet,” “top-rated brands,” “current 2025 models”).
  • They allow parallel retrieval from multiple knowledge sources.
  • They capture secondary intents the user might not have explicitly stated.
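A toy decomposition of the bike-comparison query above might look like the sketch below. The template rules are assumptions for illustration; real systems learn these splits rather than hard-coding them:

```python
# Hypothetical decomposition of a comparison query into subqueries.
# The templates are assumptions, not any platform's real decomposition logic.
def decompose(products, use_case, condition, climate):
    subqueries = []
    for p in products:
        subqueries.append(f"{p} specs")
        subqueries.append(f"{p} performance in {condition}")
    subqueries.append(f"best {use_case} for {climate} climates")
    return subqueries

subs = decompose(["Trek FX 3", "Specialized Sirrus"],
                 "commuter bike", condition="rain", climate="rainy")
```

A page that answers only one of these five subqueries can still be pulled into the synthesized comparison.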

What to prioritize:

  • Topic depth: Cover related angles that could be isolated into subqueries.
  • Content granularity: Use clear headings and sections to make passage-level extraction easier.

Passage Retrieval

Instead of evaluating entire pages, modern search agents look for the most relevant passages — compact, self-contained sections that directly address a need.

Research papers on passage ranking (like “Passage Re-ranking with BERT” by Rodrigo Nogueira and Kyunghyun Cho) describe how context windows are used to score segments. In AI Mode, these passages can be stitched together from multiple sites to form a synthesized response:

  • Your 3,000-word blog post on hybrid bikes might only have two paragraphs about rain resistance.
  • If that section is cleanly written and well-structured, the AI can lift it directly into an answer without reading the rest of the article.
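A minimal sketch of passage-level retrieval: each section of a page is scored independently, so a short, focused passage can win even inside a long article. Plain term overlap stands in here for the BERT-style re-rankers the research describes:

```python
import re

# Score each passage independently against the query; a focused paragraph
# can outrank whole pages. Term overlap is a stand-in for learned re-rankers.
def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

def best_passage(passages, query):
    q_terms = tokenize(query)
    return max(passages, key=lambda p: len(q_terms & tokenize(p)))

article = [
    "Hybrid bikes blend road speed with upright comfort.",
    "For rain resistance, look for full fenders and sealed bearings.",
    "Our favorite weekend routes wind through the city.",
]
top = best_passage(article, "rain resistance for commuting")
```

The second passage wins because it is self-contained and on-topic, even though it is a tiny fraction of the article, which is the argument for clean, well-structured sections.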

Dan Petrovic’s research on web page length analyzed 44,684 web pages and measured their content length using Gemini’s token counter. It found that the median web page contains about 2,400 words, or five pages of text, which is around 3,200 tokens. However, a ten-document retrieval can reach more than 350,000 tokens, so it’s important to keep that in mind for budgeting.

His suggestions:

  • Design for the median (3K tokens), but make sure you can handle the 99th percentile (140K tokens).
  • Expect high variance between sources.
  • Budget conservatively, as average costs will be 3x median costs due to outliers.
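The budgeting arithmetic behind those suggestions is straightforward. Using the figures cited above:

```python
# Back-of-envelope token budgeting using the figures cited above.
MEDIAN_TOKENS = 3_200       # median page: ~2,400 words
P99_TOKENS = 140_000        # 99th-percentile outlier page
DOCS_PER_RETRIEVAL = 10

typical_batch = MEDIAN_TOKENS * DOCS_PER_RETRIEVAL            # 32,000 tokens
one_outlier = P99_TOKENS + MEDIAN_TOKENS * (DOCS_PER_RETRIEVAL - 1)
conservative_budget = 3 * typical_batch  # the ~3x multiplier suggested above
```

A single outlier page pushes a ten-document batch from 32,000 tokens to roughly 169,000, which is why budgeting at the median alone understates real costs.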

Query Rewriting

AI platforms don’t always take your words at face value. They’ll reformulate them to improve clarity and retrieval quality. This is query rewriting — a key bridge between UX for humans and AX (agent experience) for AI systems.

In Google AI Mode, this often happens silently. A request like:

“Where should I stay in Lisbon for a conference in October?”

may be internally rewritten as:

  • “Lisbon hotels near conference centers”
  • “Lisbon hotels with October availability”
  • “Lisbon hotels with good reviews for business travelers”
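That rewriting step can be sketched with templates, where the slots stand in for whatever constraints the system actually infers (the function and phrasings are hypothetical, not Google's internal rewrites):

```python
# Hypothetical rewriting templates; the slots and phrasings are assumptions,
# not any platform's actual internal rewrites.
def rewrite_lodging_query(city, event, month, audience="business travelers"):
    return [
        f"{city} hotels near {event} centers",
        f"{city} hotels with {month} availability",
        f"{city} hotels with good reviews for {audience}",
    ]

rewrites = rewrite_lodging_query("Lisbon", "conference", "October")
```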

Research papers on “query decomposition” show similar behavior — refining queries to match retrieval indexes better, and sometimes expanding them to include synonyms or related terms.

Implications for AI Search Strategy

Understanding these mechanics lets you design content that’s AI-friendly without pandering, which we’ll cover in depth in Chapter 9. But in a nutshell, this means:

  • Building entity-rich content, so query rewriting can still find you
  • Structuring articles with clear sections that work as stand-alone passages
  • Covering adjacent subtopics that may spin out into subqueries

UX for Humans → AX for Agents

Search used to be about designing for human consumption: clear page titles, intuitive layouts, and content hierarchy that matched how people scan. But as we’ve seen, conversational AI platforms aren’t “reading” your content like a human at all. They’re parsing it, segmenting it, and slotting it into a framework that supports synthesis and action. That shift changes the audience for your work — now you’re building for two very different interpreters: humans and agents.

UX for Humans focuses on:

  • Visual hierarchy: headings, subheadings, and scannable chunks
  • Emotional cues: copy tone, imagery, and storytelling to engage people
  • Interaction design: buttons, menus, and flows that guide manual navigation

AX for Agents requires:

  • Explicit entity definition: named entities, clear relationships, and consistent terminology, so the agent can resolve meaning
  • Structural clarity: content broken into well-labeled, semantically consistent sections for parsing
  • Action-ready formatting: instructions, parameters, and conditions stated unambiguously
  • Disambiguation: context embedded directly in text to avoid multiple interpretations

In practice, an AI Mode result might never expose your original visual design. It might instead extract just a paragraph, blend it with other sources, and reframe it in a way that suits the answer synthesis. And ChatGPT might go even further — taking your structured content and executing tasks on top of it, like summarizing, comparing, or generating next-step actions.

This raises the core question: Should content be created in two versions — one optimized for human experience, one for agent parsing — or can a single artifact serve both well enough?

  • Separate versions may provide maximum control, but double the production workload.
  • A hybrid approach focuses on structural duality: presentational cues for humans, layered over machine-readable scaffolding for agents.

So a product-comparison page might be approached this way:

  • For humans: The page contains photos, pros/cons lists, and call-to-action buttons
  • For agents: The page also includes structured product data, explicit brand mentions, consistent units, and clearly stated comparative factors that can be parsed without visual context.
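The agent-facing layer can ride along as schema.org markup. A minimal sketch, using a hypothetical bike product page, that emits a Product object as JSON-LD (the product values are placeholders):

```python
import json

# Sketch of the machine-readable layer for a product page: schema.org
# Product markup emitted as JSON-LD. The product values are placeholders.
def product_jsonld(name, brand, price, currency="USD"):
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "brand": {"@type": "Brand", "name": brand},
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
    }

markup = product_jsonld("FX 3 Disc", "Trek", 999.99)
script_tag = f'<script type="application/ld+json">{json.dumps(markup)}</script>'
```

The human never sees this block, but an agent parsing the page gets explicit brand, price, and currency without inferring them from the visual layout.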

As agents take on more proactive behaviors — surfacing information before it’s asked for, executing sequences of actions — this dual optimization becomes even more critical. The same content might need to drive both a compelling human experience and fuel an invisible API call in the background of an AI-driven platform.

Human UX vs. AX for Agents: Design Element Mapping

Human UX Element | Purpose for People | AX Equivalent | Purpose for Agents | Example in Practice
Headings & Subheadings | Break content into readable chunks; signal topic hierarchy | Semantic section labels (H-tags, schema headlines) | Help agents segment and classify content into logical parts | H2: “Best Running Shoes for Flat Feet,” tied to schema markup for the product category
Introductory Paragraphs | Set context and engage curiosity | Context-rich entity definitions | Establish topic scope and relationships early for accurate retrieval | First two sentences define “flat feet” and their impact on running biomechanics
Visual Hierarchy | Directs the user’s eye flow and priority | Metadata hierarchy | Guides agent parsing order and importance weighting | aria-label and sectioning tags used to indicate hierarchy
Images & Captions | Add visual context and emotional appeal | Alt text with descriptive entities | Give agents a non-visual interpretation of image meaning | Image: “Blue Nike Pegasus 41” → Alt text: “Nike Pegasus 41 men’s running shoe, blue, size 10”
Lists & Bullet Points | Increase scannability | Delimited structured lists | Enable agents to extract discrete, atomic facts | List of pros/cons, each in an <li> with clear descriptors
Navigation Menus | Help users move between sections | Internal link graph with anchor context | Help agents understand site structure and topical relationships | Internal link: “/running-shoes/flat-feet” with anchor “Flat Feet Running Shoes”
Calls to Action (CTAs) | Guide human decision-making | Explicit action parameters | Let agents translate into actionable commands | CTA “Book Now,” paired with a structured data Offer object for a booking API
Microcopy & Tooltips | Clarify specific interactions | Inline definitions or metadata annotations | Give agents the missing nuance to resolve ambiguous terms | Tooltip clarifies “PR” as “Public Relations,” not “PageRank”
Comparative Tables | Make side-by-side evaluation easier | Structured comparison datasets | Allow agents to directly compare attributes | HTML table tagged with product attributes and consistent units
Storytelling/Case Studies | Build trust and emotional connection | Timestamped, entity-tagged narrative blocks | Let agents surface examples when contextually relevant | Case-study text with structured mentions of company, results, and date

The move from designing for human UX to also designing for agent AX sets the stage for understanding who actually controls discovery in this new search environment. These agents aren’t just abstract software — they operate within ecosystems owned and maintained by gatekeepers, each with their own rules for what gets surfaced, summarized, or left invisible.

Google remains the dominant gatekeeper, but its role has shifted. AI Mode, AI Overviews, and conversational interfaces mean the company isn’t just ranking pages — it’s now orchestrating the entire information flow, from retrieval to synthesis. And so how you design for agents is directly shaped by how Google’s systems break down your content, interpret its relevance, and decide whether it fits the answer space at all.

The next chapter therefore begins by turning squarely toward Google as the central filter between your work and the user. Before we talk about things like query fan-out and multi-source expansion, we’ll map out how Google’s position as the primary gatekeeper is evolving, how its AI-driven decisions are made — and what that means for anyone competing for visibility inside its walls.
