Search has never stood still. Each stage of its evolution has been defined by how well systems could interpret what people meant, not just what they typed. We’ve gone from counting word sequences to anticipating actions users didn’t even articulate.
A search progression snapshot:
Search began as pattern matching.
Early engines looked for exact keyword strings in documents. If you typed “best pizza NYC,” the system broke the query into individual terms and short word sequences (n-grams) and matched them against indexed pages. No context, no nuance, just literal matching.
As the web exploded, that blunt method collapsed under the weight of ambiguity. The same word could mean multiple things, so without context relevance was a guessing game: “Apple” could mean a fruit, a tech company, or a record label, and the system couldn’t tell which. That’s where intent classification came into play — not just matching words, but mapping them to what the user wanted to achieve.
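That literal-matching era can be sketched in a few lines. The tokenizer and scoring below are illustrative, not any engine’s actual implementation: split the query into terms and count overlaps with a document’s terms.

```python
import re

def tokenize(text: str) -> list[str]:
    """Lowercase and strip punctuation -- the only 'understanding' early engines had."""
    return re.findall(r"[a-z0-9]+", text.lower())

def literal_score(query: str, document: str) -> int:
    """Count how many query terms appear in the document: no context,
    no synonyms, just string overlap."""
    doc_terms = set(tokenize(document))
    return sum(1 for term in tokenize(query) if term in doc_terms)

docs = {
    "a": "The best pizza in NYC is found in Brooklyn.",
    "b": "Apple released a new phone today.",
}
scores = {name: literal_score("best pizza NYC", text) for name, text in docs.items()}
print(scores)  # doc "a" wins on raw term overlap; doc "b" scores zero
```

Note what this model cannot do: it has no way to know that “Apple” in document “b” is a company, which is exactly the ambiguity intent classification was built to resolve.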
Andrei Broder’s early-2000s framework became the default mental model for SEOs:
Broder’s taxonomy wasn’t perfect, but it gave search teams a way to think beyond the string of characters in the search bar. And while Google’s engineers expanded it over the years, introducing commercial investigation and other nuanced sub-intents, this three-type model still shapes how many marketers approach keyword research today.
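As a toy illustration of Broder’s three types, a rule-based classifier might look like the sketch below. The cue lists are invented for the example; production systems use learned models trained on click and session data.

```python
import re

# Illustrative cue words for Broder's three intent types.
# Order matters: transactional and navigational signals are checked first.
CUES = {
    "transactional": ["buy", "order", "download", "cheap", "price", "book"],
    "navigational": ["login", "homepage", "site", "www"],
    "informational": ["what", "how", "why", "who", "guide", "definition"],
}

def classify_intent(query: str) -> str:
    """Match whole query tokens against cue lists; default to informational,
    which Broder found to be the most common type."""
    tokens = set(re.findall(r"[a-z0-9]+", query.lower()))
    for intent, cues in CUES.items():
        if tokens & set(cues):
            return intent
    return "informational"

print(classify_intent("cheap flights to Chicago"))        # transactional
print(classify_intent("facebook login"))                  # navigational
print(classify_intent("who is the president of France"))  # informational
```

Even this crude version shows why the taxonomy mattered: the same pipeline that once only matched strings can now route “cheap flights to Chicago” to booking engines instead of blog posts.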
For SEOs, this meant a clear pivot:
For example, in the early 2000s, “cheap flights to Chicago” might have returned an old blog post with that phrase buried in the text. Once intent classification matured, booking engines with live fare data pushed that blog post off page one.
Natural language processing (NLP) expanded this scope:
Once search engines got better at parsing meaning, users started changing how they asked for information. The “head terms” era — short two- or three-word phrases — began giving way to full-sentence queries. Instead of “SUV safety ratings,” you’d see “What’s the safest SUV for families in 2024?”
Two major shifts drove this:
From an SEO perspective, this meant moving from optimizing for keywords to optimizing for answers.
So, pre-Hummingbird, “Who is the president of France?” would bring a set of web pages where you’d have to click through for the answer. Post-Hummingbird, the answer “Emmanuel Macron” would appear instantly in a knowledge panel, pulling from structured sources like Wikipedia.
The real break from traditional search came with multi-turn interactions. AI-driven interfaces could remember what you had just asked and carry that context forward, eliminating the need to respecify parameters.
A typical sequence:
Traditional search treated each of these queries independently. AI Mode and ChatGPT remember the thread, carrying your constraints forward automatically. This is context retention — a core capability in multi-turn interactions.
For SEOs and GEOs, this means:
In Google AI Mode, starting with “Best SEO tools for enterprise” and following up with “Which ones have AI features?” doesn’t restart the search. The system filters its earlier synthesis and returns an updated set of recommendations, often blending sources.
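Context retention can be sketched as a session that folds earlier turns into each follow-up. The string templating below is a stand-in for the LLM-based rewriting real systems use; the class and method names are invented for illustration.

```python
class Conversation:
    """Toy multi-turn session: each follow-up is resolved against
    constraints carried from earlier turns instead of starting fresh."""

    def __init__(self) -> None:
        self.context: list[str] = []

    def rewrite(self, query: str) -> str:
        """Expand a follow-up into a standalone query using prior turns."""
        resolved = query
        if self.context:
            resolved = f"{query} (in the context of: {'; '.join(self.context)})"
        self.context.append(query)
        return resolved

chat = Conversation()
print(chat.rewrite("Best SEO tools for enterprise"))
print(chat.rewrite("Which ones have AI features?"))
# The second call carries the enterprise-SEO constraint forward instead
# of treating "Which ones" as a brand-new query.
```

Traditional search would hand “Which ones have AI features?” to the index as-is and fail; the session object is what makes the pronoun resolvable.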
In practice, this means designing content ecosystems that span an entire topic or a set of related topics. You want the AI to see you as a consistently relevant, multi-turn, authoritative contributor. This is why Google is leaning into penalizing websites that try to cover topics outside their core competency.
We’ve now entered a stage where AI orchestrates multiple intents at once. Because intent isn’t static, it uses the literal query along with your past interactions, your profile, and real-time data to predict your next steps.
As we examined in the previous chapter, Google AI Mode might respond to a query like “Plan a trip to Lisbon in October” by:
ChatGPT-5 with integrated tools could go further:
For GEO, the challenge is ensuring your brand stays in the mix as the AI orchestrates these transitions. That means:
The next wave of search and AI interaction goes in two directions at once — agents that act on your behalf without waiting for a prompt, and systems that pause to ask you better questions before delivering results.
Proactive agents detect latent needs based on patterns in your behavior, your context, and external data streams. They don’t just respond; they initiate.
Examples:
For GEO and Relevance Engineering, this demands:
Prompt inversion is when the AI asks you for context it knows it needs to provide a better result. Instead of forcing the user to anticipate the right phrasing, the system drives the refinement itself.
Examples:
This too has SEO and GEO implications:
Together, proactive agents and prompt inversion signal a shift from “pull” search models toward continuous, adaptive assistance — where relevance isn’t just about matching a query, but about staying useful as the AI steers the interaction.
The informational/navigational/transactional model served its time, but conversational search demands a broader lens. Happily, even before the development of AI Search, SEOs were exploring the nuanced sub-intents that people would use in their searches.
Many interactions in AI platforms are exploratory, iterative, or even ambient — with no clear “search” moment at all.
In AI contexts, intents can be:
The last three are particularly relevant for GEO. In orchestrated and ambient modes, the search step may disappear entirely from the user’s perspective — the AI retrieves, evaluates, and acts invisibly.
Profound compiles data on search intents for particular queries:
Why this matters for SEO/GEO:
Brands that only optimize for visible search queries will miss visibility in these “hidden” interactions. Content needs to be discoverable and usable at the action orchestration level.
| Category | Intent Type | Description | Example |
| --- | --- | --- | --- |
| Search-Oriented | Information | Seeks knowledge or clarification | What is generative engine optimization? |
| | Definition | Asks for the meaning of a term or concept | Define “latent semantic indexing.” |
| | How-To | Requests step-by-step instructions or a procedure | How do I set up AI Mode in Google Search? |
| | Why | Asks for reasons, causes, or explanations | Why is my site not ranking for branded keywords? |
| | Fact-Check | Seeks to verify a specific claim or data point | Did Google remove cache links from search results? |
| | Comparison | Directly compares two or more options | Gemini vs. ChatGPT for enterprise research. |
| | Review | Requests an opinion or qualitative evaluation | Is Perplexity better than AI Mode? |
| | Purchase Recommendation | Asks what product or service to buy | Best CRM for a 500-person SaaS company? |
| | Usage Recommendation | Seeks advice on using something already owned | How do I optimize HubSpot for SEO tracking? |
| | Location | Asks where something is (physical or digital) | Where is the settings menu in AI Mode? |
| | Brand Navigation | Requests to open or reach a specific site, app, or tool | Open iPullRank’s AI Search Manual. |
| Transactional | Booking | Requests to reserve or schedule | Book a meeting with iPullRank next week. |
| | Signup | Requests to register, subscribe, or create an account | Sign me up for the AI Mode webinar. |
| | Download | Requests a file, asset, or application | Download the AI Search Manual PDF. |
| | Purchase | Makes a direct request to buy | Order the SEO Week tickets. |
| Exploratory & Context-Building | Exploratory | Open-ended discovery without a fixed goal | Show me interesting AI patents from 2025. |
| | Clarifying | Narrows or reframes based on feedback | I meant organic rankings, not paid. |
| | Orchestrated | Initiates a chain of related actions | Create a content plan and send me the draft. |
| | Ambient | Receives proactive, context-triggered updates | Notify me when Google updates AI Mode. |
| | Proactive Agent | AI initiates assistance without a query | “I noticed you searched for ‘AI Mode’ yesterday — want an update?” |
| | Prompt Inversion | AI asks clarifying questions to refine results | “Do you want enterprise or SMB solutions?” |
| Generative & Creative | Creative Generation | Requests original creative content | Write a LinkedIn post about GEO. |
| | Document Drafting | Requests formal or structured writing | Draft a proposal for an AI-driven SEO strategy. |
| | Visualization | Requests a chart, diagram, or other visual | Create a graph of AI Mode adoption trends. |
| | Rewrite | Requests to rephrase without changing meaning | Rewrite this blog post to sound more conversational. |
| Utility & Troubleshooting | Troubleshooting | Reports a problem and seeks a fix | My AI Mode isn’t loading — what’s wrong? |
| | Action Request | Asks to perform a utility task | Count how many times “AI Mode” appears in this doc. |
| | Null Intent | Unclear, gibberish, or mixed beyond repair | asdfg1234? help?? |
| Mixed & Multi-Intent | Multi-Turn Exploration | Evolves across turns from one intent to another | What is GEO? → How do I apply it to ecommerce? |
The progression from Broder’s early three-part model to today’s expanded taxonomy mirrors how interactions with search and conversational platforms have grown in complexity. A single exchange can now blend multiple goals, shift direction without warning, or spark entirely new lines of inquiry.
This leads to some important takeaways:
Modern conversational systems adapt to these patterns by reworking the user’s input before retrieval ever begins. Instead of processing a single raw query, they:
These steps happen invisibly, but they define the quality and accuracy of the final answer. The next section will unpack these processes in detail. Later in the book, we’ll look at how one query can branch into multiple retrieval paths through query fan-out, creating a network of related results from a single starting point.
When humans talk to humans, we skip steps: we leave out details, we change direction mid-sentence, we use pronouns instead of repeating ourselves. Large language models and conversational search platforms have to close those gaps on the fly — and that’s where subqueries, passage retrieval, and query rewriting come in.
These processes sit under the hood of AI Mode, ChatGPT, Claude, and similar systems. You may enter a single sentence, but the machine breaks it apart into multiple structured requests, finds relevant fragments, and recombines them into an answer.
A single complex query often gets split into discrete search requests targeting specific aspects of your input, known as subqueries.
So given the query:
“Compare Trek FX 3 vs. Specialized Sirrus for commuting, and tell me which is better for rainy climates.”
An AI system may internally run:
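A rough sketch of that decomposition step is below. The split heuristics and the resulting subqueries are illustrative assumptions; real systems generate subqueries with learned models rather than regular expressions.

```python
import re

def decompose(query: str) -> list[str]:
    """Naive decomposition: break a comparison-plus-condition query
    into aspect-specific subqueries."""
    subqueries = []
    # Split the request on coordinating ", and (tell me)" joins.
    parts = re.split(r",?\s+and\s+(?:tell me\s+)?", query, flags=re.I)
    for part in parts:
        m = re.search(
            r"compare\s+(.+?)\s+vs\.?\s+(.+?)(?:\s+for\s+(.+))?$", part, flags=re.I
        )
        if m:
            a, b, use = m.group(1), m.group(2), m.group(3)
            suffix = f" for {use}" if use else ""
            # A comparison fans out into one subquery per option.
            subqueries += [f"{a} review{suffix}", f"{b} review{suffix}"]
        else:
            subqueries.append(part.strip())
    return subqueries

print(decompose(
    "Compare Trek FX 3 vs. Specialized Sirrus for commuting, "
    "and tell me which is better for rainy climates."
))
```

The point for content strategy: each string in that output list is its own retrieval target, which is why a page that only answers the rainy-climate aspect can still make it into the final synthesis.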
For SEO and GEO, this means your content can contribute to the final answer even if it never ranks for the full original query. It only needs to satisfy one of the subqueries.
What to prioritize:
Instead of evaluating entire pages, modern search agents look for the most relevant passages — compact, self-contained sections that directly address a need.
Research papers on passage ranking (such as “Passage Re-ranking with BERT” by Rodrigo Nogueira and Kyunghyun Cho) describe how context windows are used to score segments. In AI Mode, these passages can be stitched together from multiple sites to form a synthesized response:
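As a sketch of passage-level scoring: pages are split into fixed-size passages and each passage is scored independently, so the best fragments from several sources can be stitched together. Simple term overlap stands in here for a learned re-ranker like BERT, and the sample pages are invented.

```python
import re

def passages(page: str, size: int = 12) -> list[str]:
    """Split a page into fixed-size word windows (toy passage segmentation)."""
    words = page.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> int:
    """Term overlap between query and passage -- a stand-in for a
    learned relevance model scoring each segment."""
    q = set(re.findall(r"[a-z0-9]+", query.lower()))
    p = set(re.findall(r"[a-z0-9]+", passage.lower()))
    return len(q & p)

pages = {
    "site-a": "Our agency history began in 2010. Hybrid bikes with fenders "
              "and disc brakes handle rain well on daily commutes.",
    "site-b": "Carbon road bikes are light. Racing geometry suits fast rides.",
}
query = "best bike for rain commutes"
ranked = sorted(
    ((score(query, p), name, p) for name, page in pages.items() for p in passages(page)),
    reverse=True,
)
print(ranked[0])  # the rain/commute passage from site-a wins
```

Notice that site-a’s page opens with irrelevant history; the passage about rain still wins because it is scored on its own, which is why compact, self-contained sections matter.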
Dan Petrovic’s research on web page length analyzed 44,684 web pages, measuring content length with Gemini’s token counter. He found that the median web page contains about 2,400 words (roughly five pages of text, or around 3,200 tokens). A ten-document retrieval, however, can exceed 350,000 tokens, so token budgets are a real constraint on how much of each page a system can actually ingest.
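The arithmetic behind that budgeting is simple enough to sketch. The tokens-per-word ratio comes from the median-page figures above; the context-window sizes are hypothetical values for illustration.

```python
# ~2,400 words per median page at ~3,200 tokens implies ~1.33 tokens/word.
TOKENS_PER_WORD = 3200 / 2400

def estimate_tokens(word_count: int) -> int:
    """Rough token estimate from a word count, using the median-page ratio."""
    return round(word_count * TOKENS_PER_WORD)

def pages_that_fit(page_word_counts: list[int], context_window: int) -> int:
    """How many whole pages fit in a token budget, taken in order."""
    used = fits = 0
    for words in page_word_counts:
        cost = estimate_tokens(words)
        if used + cost > context_window:
            break
        used += cost
        fits += 1
    return fits

# Ten median-length pages against a hypothetical 32k-token window:
print(pages_that_fit([2400] * 10, 32_000))  # → 10
```

Swap in a smaller window or longer pages and whole documents stop fitting, which is why systems fall back to retrieving passages rather than full pages.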
His suggestions:
AI platforms don’t always take your words at face value. They’ll reformulate them to improve clarity and retrieval quality. This is query rewriting — a key bridge between UX for humans and AX (agent experience) for AI systems.
In Google AI Mode, this often happens silently. A request like:
“Where should I stay in Lisbon for a conference in October?”
may be internally rewritten as:
Research papers on “query decomposition” show similar behavior — refining queries to match retrieval indexes better, and sometimes expanding them to include synonyms or related terms.
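A toy version of query rewriting is sketched below. The synonym table, context fields, and output format are invented for the example; real systems do this with learned rewriters, not lookup tables.

```python
# Illustrative synonym/expansion table -- an assumption, not a real index.
SYNONYMS = {
    "stay": ["hotels", "accommodation"],
    "conference": ["business travel"],
}

def rewrite(query: str, context: dict) -> str:
    """Expand an ambiguous request with synonyms and session context
    the user never typed, before it hits retrieval."""
    expanded = [query]
    for term, alts in SYNONYMS.items():
        if term in query.lower():
            expanded.extend(alts)
    # Fold in inferred constraints (dates, traveler type, location).
    for key, value in context.items():
        expanded.append(f"{key}:{value}")
    return " | ".join(expanded)

print(rewrite(
    "Where should I stay in Lisbon for a conference in October?",
    {"month": "October", "traveler_type": "business"},
))
```

The rewritten string is what the retrieval index actually sees — which is why content that uses the expanded vocabulary (“hotels,” “business travel”) can match a query that never contained those words.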
Understanding these mechanics lets you design content that’s AI-friendly without pandering, which we’ll cover in depth in Chapter 9. But in a nutshell, this means:
Search used to be about designing for human consumption: clear page titles, intuitive layouts, and content hierarchy that matched how people scan. But as we’ve seen, conversational AI platforms aren’t “reading” your content like a human at all. They’re parsing it, segmenting it, and slotting it into a framework that supports synthesis and action. That shift changes the audience for your work — now you’re building for two very different interpreters: humans and agents.
UX for Humans focuses on:
AX for Agents requires:
In practice, an AI Mode result might never expose your original visual design. It might instead extract just a paragraph, blend it with other sources, and reframe it in a way that suits the answer synthesis. And ChatGPT might go even further — taking your structured content and executing tasks on top of it, like summarizing, comparing, or generating next-step actions.
This raises the core question: Should content be created in two versions — one optimized for human experience, one for agent parsing — or can a single artifact serve both well enough?
So a product-comparison page might be approached this way:
As agents take on more proactive behaviors — surfacing information before it’s asked for, executing sequences of actions — this dual optimization becomes even more critical. The same content might need to drive both a compelling human experience and fuel an invisible API call in the background of an AI-driven platform.
| Human UX Element | Purpose for People | AX Equivalent | Purpose for Agents | Example in Practice |
| --- | --- | --- | --- | --- |
| Headings & Subheadings | Break content into readable chunks; signal topic hierarchy | Semantic section labels (H-tags, schema headlines) | Help agents segment and classify content into logical parts | H2: “Best Running Shoes for Flat Feet,” tied to schema markup for product category |
| Introductory Paragraphs | Set context and engage curiosity | Context-rich entity definitions | Establish topic scope and relationships early for accurate retrieval | First two sentences define “flat feet” and their impact on running biomechanics |
| Visual Hierarchy | Directs user’s eye flow and priority | Metadata hierarchy | Guides agent parsing order and importance weighting | Aria-label and sectioning tags used to indicate hierarchy |
| Images & Captions | Add visual context and emotional appeal | Alt text with descriptive entities | Give agents a non-visual interpretation of image meaning | Image: “Blue Nike Pegasus 41” → Alt text: “Nike Pegasus 41 men’s running shoe, blue, size 10” |
| Lists & Bullet Points | Increase scannability | Delimited structured lists | Enable agents to extract discrete, atomic facts | List of pros/cons, each in <li> with clear descriptors |
| Navigation Menus | Help users move between sections | Internal link graph with anchor context | Help agents understand site structure and topical relationships | Internal link: “/running-shoes/flat-feet” with anchor “Flat Feet Running Shoes” |
| Calls to Action (CTAs) | Guide human decision-making | Explicit action parameters | Let agents translate into actionable commands | CTA “Book Now,” paired with structured data offers object for booking API |
| Microcopy & Tooltips | Clarify specific interactions | Inline definitions or metadata annotations | Give agents the missing nuance to resolve ambiguous terms | Tooltip “PR” clarifies as “Public Relations,” not “PageRank” |
| Comparative Tables | Make side-by-side evaluation easier | Structured comparison datasets | Allow agents to directly compare attributes | HTML table tagged with product attributes and consistent units |
| Storytelling/Case Studies | Build trust and emotional connection | Timestamped, entity-tagged narrative blocks | Let agents surface examples when contextually relevant | Case-study text with structured mentions of company, results, date |
The move from designing for human UX to also designing for agent AX sets the stage for understanding who actually controls discovery in this new search environment. These agents aren’t just abstract software — they operate within ecosystems owned and maintained by gatekeepers, each with their own rules for what gets surfaced, summarized, or left invisible.
Google remains the dominant gatekeeper, but its role has shifted. AI Mode, AI Overviews, and conversational interfaces mean the company isn’t just ranking pages — it’s now orchestrating the entire information flow, from retrieval to synthesis. And so how you design for agents is directly shaped by how Google’s systems break down your content, interpret its relevance, and decide whether it fits the answer space at all.
The next chapter therefore begins by turning squarely toward Google as the central filter between your work and the user. Before we talk about things like query fan-out and multi-source expansion, we’ll map out how Google’s position as the primary gatekeeper is evolving, how its AI-driven decisions are made — and what that means for anyone competing for visibility inside its walls.
If your brand isn’t being retrieved, synthesized, and cited in AI Overviews, AI Mode, ChatGPT, or Perplexity, you’re missing from the decisions that matter. Relevance Engineering structures content for clarity, optimizes for retrieval, and measures real impact. Content Resonance turns that visibility into lasting connection.
Schedule a call with iPullRank to own the conversations that drive your market.
The appendix includes everything you need to operationalize the ideas in this manual: downloadable tools, reporting templates, and prompt recipes for GEO testing. You’ll also find a glossary that breaks down technical terms and concepts to keep your team aligned. Use this section as your implementation hub.
The AI Search Manual is your operating manual for being seen in the next iteration of Organic Search, where answers are generated, not linked.
Prefer to read in chunks? We’ll send the AI Search Manual as an email series—complete with extra commentary, fresh examples, and early access to new tools. Stay sharp and stay ahead, one email at a time.
Sign up for the Rank Report — the weekly iPullRank newsletter. We unpack industry news, updates, and best practices in the world of SEO, content, and generative AI.
iPullRank is a pioneering content marketing and enterprise SEO agency leading the way in Relevance Engineering, Audience-Focused SEO, and Content Strategy. People-first in our approach, we’ve delivered $4B+ in organic search results for our clients.
We’ll break it up and send it straight to your inbox along with all of the great insights, real-world examples, and early access to new tools we’re testing. It’s the easiest way to keep up without blocking off your whole afternoon.