The AI Search Manual

CHAPTER 2

User Behavior in the Generative Era: From Clicks to Conversations

People are searching differently. Google still owns the lion’s share of the search market, but the rapid transformation of organic search, driven by advances in machine learning and improved natural language understanding, is fundamentally changing the way we search.

Search engines like Google are increasingly effective at interpreting and answering complex, conversational queries that previously would have required multiple searches and clicks. 

Elizabeth Reid, Google’s VP of Search, champions the shift, noting, “AI in search is making it easier to ask Google anything and get a helpful response with links to the web,” highlighting AI Overviews as “one of the most successful launches in search in the past decade.” 

Similarly, in late 2024, Alphabet CEO Sundar Pichai hinted at even deeper upcoming changes, stating at the New York Times DealBook Summit, “Search itself will continue to change profoundly in ’25. We are going to be able to tackle more complex questions than ever before… You’ll be surprised even early in ’25, the newer things search can do compared to where it is today.” 

Pichai’s forecast was realized a few months later with Google’s rollout of AI Mode.

These advancements are fundamentally reshaping search behavior. Rather than seeking out individual links, people increasingly rely on synthesized answers directly from search results, such as AI Overviews, or engage with platforms like Google’s AI Mode and ChatGPT. Whatever SEOs may want you to believe, conversational search platforms are designed to satisfy search intent (at least, according to Google).

Traditional keyword searches are giving way to rich, nuanced prompts that carry deep contextual information and yield more accurate answers. This shifts the very nature of search from quick lookups to iterative, conversational exploration, though quick lookups still have value in the appropriate context.

But search behavior will become conversational, interactive, and exploratory, reflecting something akin to Google’s 2020 concept of the “messy middle,” where users navigate complex decision journeys through continuous back-and-forth interactions.

At the same time, hallucinations and content lifted from publishers force us to confront critical questions about trust in, and reliance on, AI-generated outputs.

As people increasingly accept AI-generated responses without verifying their sources (often out of sheer laziness), brands face strategic questions about how editorial guidelines, implicit trust dynamics, and conversational biases shape search interactions and individual decisions. Prompts, intentionally or not, steer the responses. If everyone gets a different answer to their personalized, contextualized questions, what are brands supposed to do?

The upside? The traffic that navigates to a website after engaging with generative search results tends to be highly qualified and intentional, presenting clearer and more actionable conversion opportunities.

Brands need to recognize and adapt to these strategic shifts by optimizing their visibility within AI-generated contexts today, particularly Google’s AI Overviews, while strategically preparing for deeper integration with conversational search behaviors tomorrow. 

This chapter explores these behavioral shifts in detail, highlighting how brands can anticipate conversational bias, leverage iterative discovery journeys, and foster trust within AI-driven search environments, ultimately aligning with human values and user expectations in this generative era.

Less Clicking, More Synthesizing

Let’s not sugarcoat it: adoption of AI Search is skyrocketing, and clicks are disappearing. 

People can’t turn off AI Overviews, so even if you hate them, you can’t avoid them. Most people won’t jump through hoops to avoid AI Search. And frankly, they are increasingly satisfied with the answer Google gives them right there in the results.

AI Overviews are synthesized, AI-generated summaries built to fulfill the query without sending people elsewhere (unless they’re going to buy your product or service). 

Google’s AI Overviews are now showing up at the top of the search results more often than not.

They take up space, deliver direct answers, and push traditional organic listings deep below the fold—sometimes by over 1,500 pixels. It’s a total redistribution of attention.

Here’s what that looks like in practice:

  • Click-through rates (CTR) for top-ranking pages have dropped dramatically. Ahrefs data shows a 34.5% decline for #1 organic results when AI Overviews are present. If you’re used to owning that top spot, you’re no longer guaranteed the traffic.
  • Zero-click searches are spiking. Similarweb found that for news-related queries, the rate of users not clicking anything jumped from 56% in May 2024 to 67% in May 2025, an 11-point swing in a single year.
  • Publishers are taking the hit. In a joint study by Press Gazette and Similarweb, ~40 of the top 100 U.S. news publishers saw measurable traffic declines linked directly to the presence of AI Overviews.

Why is this happening? Because people are adjusting. They’re retraining themselves not to browse. Instead, they scan the AI output, get what they need, and bounce—or worse, end their session entirely. 

Pew Research reports that only 1% of users click links inside AI summaries, and 26% abandon their session altogether after reading them.

If users aren’t clicking, but they’re still satisfied, what does that tell us?

It tells us the changes in search behavior are already happening. Google is collecting engagement data, watching abandonment rates drop, and concluding that generative answers work. The model is giving people what they want: immediate, frictionless synthesis. And the numbers back it up. 

Next up? They’ll ramp up ads in AI Mode starting in Q4 2025. If those ads generate as much revenue as traditional search, or more, the inevitable next step is AI Mode as the default version of search.

Now here’s the uncomfortable part.

If users prefer AI-generated summaries over visiting the actual source, then publishers are no longer part of the value chain. They’re the raw material. That breaks the unspoken agreement that has powered the open web for decades: you create quality content, Google sends you traffic. 

But if no one enforces that social contract (i.e., no legislation, no user backlash, no drop in satisfaction), Google, OpenAI, and Perplexity will keep moving forward. And people will follow.

It’s a rewiring of search behavior.

AI is Rewiring Your Search Behavior

  • If your brand doesn’t show up in the answer, you don’t exist in the session.
  • If your content isn’t shaping the synthesis, it’s not shaping perception.
  • And if you’re not adapting to this shift, rethinking what visibility means, where users find you, and how your information lives inside AI systems, then you’re already behind.

TikTok, Reddit, and other discovery platforms are taking the spillover. But Google is still the front door. The door just looks a lot more like a chatbot now.

Prompts Are the New Queries

Search has always been a form of self-expression. But with generative AI in the mix, your ability to get the right answer depends on how well you can articulate the question. One vague prompt can return something generic. A precise, context-rich prompt? That gets you gold.

Not All Prompts Are Created Equal

Here’s a simple truth about AI Search: what you put in shapes what you get out. Garbage in, garbage out. Yet most people treat AI prompts like keyword searches. That’s what they’re used to; search engines have trained them to use ‘keywordese.’

Let’s break that down:

 

  • Casual prompt: “Italy travel tips” returns generalized advice, the most popular cities, and basic do’s and don’ts.
  • Intermediate prompt: “What should I know before traveling to Italy for the first time in the summer?” returns season-specific travel advice, cultural etiquette, and packing recommendations.
  • Advanced prompt: “What are the key differences between traveling to Northern vs Southern Italy in July with a toddler, and which region is better for family-friendly local experiences and public transit access?” returns personalized recommendations with deeper segmentation, trade-offs, and justification based on stated priorities.

 

In practice, it takes time to write a longer prompt. There’s still friction there. But a better answer requires context:

  • Who are you?
  • What’s your intent?
  • What have you tried or ruled out?

  • What constraints do you have?

Anatomy of a Strong Prompt

A well-structured prompt for AI Search includes:

  • Subject: What the user is asking about
    “Italy travel”
  • Context: Who the user is or what their situation is
    “…with a toddler”
  • Intent: What they want out of the answer
    “…family-friendly experiences and easy transportation”
  • Constraints: Requirements or limitations
    “…in July, prefer public transit”
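Assembled, those four components form one context-rich prompt. Here is a minimal sketch in Python, purely illustrative; the field names and template are our own, not something any AI platform requires:

```python
# Illustrative only: the structure mirrors the Subject/Context/Intent/Constraints
# breakdown above; no AI platform requires this exact format.
from dataclasses import dataclass

@dataclass
class PromptSpec:
    subject: str      # what the user is asking about
    context: str      # who the user is or what their situation is
    intent: str       # what they want out of the answer
    constraints: str  # requirements or limitations

def build_prompt(spec: PromptSpec) -> str:
    """Assemble the four components into a single context-rich prompt."""
    return (
        f"I'm planning {spec.subject} {spec.context}. "
        f"I'm looking for {spec.intent}. "
        f"Constraints: {spec.constraints}."
    )

italy = PromptSpec(
    subject="a trip to Italy",
    context="with a toddler",
    intent="family-friendly experiences and easy transportation",
    constraints="traveling in July, prefer public transit",
)

print(build_prompt(italy))
# I'm planning a trip to Italy with a toddler. I'm looking for family-friendly
# experiences and easy transportation. Constraints: traveling in July, prefer public transit.
```

The exact template matters far less than the fact that all four components are present; drop any one of them and the model has to guess.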

As AI Search platforms get smarter, they reward specificity. The more context a prompt offers, the more accurate and tailored the synthesis becomes. That doesn’t mean most people will use longer prompts, but over time, we can expect search behavior to change. 

We are seeing subtle indications of that behavior shift via early AI Mode data:

People searching on AI Mode use slightly more words than those using traditional Google Search, but significantly fewer words than those using ChatGPT. That’s likely due to unfamiliarity with the functionality of AI Search.

How Prompt Quality Impacts Output

AI Mode and search-enabled ChatGPT filter and weigh the web based on how your prompt sets the stage. With a vague query, they infer your intent. With a strong prompt, they align to it precisely.

That’s why seasoned users (especially researchers, analysts, and SEOs) use more sophisticated prompts to:

  • Emphasize task-specific language (e.g. “compare,” “rank,” “summarize pros and cons”)
  • Set clear roles (“act as a travel planner,” “respond as a UX researcher”)
  • Specify output format (“in bullet points,” “in a table,” “give sources”)
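In practice, those directives are just extra sentences wrapped around the base request. A small, self-contained sketch (again illustrative; the wording is ours, not a documented prompt API):

```python
def add_directives(prompt: str, role: str, output_format: str) -> str:
    """Wrap a base prompt with a role and an output-format directive."""
    return f"Act as {role}. {prompt} Respond {output_format}."

base = (
    "Compare traveling to Northern vs Southern Italy in July with a toddler, "
    "and rank the regions for family-friendly experiences and public transit access."
)

print(add_directives(
    base,
    role="a travel planner",
    output_format="in a table of pros and cons, and cite your sources",
))
```

Role, task verbs, and output format each narrow the space of acceptable answers, which is exactly what makes the synthesis more precise.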

This is where we’re headed: Prompt fluency is the new search literacy.

Over time, AI search platforms will reduce the friction of prompting by pulling context from everything they already know about the person searching. Depending on the tool, that could include your device type, location, search history, preferences, past chats, and even behavior patterns. The more context the system has upfront, the less you’ll need to spell out in the prompt itself.

Both Google and ChatGPT already provide local search results tailored to your location and search intent. We see different results in Google depending on whether you’re searching from a computer or a phone.

All the data these tools collect from you will enhance your search experience, but it raises concerns about data privacy and whether the best answers are what you want to hear versus what you need to hear.

Iterative Discovery and Multi-turn Search Behavior

Search used to be a guessing game.

You’d punch in a few keywords, scan a wall of blue links, maybe click one or two, then head back to the search bar when it wasn’t quite right. Rinse and repeat. It was a clunky back-and-forth. Less about finding the answer, more about figuring out how to ask the question the way the engine wanted.

Search is no longer a one-and-done transaction. It’s a dialogue. One that unfolds over multiple turns as users clarify, refine, and expand on their original queries. This shift toward iterative discovery will quietly, yet fundamentally, reshape how we understand search behavior, content performance, and optimization strategies.

What’s a “Turn” in AI Search?

A turn is a single back-and-forth between the user and the AI:

  • You ask a question.
  • The AI gives you an answer.
  • That full exchange = 1 turn.

According to the LLM monitoring platform Profound, in August 2025:

  • The average number of turns in a ChatGPT conversation: 5.2 
  • The median number of turns in a ChatGPT conversation: 2
  • In ChatGPT, 49.4% of conversations are single-turn, while 50.6% are multi-turn.

Most people are having a brief exchange at this point.
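To make the arithmetic concrete, here is a tiny sketch (our own illustration, not Profound’s methodology) of conversations represented as lists of turns, with the same kinds of statistics computed over them:

```python
from statistics import mean, median

# Each conversation is a list of turns; each turn is one (user prompt, AI answer) pair.
conversations = [
    [("best toddler bikes", "...")],                                        # single-turn
    [("Italy travel tips", "..."), ("what about with a toddler?", "...")],  # multi-turn
    [("plan a weekend in Austin", "..."), ("what works without a car?", "..."),
     ("make a day-by-day itinerary", "...")],                               # multi-turn
]

turn_counts = [len(convo) for convo in conversations]
multi_turn_share = sum(1 for n in turn_counts if n > 1) / len(turn_counts)

print(f"average turns:    {mean(turn_counts):.1f}")
print(f"median turns:     {median(turn_counts)}")
print(f"multi-turn share: {multi_turn_share:.0%}")
```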

In multi-turn search, context compounds. Each turn builds on the last. And that’s where things get interesting.

Contextual Insight: We Don’t Search Like We Used To

Whereas traditional search relied on short, disconnected queries (“best toddler bikes”), generative platforms invite follow-ups (“what about ones that are easy to store?” … “are any under $100?” … “what colors are available?”). Each new turn deepens the session’s understanding of your intent.

This behavior aligns with what we saw in Google’s 2020 Messy Middle study that we referred to earlier: people gather, filter, and compare in loops before taking action. AI search just accelerates and condenses that loop into a single interface.

Real-World Example: Perplexity and Iterative Exploration

In a recent interview, Perplexity CEO Aravind Srinivas shared that users on their platform often start with general questions, then progressively narrow in on specifics. 

“You ask [users] a question, you get an answer… But users do follow‑up — often narrowing or adjusting based on what they see.”

Rather than starting over with a new search, users iterate naturally—each prompt shaped by the last response. It’s conversational, fluid, and cognitively closer to how people think when they’re learning or comparing.

Here’s a sample three-turn search flow showing how a typical AI search journey builds context and moves a user toward action:

Example: Planning a Weekend Trip to Austin

Turn 1 – Broad Intent Discovery

  • User Prompt: “What are the best things to do in Austin for a weekend trip?”
  • AI Response:
    • Highlights 8–10 top attractions (Barton Springs, Zilker Park, live music venues, BBQ spots).
    • Offers a high-level itinerary suggestion.
    • Includes links to official sites for each attraction.

User Goal: Get an overview of the possibilities.

Search Behavior Change: In traditional search, this would be several separate keyword queries (“Austin attractions,” “Austin BBQ,” “Austin music venues”). AI Search collapses it into one broad, synthesized answer.

Turn 2 – Narrowing and Filtering

  • User Prompt: “Which of these are best if I don’t have a rental car?”
  • AI Response:
    • Filters attractions to walkable or public transit-accessible options.
    • Suggests downtown hotels near venues.
    • Links to Google Maps walking/transit directions.

User Goal: Filter based on constraints.

Search Behavior Change: No need to re-enter the query from scratch; context from Turn 1 is retained.

Turn 3 – Moving Toward Action

  • User Prompt: “Can you make a day-by-day itinerary with restaurant reservations included?”
  • AI Response:
    • Generates a 2-day plan, mapping activities and restaurant timings.
    • Recommends booking Franklin BBQ 2 weeks in advance, provides reservation links.
    • Suggests an evening concert at the Continental Club with a ticket link.

User Goal: Create a ready-to-use plan.

Search Behavior Change: The AI is now functioning like a personalized trip planner—combining search, filtering, and action initiation in a single flow.

This is the outcome we hope to see, but the current conversational search experience is far from perfect. In practical use, results from AI Mode, ChatGPT, or Perplexity often fall short in terms of accuracy, sourcing, or depth. 

Recommendations can still be inconsistent, outdated, or shaped by the AI’s training data rather than the freshest or most authoritative insights. 

Right now, they’re “good enough” for many users: fast, confident, and convenient. And because the outputs keep getting marginally better with every model update, that’s often all it takes for people to stick with them instead of clicking deeper. The risk for brands is that “good enough” will eventually mean users never click through to you at all: your expertise is still in the answer, but the AI gets all the credit and owns the customer relationship.

So, how does the experience improve as results become more personalized?

Trusting the AI: The Invisible Hand of Generative Authority

People are already treating AI-driven answers as if they’re authoritative, often without verifying them. Due to automation bias, people tend to “accept AI output without question,” particularly when it’s framed in confident, natural language. This isn’t surprising: humans are hardwired to value efficiency, and if the answer feels complete, we stop searching.

A large randomized experiment by Li and Aral tested how design choices shape trust in AI search. The findings: people trust generative AI search less than traditional search by default, but links and citations significantly increase trust, even when those links are wrong or hallucinated, while displaying uncertainty reduces trust. In short, presentation details can inflate misplaced confidence.

And everyone’s susceptible, regardless of their education level. Despite the assumption that more educated people possess a higher level of critical thinking, participants with higher levels of education (college degree or higher) are more likely to trust GenAI information and are significantly more willing to share it than those with no college degree.

Independent reporting reinforces the risk. A Choice Mutual audit found 57% error rates in Google AI Overviews for life-insurance queries—yet the summaries still look convincing to lay readers, and a Pew analysis shows behavior shifting alongside trust: when an AI summary appears, users click traditional links about half as often (8% vs. 15%). 

Strategic implications:

  • Brand risk – If users trust the AI result over your own site, you risk being replaced as the “final word” in your category.
  • Editorial bias – Your visibility depends on how each platform curates and filters sources; you may be excluded from the answer entirely if your content doesn’t match the platform’s editorial signals.
  • Opportunity – Platforms reward brands that make credibility explicit: author credentials, robust citations, and transparent sourcing. These trust signals can increase the likelihood of being surfaced and cited.

Technical Explanation: How Context Sticks

That trust dynamic becomes even more potent when combined with context retention. Once a platform decides you’re a credible source, its ability to recall and re-use that trust in future responses compounds your visibility.

AI systems like ChatGPT and Google’s AI Mode maintain continuity in two primary ways:

  • User Embeddings – Vector representations of user interests, which evolve over time based on interactions.
  • Session Memory – Temporary or persistent storage of prior turns, allowing the model to recall past inputs and tailor responses accordingly.
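As a rough mental model only (our sketch, not how Google or OpenAI actually implement these systems), the two mechanisms can be pictured like this:

```python
import numpy as np

EMBED_DIM = 8  # real systems use hundreds or thousands of dimensions

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would use a learned text-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)

class SessionMemory:
    """Stores prior turns so later prompts are interpreted with earlier context."""
    def __init__(self):
        self.turns: list[tuple[str, str]] = []

    def add(self, prompt: str, answer: str) -> None:
        self.turns.append((prompt, answer))

    def context_window(self, last_n: int = 5) -> str:
        return "\n".join(f"User: {p}\nAI: {a}" for p, a in self.turns[-last_n:])

class UserEmbedding:
    """A vector of interests, nudged toward each new prompt over time."""
    def __init__(self, rate: float = 0.2):
        self.vector = np.zeros(EMBED_DIM)
        self.rate = rate

    def update(self, prompt: str) -> None:
        self.vector = (1 - self.rate) * self.vector + self.rate * embed(prompt)

memory, profile = SessionMemory(), UserEmbedding()
for prompt in ["best things to do in Austin", "which of these work without a car?"]:
    profile.update(prompt)                      # long-lived interest signal
    memory.add(prompt, "<synthesized answer>")  # short-lived conversational context

print(memory.context_window())
print(profile.vector.round(2))
```

The distinction matters: session memory stores the literal exchange, while the user embedding compresses behavior into a profile that can persist across sessions.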

Google AI Mode: Personal Context

Google AI Mode will start leveraging personal context from across your account—search history, Gmail, Drive, YouTube viewing habits, and even calendar info—to deliver intuitive, personalized responses. If you search for “things to do in Brooklyn this weekend,” it can recommend venues based on your itinerary, previous reservations, and stated interests like live music or local food. AI Mode will also provide answers based on past searches and entire search journeys.

ChatGPT: Memory and Memory-Like Behavior

While OpenAI doesn’t pull from your Gmail or calendar, ChatGPT does retain conversational context within a session. It remembers what you asked previously and responds accordingly—like a human conversant. If you began by asking, “Summarize this article about electric car battery life,” then later added, “Now explain cost comparisons for home charging,” ChatGPT adapts its next response to your earlier prompt—making the entire session coherent.

In ChatGPT (with memory on), this context can persist beyond a single session. You can ask follow-ups days later, and it still “remembers” the background. Google’s AI Mode appears to do this within the session only—for now. But persistent context is likely coming.

This trust–context pairing raises new strategic questions:

  • What if the AI forgets too quickly, causing your brand to drop out of the synthesis?
  • What if it remembers too much, creating stale or overly repetitive answers?

In either case, your position in the AI’s “mental model” of trustworthy sources becomes as critical as any keyword ranking ever was.

The technical mechanics of personal context and memory are more than just engineering feats. Once an AI system knows who it’s talking to, those details don’t just help it remember your itinerary or summarize yesterday’s news—they begin to steer the entire search experience. What starts as convenience quickly turns into a customized information loop, where the AI decides what to highlight, omit, or frame based on what it has already learned about you. This is where the conversation shifts from “how” AI remembers to “what” that memory does to the truth you see.

The Bias You Don’t See

The more generative search systems know about you, the more they tailor what they show you. Your preferences, from your location and browsing habits to the persona you project, shape what the AI recommends. This creates a self-reinforcing loop: the outputs reflect your biases, the AI adapts to those biases, and you become more comfortable relying on answers that fit your worldview.

Garrett Sussman’s AI is Rewiring Search presentation at SEO Week illustrated this feedback loop with live demonstrations, showing how identical prompts produced different results in Google AI Mode, ChatGPT, and Perplexity when tested with varying persona and location contexts. The takeaway was clear: personalization doesn’t just adapt answers to you—it subtly rewrites your information environment.

These shifts are amplified by platform-level alignment strategies.

  • Anthropic uses a “constitutional” approach to model alignment, embedding principles like mutual respect and historical accuracy, then validating them through real-world conversation evaluations (Anthropic “Values in the Wild”).
  • OpenAI applies RLHF (reinforcement learning from human feedback) and “deliberative alignment” to optimize ChatGPT for safety, helpfulness, and adherence to policy (OpenAI on safety & alignment).
  • Google integrates its Gemini model into AI Mode to produce fast, accessible outputs that incorporate your personal context and data. Its alignment philosophy is less transparent and primarily designed to maintain a familiar search user experience (Google AI Overviews blog).

Each system’s alignment influences the expression of its bias. Claude’s consistency reflects its values charter, ChatGPT’s answers follow human-preference tuning, and Google’s outputs optimize for surface-level accuracy and click safety. When persona details or “contextual primes” are in play, these biases deepen. Two users can enter the same query and receive completely different recommendations—differences not just in tone or format, but in which brands, sources, and perspectives are surfaced.

This is the “rewiring” in action: your identity and prior behavior become inputs to the query architecture itself. Search is no longer one-size-fits-all. It’s a personalized, context-driven experience shaped by the AI’s interpretation of who you are.

From Context to Consequence

Once personal context and memory are embedded into the search experience, bias shifts from being about what is shown to shaping why a user clicks at all. That behavioral change is central to Google’s defense of AI Overviews. Click volume for certain queries has declined, but Google states that the clicks that remain are more intentional and more likely to lead to deeper engagement. In their view, an AI-powered snippet filters out quick-bounce visits and surfaces the users who are ready to take action.

The Paradox Emerges

Elizabeth Reid, VP and Head of Search at Google, described AI Overviews as “the most significant upgrade” in search history, asserting they lead to more searches and more valuable clicks. According to Google’s data, overall organic click volume remains stable year over year, and the proportion of “quality” clicks has grown. 

They define a quality click as one where the user does not quickly return to the results page. Google attributes this to AI Overviews answering low-intent questions in-line, such as “When is the next full moon,” while still encouraging further exploration for more complex or transactional topics.

Many in the SEO community remain skeptical. Independent traffic data often tells a different story, with some publishers reporting significant losses. 

Critics point out that Google both controls the feature and determines the metrics for its success. This raises questions about the objectivity of the quality click narrative.

Why the Engagement Paradox Matters

Even if Google’s data is accurate, the mechanics described create a structural change for SEOs and Generative Engine Optimizers.

  • Fewer visitors may arrive at a site, but those who do are further along in the decision process.
  • If the AI Overview fully satisfies the user’s need, even highly relevant content may go unseen.
  • Brands that are included and cited within AI Overviews are more likely to capture a greater share of the remaining high-intent traffic.

Mitigation Strategy for SEOs and GEOs

Adapting to this shift requires a focus on being present in the spaces where AI Overviews end and user curiosity continues. That means:

  • Structuring content so it is easily retrievable by AI systems, in addition to traditional search ranking.
  • Creating resources that address the likely follow-up questions users will have after an AI-generated answer.
  • Tracking how queries expand into subtopics and identifying where AI-generated responses leave gaps.

In practice, this involves moving from a focus on ranking for a query to ensuring visibility in the specific information gaps that drive clicks and build trust in a more competitive attention environment.
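One way to operationalize the first and third bullets above is to chunk your content into self-contained passages and score how well each anticipated follow-up question is covered. The sketch below is a minimal illustration under our own assumptions: it uses the open-source sentence-transformers library and an arbitrary similarity threshold, not any platform’s actual retrieval stack.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any text-embedding model works

# Self-contained passages from your page, roughly one claim or answer per chunk.
chunks = [
    "Franklin Barbecue in Austin typically requires arriving before 9 a.m. or pre-ordering.",
    "Downtown Austin hotels near the Continental Club are walkable to most live-music venues.",
    "CapMetro buses and light rail connect the airport, downtown, and Zilker Park.",
]

# Follow-up questions users are likely to ask after an AI-generated overview.
follow_ups = [
    "How far in advance should I book Franklin BBQ?",
    "Can I get around Austin without a rental car?",
    "Which Austin neighborhoods are best for live music?",
]

chunk_vecs = model.encode(chunks, normalize_embeddings=True)
question_vecs = model.encode(follow_ups, normalize_embeddings=True)
scores = util.cos_sim(question_vecs, chunk_vecs)  # questions x chunks similarity matrix

for question, row in zip(follow_ups, scores):
    best = row.max().item()
    status = "covered" if best > 0.5 else "GAP"   # threshold is an arbitrary assumption
    print(f"{status:8}({best:.2f})  {question}")
```

Questions that score poorly are the information gaps worth covering with new, self-contained passages; questions that score well point to content worth structuring for direct retrieval (clear headings, concise answers near the top).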

Balancing Today’s SEO with Tomorrow’s Search

AI-driven search is changing how people find and use information. Instead of short keyword queries, users are giving longer prompts that contain more context. The AI uses this to provide answers directly, often reducing the need to click through to websites.

For many sites, this means fewer visits. But the visits that do come through tend to be more intentional. People arrive after refining their questions through the AI, so they’re further along in their decision-making process. This can make each click more valuable, even if overall volume is lower.

Trust is a bigger factor now. Generative platforms decide which sources to highlight, and those decisions influence what people see as credible. To stand out, brands need to show clear expertise and make it easy for AI systems to recognize and surface their content.

The takeaway is to keep doing what works for traditional search, but start building for AI-driven results now. Focus on visibility in today’s AI Overviews while preparing for a future where AI-first search may be the main way people find information.

We don't offer SEO. We offer Relevance Engineering.

If your brand isn’t being retrieved, synthesized, and cited in AI Overviews, AI Mode, ChatGPT, or Perplexity, you’re missing from the decisions that matter. Relevance Engineering structures content for clarity, optimizes for retrieval, and measures real impact. Content Resonance turns that visibility into lasting connection.

Schedule a call with iPullRank to own the conversations that drive your market.



