Authoritative Intelligence:

Evolving IR, NLP, and Topical Evaluation in the Age of Infinite Content

By Jeff Coyle
Head of Strategy at Siteimprove & Co-founder of MarketMuse

In his SEO Week talk, Jeff explores how the explosion of AI-generated content has changed the search landscape and what it means for SEO. He breaks down how search engines now evaluate trust, authority, and topical relevance at scale—and why editorial excellence and human perspective are more critical than ever.

Is FOMO hitting you hard after Missing SEO Week 2025? It's not too late to attend in 2026.

SEO Week 2025 set the bar with four themed days, top-tier speakers, and an unforgettable experience. For 2026, expect even more: more amazing after parties, more activations like AI photo booths, barista-crafted coffee, relaxing massages, and of course, the industry’s best speakers. Don’t miss out. Spots fill fast.


ABOUT Jeff Coyle

Jeff Coyle is the Head of Strategy for Siteimprove and the Co-founder of MarketMuse. Jeff has more than 26 years of experience in the search industry. He is focused on setting the standard for Content Quality with Content Strategy, AI and Content Lifecycle technology solutions.

OVERVIEW

In his SEO Week presentation, Jeff addresses the challenges and opportunities in today’s AI-saturated search environment, where infinite content can be generated with ease, but quality remains scarce. He walks through the evolution of search, from simple keyword matching to complex hybrid retrieval systems powered by dense and sparse vectors, passage ranking, and semantic analysis. Jeff emphasizes how search engines are now prioritizing trust, expertise, and site-level authority as they adapt to the overwhelming volume of mediocre AI-generated content.

Jeff also discusses the shift from traditional blue links to AI Overviews and retrieval-augmented generation (RAG): how Google synthesizes answers instead of merely retrieving documents. He makes a compelling case for why every page matters and why editorial strategy must adapt with input metrics, diverse perspectives, and structured planning. Ultimately, he urges SEOs and content strategists to focus on differentiation, point of view, and building content that truly deserves to be trusted and surfaced.

Recommended Resources:

DOWNLOAD THE DECK

Talk Highlights

Search engines are prioritizing trust and site-level authority 

To navigate the overwhelming influx of AI-generated content, making every page’s quality and reliability critical in ranking decisions.

Modern search is powered by hybrid retrieval methods 

combining lexical (text-based) and vector (meaning-based) analysis to understand context, intent, and relevance beyond keywords.

To survive in the age of infinite content, content teams must lead with… 

…editorial excellence, incorporating input metrics, differentiation, and early-stage AI integration to build truly valuable and trustworthy content.

Presentation Snackable


What’s one thing you didn’t get to share in your talk that you’d add now?

Jeff Coyle: I wanted to get into more detail about how AI Overviews work and, separately, how SERP feature changes and algorithm changes are being conflated with AI Overview and AI Mode changes.

Has anything since SEO Week changed how you’d frame your talk on AI Mode or SEO today?

Jeff Coyle: I mentioned query fan-out and defined it in the session, but since then I’ve spent 100+ hours decoding it and building product specifications that’ll be part of Siteimprove solutions in 2025. I have made some revelations about the importance of query processing with respect to AI Mode and AI Overviews. How the query is initially interpreted and processed plays a big role in understanding the output of AI Overviews and in understanding the additional versatility (and magic) of AI Mode.

Transcript

Mike King: So Jeff Coyle, one of my favorite people in the SEO space. I mean, I guess you can prepend that for every speaker that’s up here. They’re all my favorite people. But Jeff Coyle, he founded and he owns a brewery in Georgia that won medals at the World Beer Cup, and that’s why he’s gonna talk about SEO, of course. No, the Great American Beer Festival and the US Open, and yes, he’ll buy you a beer and talk about it for hours. Jeff actually built a focused crawler-based vertical search engine platform in 2003, way before RAG was cool. He was once interrogated by Mexican police for transferring escrow from a domain sale while on vacation with his family. All-inclusive, of course. Presenting “Authoritative Intelligence: Evolving IR, NLP, and Topical Evaluation in the Age of Infinite Content,” please welcome Jeff Coyle.

Jeff Coyle: Sweet. Alright. We’re back. And thank you so much to Mike. This is an amazing lineup. You’re all gonna be talking about the fact that you were at the first SEO Week for the rest of your career. The rest of your career. I’m excited to welcome you to this amazing event.

I got vectors on the screen. There’s lots of vectors being talked about. And we’re gonna be talking about information retrieval, that’s IR, NLP, natural language processing, and topic modeling and topic authority in an age of infinite content. Because we’re living in a moment where everybody, with a few clicks, can generate infinite amounts of content. The barrier isn’t publishing anymore like it was. We’re getting into ways to say, what content truly deserves to be trusted? And today, I’m gonna walk you through how search engines, and we, SEOs and strategists, are adapting to this reality. We’ll look at how search evolved. We’ll talk about vectors, like everybody else will at this event. We’re gonna get into some detailed giveaways from me to you today, which includes the thing I give to editorial teams so that they know that I’m not coming in to rip their hearts out. I’m gonna help them get their expertise on the page. I’m also gonna walk through my human-in-the-loop checklist so that you can make sure that you’re getting AI involved in your content earlier in the process, and you’re not just generating drafts and editing them, which kills everybody. Don’t do that.

But who am I? As mentioned, I have a brewery in Georgia. I like to talk about beer. All of my examples are about beer. I’ve been doing this for way too long. I’m proud that I got let in the building as a boomer SEO. And I love content strategy. I love AI. I grew up in Jersey, a little bit away from here, in Monmouth County in Long Branch, but I live in Atlanta. I went to Georgia Tech. So any of those things and beer, we will have that conversation. And I work for Siteimprove, which acquired my company, MarketMuse, last year. I have a lot of MarketMuse customers in the room; thank you so much to everybody that’s here. And I’m currently building some of the things that Mike said SEO platforms don’t build, including things like claims management and all that fun.

But this is about content today. The amount of content on the web is skyrocketing. Agentic workflows let us generate articles, videos, and posts in seconds, and we’re living in the age of infinite content. I like to say we’re living in the age of infinite, mediocre content, because it sounds great. This stuff looks really great, but it’s a double-edged sword. And as a result, search engines have had to change. They’ve had to figure out, how can I evaluate content and figure out who to trust? But the key is, as you know, if you’ve studied these patents, if you’ve studied this world, it’s always been about trust. They’ve always been trying to figure out who they should vouch for and who they should get into the picture when they’re gonna give you their best bets. Their best bets used to be candidates, and then they would rank them and re-rank them. Now their best bets are the books they’re gonna use to write you a book report.

I was telling Sanakrishnan earlier that I gave this speech to my 8-year-old son. He’s a pretty bright kid. And he told me at the end exactly how AI Overviews works, which was really cool. But the funniest thing he said, he goes, “Dad, it can either be a really good book report or a really bad book report.” And that’s what we’re gonna talk about a little bit today too.

Because finally, economics 101. What happens when something that’s a scarce resource becomes an infinite resource? Everyone has to change. The producers change, and the evaluators change. And you and me, we’re the evaluators too, because we are changing our behaviors. There’s an entire day that’s gonna be talking about that this week. Right? But search engines are also having to change the way that they think about these things. You have to consider things like, wow, there’s a conspiracy about them bringing down costs to serve with their Core Web Vitals. They’re also trying to give hints to reliability, and thinking about site-level vectors, site-level reliability, site-level understanding, so they can say, “these pages deserve to be here.” They had to change that because they’re dealing with a world of infinite content, where this stuff can be built at scale. They have to be able to evaluate those things sooner rather than later.

But now we’re gonna get into, hey, now we’re gonna get into how software has changed. My dad ran grocery stores as I was growing up. He used to say, all all the aisles matter. If someone spills a jar of pickles in aisle four and someone falls and slips and fall, it’s gonna be a million dollar lawsuit. We all go down. And that’s what I’m telling you today. You’re working for larger companies. You’re working for your customers. You’re working for the clients. You need to be telling them that every page matters. I don’t care if it’s my pages, your pages, landing pages, short pages, long pages. Every page matters because we’re looking at site level evaluation.

He used to also say, “In order to appreciate tomorrow, you have to understand today.” And that’s what we’re gonna dive through today. Some of these things have been covered today, but we’re gonna get into more details about everything from lexical analysis to vectors, and keyword matching and how it started out. So simple keyword matching is how we started out. Content was crawled and stored in a giant lookup table, and an inverted index would connect keywords to pages. This is the basic stuff. Right? So if you search for Mexican lager history, the engine would break it down into keywords and find documents containing those words. And ranking was pretty simple. It was easy. Right? You could just throw those keywords in, refresh AltaVista, and you’d rank. That’s actually how it worked for a while. Ranking was based on keyword frequency, term frequency. You hear a lot of people say TF-IDF. That’s a common algorithm that was used for those types of things. Term frequency: how frequent is that word in that page, in that document?
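The early mechanics described here, an inverted index plus term-frequency scoring, can be sketched in a few lines of Python. This is a toy illustration, not any production engine: the corpus, the page IDs, and the scoring details are all made up for demonstration.

```python
import math
from collections import Counter, defaultdict

# Toy "crawled" corpus; page IDs and text are invented for illustration.
docs = {
    "d1": "mexican lager history and brewing traditions",
    "d2": "mexican lager recipes brewed with corn",
    "d3": "stout brewing history",
}

# Inverted index: term -> set of pages containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def tf_idf(term, doc_id):
    # Term frequency in this page, discounted by how common the
    # term is across all pages (inverse document frequency).
    words = docs[doc_id].split()
    tf = Counter(words)[term] / len(words)
    df = len(index[term])
    return tf * math.log(len(docs) / df) if df else 0.0

def search(query):
    # Look up each query term in the inverted index and sum the scores.
    scores = defaultdict(float)
    for term in query.split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += tf_idf(term, doc_id)
    return sorted(scores, key=scores.get, reverse=True)
```

A query like "mexican lager history" ranks the page matching all three terms first, which is exactly the simple lookup-table behavior described above.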

We’ve advanced into other methods. We’ve advanced into situations where you could say, PageRank. Oh, what was PageRank? It was them trying to say, oh, it’s a natural crawl of the web. But it was their first crack at at, at IP that would tell the story of how can I trust someone more than someone else? They can use those links to say these pages are slightly better. This site is more authoritative. This page is more authoritative. It was basic, but it was efficient. And it got us to the first wave. The first wave was a candidate set of pages that you should choose from. Right?

And it was better than just text. I remember building a text search engine for an intranet. It was really easy because it was an intranet. But when we expanded that to the web, we had to figure out a way to judge these things, not just from the content on the page. And if you look at the expansion from that into AI use in search, we go from keyword matching all the way from strings, you’ve heard this a million times, to things. But this isn’t just speech. Right? This is going from machine learning and natural language processing, allowing search engines to interpret context, not just literal words.

So New York City is NYC. With a text-only search engine, that’s not gonna work. It’s also the Big Apple. With a text-only search engine, that’s really not gonna work. And an apple, it could be a fruit. It could be a company. Any hockey fans, any Rangers fans in the audience? I know there’s probably a few. Oh, here we go. I see one in the back. Not so much this year. They struggled. But an apple could also be a great assist. You pass the apple, and you’re not getting the goal. The A is an assist: apple. That’s not gonna come through in a text-based search engine. Right? So Google introduced vector-based query understanding.

We saw the 3D map earlier. In the 3D map, similar topics get related to one another in 3D space. Vector search allows us to move to meaning very quickly. It allows us to say, when I search for beers that are brewed with corn, that’s actually similar to Mexican lager recipes. Wow. That’s meaning. Right? You’re going from asking a question or a thought, and it’s close to something else in vector space.

Passage ranking, which we’ve talked about a little bit here, allows us to break long content items into chunks. That chunking is extremely important, especially when you’re looking at long form content. Because you’re looking at something that’s 3,000 words down the page, and they have a section that tells you how you can use asparagus in this particular pasta recipe. Right? If that’s the most interesting thing on that page, and it’s useful, I can use just that passage to be part of this. 
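Passage chunking of the kind described here can be approximated with a sliding word window. A minimal sketch, where the window size, the overlap, and the overlap-count scoring are arbitrary assumptions (real systems chunk on semantic boundaries and score passages with full relevance models):

```python
def chunk(text, size=40, overlap=10):
    # Slide a fixed-size word window over the document, overlapping
    # so a passage isn't cut off mid-thought at a boundary.
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def best_passage(text, query, size=40, overlap=10):
    # Surface only the chunk most related to the query, even if it
    # sits thousands of words down a long page.
    q = set(query.lower().split())
    return max(chunk(text, size, overlap),
               key=lambda p: len(q & set(p.lower().split())))
```

This is why one useful section deep inside a 3,000-word page can still be retrieved on its own: the page is scored passage by passage, not only as a whole.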

I love that there’s a talk a little bit later about hybrid search optimization. I’m so excited for that, by the way. Because one of my favorite topics is hybrid retrieval. So we’ve talked about lexical analysis, text. We’ve talked about vector. That’s meaning. What hybrid allows us to do is take the best of both worlds and say, I’m gonna sprinkle in some of this and some of that. I’m gonna map how much of A and how much of B I should use. And this was a huge breakthrough. And it’s important to understand that this hybrid state still exists on top of the search engines today.

Before I go into that, though, vectors, as I promised I would. A vector is a numerical representation. It turns words, queries, and documents into numerical vectors. So you think about it like mapping every concept. In 3D space, similar ideas land closer together. Right? So we can compare query and document vectors. It doesn’t sound so exciting when I say that. But it’s the reason why when I say, when was Barack Obama born? And I also say, how old is the 44th president? Those two things land in similar areas of that vector space. That couldn’t be done before.
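That comparison between query and document vectors is typically cosine similarity. Here is a sketch with invented four-dimensional embeddings; real embeddings have hundreds of dimensions and are learned from large corpora, so these numbers exist only to show the Obama example landing close in vector space.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means pointing the same direction
    # (same meaning), near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings; the values are illustrative, not from a real model.
vecs = {
    "when was barack obama born":    [0.9, 0.8, 0.1, 0.0],
    "how old is the 44th president": [0.8, 0.9, 0.2, 0.1],
    "mexican lager recipe":          [0.0, 0.1, 0.9, 0.8],
}
```

The two Obama phrasings score much closer to each other than either does to the beer query: meaning, not wording.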

So vector search allows us to do things even if the wording differs. We can immediately handle synonyms, paraphrases, context. This is referenced as dense retrieval, if you’re ever reading these books or these papers. There’s also sparse retrieval. That would be the lexical analysis. Right? So that hybrid becomes an alchemy. Blending these things together gives us the best of both worlds for the search engines, because it’s not always good to use vector only. Why? What are some real examples? Very specific queries, maybe acronyms and jargon.

Those are things that require me to use hybrid to get started, because I’m not gonna always find great matches in my vector space. The vector approach requires me to analyze large corpora of documents in order to be able to build meanings. This is actually built with the Word2Vec All library of unigrams, which is one-word terms. And the word brewer is aligned with brewery. So you can see here that breweries, brewery, Guinness, brands, beer, draft, carbonation. I go further all the way to the top, and I’m looking at meat. Right? That’s what tells the story of the top 700 relationships in Word2Vec All here, and how you can connect it to one another in 3D space. So I love seeing examples like this because you can understand, whoa, am I still thinking about words? I need to be thinking about the query, how the query is processed, and things that would tell the story of expertise.

When I built MarketMuse, the topic modeling technology, the goal was to bring high quality content to the world, but it was through the lens of topic modeling and vector analysis. So I can say, if I were an expert on everything there was to know about the New York Rangers, I would know a lot about Mark Messier. Right? These are things that I absolutely would know. So if I don’t have content on my site that tells that story, I’m not representing a New York Rangers expert on my site. 

And when we build the better relevance, I tried really hard to be cool with this graphic because Mike’s a DJ. So I actually got this guy, and that’s as close as I can get him to looking cool. He’s holding a bit of BM25. He’s holding a bit of vector embeddings, and he’s trying to figure out how much of each he should use for this particular query. He’s making that perfect recipe. Right?

So query expansion was covered earlier, in the Microsoft discussion, but it’s something I love. I’ve probably sent many of you in this room examples of query expansion, where Google is clearly not just using the query you typed in. They’re doing things that will expand that: synonyms, things behind the scenes, words that aren’t even part of your query. And I figured out many times ways to show me what Google’s using. But the key is they’re rewriting this. They’re expanding it. They’re refining it using the context or personalization before they actually process and get into their rich information retrieval. They want to make sure that they can predict that they’re going to build candidate pages that are actually good. Right? They’re trying to get better and better at using the query. This is going to be really, really important later, by the way, when we talk about AI Overviews. But they’re trying to figure out how they can expand that query. Right? So this, again, makes a lot of correlation studies a little bit bunk. Right? Because they’re looking at the query and they’re trying to expand it. You don’t even know what words are going into the box. Right? How can you look at the tail-wagging-the-dog outcomes and say that that’s the way you should do this, when the query is being processed differently? So hybrid retrieval gives you the best of both worlds. That’s the way to think about it. And it really challenges correlations.
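The recipe the DJ figure is mixing can be modeled as a weighted blend of the two signals. A minimal sketch, where the min-max normalization, the `alpha` weight, and the toy scores are all assumptions; production systems tune this per query class:

```python
def blend(lexical, vector, alpha=0.4):
    # Normalize each signal to 0..1, then mix: alpha is "how much
    # lexical vs. semantic to trust" for this query.
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (v - lo) / (hi - lo) if hi > lo else 0.0
                for d, v in scores.items()}
    lex, vec = norm(lexical), norm(vector)
    return sorted(lex, key=lambda d: alpha * lex[d] + (1 - alpha) * vec[d],
                  reverse=True)
```

For an acronym-heavy or jargon query you would push `alpha` toward 1.0 to trust exact matches; for a conversational query, toward 0.0 to trust meaning.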

And neural re-ranking, scoring relevance in context, allows us to say, yeah, this is a really great answer for this question. Or it can say, this is pretty good, but it’s not on the nose. That’s the level of effort that they’re able to do. And they can do this really quickly and cheaply. Five or six years ago, these things would have been wildly expensive and slow. Now we’re able to do them in microseconds for micro-pennies. It allows them to do signal retention and say, this authority and trust system is even more important. What’s that mean? Right? That means asking, is this site reliable? Is it fast? Is it appropriate? Speaking of topicality from before: I was writing a lot of things about financial services, and all of a sudden I wrote an article about New York City that’s not really relevant. Right? Is it topically appropriate, or does it have a lot of signals of topic drift? These are the signals of authoritativeness: breadth and depth of coverage, PageRank, off-page. Right? By re-ranking the results that came from our hybrid retrieval, I’m able to layer on logic and build a better search result. So hybrid, and then put all the magic dust on top, and I’m gonna get results that are better and better.

So we started in lexical. We go to semantic. We’ve got hybrid. We get into generative. That’s why we’re all here. Right? What generative is doing is synthesizing results based on the information that it retrieves. Search engines and answer engines still depend on authoritative, human-created content. But instead of just retrieving documents or links, it’s generating an answer for you. So it’s not fetching only. It’s synthesizing. It’s pulling from documents and composing a response. This is a crazy difference. You have to get the fact that this is a completely different model. It’s a completely different detail, and we all just lived through it. We lived through it with AI Overviews that didn’t have citations. We lived through this by using these various incarnations of LLMs and realized, wow, that isn’t very good. Oh, it’s getting better. We want to be challenging ourselves to know why it keeps getting better. And one of the ways that search engines realized they can cover their bets early was by not losing the trust of their users. Early on, they lost the trust of their users. They absolutely did. We all made fun of them. I made Bard think it was ChatGPT. I made ChatGPT think it was Bard. I did all these humorous things to poke fun. They said, I’ll show you. I’ve got this really, really good search engine. This really, really good search engine provides hybrid retrieval. It returns results. We sprinkle on a little authority, some PageRank, some E-E-A-T on top, and I’ve got great results. How about this, Jeff? I’m gonna use those results as my candidates. That’s the first wave of retrieval augmented generation. You’re typing in the search. It’s pulling from the top-ranking pages. So I’m already starting with something pretty good, because Google’s pretty good. Right? Then they said, we’re gonna continue to refine this. We’re gonna make sure that it’s factually grounded and semantically grounded. And we’re gonna start realizing that we can deliver more than just an answer sometimes.

We can synthesize a really great response. And we can synthesize this by pulling not just from the pages that are ranking. We can do some additional steps in that process and start to win and start to produce better results more often with our retrieval augmented generation. And the coolest thing you can leave here with is that, from retrieval augmented generation, now y’all know a whole lot more about what the R stands for, because it’s information retrieval. Right? So we can go home and say, I know what the R stands for in RAG, and I know why they’re doing it that way to deal with the fact that there’s an unlimited supply of really bad content.

And how does this work behind the scenes? Right? This is the critical part that I want you to take away. When we’re looking at traditional search and vector databases storing this content, we’re processing this with queries. We’re processing this with NLP techniques, natural language processing techniques, Delphic models. There was a research report shown earlier, and classification-driven LLM selection. This is what’s coming with Your Money or Your Life topics. Do we have a refined LLM that we can point to when we’re processing that query? Again, knocking correlation into the dust. Right? Because we don’t even know what LLM is gonna be picked when we type into the search engine. How are we gonna judge this by the tail wagging the dog? Right? And in document retrieval, we’re gonna fetch these relevant documents and passages with our hybrid search and then try to answer using a coherent response from retrieved context. I love context. Context is how I built a content planning technology that allows you to see every possible editorial angle that you could have. Right? So the search engines can look at your topic and say, here’s all the editorial angles that exist in the entire world for that topic.

What of those does this writer, does this site take on? And what can I learn from them by seeing that they tend to write only at the top of the funnel? Are they really experts? No. It’s the person that covers the entire journey that’s actually the expert. I wanna figure out ways to reward them with my technology. So when I’m acting as an answer generator, I’m also doing journey evaluation in these contexts. And those last steps: building with confidence and understanding ways that I can attribute. So when I’m adding citations, I’m adding my special filtration, my sauce, and my logic. I can better validate quality. And we can all make fun of it while it’s being built, because they’re basically building while the plane’s flying, but the quality is getting better and better every day. And we’re going to get to the point where they have a confident system that they’re proud of, and they may even show it to us. Probably in leaks, but we’ll talk about that later.
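The retrieve, ground, and cite loop described above can be sketched end to end. Everything here is a stand-in: the term-overlap retriever substitutes for hybrid search, and in a real system the prompt would be handed to a grounded LLM; we only build the prompt with numbered sources so the answer can attribute what it used.

```python
def retrieve(query, passages, k=2):
    # Stand-in for hybrid retrieval: rank stored passages by
    # query-term overlap and keep the top k candidates.
    q = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, context):
    # Ground the generator in the retrieved text and number each
    # source so the answer can cite (attribute) what it drew from.
    sources = "\n".join(f"[{i}] {p}" for i, p in enumerate(context, 1))
    return (f"Answer using only the sources below, citing them by number.\n"
            f"{sources}\n\nQuestion: {query}")
```

The numbered-source format is the point: citations are what let the system rebuild the trust the talk says was lost when answers were unexplainable.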

But before we get into this, why is this a problem? Everybody says hallucinations. They don’t really know what it means. Why are bias and trust a thing? I talk about bias. I mean, the original foundation of vector science was biased, gender-biased. The original science was built on corpora of information that were technically biased. It was biased toward the content that was used. Right? So in the age of AI, truth and trust aren’t negotiable. Right? These are not negotiable topics. Bias is not negotiable. It will hurt you. My work now focuses on accessibility and how technical SEO elements are influencing outcomes for large businesses: not having screen-reader-compatible content, not having appropriate contrast checking. These have dramatic impacts on performance.

These are ways that search engines now are able to say this is a more reliable site or a less reliable site. But attribution was a critical piece of this that they forgot. They forgot that the whole point of their search engine having PageRank was to build trust, that I’m gonna give you a better response. But then they went out and they gave us generated answers that were unexplainable. They lost our trust. So a big piece of where they’re focused now is how can I bring back that trust? How can I make sure that I’m attributing these things properly? And also maybe adding value with these attributions. So while I’m decreasing traffic, I may be able to elevate great content more frequently. Because it’s not always about one type of answer. And I’m gonna go into some deep dives on the AI Overviews in a second. But being able to have the science that delivers nuance for complex queries is required now to manage quality.

So what does that mean? We always knew they had QDF, right? Query deserves freshness. Does this require a new topic? But now I need to know, does this require a simple answer? Does it require a long answer? Should I put a short answer in there and then also give them a little bit more? Should I show them the next five links? These are the questions that they’re having to answer. They’re navigating what we’ve had to navigate as SEOs for years. Right? They’re doing the same things we were trying to figure out. What are the next five questions? That’s what they’re having to do now, because they’re having to write content. It’s kinda funny when you look at it that way.

And bias and safety, guardrails, buffet-versus-blend issues. Right? So this is where, what if a search query is ambiguous? And I don’t mean topic fracture. I don’t mean intent fracture. I mean, you type in the word bat. Right? A bat could be flying. A bat could be swung on a baseball diamond. Or you can bat a ball out of the air. Right? What do they do? Should they show a summary at all? And if they do, they have the buffet problem. Right? I have to show you a buffet of things. You could say, hey, this could either be a flying bat, a vampire bat, or a baseball bat. Blend those pages together? I’m a mess. There’s guano everywhere. Right? So that’s what we’re dealing with now.

Right? So this is my easy view of AI Overviews, the one my son was able to explain back to me. And I talked through retrieval. I talked through generation. The key pieces here are alignment and citation. These were hard to figure out. How do we get to factual alignment? I’m gonna use query-dependent signals for text relevance, part of retrieval. I’m also gonna use query-independent signals. Query-independent would be off-page factors. Do I trust this site? Do I not trust this site? All the things that go into how I’m gauging reliability. But what they figured out really quickly, though, was that wasn’t good enough. My hybrid wasn’t good enough. I had to do more analysis of the query. I had to understand whether a factoid was the proper response or whether I had to synthesize many opinions and many perspectives to do this retrieval augmented generation.

This, again, is why you cannot put all the queries in one box when you analyze this, because you’ve got queries that don’t deserve generation at all. You’ve got queries that deserve factoids. You’ve got queries that deserve very complex synthesis and varied, diverse opinions. And all of those are mixed in when you’re doing analysis of AI Overviews.

And here’s the anatomy of the AI Overview and a rank brag for me, obviously, because that’s why we’re up here. Right? I think I still own rankbrag[.]com. But, the, the this is factoid versus complex synthesis in, in action. So let’s just think about a factoid. Which is faster, a cheetah or a sloth? Okay. All the pages on the Internet, we would hope, are gonna say the cheetah’s faster. That’s an example of consensus factoid response need. I want I expect that 99 out of 100 pages are gonna say, yes, cheetah faster than sloth. Right? Now I, as an AI Overview’s product manager, need to determine, do I just write cheetah faster than sloth, or do I give them a little bit of a book report about it? Cheetahs go 120 kilometers an hour. Sloths go 0.125 kilometers an hour. Do I tell them about why they exist? Do I I gotta get into all the details, much like, you you know, that long form recipe we were talking about. I answer the question, but I give them a little more. That’s the current market for factoids.

But what happens when you have a complex query? What is a Mexican lager? On the surface, that kinda looks like a definition, doesn’t it? Right? But it’s not. If you’re an expert in that topic, you know that this is a widely debated topic. There’s nuance. There’s cultural nuance. There’s the fact that Mexican lagers can be seven or eight different styles of beer. Right? So if I read an article and it says a Mexican lager is this, and that’s all it does, that’s all the article says, that person is not an expert. A real expert would have discussed the nuance, and they would have delivered differentiated, unique value in the article. The search engines can now evaluate that that information, plus that unique differentiation, is good. And for this query, it deserves diversity.

Diversity in perspective and diversity in the form of differentiation. So when you’re evaluating AI Overviews, you need to be evaluating: is it a factoid? Is it a complex topic? Does your article bring something special? Right? On the flip side, if it’s a factoid and you’re the one diverse opinion, that’s called marginalization. Right? That actually hurts in AI Overviews. They don’t like that. So if you’ve got that weird marginalized view, right, and they consider it to be a done-and-done, easy, obvious thing, that can actually negatively impact you. We talked about query fan-out. I can’t believe we’ve already talked about query fan-out. I thought that was the thing I was gonna bring up only. But basically, that’s that query rewriting, but we’re expanding with synonyms. In this case, where it’s a complex synthesis, think about it: it’s expanding through the buyer journey. It’s sensing that you’re in the top of the funnel, so it will bring in some queries that may better represent the middle of the funnel to try to supplement. That’s a good way to think about it. There’s also some technology from focused crawling, for radius discovery. So looking at links and looking at the next pages that people might look at, to see if those are gonna be good candidates to source from for our really awesome book report. And determining when that differentiation adds value or hurts value is a tricky game. I love to poke fun at it, but it’s really, really hard. I built a modeling technology similar to what Mike built with that website, and it’s really, really hard to get it right and figure out which queries do deserve differentiation and which ones would be marginalized by that differentiation and make you sound silly.

The impact on content is unbelievable. We're dealing with a situation where SERP features and the ten blue links, in some cases, don't matter. We're dealing with a situation where you were ranking number one, you're still ranking number one, and now you're effectively on page two. In some cases my number one ranking page sits three viewports below the fold on desktop. That's terrifying for everybody.

So there's a big push toward click quality. Click quality means: how much is that click worth? If someone actually makes it to me, they're super qualified. If they get to me through a type-in after my brand was mentioned, how are we ever going to measure that? How are we going to adjust the way we attribute, per what Mike mentioned earlier? That's how we have to change. The SEO community needs to come together and set those standards so someone else doesn't set them for us.
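The attribution question above can be made concrete with a toy weighting model. This is purely illustrative, not a real attribution method from the talk: every channel name and weight below is invented, and the idea is only that sessions arriving via a branded type-in after a mention might be counted for more than a generic SERP click.

```python
# Purely illustrative sketch of weighting sessions by "click quality":
# a branded type-in after a brand mention is assumed to be worth more
# than a generic click. All channel names and weights are invented.

QUALITY_WEIGHTS = {
    "generic_serp_click": 1.0,
    "ai_overview_click": 1.5,
    "branded_typein_after_mention": 3.0,
}

def weighted_sessions(sessions: list[str]) -> float:
    """Sum sessions, scaling each by an assumed quality weight."""
    return sum(QUALITY_WEIGHTS.get(s, 1.0) for s in sessions)

visits = [
    "generic_serp_click",
    "branded_typein_after_mention",
    "ai_overview_click",
]
print(weighted_sessions(visits))  # 5.5
```

Whatever standard the community settles on, it would look less like raw session counts and more like an agreed-upon weighting of this kind, backed by data.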

Otherwise someone else will say, "Oh, now you're claiming SEO doesn't work for those brand mentions. You're just talking; you don't know what you're talking about." We've got to figure out instances like what Profound is doing with their product, where it's a brand mention, it's a topic mention. How are you going to do that in a way that's actionable and justifiable with data? Adapting to the logic also becomes very, very controversial in our space. So many people say, "Oh, it's just like SEO." It's not. It's a completely different game with completely different rules, and you can, at least for now, influence the outcomes.

But you shouldn't bet on always being able to influence the outcomes, because the science is progressing faster than you'll be able to type, faster than you'll be able to generate. Understanding the buyers, understanding the readers, and understanding how the evaluators are changing is how you're going to win in the long run. I promised this before, and feel free to snap a photo of this slide. This is what I deliver to editorial teams. This is how you can live with AI if you're a 20-year writer and editor with amazing skills: you understand developmental editing, you understand holistic evaluation, you understand that fact-checking is not optional.

These are the things that change enterprise teams. We have 6,000 customers, and I work with so many of them building enterprise content strategies. These are the kinds of practices that, when you bring AI to them, get adopted quickly. Don't forget developmental editing; skipping it is one way to rip the hearts out of editors. Developmental editing is when the editor-in-chief talks to the team member who submitted the draft and tells them how to make it better, because we're both experts and we're trying to win together. Don't forget those steps. Don't skip to the end and hand editors drafts built with AI if you didn't use AI throughout the process. And this is the second one.

Point of view is the most important thing. Your content needs a point of view to deliver differentiated value. Your content briefs and content plans need to be built for you alone, with your goals. Doing this early in the process saves you tremendous waste, rewriting, and pain later. Get this right early, because you can't edit your way to excellence. You have to plan for it.

Personalization can get you really far too, and it can break silos. Look at things like whether your marketing team ran a data study. We all love our data studies; come on, look at this room. Make sure that study is baked into the plan. Make sure those findings are in the briefs, so that when you hand them to the writers, they know exactly what's expected of them. You bring the science to them; you don't expect them to edit science.

And thank you so much today. Getting ready for this week was a huge undertaking for everyone involved, and they've done an amazing job; this is such a beautiful venue. Remember that in a world of infinite content, only editorial excellence survives. And also: give. This community needs people to give more. I've been doing this for a long time, and the epochs of search where there was a lot less giving are where we regressed. When there's a lot more giving and a lot more service in our space, people win, and we can change the way we approach generative engine optimization, or whatever we want to call it. Thanks a lot.
