Manick dives into his team’s extensive research on decoding Google’s algorithm, revealing insights on topical relevance, local SEO ranking factors, crawl behavior, semantic content signals, and AI-driven optimization. He emphasizes the importance of collaboration within the SEO community to keep pace with evolving search technology and shares free tools, case studies, and data-backed strategies. The session encourages SEOs to focus on their specific niche, leverage scientific methods, and adopt a holistic, research-driven approach to outperform in an increasingly competitive landscape.
SEO Week 2025 set the bar with four themed days, top-tier speakers, and an unforgettable experience. For 2026, expect even more: more amazing after parties, more activations like AI photo booths, barista-crafted coffee, relaxing massages, and of course, the industry’s best speakers. Don’t miss out. Spots fill fast.
Manick is the Founder and CEO of the Search Atlas Group and CTO of Search Atlas, an award-winning AI SEO platform. A 3X INC 5000 founder, he also leads LinkGraph, serving brands like Shutterfly and Samsung. With 10+ years in SEO, he helps companies scale through AI-driven strategies. Featured in Forbes and VentureBeat, he speaks at major events like TechCrunch Disrupt and BrightonSEO.
Manick shares how his team has worked to decode Google’s algorithm through months of rigorous, data-driven research. Covering topics like topical relevance, semantic content scoring, local SEO factors, crawl behavior, and AI-driven optimization, he reveals actionable strategies to sharpen competitive advantage. Bhan challenges outdated SEO assumptions, shows how topical authority can outweigh traditional authority metrics, and offers practical methods such as pruning irrelevant content to improve site focus – backed by large-scale studies, case examples, and free tools like “Patent Brain.”
Manick’s core message is that SEOs aren’t competing with each other as much as they are with the search engines themselves. He calls for greater collaboration, knowledge sharing, and scientific rigor to close the gap between current industry practices and the advanced methods used by search engines. By embracing a holistic, research-based approach that blends content, technical, link, and user signal optimization, he argues, SEOs can not only achieve better rankings but also prepare for a future where automation and AI strategies dominate.
Topical relevance is highly predictive of rankings
More so than traditional authority metrics like Domain Rating, making niche topical dominance a powerful SEO strategy.
Local SEO rankings are most influenced by…
…proximity, keyword relevance in reviews, and business category, with weighting varying by industry.
Holistic, research-driven SEO
Combining content, technical, links, and user signals can be automated and scaled to deliver significant performance gains.
Mike King: Frontier SEO Research and Data Science, please welcome Manick.
Manick Bhan: SEO Week. How we doing? What’s up? Mike, thank you for having me. It’s great to see you, brother. And I’m really excited to share with you guys some research today. But first, I wanna do a quick crowd poll. Where are my white hat SEOs at? Up here in the front? Actually, white hat? Come on. Keeping it squeaky clean? What about my black hat SEOs? Where’s our Snake? There he is. Awesome. Never underestimate the power of the dark side. And for those of you that said nothing, I guess you guys aren’t hat people, and that’s fine. But you’re one of these, I guarantee it.
We kind of think of SEOs, as Mike alluded to, there is no one thing that is SEO. We all sort of look at it a little bit differently, and we have different approaches. We have SEO sheep, we have SEO parrots that just repeat what they see on Search Engine Journal, or anything from Danny Sullivan. We’ve also got other people on here. We’ve got SEO babies that are still, you know, reading the elementary Moz blog out there, and then we’ve got clown hat SEOs that are still pounding their sites with GSA backlinks like it’s going out of style. But what I think we need more of today than anything else, all jokes aside, is more scientists and researchers like JR, like Jori, like the people here that are sharing their knowledge with the rest of us so we can all advance together. And I hope that we all feel inspired by what we’re learning and that, as a result of some of the research that’s being shared, we collaborate even more fully to learn more about SEO. And for those of you that don’t wanna do it, there’s always easy mode, which is: I’ll just wait for Mike King to tweet and do whatever he says and still get good results.
The thing that’s exciting today is that if you widen your aperture, search is more dynamic, more interesting, and more of an opportunity than it ever has been before. And this really gets me excited, because search is not just about Google anymore. There are all these other places that we can focus on, where we can create visibility. These are the platforms where the game is gonna be played. Search is based on information retrieval science and mathematics. And the algorithms behind it, I would say they’re not pure, they’re subjective. Putting Reddit into the search results was not objectively the right decision. There’s someone deciding what they think is right, what creates the best search experience. And obviously, we get user signals to validate or contrast that, but there is no objective, correct algorithm for search, like the Pythagorean theorem. It’s being discovered, it’s being written, it’s being understood live in real time. And I think all of us here believe that we know important parts of how this algorithm works.
The thing though is that search volatility is at all-time highs. We have never seen volatility higher than what we’ve seen in the last year. Beginning with November, the scale of that volatility was unprecedented. And what is becoming very clear to me, and to I think many other people in the room, is that Google is playing their own game. It’s not the SEOs in this room against other SEOs in this room. I bet you, if we surveyed the room, there’d be very few keywords that we’re competing with each other over. It’s us against them. And so, in order for us to triumph and win, we have to collaborate, we have to work together, we have to share our research and knowledge to advance the industry forward, because I believe, as an industry, SEO is at least a decade behind the science that the search engine is doing. We’re seeing patents and books on information retrieval systems published in 2009 that we’re only now starting to talk about as an industry. And so my mission as the CTO of Search Atlas is to help decode the search engine, to share what we’ve learned with the community, and to adopt an approach that stems from holistic SEO. It’s not just content or links or technicals, it’s all these things, including click signals, and to bring the industry forward. And so, today, I hope to take you guys through my research, and I hope that everyone in the room will learn something from this that will help you in your practice, whether you’re an agency or you’re a brand, so that we can advance all of our SEO together through this research.
So today, let’s step into the SEO lab. I’m gonna take you through the research that we’ve been doing to decode Google for the last six months, and I’m gonna give you a lot of free stuff that we built. It took us a long time, and I’m excited to share this stuff with you guys. So buckle up, buttercup. It’s gonna be a wild ride. Let’s do it. So first, I’ve been doing SEO research for the last seven years. I believe that decoding this algorithm is worth over $100 billion. Right? It’s highly, highly valuable. And so, decoding that algorithm is gonna be an exercise not in futility, but in great triumph, if we can figure it out. And so, we’ve been doing single-variant tests and sharing our research. We spend over $30,000 a month on the team that’s doing this work, and I spend most of my time engaging with them and pushing that research forward. Today, I’m gonna give you guys some of the work we’ve done for free. Okay?
My GitHub has a variety of things on it that may be of value to you. One of the things that I’m really excited to share is Patent Brain, which is a GPT trained on 9,500 Google patents, as well as some other stuff on here. So, if you are interested in that, check out my GitHub, and I’ll take you guys through what we did. So, 9,500 patents. And, you know, I myself have probably read about 200 patents. I would have read more, except my wife is like, please put the patents down. We want to see you. We want to hang out with you. So, thankfully, we can train the GPT with all that data, and we can learn. We can survey that information. We can get way more insight, way faster. If you guys wanna try it out, you can go to Patent Brain on Search Atlas, where we’re hosting it, and ask it questions like these. These are some of my favorite questions to ask, the ones I found to be the most revealing, the most insightful, where we can actually get from the patent literature the exact algorithms that are being used, mathematically. We can ask it questions about which ranking factors they’re using. And this is all theoretical initially, but that theoretical work moves forward into science.
Let’s start with semantic distance. Much has been said of topicality here, rightfully so. It’s an important aspect of the algorithm that we’ve seen. We’ve seen it empirically through case studies that people have done, like Koray’s amazing case studies of topical authority, but we’ve never quantified it. And so, to begin, we wanted to understand if we could plot and visualize the topicality of a website much in the same way that Google is doing it, putting all that content into vector space, where the nodes that are close together are thematically related. And so, can we actually do this? And we know that this is important because we can see it in the leak. Thanks to Erfan and Mike and Rand sharing that leak with all of us, we’ve learned that there are three variables that are interesting and should be studied. The site embedding, which is that vector space. Who was it that presented that earlier today? Jeff. Jeff, where are you? Jeff presented this fascinating work on visualizing that vector model. We also see a variable called site focus score, and site radius. But as an industry, we’ve never even tried to quantify these things yet.
And so, the first thing we wanted to do was try to quantify that. So, this is an analysis using HDBSCAN of the page embeddings of my website. You guys can take my code and plug in any site you want. You can plug in your own sites, analyze them from a topicality perspective, and look at how your sites are organized. The first analysis we did was based on heading vectors. That is actually how we calculated it. And you guys can actually get the code, where you can look at it line by line and see how we did it. But we used this library that I kind of can’t pronounce the name of. It’s like Trafilatura. Nice. Anyone know, is that Italian? And that allowed us to remove all the crap from the documents and extract their real meaning, and then run them through sentence transformers to calculate the site radius, which is the average distance of all the nodes from the site centroid, which is really interesting. Then, as we did more research, and you guys can put this into Patent Brain, ask it how Google actually looks at documents, and you’ll see something called truncated term vectors, which was an interesting concept for me. We looked it up and we actually tried to understand mathematically how they are deriving that, and then we replotted it based on those TF-IDF terms, the top twenty. You can change that to the top ten if you want, just change one number inside the code, and you’ll see a slightly different visualization. So, that is really interesting.
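For anyone who wants to try this idea before digging into the repo, here is a minimal sketch of the pipeline as described, assuming the Trafilatura, sentence-transformers, and HDBSCAN libraries. The model name, placeholder URLs, and cluster size are illustrative choices, not necessarily what the published code uses:

```python
# Minimal sketch: site topicality via page embeddings.
import numpy as np
import trafilatura
import hdbscan
from sentence_transformers import SentenceTransformer

# Placeholder list; in practice, feed in every indexable URL on the site.
urls = ["https://example.com/page-1", "https://example.com/page-2"]

# 1. Pull each page and strip the boilerplate down to the main text.
texts = []
for url in urls:
    html = trafilatura.fetch_url(url)
    text = trafilatura.extract(html) if html else None
    if text:
        texts.append(text)

# 2. Embed every document into the same vector space.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts, normalize_embeddings=True)

# 3. Site radius: average cosine distance of all pages from the site centroid.
centroid = embeddings.mean(axis=0)
centroid /= np.linalg.norm(centroid)
site_radius = float((1 - embeddings @ centroid).mean())
print(f"site radius: {site_radius:.4f}")

# 4. Cluster pages into topical groups (noise pages get label -1).
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embeddings)
```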
There are two applications of this. One is reducing your site radius by pruning. This is an approach that we know works. We have a case study where we took a very large online casino and gambling site, and we noticed that what happened to these guys is they polluted the site with a bunch of irrelevant content. Right? They polluted its topicality. Hey, Lily. Hey. And then what we did is we actually used this approach to remove that irrelevant content, and we saw rankings go up because the site radius went down and the site focus score went up. And you’ll see another term on here called DCO, which I’ll explain later in my talk. The second application of this is to benchmark sites, to analyze their topicality, and to see visually if we can actually understand how these sites compare, and who has more depth of topicality in a certain space. So, this is an example of that concept, where we took two brands, Mudwater and Everyday Dose, and here you can see Everyday Dose actually has more content, more documents in that cluster, than Mudwater does, which is one of the reasons that it outranks Mudwater. So, that was cool. Let’s go deeper. So, the next step was, can we actually calculate topical relevance? Right? That’s actually what Google calls it. They don’t call it topical authority. There’s no reference to that. It’s called topical relevance of websites. And the topics themselves are called topic identifiers. And the Twiddlers use those topic identifiers and the site score to re-rank the information retrieval scores of the documents. So, we learned about how they’re looking at this from this important patent by, I think it’s Navneet Panda, about a user-context-based search engine. And so, essentially, they take the Internet and they organize that information into knowledge domains, and they build terms lists for each knowledge domain. So, that was interesting. Then we went deeper and we found other patents through Patent Brain that were related to that.
And so, these are the four that I think are the most important. So, if you want to go deeper, these are the patents to focus on. From these patents, we were able to get the algorithm that we believe they’re using for topical relevance. Remember, as an industry, we know that it is important to quantify this, but we’ve never done it before. We don’t even know what our topical relevance scores are versus our competitors’. So, this was something we felt was important to research and try to quantify. So, what we did is we discovered the two approaches that Google talks about. One is an n-gram analysis of the terms on the website itself and its content. The other approach we see them reference is doing this on the keyword data. In the patents, they’re not always very clear about which approach. They kind of postulate about different approaches. And so, you have to trace each approach and then run a correlation study, I guess, to figure out which one’s right. So that’s exactly what we did. We actually wrote the algorithm and we scored all these sites on the x-axis. These are all SEO sites.
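As a rough illustration only, one way to operationalize the terms-list idea from those patents is to score a site by the share of its TF-IDF mass that lands on a topic’s terms list. This is a hedged reconstruction, not the actual algorithm from the talk, and the topic terms below are invented for the example:

```python
# Hedged reconstruction of the n-gram / terms-list idea: what fraction of a
# site's TF-IDF weight falls on a topic's terms list? Terms are invented for
# illustration; real topic identifiers would come from the patents' method.
from sklearn.feature_extraction.text import TfidfVectorizer

topic_terms = ["link building", "anchor text", "guest posting", "outreach", "backlinks"]

def topical_relevance(site_docs: list[str]) -> float:
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vec.fit_transform(site_docs)               # docs x n-gram terms
    cols = [vec.vocabulary_[t] for t in topic_terms if t in vec.vocabulary_]
    if not cols:
        return 0.0
    return float(tfidf[:, cols].sum() / tfidf.sum())

print(topical_relevance([
    "Our guest posting service builds backlinks with natural anchor text.",
    "We ship outreach campaigns for link building at scale.",
]))
```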
And, one of the things that wasn’t clear to us is how granular Google’s topic identifiers are. It’s not like they’re just going to hand them to us on a silver platter and show us what their topic IDs are. I wish. So, we had to try to derive them. And so, we actually did multiple levels, and I’ll show you that in a second. But what we found, you’ll notice there’s a site here, I don’t think I have a laser pointer, but on the far left at the bottom is LinkBuilder, which was acquired by my friend Mark at The HOTH. And that site is very interesting because they outranked every other site for link building related keywords, even though they had almost no backlinks. And we were wondering, how are they doing it? What type of dark magic are they doing to outrank all these other sites, including us? And then, when we did the topicality analysis, the story became clear that this is something that Google is using. So then, here’s another way to look at it, if you’re more of a bubble person than a table person, but the story is the same: we can actually build our topical authority in these granular clusters. But is this BS? Like, is that mathematics just nonsense and bunk? Like, you shouldn’t trust my scores just because I created this thing and Mike has me on stage. You’ve got to ask the question.
First, how strongly does that topical relevance correlate with rankings? That’s an important question. Then the second thing was, we wanted to know how the topical relevance score competes relative to domain strength scores. Right? Which one is more effective or important? So, we did the correlation study: 57,000 SERPs, with keyword, URL, and position for every SERP, and three levels of topical scoring. We have the granular topic, then we have the hypernym, and the hypernym above that, which is the knowledge domain. As I said, we didn’t know how granularly Google is actually taking topicality. So, for example, is SEO as an industry just SEO? Or is there SEO services and SEO tools and link building and all these other more granular contexts? So, we let the correlation be our guide. And this is what we found out. We found that the topical score, the most specific topical score that we have, was the most predictive of rankings. And the crazy thing is the p-value was .001, which means this is not chance. This is predictive, highly predictive. So that was absolutely astonishing. Like, it was so cool to see that. You rarely see that type of statistical significance when you’re doing this type of analysis. And the other thing that was really cool is that we saw that it’s actually more correlated with rankings. The topical signal is more correlated than our authority measurements. We looked at domain rating, which is from Ahrefs, and then domain power, which is something that my team at Search Atlas created. And it turned out that domain power significantly outperforms domain rating when it comes to predicting rankings. Domain power is a metric based on traffic. We believe that Google is no longer using PageRank in the classical way that our SEO vendor tools are spoon-feeding it to us. As Mike said, the toolkit needs to evolve, and yet it has not. We’re still living in the stone age. But thanks to this sort of research, we can actually open our eyes and learn some new insights.
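A stripped-down version of that kind of correlation study is easy to reproduce on your own data. Here is a sketch using Spearman rank correlation (a reasonable choice for ordinal SERP positions, though the talk doesn’t specify the exact test); the file and column names are assumptions about how you might lay out the dataset:

```python
# Sketch of the correlation study: how well does each score predict SERP
# position? The CSV layout (keyword, url, position, plus one column per
# score) is an assumption.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("serp_data.csv")

for metric in ["topic_score", "hypernym_score", "domain_power", "domain_rating"]:
    rho, p = spearmanr(df[metric], df["position"])
    # Negative rho means higher score, better (lower-numbered) position.
    print(f"{metric:>15}: rho={rho:+.3f}  p={p:.4g}")
```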
So, as I mentioned, the statistical significance was very high, which is really exciting. And what we learned also is, don’t look at topical relevance as a generalized thing. It’s very granular. It can be very granular and you can win. You don’t have to dominate SEO as a whole, you have to dominate your slice of it. And it doesn’t have to be SEO. It could be skiing, could be snowboarding, could be insurance, could be anything. You can be granular. And we also wanted to validate that we can actually use this knowledge to translate into ranking results, because that’s the other important thing. It’s not just knowledge, you’ve got to take it and apply it and see if we can actually get something practical from it. And so, it worked very well for us. We have many case studies, but this is another one that showed that this actually really does work.
Okay. That was fun. How many of you guys learned something? All right. Good. Let’s shift gears. Who are my local SEOs in the house? All right. Mac’s up in the front. Right here. Cool. Cool. All right. So, I’ve for a long time believed that for local, like Google Maps, the ranking equation is very simple. Way simpler than search. So, we’re like, let’s break it. Let’s figure it out. Can we break it? So, those were our key questions. We wanted to understand, can we actually build the ranking equation? And then, the second thing is, we wanted to know, is it the same ranking equation for every industry? Is it one, or do they have different weightings of the factors for each industry? So, we had about 10,000 GMBs and 20,000 keywords. We segmented those keywords by intent type. We removed all the branded keywords from the dataset because we knew that those weren’t going to give us a clean signal. They’d throw the numbers off. And you can see on the right the variables that we were measuring. We had to create these variables. We had to create these relevance scores. And so we did vector-based mathematics to understand relevance for the keyword versus a bunch of different components of the GMB. What do you guys think is the number one most correlated variable with rankings? Shout it out. Uh-huh. Interesting. Yeah. See? We don’t all agree. Who said proximity, distance? Nice, Lily. Sweet. What’s your name? Matt. Matt got it right, guys. Matt right over here on the left. Good job, Matt. Here’s what we actually found. We found that distance is the most correlated across all industries and sectors. But what was second most correlated? It wasn’t just reviews, it was the keyword relevance of the reviews. It’s not just how many reviews you have, it’s whether the reviews specifically bring relevance to the keyword you want to rank for, which makes so much sense. Right? I’m a vegan, so when I go to dinner, I’m looking for restaurants that have good vegan food. If the reviews talk about vegan food, obviously, it should influence rankings. So, it was interesting to see this. Business sector is very important as well. Who said business sector back there? Was that you, Garrett? I said category. Category? Same thing. Same thing, basically. Yeah. Well done.
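To make the “keyword relevance of the reviews” variable concrete, here is a hedged sketch of how such a score could be computed with sentence embeddings; the model choice and the averaging are assumptions, not the study’s exact recipe:

```python
# Sketch: keyword relevance of a GMB's reviews as the mean cosine similarity
# between the target keyword and each review. Model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def review_keyword_relevance(keyword: str, reviews: list[str]) -> float:
    kw = model.encode(keyword, convert_to_tensor=True)
    rv = model.encode(reviews, convert_to_tensor=True)
    return float(util.cos_sim(kw, rv).mean())   # average over all reviews

print(review_keyword_relevance(
    "vegan restaurant",
    ["Amazing vegan options and friendly staff.",
     "Great cocktails, but parking was rough."],
))
```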
We also saw some other interesting findings. From Patent Brain, we asked the patents, what are the correlation factors? And one of the factors that we got was this thing called web score, which is what Google calls it for GMB ranking purposes. And it essentially is the relevance of the website for the keywords the GMB is ranking for. That was really interesting. So, we see the content of the site is actually important for rankings. It’s not just the GMB, it’s also the site it’s linked to. We’ve known that. I’ve believed that for a long time, but I’ve never seen data to prove it. And this was really cool, that now, for the local SEOs in the house, we can finally tell the clients, don’t just focus on the GMB, you’ve got to also do the site. Right? Now we can show them data that says it’s important, which is cool. And the other one that I thought was really interesting on here is that domain rating is at the absolute bottom. It’s not about backlink authority for GMBs. Okay? It’s not. Google has enough authority measurement signals on the GMB itself, they don’t really care that much about the strength of the backlinks.
There’s one additional piece of research that we didn’t get a chance to do, which I’ll mention in a second, which is the anchor text of the backlinks. We know that it’s probably important from what we’ve learned from Patent Brain, but we didn’t get a chance to include it in the study. So, there’s more work to be done. We also then looked industry by industry to see, is it the same generalized ranking factors for all industries? Turns out that there are variations. And for some industries, like car repair, it’s actually less about the distance. There, the reviews matter more, which is interesting. So, it doesn’t seem like it’s all one unified equation. The factors are weighted differently depending on the business, which is interesting. If you guys are wondering what the weighting model is, let’s say, for the industry you guys care about, all of this data is in a Shiny app. So, you guys can play with it, you can set your industry, you can set which SERPs you want to keep in the analysis, and run it for yourself. And I’ll give it to you. It’ll be on my GitHub. So, I hope that helps.
So, plenty of interesting insights. And next directions: one of the things we also want to test is the importance of Q&A and posts, GBP posts, because we know that they’re related. We also want to expand the data panel. All right. So, can we actually take this knowledge and improve our GBP rankings? Hell, yeah, we can. So, what we did is we actually took this knowledge and tried to implement it in an automated, programmatic fashion across those GBPs on behalf of the businesses. The results were amazing. This is all automated SEO, essentially. And I’ll show you guys how we automated it at the very end. But plenty of success.
Okay. Next up, crawl behavior and GSC data correlations. So, Jori showed something really cool. By the way, Jori, where are you? Are you here? Yeah. Phenomenal work that she did with that GTM JavaScript code, using GA4 basically as the database for who’s crawling you. I’d never seen anyone present that before. We actually wanted to understand more about Google’s crawl behavior and whether it’s predictive of rankings or of how Google is viewing a website. So we took our JavaScript pixel and we started downloading all the data from the sites, and we built a massive database of crawl hits across a population of 25,000 sites, with over 200 million crawl hits in the database. Because all of our data is centralized, we can now run correlation studies with this data. So, this was super cool. This is the relationship between daily crawl frequency and the amount of impressions at the page level. Can you believe that Google would recrawl a page ten thousand times in a day? Crazy. I couldn’t believe that. And you can see the strength of this relationship is so strong. It’s a very strong, like, linear signal here, where we can see that the crawl frequency is very much determined by the search demand of that document. The more eyeballs that are searching for it, the more they need to recrawl it to make sure that the document still exists, that the data is accurate. So, very predictive. This is including even the outliers in the data set. And you can see, even at the tail ends, for pages that are crawled 80,000 times a day, the relationship still holds. Looks to me like there’s a linear equation inside their database, based on the search impressions, determining how many times they need to recrawl the page. Seems like it’s pretty clear. It’s pretty cool.
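If you are logging crawler hits the same way, a first-pass version of this study is a log-log correlation between per-page crawl frequency and impressions. The file and column names below are assumptions about the export format:

```python
# Sketch: daily Googlebot hits per page vs. Search Console impressions,
# checked on a log-log scale. CSV names and columns are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

crawls = pd.read_csv("crawl_hits.csv")   # columns: url, date, crawler, hits
gsc = pd.read_csv("gsc_pages.csv")       # columns: url, impressions

googlebot = crawls[crawls["crawler"].str.contains("Googlebot", case=False)]
daily = (googlebot.groupby("url", as_index=False)["hits"].mean()
         .rename(columns={"hits": "daily_crawls"}))

df = gsc.merge(daily, on="url").query("impressions > 0 and daily_crawls > 0")
r, p = pearsonr(np.log10(df["daily_crawls"]), np.log10(df["impressions"]))
print(f"log-log Pearson r={r:.3f}, p={p:.3g}")
```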
We then looked at it with traffic, a very similar relationship, because obviously traffic and impressions are related. We looked at it by average position: no relationship. So it has nothing to do with the average position. We also looked at the response time versus traffic and rankings. And as you can see, pages that are fast are way more likely to have traffic. Pages that are slow are way less likely to have meaningful traffic. So, we do see a relationship with the response time of the page, and we’re able to get that, by the way, inside the JavaScript. Right? So, one thing that you guys can do, if you do use Jori’s code, is try to get the page load time inside that JavaScript and send that also into GA4, because that’s another data signal that we can use. You guys get it? Cool. We also looked at the page size. Like, how does page size influence rankings? And what we see is that once pages get beyond a certain size, they’re not likely to rank on page one of Google. We’ve always sort of known this, but it’s cool to see it in the data.
All right. Part six is about content semantics and correlations. I’m really excited about this because much has been said about semantic SEO and content. I personally think we’re at an age where there’s a lack of respect for the written word. When LLMs can spit out gibberish content left and right, a lot of people in SEO don’t really look at the word the same way that we used to. We just treat it as this byproduct, something that ChatGPT is spitting out. But can we go back to first principles and understand, even though this is subjective and content quality is subjective, how did Google make it objective? How did they do that? And so, to help us understand that, we essentially tried to derive all of the different types of signals that they might be using. We know that they’re using a neural network that they’ve trained on content quality. And so, can we identify features of that neural network that can help us improve our ability to execute SEO campaigns?
So, we had a 52,000-keyword data set, with about a million URLs in that data set, and we extracted semantic triples and factuality. We looked at the micro semantics of the content, we looked at the content vectors, we looked at the entity scores, the heading vectors, and query relevance. And what we see is that the higher the Scholar score, the better the rankings. Right? So, if you have a kind of compilation of all these factors together, you’re more likely to rank well. So, that’s really interesting. It means that there is predictive power in that model. And some of the components were very predictive, like query relevance scoring. Query relevance is essentially a metric we made up: taking the keyword, making it into a vector, taking the headings of the page, making that into a vector, and looking at the dot product, the similarity between the two. And we see a very strong relationship here between query relevance and rankings. That’s encouraging. We also see user alignment being very important, user intent alignment. So, look at the type of content that we’re seeing dominate in the SERPs. If you align with that, you rank better, which is interesting. I was surprised, because one of the things we know about Google is they like a diversity of perspective. They don’t want to have just one type of page in the SERP. But, if you know what the dominant type of page is in the SERP that’s ranking the best, it’s better to align with that. And also, to create a page for each of the other intent types, because you can rank with really any of them in that SERP. We also saw domain power, I’m sorry, the desktop time-to-interactive in milliseconds, be somewhat predictive as well of rankings. We got that from PSI. So, we ran all those pages through PSI. And we also ran them through domain power. We see, actually, that site traffic does correlate with URLs that are able to rank more highly on Google, which is great to see. And when we put all of it together, we see that there are several components that are predictive, but there are also many things we tested that are not predictive, as we can see. Word count: not predictive at all. And triples: the number of triples in the content actually negatively correlated with ranking well. So it’s not just about filling your content with factuality. If you do that, it actually seems to negatively impact your rankings. So it’s very interesting.
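Since the query relevance metric is spelled out above (a keyword vector dotted with a heading vector), it is straightforward to sketch; the embedding model and the way headings are joined are assumptions:

```python
# Sketch of the "query relevance" metric as described: embed the keyword,
# embed the page's headings, take the dot product of the unit vectors
# (i.e., cosine similarity). Model and heading-joining are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def query_relevance(keyword: str, headings: list[str]) -> float:
    kw = model.encode(keyword, normalize_embeddings=True)
    hd = model.encode(" | ".join(headings), normalize_embeddings=True)
    return float(np.dot(kw, hd))

print(query_relevance(
    "best trail running shoes",
    ["Best Trail Running Shoes of 2025", "How We Tested", "Top Picks by Terrain"],
))
```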
What about off-page? Well, we compared all the different off-page metrics that we could. We looked at the domain rating from Ahrefs, we looked at trust flow and citation flow from Majestic, we looked at just OT and domain power, and domain power won, hands down. I think this is leading us to understand that it’s not a PageRank-driven backlink graph anymore. So, it’s time to move on beyond those PageRank concepts. The other thing that was interesting is we cross-sectioned this by search intent type, and we found that the correlations were stable across all the search intents except for navigational, where things work differently. That kind of makes sense. Right? Like, when you’re navigating to a site, we’re not expecting to see the same type of signals inside the search results.
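The intent cross-section is the same correlation study run per segment; a sketch, again with assumed column names:

```python
# Sketch: Spearman correlation of each off-page metric with position,
# cross-sectioned by search intent. Column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("offpage_metrics.csv")  # position, intent, dr, tf, cf, domain_power

for intent, grp in df.groupby("intent"):
    rhos = {m: spearmanr(grp[m], grp["position"])[0]
            for m in ["dr", "tf", "cf", "domain_power"]}
    print(intent, {m: round(r, 3) for m, r in rhos.items()})
```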
Looks like I’m running out of time. All right. LLM visibility and interrogation. So, how do we get our content into LLMs? And how does ChatGPT determine information consensus? Okay. So, this is essentially how ChatGPT brings its data in. And one of the things that we did is we said, can we actually interrogate it? Can we ask it 135 million questions and get the answers from it? And then can we also ask it, well, ChatGPT, where did you get that data from? Can we actually get the URL, the source document that it got that information from? And so we were able to do this and pull the source documents from ChatGPT. And we learned a little bit about where they’re getting their data from. The thing I think that’s most interesting is that it’s Common Crawl. Common Crawl is very important. I’ve got about 700 sites, publications that I bought over the last five years. And a lot of the sites have no OT, but they are in Common Crawl. And the LLMs have that content. Even though those sites are totally bunk, the LLMs are not doing a very good job of filtering the content quality that’s coming in. So, if you can get yourself into Common Crawl, you can get yourself into the LLM. If you wanna know whether your site is in Common Crawl, you can use the free tool on their site to plug in your domain, and you’ll see if you’re in there.
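You can also check programmatically via Common Crawl’s public CDX index API; the crawl id below is just an example snapshot, so swap in a current one from index.commoncrawl.org:

```python
# Check whether a domain has captures in a Common Crawl snapshot via the
# public CDX index API. The crawl id is an example; pick a current one
# from https://index.commoncrawl.org/.
import requests

def in_common_crawl(domain: str, crawl_id: str = "CC-MAIN-2024-33") -> bool:
    resp = requests.get(
        f"https://index.commoncrawl.org/{crawl_id}-index",
        params={"url": f"{domain}/*", "output": "json", "limit": "5"},
        timeout=30,
    )
    # The index returns 404 when there are no captures for the URL pattern.
    return resp.status_code == 200 and bool(resp.text.strip())

print(in_common_crawl("example.com"))
```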
Krishna talked about IndexNow. I think everyone in here should be using it. We were able to get a 160% increase in Bing traffic. Some other ways of creating consensus, which I think is really interesting: guest posting and getting on sites that are in Common Crawl. You can also use press releases, you can use Reddit, you can use your branded assets. You can also use Bing results to get into the 40% of searches that are enabled on SearchGPT. You can also do petri-dish SEO. If you Google “Snackachusetts”, how many of you guys have been to Snackachusetts? It’s this 52nd state right outside of Massachusetts that I made up, and it turns out that if you create the content on Google, you can then ask Gemini, and Gemini is none the wiser. It believes that this is something real. So, we can actually start to play with this and see how long until Snackachusetts shows up in ChatGPT. I’m waiting. Not yet. We’re going to wait until they update their content.
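For reference, an IndexNow submission is a single POST; the host, key, and URLs below are placeholders, and the key file has to be hosted on your domain first:

```python
# Minimal IndexNow submission. All values are placeholders; the key file
# must already be served at https://<host>/<key>.txt for verification.
import requests

payload = {
    "host": "example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://example.com/your-indexnow-key.txt",
    "urlList": [
        "https://example.com/updated-page",
        "https://example.com/new-post",
    ],
}
resp = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=30)
print(resp.status_code)  # 200 or 202 means the submission was accepted
```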
And I want to show you guys one last thing: micro agents. So, micro agents are essentially prompts that run and refactor your content. And so, we created these 10 micro agents that we know are related to reducing the cost-of-retrieval scores for search engines. So, if you can run content through a battery of these micro agents, you can objectively enhance it according to that PQ scoring that we know they’re doing with the neural network. And so, case study: DCO is dynamic content optimization, where we were pushing the content through the micro agents and republishing it on the site without really doing anything by hand, and we got a ranking improvement. Because in the content, we reduced the fluff, we increased the decisiveness, the specificity, the granularity, and significantly improved the results there. And then, sorry, I guess I had a tenth thing, about SEO automation. We built a bridge to bring AI models to the site and have attempted to automate now about 70% of holistic SEO. And we implemented this on 152 websites. Every single site got a pop. So, we’re seeing that SEO techniques can be automated, and they will be automated. We’re starting to step into that brave new world. It’s a little bit uncomfortable, but actually, Max was sharing with me a case study of his own where he was using our technology and got incredible results for a dentist, I believe. For a dentist? Dentist. And I saw it on Semrush right after I used OTTO in Search Atlas. Awesome. So, it’s not just reported as improving in our tool, it’s also on other third-party tools. And more case studies. Okay, guys. Thank you. I’m a little bit over. This was super fun.
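To make the micro-agent idea concrete, here is a hedged sketch of the pattern: a battery of narrow rewrite prompts applied to the same draft in sequence. The prompts and model are invented for illustration and are not the actual Search Atlas agents:

```python
# Hedged sketch of the micro-agent pattern: run a draft through a sequence
# of narrow rewrite prompts. Prompts and model are illustrative, not the
# actual Search Atlas agents. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

MICRO_AGENTS = [
    "Remove filler and hedging; keep every factual claim intact.",
    "Rewrite vague sentences to be decisive and specific.",
    "Tighten headings so each one directly answers a query.",
]

def run_micro_agents(content: str) -> str:
    for instruction in MICRO_AGENTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": content},
            ],
        )
        content = resp.choices[0].message.content
    return content
```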
Watch every SEO Week 2025 presentation and discover what the next chapter of search entails.
Sign up for the Rank Report — the weekly iPullRank newsletter. We unpack industry news, updates, and best practices in the world of SEO, content, and generative AI.
iPullRank is a pioneering content marketing and enterprise SEO agency leading the way in Relevance Engineering, Audience-Focused SEO, and Content Strategy. People-first in our approach, we’ve delivered $4B+ in organic search results for our clients.
AI is reshaping search. The Rank Report gives you signal through the noise, so your brand doesn’t just keep up, it leads.