Diving into Deepseek 🐳

Generative Search Optimization

By Crystal Carter
Head of SEO Communications at Wix Studio

Crystal explains how reasoning models like DeepSeek use a "mixture of experts" to deliver smarter, logic-based search results. She emphasizes the need for brands to create nuanced, expert content and stay visible by aligning with how these models pull from websites, current events, and user-generated content.

Is FOMO hitting you hard after Missing SEO Week 2025? It's not too late to attend in 2026.

SEO Week 2025 set the bar with four themed days, top-tier speakers, and an unforgettable experience. For 2026, expect even more: more amazing after parties, more activations like AI photo booths, barista-crafted coffee, relaxing massages, and of course, the industry’s best speakers. Don’t miss out. Spots fill fast.

ABOUT Crystal Carter

Crystal is the Head of SEO Communications at Wix and an SEO & Digital Marketing professional with over 15 years of experience. Her global business clients have included Disney, McDonald's, and Tomy. An avid SEO communicator, she hosts SEO webinars and podcasts, and her work has been featured at Google Search Central, BrightonSEO, Moz, Lumar (DeepCrawl), Semrush, and more.

OVERVIEW

In her SEO Week talk, Crystal breaks down how reasoning models like DeepSeek and Perplexity – built on a “mixture of experts” (MoE) architecture – are reshaping search. Unlike traditional LLMs that rely on pattern recognition, these models use logical reasoning and selectively activate expert pathways based on the query. Crystal explains how DeepSeek’s open-source launch shook the industry by offering competitive performance at a fraction of the cost, and how these models analyze, chunk, and deduce information before generating responses.

For SEOs, this means content must go beyond basic answers. Brands can become "experts" within these models by publishing nuanced, layered content that addresses real user questions. Crystal points to examples like Levi's, who leverage multiple domains to cover diverse topics and gain visibility. Her key takeaway: content that's helpful, opinionated, and current is more likely to be cited by AI, so stop writing for keywords, and start writing like an expert.

DOWNLOAD THE DECK

Talk Highlights

Reasoning models like DeepSeek and Perplexity use a "mixture of experts" (MoE) approach…

…activating specific expert pathways based on query intent, and relying on logical reasoning rather than pattern recognition to generate answers.

Websites can become the “experts” these models cite…

…so creating nuanced, timely, and opinionated content, especially for branded or complex queries, significantly increases the chance of being surfaced in responses.

Traditional SEO frameworks like keyword research or basic "What is" content are no longer enough:

Marketers need to listen to real user questions, embrace current events, and focus on content that genuinely demonstrates authority and relevance.

Presentation Snackable


Transcript

Garrett Sussman: Crystal is the head of SEO Communications at Wix. She’s the other Mrs. Carter and has been referred to as the SEO Sasha Fierce. Crystal has been to 15 different countries and hopes to make Barbados the next one. Can’t blame you. And she’s currently on a mission to make hitting her protein macros less tedious. I don’t know if ChatGPT or maybe DeepSeek can help you out with that. But presenting, Diving Into DeepSeek Generative Search Optimization, please welcome Crystal Carter.

Crystal Carter: Hello. Hello. Hello. Hello. SEO Week. This has been a phenomenal event. Big hand to the iPullRank team for pulling off this incredible event. I expect nothing less. I hope Busta Rhymes delivers. I’m sure it’s going to be fantastic. I cannot wait. I should say, you did get me in trouble with my family because they’re now like, you’re just going to see Busta Rhymes. But there you go. Anyway, so yes, let’s get into it.

I am Crystal Carter. I am a bit of a busybody. So I do SEO Comms at Wix. We've got webinars. We've got articles. We've got all sorts of stuff. I also bother them with enterprise SEO theories and activities, so I get involved with that. I do talks in different places, comms leadership, all that sort of stuff. This is a lovely picture of me. They did a really good job on that. Yeah. You see Beyonce, you see me, you see me, you see her. Same. Basically, they're the same picture. Anyway, so she's going to be helping me with this. I know you think she's out on her tour, but she's actually going to be helping us talk about DeepSeek today. So today, we're going to cover what's the deal with DeepSeek. And we're going to talk about how reasoning models treat search. And we're also going to talk about optimizations for reasoning models, like R1 from DeepSeek.

So what is the deal with DeepSeek? Well, in January, DeepSeek dropped an app. They dropped a GitHub repo and some Hugging Face documentation. And suddenly, the world went crazy. They were causing all that conversation, as they say. So they were talking about, oh, things going on with China. They were the top app in the app store. The stock market got shook up because of it. They're all over there. They basically just put out this open source thing. And it was essentially this. You go to DeepSeek. You type in your question. And then when you hit it, you get all of this thinking here. So Dale was talking about this as well, the rational information that goes into it. And this was something that really shook everyone up. It really took all of Silicon Valley by surprise. And so with their model, what they had was something that was basically on par with OpenAI's most recent reasoning model. And it hit most of their benchmarks. And it was doing really, really well. But they were able to do it at a fraction of the price. So OpenAI was like, yeah, it's like 15 bones to get this. They were like, we can do it for pennies, pennies, which really shook them up. And they were able to get this hockey stick effect pretty much straight off the bat. So this is data from SimilarWeb. And they dropped that repo at the end of January. And by February, they had really taken on the market. So they were pulling in 600 million visits to DeepSeek, overtaking people with much bigger pockets. So Gemini, for instance, had 284 million. Perplexity, Claude, all of these people, and DeepSeek was way further along than them.

But DeepSeek somehow swooped in with all of this. And I found this fascinating. And the other thing that was really interesting was that if you look at all of the LLM traffic that was going through in November, ChatGPT had about 87% of that traffic. By February, that had shrunk to 77% because of DeepSeek. They just jumped right in there. So what was so unique about their approach? Well, you essentially have two main LLM types up to this point. You have your static pre-trained data LLMs, like Claude's first version, GPT-3 when it first dropped, Gemini 1.0. They were essentially static pre-trained LLMs, where essentially they have a big, big pool of data that they've downloaded from all these different sources. And then they go through all of it to give you the answer to your question. Then you have your retrieval augmented generation LLMs. And these are ones that have all of that but also have access to the web. And so we're very familiar with Perplexity and how that works. And Gemini 2 now works like that. Copilot was doing that straight out of the gate. And then GPT added search into it. Now we have the introduction of a mixture of experts. And with that, the path forward is a little bit different.

So what happens is when you input your query, the next thing that happens is a gating network. And that means essentially this is somebody who goes, okay, this person wants to know about this question. Who do we need to help answer it? Which sets of data do we need to help answer it? And they don't tap into every single set of data. They tap into the specific sets of data that are relevant to the question. And then they generate the final output. This is the way that Perplexity visualizes this. So you can see the color codes. And you can see that there's one path on the left. There's one path in the middle. But they don't activate the ones all the way on the right or the second one because they don't need them. And these experts are organized in a couple of different ways. So you essentially have things that are task-based, relating to the way the LLM does its work. So things like translation or coding or math or writing, but also things like grammar and smaller things that are less front-facing. Then you have topical things. So if somebody has gone through all of the medical records, all of the medical journals, for instance, that'll be one set of expertise. Somebody who's got all of the sports almanacs, wherever they put those, all of that information, that'll be another set of expertise. And so when you have your pre-trained data LLMs, they're using essentially transformer architecture. And they're using pattern recognition.
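The gating step described above can be sketched in a few lines of Python. This is a toy illustration only, not DeepSeek's actual architecture: real MoE layers learn gate weights and route individual tokens, whereas this hypothetical keyword-overlap gate routes a whole query to the top-scoring experts and leaves the rest idle.

```python
# Toy sketch of "mixture of experts" routing: a gating function scores
# each expert for the query, and only the top-scoring experts activate.
# Expert names and the keyword gate are illustrative, not DeepSeek's internals.

EXPERTS = {
    "movies": lambda q: f"[movies expert] answering: {q}",
    "music":  lambda q: f"[music expert] answering: {q}",
    "coding": lambda q: f"[coding expert] answering: {q}",
}

GATE_KEYWORDS = {
    "movies": {"film", "movie", "release", "streaming"},
    "music":  {"album", "tour", "song", "musician"},
    "coding": {"code", "bug", "python", "function"},
}

def gate(query):
    """Score each expert by keyword overlap with the query
    (a stand-in for a learned gating network)."""
    words = set(query.lower().split())
    return {name: len(words & kws) for name, kws in GATE_KEYWORDS.items()}

def answer(query, top_k=2):
    """Activate only the top-k relevant experts; the rest stay idle."""
    scores = gate(query)
    chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [EXPERTS[name](query) for name in chosen if scores[name] > 0]
```

The sparse-activation part is the point: every expert exists in the model, but a given query only pays the compute cost of the few experts the gate selects.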

A few people have talked about this today and over the course of SEO Week. And they are using pattern recognition to generate responses. They do this by looking at the way that the words are structured and things like that. Reasoning models use logical reasoning to generate their responses. So if this, then that, which actually works really well because algorithms are based on logic and things like that. Another way to put it is to think of static pre-trained data LLMs as somebody who's read the encyclopedia and doesn't know anything until they get a new version of the encyclopedia, which might take a year, might take a month, might take whenever. And think of retrieval augmented generation as somebody who has read the encyclopedia but also reads a newspaper every day and is up on all the current events. Then mixture of experts LLMs are essentially someone who has a corpus of knowledge but also has a bank of a mixture of experts who can help them with various different things.

Now, everybody doesn’t need to come to the Super Bowl, right? So if somebody asks this expert a question, they might ask Willie Nelson, they might ask Post Malone, they might ask Jay Z. But they don’t need to ask everybody all the time. So everybody doesn’t need to come all the time, but they have all of these people, all of these experts at their disposal. And the reasoning is just reasoning. You just figure it out. So that’s the reasoning that goes through that. So the thing that’s important to know about DeepSeek is that DeepSeek didn’t invent this. They did not invent a mixture of experts. They did not invent LLM architecture. But they did make it mainstream because they put everybody on notice when they dropped that GitHub repo. So it essentially took about a week. So GitHub drops their repo. Then Perplexity adds R1 into their models. Perplexity was like, bet. We’re on this. Fine. And they’ve doubled down on this since. And so they’ve added that straight into tool set. And then ChatGPT, who had actually launched the one Mini in the autumn but to low fanfare, suddenly decides to make everything available to everyone. And then after that, you see Grok, Mistral, Anthropic, adding these things in later as they go along. So essentially, this means that mixture of experts reasoning is now available and it’s now commonplace. And to be honest, when I was putting this thing together, I’d been looking at it for so long, I was like, it kind of feels old now because we’ve kind of seen it now. It was kind of new. So, here’s the way the user journey goes.

If somebody like me were to ask why, why, why hasn't the Renaissance film been released for streaming? B, come on. DeepSeek will think for 41 seconds and then it will go through all of this thing. The dialogue I find really fascinating because it's like, okay, so the user's asking me why the Renaissance film hasn't been released. Let me try to break this down. It's speaking like it's talking to me like it's my bestie, like it's part of the hive. And basically, the way it breaks it down is like chain of thought. So it goes, query. Okay, so the user is asking me this sort of thing. Then it will chunk the request, literally saying, let me break this down. Then it will activate experts. And here, it will look through it, and it'll say, which Renaissance film is it talking about? So it'll go through all of the lists of films that it's got. And then it'll look at what it knows about movie releases generally. So it'll look through that corpus of knowledge. And then it will look at what it knows about the theatrical release of Beyonce's Renaissance film. And then it will look at the past ways B released Lemonade and various other things like that. Then it will do logical deduction. I should consider this, this, and this. I should factor in that, that, and that. Is it possible that this, this, and this? Maybe this would be this, that, that. And then it will generate a response. And in the response, it will pull in caveats based on the reasoning. It's essential to present the possibilities while clarifying that without official confirmation, these are educated guesses based on industry norms. And that's the way the train of thought goes. And so you have the train of thought, and then this is the output at the end. And it'll say, oh, well, it was this reason. It was that reason. It was that. And you might say, that's all cool. That's fun. But I'm a search marketer. What does that have to do with me?
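The stages walked through above (query, chunk, activate experts, logical deduction, response) can be sketched as a simple pipeline. Every stage name and canned note here is hypothetical, just mirroring the trace described; the real model generates this dialogue freely rather than from a fixed template.

```python
# Hypothetical sketch of the chain-of-thought stages described above:
# query -> chunk -> activate experts -> logical deduction -> response.
# All strings are illustrative stand-ins for model-generated reasoning.

def reasoning_trace(query):
    """Return (stage, note) pairs mimicking the visible 'thinking' trace."""
    return [
        ("query", f"Okay, so the user is asking: {query}"),
        ("chunk", "Let me break this down into sub-questions."),
        ("activate_experts", "Consulting: film catalog, release patterns, artist history."),
        ("deduce", "No official confirmation found, so infer from industry norms."),
        ("respond", "Present the possibilities, with caveats that these are educated guesses."),
    ]

for stage, note in reasoning_trace("Why hasn't the Renaissance film been released for streaming?"):
    print(f"{stage}: {note}")
```

With search enabled, the key difference Crystal notes is in the middle stages: the chunking note becomes "let me look through the search results" and the expert list becomes named web pages.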

Okay. So these are the things that we think about for optimizations for a mixture of experts. If I was asked the same question with search enabled on DeepSeek, then I would say, why hasn't the film been released? And before you get to the thought, it will say, found twenty results. And in there, the chain of thought changes. So when it chunks the request, instead of saying, let me try to break this down, it says, let me look through the search results to find the relevant information. The next thing that we as search marketers should be aware of is that when it gets to the activate experts step, instead of going through all of the knowledge that it has, it will start using the websites as experts. And this is pulled out through their train of thought. So it will say, web page two mentions this and that. Web page six from Forbes says this and this. Web page three says that. Web page seven says that. And it mentions the brand names and all of that sort of stuff. The websites are the experts when you have the search enabled. So this means that you've got your experts there. Then you have your expert websites by name, which are all listed in the links. And then when it generates the response, by the time it gets to that, it's much more certain. There are fewer caveats. And it'll say, there is no official statement, because it's gone through all of these facts. It's gone through all of these websites. So it knows there's no official statement.

And then it will include links to its responses. And it will qualify everything it says based on the experts that it has, which are websites. And this will come through. And you can see all of the links. And this is what the citations look like. And they've got the favicons in them. If you don't have favicons on your website, which you probably do, but if you don't, you should definitely have them, then that's really useful. You'll also notice that they're not meta descriptions. They tend to be the first part of the website. So they'll pop those through. And then you have your citations there. And then you can also see the links within the citations. Perplexity is doing a similar thing. Perplexity is interesting because they also offer a breakdown. And they break down the search query that they're doing as well. So they'll have the query. They distill the query. They don't say, oh, I need to think about this and this and this. They say, searching for information based on the status of the film release Renaissance for streaming. So, they'll tell you exactly how they're breaking it down. Then, they tell you the search queries that they're using. This, I think, is fascinating.

As a search marketer, this makes reverse engineering how everything is showing in there a lot easier. Because you can go through and you can look at what's ranking for what. You can look at what's ranking online for those things. You can look at what content you have that satisfies those keywords, etc., etc. That part, I think, is really fascinating. And then, you have your logical deduction. And it will go through and it will make those assumptions there. And we have that all as well. So, I think what's interesting here is that content for a mixture of experts is nuanced and multidisciplinary. We spend a lot of time writing a lot of, what is x? What is y? What is q? Tons of those articles. But Mixture of Experts articles are less what is and more why, more how come, why is this happening, why is this going on. And somebody who's doing really well, and I don't know if Aleta is still in the building, but I mentioned this last night, and apparently Levi's is one of her clients. And I think that they're crushing this, actually. So one of the things that they're doing is they're distributing their knowledge across their primary domain, levis[.]com, topic-focused subdomains, for instance, like Levi's Secondhand, and also their investor site, which has a lot of their brand lore, like, oh, Levi Strauss, he was a great man, blah, blah, blah, that sort of thing. And you can put in mixture of experts type questions that are multimodal and bring in lots of different elements. So for instance, you have: how did Levi's (that's one expert) leverage literary depictions (that's another expert) of their jeans in marketing campaigns (that's another expert) in the '90s? They're showing up for 19% of those queries. And then we have another one. What is the difference between all the Levi's jeans styles, like 501, 511, 550? And which one would work best for someone with an athletic build who wants a modern look but needs room in the thighs?
I mean, the struggle is real. And so there, they're showing 45%. Then we have another one that's a very multidisciplinary, very layered question that they're also showing up for, 30% there. And one of the things I think is interesting, and I will add this to a later deck actually, is that you can use AI tools. I used Gemini to help me generate some of these questions. So, you have user personas and you literally say to them, can you help me generate some questions, some prompts that might trigger an MoE response, and they will help you to break down the kinds of things that might be relevant to your brand to do this. And I think it's really interesting.

So to this, I think content marketing is not dead, but it should be smart. We should have content that's relevant for this. And Levi's are able to do that. But you also shouldn't be making content for content's sake. You should be making content that shows your expertise in order to show in some of these models. This was something that came out earlier in the year as well. Semrush published a study. I contributed to it. Mike contributed quotes. And they were talking about the things that we know and love, the four pillars of intent, right? Informational, navigational, commercial, transactional. We've all put them in our decks. We've all talked to clients about them. And then there was this giant sea of purple, this giant sea that they called unknown intent. I'm going to be honest, I kind of call bullshit on that because when I'm on ChatGPT, when I'm on an LLM, etc., etc., I know what my intent is. I know exactly what my intent is. It's not unknown. I know. But it did make me think, we need new frameworks, we need new tools, we need new reasons for users to engage that aren't just informational, navigational, transactional, etcetera. We need to think a little bit differently. So not to toot my own horn, but I think I've got a little bit of an example of this. So I've been talking about this topic for a little while. I wrote an article on SEO for brand visibility in LLMs.

I then wrote another article on KPIs for LLM visibility. And then my colleague George spoke to lots of experts who are speaking here, including Garrett, about LLM optimization versus SEO and all of that sort of stuff. And you know what? It seems to be doing really well, actually, in some of these tools. And I'll show you that later. But one of the things I'd like to point out is that we didn't do any keyword research for these. I didn't do any keyword research when I wrote these articles. I wrote these articles because people kept asking me this question, because clients kept asking me this question, because colleagues kept talking about this. And it was something that I thought we needed to look into as an industry. And it was something that I wanted to write. I bring this up to say that when we think about the content that we make that might surface in this kind of thing, one of the things that's really important is that when we look at our clients, we look at our customers, etcetera, we need to listen to what they're talking about. Not just keywords. We need to listen. Okay? So, there's an example. So, taking into consideration tactics, KPIs, outcomes, that's a whole area of expertise for brands. Is LLM optimization the same as SEO? Pull through lots of different things. Guess who's referenced? The girl who did no keyword research. None. I did none. No keyword research. And, I'll point this out later, ChatGPT is incredibly stingy with citations, the stingiest of all of them. But, yeah, we got two out of the seven citations on this and we're hitting multimodal things. And, again, like I said, I did no keyword research. We went to Perplexity and we were cited in every single expert call. There's three different rounds of expert calls on this particular one.
They're looking at search for tactics for LLM optimization, search for the definition of tactics and SEO of the brands, search for comparing LLM and SEO tactics, blah, blah, blah. We're cited in all of them. We're three of the top fifteen citations. And the one that's there, the last one, was published two weeks ago or something.

Listen. Make content that actually matters to your users. Make content that actually matters to what they need. One of the people who was speaking here earlier was talking about this as well, about how it shouldn't just be some checklist. We all know that. But make the content that users actually need and that actually answers the questions that you're hearing, that your sales team are giving you, that will genuinely demonstrate your expertise. And if you're looking to source these, as I said, AI systems can be super useful. At Wix, we just launched a tool called ASTRO. It's like an all singing, all dancing, agentic AI thing. It sits on the side of the CMS. You press a button and you say, help me do stuff. And it goes, yeah. So I asked this little bot. I said, can you help me generate some topic ideas for this particular project that I was working on, an agency-based thing, prompts that might trigger an MoE response in an LLM. It was super explicit about it. And it came up with some really nuanced topics. And I was like, that's a pretty good start. And obviously, you can bring in some of the content that you have from your users, from your folks, from your sales team, from all of those sorts of things. And then you can use some of these tools to help you bring them together at scale. User Q&A is incredibly valuable, but also joining the dots. So you know the kinds of things where there are gaps in knowledge across your team. And it's important that you're able to join those dots. You'll show up in some of those things.

So reasoning models. One of the things that's important for these is to give them a reason to link to your content. They need to justify what they're saying. They're the logic part of this whole thing. And they need to justify what they're saying. So one of the things that's really useful, and if you're paying attention to John, pay more attention: embracing current events is super useful. Having an opinion seems to be useful. Optimizing your videos and owning your expertise. So when we think about current events, one of the things that's important here is that LLMs need to ground their facts for recent news. And the current events are less likely to be part of training data, and the links are more likely to show because they need to justify what they're saying. So for instance, if I was to look something up. And we all love Dolly Parton, right? We love Dolly. So if I said, why is Dolly Parton considered a great musician? Then DeepSeek thought for 30 seconds and said, let me start by recalling what I know about her, and then just gave me an answer because they know Dolly is great. Everybody knows Dolly is great. She's been great her whole life. She's amazing. So they don't need to look out in the news to find that out. So no links. And this was with search enabled, so there were no links on this. But then if I said, how did Dolly Parton influence Beyonce's Cowboy Carter album? And we know, because she was on the Jolene collab. Basically, it thought for 14 seconds and said, let me start by going through the search results provided. Okay? And then it came back and it found 47 results and had links all throughout them. So, you go from zero links when you're talking about a general topic to 47 links when you talk about something that's more timely and more relevant. So, news and trending topics on content and relevant current events are more likely to show in reasoning models than just general information.

Have an opinion. So LLMs also need grounding for multiple perspectives. And links are more likely to show for timely, spicy topics and queries. And also, UGC is frequently used. So, I looked at about 1,000 different citations. I had to pull them out manually because you can't scrape them, so you're welcome. And I looked at 20 different non-branded questions. And as I said, ChatGPT is the stingiest with the citations. They're coming with an average of eight citations. DeepSeek is like they're just throwing everything at it. So they've got 29 different citations in their reasoning model. But what I found interesting was that with ChatGPT, the most they would put out in terms of citations was 12 for a non-branded query. And the thing that I seemed to find out was that they're really, really interested in gossip. They just love that tea. So, I looked up something that was around a spicy topic. If anybody's in the Beyhive, you know that there's a little bit of a conspiracy about Beyonce and horses. She might just like horses. So the question was, why are there so many horses in Beyonce's imagery? And ChatGPT had 12 different citations. They didn't have any UGC, but they had 12 citations. DeepSeek had 30% UGC because they have a lot of TikTok, and there's a lot of TikTok theories about it. But yeah, they were really interested. So that was one where they had a lot of links.

And then don't come for me on this next slide. So then I looked at this other thing that I saw that's also been a little bit of a hot topic. Which tour is better, Beyonce's Renaissance or Taylor's Eras tour? And this is, again, a spicy topic, but don't come for me because they're both queens. So, basically, they also had twelve links on this because this is an opinion. There's lots of different people. There's Swifties. There's the Beyhive. There's this. There's that. They need to qualify what they're saying. They don't feel confident just saying yes or no. They need to qualify something because it's opinionated. So if there's something with an opinion, you need to pick a side. You need to pick a side and get involved. Okay. So there we go. Managing UGC. You cannot put everybody on mute. Beyonce can, but you cannot. So you need to manage your social media. You need to manage those things. So reasoning models use social media posts. I was saying that for some of those DeepSeek queries, 30% of the citations were UGC. And that includes Quora, Reddit, Facebook, even LinkedIn, all of these different things. So they're looking at posts, comments, media. But they're also looking at video citations. So video citations show up a lot in Perplexity, for instance. They also show up a lot in DeepSeek. And they're also looking at the transcripts of those. So if you're doing video, if you were listening to Phil and you're doing video, make sure you also get those keywords in your transcript. And make sure you get them in early because that's also something that they're taking into consideration.

So be active and strategic with social media. Craft your scripts to support your AI LLM goals. And then own your expertise. Okay. So, reasoning models want to surface your branded content for branded queries because your brand should be the expert on your brand. I'm going to say that again because I don't know if it comes as a surprise to some people. Your brand should be the expert on your brand. Okay? Not somebody else talking about your brand, not somebody who has this other side website that's not your website that's talking about your brand. You should be the website on your brand, or the expert on your brand. So I was talking about Levi's. There was one of those questions about sustainability. And it was saying, I'm trying to figure out if Levi's sustainability claims are legitimate. Can you compare the water reduction technology? And think of all the different expert breakdowns we have here. Water technology, the recycling materials, denim brands, etc., etc. Levi's are showing up in the top four citations for this answer because Levi's did not come to play. Levi's has this on their website. Their own customer-facing website. They have it on their investor-facing website. They have it in a couple of different ways. And they're answering this question in lots of different ways. And that means that they're able to address this.

So when I looked at 1,000 different citations for branded queries for all different brands (LEGO, Levi's, Starbucks, Toyota, Logitech), I found that brands have a 20% share of voice in branded queries. And this is something that's really useful. You should make sure that you have good content on your brand. If you have anybody in your company, and I've heard rumblings of this, anybody in your company who's like, oh, we don't need to make new content because of LLMs, blah, blah, blah. Yeah, you do. And you need to make content about your brand on your website. And you need to do it well and do it in a thorough way. And the average LLM has about 18 citations per response, with DeepSeek giving the most. And for LLM searches where you've got a brand, the average brand visibility for LLMs with reasoning and MoE is about 20%, which is ever so slightly higher than Google. So that's really important to make sure that you've got that presence. And there's an average of three brand citations per LLM response, which again is sort of on par with Google. And you can again see ChatGPT is the lowest, just the lowest. They're just giving us the lowest. Give us the absolute bare minimum. But yeah, DeepSeek is giving the highest average. And branded content on your domain has added value for LLM optimization. And that's super, super important to think about.

So I think these models are really, really fascinating. I highly recommend that you go through them and have a look at some of the ways that they break down and chunk some of the requests, because you can do a little bit of reverse engineering, which is super, super fun. And if you think about all of these things, have a look at these models, and have a go, then you should be able to optimize for some of these reasoning models and make sure that your visibility has a lot more bang. There you go. Thank you very much. Good day.

CATCH EVERY PRESENTATION YOU MISSED


Watch every SEO Week 2025 presentation and discover what the next chapter of search entails.

What are you waiting for? Get SEO Week Tickets. Now.

As AI rewrites the rules, read between the lines.

AI is reshaping search. The Rank Report gives you signal through the noise, so your brand doesn’t just keep up, it leads.