In his SEO Week talk, Dale shares how Fire&Spark is using AI agents to boost SEO productivity, cutting task time in half, while still relying on human expertise to train and guide them. Instead of one-click automation, Dale shows how structured analyses, integrated tools (like GA4 and Search Console), and continuous training turn AI into powerful assistants. The future of SEO, he argues, isn’t about replacing SEOs but equipping them to manage and scale impact through AI.
SEO Week 2025 set the bar with four themed days, top-tier speakers, and an unforgettable experience. For 2026, expect even more: more amazing after parties, more activations like AI photo booths, barista-crafted coffee, relaxing massages, and of course, the industry’s best speakers. Don’t miss out. Spots fill fast.
Dale has been an SEO specialist and AI consultant to Fortune 500 companies and venture-backed startups around the world for two decades. His clients include global brands such as Citizen Watch, Nestle, Raymond Weil, Exxon Mobil and Bulova. He applies his graduate school work in artificial intelligence to search engine marketing and speaks at marketing industry conferences.
In his SEO Week presentation, Dale, founder of Fire&Spark, breaks down how his agency is embracing AI: not to replace SEOs, but to dramatically enhance productivity and scale expertise. Rather than relying on automation alone, Dale and his team have developed custom SEO agents – AI-powered assistants trained to perform structured SEO analyses using real data from tools like Google Analytics, Search Console, and DataForSEO. These agents, built in platforms like Windsurf and Claude, don’t work out of the box; they require intentional training, clear guidelines, and human oversight. Dale likens them to “Harvard interns” (incredibly smart, but inexperienced), emphasizing that the real value comes from continuous refinement of their behavior through trial, feedback, and iteration.
Dale’s team has already seen productivity gains of 30–50%, using time savings to deepen quality and explore new analyses. Instead of focusing solely on prompt engineering, they’ve moved toward creating repeatable, high-value workflows, like identifying quick SEO wins or performing intent-based keyword analysis, that AI agents can execute consistently. The key, he argues, is not to be distracted by early missteps or mediocre outputs, but to treat the agent like a junior team member in need of training. Dale sees the future of SEO as belonging to strategists who can design, guide, and manage these agents effectively, unlocking more value from data while preserving the creativity, communication, and high-level thinking only humans can provide.
AI agents require training, not just prompting:
Off-the-shelf AI tools often produce generic or flawed outputs. To be useful, SEO agents must be trained with structured workflows, guidelines, and human feedback, much like onboarding a new team member.
SEO roles are evolving toward agent management:
As AI becomes more capable, SEOs will shift from doing manual analyses to designing, training, and overseeing AI agents that handle repetitive, data-heavy tasks efficiently.
Productivity gains are real, but should fuel quality:
Fire&Spark improved analyst productivity by over 30%, but rather than doing less work, they used the time savings to deepen research, enhance deliverables, and explore new strategic insights.
Dale Bertrand: Here are some important things my team has learned about building SEO Agents since my SEO Week talk:
a) We learned AI agents struggle to do arithmetic because their models are built to process language, not do math. They’re language models, not calculators. They process text by predicting the next token in a sequence. So they understand the language of computation but can’t actually do computation.
When an AI encounters “2 + 2,” it doesn’t perform addition; it predicts what typically follows this pattern in text. This leads to mathematical hallucinations where the model generates plausible-sounding but incorrect calculations (because it’s not doing math). Our solution: Generate analytics data for our agents that is pre-calculated, and push calculation to MCP tools.
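Here is a minimal sketch of what “push calculation to MCP tools” can look like, assuming the official MCP Python SDK’s FastMCP helper; the server name, tool names, and metrics are illustrative, not our actual setup.

```python
# Illustrative MCP server: arithmetic happens in Python, not in the model's text.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("seo-metrics")  # hypothetical server name

@mcp.tool()
def conversion_rate(conversions: int, sessions: int) -> float:
    """Return conversion rate as a percentage, computed in code rather than by the model."""
    if sessions == 0:
        return 0.0
    return round(conversions / sessions * 100, 2)

@mcp.tool()
def month_over_month_change(current: float, previous: float) -> float:
    """Percent change between two pre-aggregated metric values."""
    if previous == 0:
        return 0.0
    return round((current - previous) / previous * 100, 2)

if __name__ == "__main__":
    mcp.run()  # the agent calls these tools instead of "doing math" in generated text
```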
b) Context matters when asking an AI agent to do anything. Many people ask AI to do a task, then get frustrated by the generic output it generates. Choosing the right context for an AI agent is itself a challenge: with too little context, the agent generates generic outputs; with too much, it gets confused.
Sometimes the context should be broken into a context chain that moves step by step from the stated goal to accomplishing it. For example, instead of one massive prompt, break the work into multiple prompts, each refined and given to the AI agent based on the output of the previous prompt.
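As an illustration, here is a rough context chain using the Anthropic Python SDK; the model name, prompts, and step breakdown are placeholders, and each step receives only the previous step’s output rather than one giant prompt.

```python
# Illustrative "context chain": each prompt is built from the previous step's output.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    """Single LLM call; swap in whichever model/client you actually use."""
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def context_chain(goal: str, site: str) -> str:
    # Step 1: turn the stated goal into a concrete plan.
    plan = ask(f"Goal: {goal} for {site}. List the 3 analyses you would run and the data each needs.")

    # Step 2: pass forward only the plan, not the whole conversation.
    specs = ask(f"For each analysis in this plan, specify the exact report, filters, and date range to pull:\n{plan}")

    # Step 3: refine again, carrying only what the next step needs.
    return ask(f"Using these report specs, write 5 prioritized recommendations:\n{specs}")
```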
c) An AI agent is only as useful as the data and MCP tools it has access to. MCP servers are far from perfect, and relying on third-party MCP servers for your AI agent could leave you exposed to security issues and server access or resource-allocation problems.
Garrett Sussman: I’m excited. Dale Bertrand is gonna blow your mind with some of the things that he’s doing with AI. Dale is the founder of fire&spark and a seasoned digital marketing strategist with two decades of experience helping brands grow online. But his talent for creative problem solving started long before his career. Okay, so apparently, and there’s such a story behind this, in high school Dale didn’t just think outside the box, he built a whole new one, founding his own religion to challenge conventional thinking. How do you found a religion? He once dominated a cow-milking contest without ever having set foot on a farm before, and later dazzled a crowd by winning a talent show with a full-blown circus act. Dale, can we be friends? What’s going on with your life, man? Today, he brings the same fearless ingenuity to the world of SEO and AI-driven marketing, presenting SEO in the Age of AI: How to Future-Proof Your Strategy. Please welcome Dale Bertrand.
Dale Bertrand: Excited to be here for my first ever SEO Week. I’m gonna be here talking about AI, as you guys know. I run fire&spark. We’re an SEO agency up in Boston, and as you just learned, I started a religion in high school. That is true. When I shut it down because it started to get a little dangerous, one of my friends was like, but Dale, now what do I have to believe in? And I realized how easy it is to start a religion. All you need is a set of beliefs, and those beliefs might be that duplicate content penalties are real, or that the longer your SEO content is, the better it will rank, or that AI is going to automate the SEO work that we do and put us all out of work. Or, if you’re an agency owner like me, out of business.
But today I’ll be talking about AI agents for SEO, which I call SEO agents. Now, the mistake that I made early in my AI journey – my AI journey actually started back in grad school. I studied AI in grad school; it was computer vision research in a lab at my school. That was 20 years ago, and I’ve been following the field ever since. So before ChatGPT, when generative AI started to become a thing, we started using it at my agency. And what I thought at the time was that we would automate the SEO work that we do. We went down that path and quickly learned that automation just isn’t the right way to think about this technology. Instead, we now think of it as a helpful tool, and we’re building these AI agents that can run off and do things on their own in an automated fashion, but they require humans to train them. They require human expertise to vet their work, to evaluate it, to figure out what’s usable and what’s not.
So we’re headed down a completely different path now than we were two and a half years ago when we started using generative AI for SEO, and I can tell you it’s working. That’s what I’m going to be talking about today. Now, you already know who I am. Studied AI in grad school, built a supercomputer for the NSA; that’s a story best told over a beer, but it was monitoring people. And I run fire&spark. We have clients. All of our clients have logos. We’re up in Boston. This is our team at our team retreat in Lisbon. Fabulous team. If you want to join, we’re hiring.
Now, when we started our AI journey at the beginning of 2023, my goal was to improve our productivity by 25% every year. I’m gonna show you how we did that, where we ended up, and how we’re doing it now and how it’s working. We did the obvious things you guys are probably doing: choose some AI use cases, experiment with some AI tools, calculate cost savings. Because we measure hours, we’re able to quantitatively see how these AI tools are helping us improve our productivity. But of course quality matters. We want quality to improve, or at least not be affected at all. And then we update our internal processes.
Now, in the very beginning we identified 17 tasks to accelerate. We do SEO work, so we were able to save a number of hours per analyst basically just by prompting, which is all we were doing at first. Just some examples here: we send agenda emails to clients, we write next-steps emails after client emails, we develop content briefs. We were very quantitative and structured about how we did this work, basically prompting AI in the beginning, tracking what tools we were using and all that good stuff. The result we saw in the first year was a 30% improvement compared to our baseline of how long it took us to do our work in 2022. What this means is, if you’re an SEO analyst doing keyword research (for us, we do these deliverables for our clients), let’s say hypothetically it takes you 10 hours to do that work. In 2023, using ChatGPT and prompting, we could get it done not in 10 hours but in seven. Now we have more automations that we use, plus custom GPTs and other types of AI assistants, and we’ve been able to cut that time in half. Our goal for this year is another 25%, so that 10 hours of keyword research in this hypothetical example would actually take two and a half hours using the SEO agents that I’m gonna show you today, and I’m gonna show you how we built them.
Now, one really important point is that we’re not actually spending less time. We’re using that productivity to do higher quality work and new analyses, to do keyword research and add intent analysis on top of it, and all that good stuff. We’re also using the time savings to do more R&D, which is how we built out what I’m presenting today. Now, I want to understand from the folks in this room where you’re at in your AI journey. For us, we started out prompting a large language model, basically using ChatGPT. That’s the apprentice level. Then we upgraded our skills to technicians, where we’re creating automations; for us it was in make[.]com and Zapier and now in Cassidy AI. These are automations for things like client onboarding. Then we started building assistants like custom GPTs, and now we’re deploying AI agents. So I’m really curious where you guys are at in your journey. How many of you are at the apprentice level, meaning you’re doing ChatGPT prompting? I see a couple of people. Technician level, you’re creating automations now? I can see a few people there. Master would be that you’re creating and deploying custom GPTs. And how many of you are deploying AI agents? Anybody there? One, two... okay, three people there. So this is a good distribution; this crowd is at every level. For us, we were apprentices in 2023, creating automations in 2024.
The beginning of this year, in Q1, we really focused on custom GPTs and assistants in Cassidy, and now we’re just getting started with deploying these AI agents. That’s where we are. One really important point is that the AI we’re using is trained on everything that’s been written on the web, so it’s pretty generic information, especially when it comes to SEO. It includes a lot of generic SEO advice, advice that isn’t very useful to those of us who actually know what we’re doing when it comes to SEO. So what matters when you’re using these tools is the expertise and the data that you can bring to the table and integrate with the large language models to get them to do actually useful work. That’s very important. I always think of AI as the world’s smartest intern: super smart, but they don’t know what the F they’re doing because they’ve never done this work before. If I were to hire a Harvard intern, a super smart person, I wouldn’t put them in front of an SEO tool or in front of a client on day one. They need training, and that’s important. The SEO agents that we’re working on are not going to produce very good work at first. They’re going to get it wrong. But we need to put the time and effort in to train them, and they become very, very useful over time. Now, like I said, we started at my agency with layered prompting. You guys have probably already done this; we got pretty far, a 30% productivity increase, just focusing on that. Then we did our automations.
This is our client onboarding in make[.]com. Then we graduated to assistants: custom GPTs, Gemini Gems, Claude Projects, and Cassidy AI assistants, just trying to figure this out, and this helped us a lot too. I looked at the top 25 assistants at my agency and what we’re using them for. We’re using a lot of them for content creation and social post creation, and then we’ve got some for strategy and technical SEO. That’s the distribution of what we’re using the custom GPT AI assistants for. Now, today I’m going to ask you guys to make the leap from pushing buttons in AI tools to training agents, and I’ll show you the foundational technology. This is really the most important concept to get out of my presentation today, which is about large language models and how they can do tool calling. What we’re used to, using ChatGPT or any large language model, is that the model predicts the next word. And we’ve gotten really far using these models for all sorts of things where they’re generating words for us: answers, outputs, content, whatever it is. But it turns out that many of us aren’t aware there are also tool-calling capabilities within large language models. Tools are things like when you use ChatGPT and it searches the web, or uses the code interpreter, or generates an image. Those are tools that are not part of the large language model, but it’s able to call them. So when a large language model has tool-calling capabilities, which all the modern ones do, not only can it predict the next word, it can call one of these tools. And it turns out that this is the important thing we need for AI agents. You guys are already using this. If you open up ChatGPT and you type in something like, who won the NBA Finals in 2024? Search it up for me. It will do a search, and when it’s doing a search it’s using a tool. It’s not just the large language model making up an answer. It’s using a tool, getting an output, and then giving you an answer based on the output that tool gave it. In this case the tool is a search engine; it’s using Bing.
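To make tool calling concrete, here is a rough sketch using the Anthropic Messages API, since the talk centers on Claude; the tool name, schema, and question are invented for the example.

```python
# Sketch of LLM tool calling: the model can emit a tool_use request instead of
# answering from memory; we run the tool and feed the result back as context.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "get_search_console_queries",  # hypothetical tool
    "description": "Return top queries and average positions for a Search Console property.",
    "input_schema": {
        "type": "object",
        "properties": {"site_url": {"type": "string"}},
        "required": ["site_url"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Which keywords is fireandspark.com ranking for on page two?"}],
)

for block in response.content:
    if block.type == "tool_use":
        # The model is asking us to run the tool with these arguments.
        print("Model wants to call:", block.name, "with", block.input)
```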
Now, when we’re building these agents we want to make sure they have a bunch of useful tools. These tools will be things like access to Google Analytics, access to Search Console, access to the Google Drive where my agency stores all of our projects and documentation. And Claude happens to be an app that has really good tools that are super useful. We use their Google Drive integration, which is a tool in Claude, and also Gmail and Calendar. We really like using Claude, but I’m going to show you what we’re doing in a different app. So these are the tools we’re using; we get data from DataForSEO. Just having access to these tools within your large language model makes it so much more useful.
MCP is Model Context Protocol. Now, it turns out that some of the tools these large language models can use are built into the platform, just like search is built into ChatGPT. But what Model Context Protocol allows us to do is use external, third-party tools – SEMrush will be able to do that soon, but also DataForSEO now, and Google Analytics and Search Console – which makes these large language models and the AI agents we’re building much more powerful than just ChatGPT. The reason we’re using Claude is that Anthropic, the company that built Claude, invented Model Context Protocol, so it has the best integration with all of these new tools that are now available. There are some tools, like the Google Drive tool, that are built into Claude. There are other tools that are MCP tools, which might be running on your computer or running remotely. Those are the three types of tools we’re using. For our setup, Google Analytics, Search Console, and DataForSEO are tools running in the cloud that we’re giving our AI access to.
Now I’m introducing Windsurf[.]ai, which is a different tool. Windsurf[.]ai is a coding tool; if you were writing code, you would use the Windsurf editor. I’m a coder, so I’m used to using these types of tools. The powerful thing about Windsurf is that it has native access to these MCP tools like Google Analytics and Search Console. It has some of them built in, which is awesome, because ChatGPT doesn’t have that. It also has an agent they call Cascade. Cascade is a smart agent that you can prompt. You can tell it to do keyword research, or write a monthly report. Or, as I did, tell it to look at the latest blog article on the iPullRank blog, grab an outline, and then write a new version of that article based on everything I’ve ever said that it can find online. And it works. It just does it.
The reason we’re using Windsurf is that it plugs into these MCP tools and it has this AI agent, Cascade, that works really well. Within Windsurf, we gave it access to Google Analytics and Search Console. There’s another one called sequential thinking that forces it to think through a plan before it actually starts to do the thing you ask it to do, which is very useful. And then we gave it access to Google Drive. I asked it what my team has been doing for a client, and it just goes through all the documents in Google Drive and gives me a one-paragraph summary of what they did last week, which is really cool.
As an example, the MCP server we have access to in Windsurf for GA4 has one tool that’s very useful and that my agents end up using a lot: the GA4 run-report tool. It runs a report with whatever filters and parameters the agent decides to use, grabs all the output just as if you were using it through the UI, and then the agent can go from there and do what it needs to do with that output. A lot of what I’m using these agents to do is grab data from different sources and run an analysis.
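For context, here is roughly what such a run-report call looks like under the hood via the Google Analytics Data API; the property ID, dimensions, and metrics are placeholders rather than Fire&Spark’s actual configuration.

```python
# Roughly what a GA4 "run report" tool does behind the scenes (google-analytics-data).
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import RunReportRequest, DateRange, Dimension, Metric

client = BetaAnalyticsDataClient()

request = RunReportRequest(
    property="properties/123456789",  # hypothetical GA4 property ID
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    dimensions=[Dimension(name="landingPagePlusQueryString")],
    metrics=[Metric(name="sessions"), Metric(name="conversions")],
    limit=50,
)

report = client.run_report(request)
for row in report.rows:
    page = row.dimension_values[0].value
    sessions, conversions = (v.value for v in row.metric_values)
    # This tabular output is what gets handed back to the agent as context.
    print(page, sessions, conversions)
```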
So here’s an example. Imagine you wanted to do keyword research and you wanted this agent to help you. There are many things you would do, but I’m going to choose three. One would be to list the current keywords your website is ranking for. When you told it to do this, it would use one of these MCP tools: the Search Console search analytics tool, which grabs data from Search Console, the same data you would get if you were using the UI and running a report with a specific set of filters. A next step might be asking it to find the difficulty for each of those keywords. It uses another tool, the DataForSEO tool, to get keyword difficulty; that grabs a spreadsheet and dumps it right into the tool so the agent can use it. Then you might want it to find the top converting pages on your website, and it uses a different tool for this, the GA4 run-report tool, which grabs a GA4 report and dumps that into the large language model. All of that is context, which is why it’s called the Model Context Protocol. Now, I’m about to show you how it looks when it runs. What I did was take this slide when I was building this presentation and think, oh cool, I’ll give this slide to Cascade in Windsurf and tell it to just do what’s on the slide, on my website, to see what happens. You can see the slide is up there because I gave it the slide, and I told it to just do the steps in the slide. I told it not to use sequential thinking because it would run slower. Now, when I hit go here, this thing’s just going to scroll, because I didn’t want to force you guys to sit through a demo.
What you’ll see is it thinking through what it should do next, and then calling a tool, which is list properties in Search Console. What it’s doing there is figuring out which website properties it has access to, because it decided it should probably figure that out before running the analysis. So when I hit go, it’s just gonna scroll and I’ll tell you what it’s doing. It’s using the tool. It’s thinking about the properties it just got. Now it’s grabbing some data from Search Console for the Fire&Spark property; here’s some of the data it got, though it’s not showing all of it. Then it’s using another tool to grab search difficulty, and it grabbed that for a bunch of keywords it decided to choose. The next thing it’ll do is find the top converting pages, because I put that on the slide. Now it’s using another tool, which is a Google Analytics tool; this is the call to the tool behind the scenes. And, oh, that’s interesting, it failed. It said, that must have been the wrong tool, so it went back and tried the GA4 tool again with corrected parameters, and now it has its results. Now it’s giving me the outputs it found. For the first task it found a bunch of keywords, it did the difficulty analysis, and it found the top converting pages. This is very, very simple in terms of me just saying, hey, do the three things that I gave as examples on this slide. And you can see how it just kind of runs when you ask it to do things.
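Condensed into plain Python, the three-step flow above looks something like this; each helper is a hypothetical stand-in for the corresponding MCP tool (Search Console, DataForSEO, GA4), not a real client.

```python
# Hypothetical stand-ins for the MCP tools the agent called in the demo.
def search_console_queries(site: str) -> list[dict]: ...
def keyword_difficulty(keywords: list[str]) -> dict[str, int]: ...
def ga4_top_converting_pages(property_id: str) -> list[dict]: ...

def keyword_research(site: str, property_id: str) -> dict:
    # 1. Current rankings from Search Console.
    rankings = search_console_queries(site)

    # 2. Difficulty for each of those keywords from DataForSEO.
    difficulty = keyword_difficulty([row["query"] for row in rankings])

    # 3. Top converting pages from GA4.
    converters = ga4_top_converting_pages(property_id)

    # Everything returned here becomes context the model reasons over.
    return {"rankings": rankings, "difficulty": difficulty, "top_converting_pages": converters}
```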
Typically, the way I’ll use it is to give it a website and say, find some SEO quick wins for this website. That works really well because it’s a structured analysis; it knows what to do, and that’s really important. What human SEOs are good at is creativity and vision, basically thinking at a higher level about what the strategy should be. What these SEO agents are really good at is structured analyses you’ve trained them on. For example, when I first start working with a client, I like to look at the website for quick wins: pages that are ranking on the second page of Google search results for a keyword that’s really valuable, or maybe a page that’s getting a lot of traffic but not a lot of conversions, or a lot of conversions but not a lot of traffic. This is a structured analysis where I can basically tell it how to do that type of analysis, and then every time I ask it to find some quick SEO wins for fire&spark[.]com or some other website, it just does it. That’s the type of thing this is good at.
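Here is a sketch of that quick-wins analysis as plain Python, just to show how structured it is; the thresholds and field names are arbitrary examples, not the agency’s actual rules.

```python
# Sketch of a "quick wins" structured analysis over already-fetched data.
def quick_wins(rankings: list[dict], pages: list[dict]) -> dict:
    # Pages stuck on page two of Google for a valuable keyword (positions 11-20).
    striking_distance = [
        r for r in rankings
        if 11 <= r["position"] <= 20 and r["search_volume"] >= 500
    ]

    # High traffic but weak conversion, or converting well on little traffic.
    high_traffic_low_cvr = [p for p in pages if p["sessions"] > 1000 and p["conversion_rate"] < 0.5]
    low_traffic_high_cvr = [p for p in pages if p["sessions"] < 200 and p["conversion_rate"] > 3.0]

    return {
        "striking_distance_keywords": striking_distance,
        "optimize_for_conversion": high_traffic_low_cvr,
        "grow_traffic_to": low_traffic_high_cvr,
    }
```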
So we’re not automating our SEO tasks; we’re really coordinating with SEO agents, and that’s important, because this is not automation. You need to be an expert to prompt this thing. It’ll make you a lot more productive, but you need to know what you’re doing, because you need to know what to ask it for. You need to train it on these structured analyses. And you also need to evaluate its outputs when it makes mistakes or gets stuck.
So I’ll show you in Windsurf. The reason I like Windsurf is that it has these interfaces, these tools, built into it. You can just select the ones you want (not all of them, and not all the ones we need, because Windsurf wasn’t built for marketing; it was built for coding). And then the real trick for us, because we were spinning our wheels trying to get this stuff to work since it’s new and doesn’t work all the time, is a site called pipedream[.]com. Instead of setting up and running our own MCP servers to get this level of access to Search Console and GA4, we figured out that you can just go to pipedream[.]com (we haven’t paid them a dime, I don’t know why), press one button to start the server, and it gives you the URL. You plug that into the Windsurf or Claude configuration and it just works. This is what I wish somebody had told me two months ago when we started thinking about this stuff.
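For reference, wiring a remotely hosted MCP server into a client config looks roughly like the snippet below; the exact file location and key names differ between Windsurf and Claude and across versions, and the URLs are placeholders, not real endpoints.

```python
# Illustrative only: writing a config that points a client at remote MCP servers.
import json
import pathlib

config = {
    "mcpServers": {
        "google_analytics": {"serverUrl": "https://example-mcp-host/<your-id>/google_analytics"},
        "search_console": {"serverUrl": "https://example-mcp-host/<your-id>/google_search_console"},
    }
}

# Substitute your client's actual config path; key names may also differ by app/version.
path = pathlib.Path("mcp_config.json")
path.write_text(json.dumps(config, indent=2))
```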
Now, the process of training it is important. One thing I believe is that if your current job is to do a lot of SEO analyses, your job is about to change, and I’m sorry that I’m the one who has to tell you this. Your job is going to transform into training SEO agents like this to do those structured analyses. The way I’m training my agents is I tell them to do something, like keyword research for whatever website, and it’ll do a bunch of things: it’ll pull the data it thinks it needs and do the analyses it thinks it should do. Sometimes it gets it right; in the beginning, it usually gets it wrong. So I have this file with guidelines on how to use the GA4 tool correctly, and I have SEO recommendation guidelines: don’t just rely on one analytics source, use both Search Console and GA4 when possible, things like that. Where these guidelines came from is every time I saw it make a mistake, because in the beginning its outputs were shit. Just like when I hired that Harvard intern who didn’t know jack about SEO but was a really smart guy.
So I gave it an SEO strategy playbook. This is the prompt that I gave to ChatGPT asking it to generate some structured analyses. ChatGPT gave me 10 different analyses, like how to find quick SEO wins, and I looked through them. There were three where I thought, yeah, I want my AI agents to be able to do these. So I pasted those into the guidelines file as part of the training for my SEO agents.
This is another scroll; I’m not gonna play the whole thing so I don’t go over time here. When this thing is working, what I asked it here was: find some quick SEO wins for fire&spark[.]com for searches with intent related to converting more organic traffic. I’ll start scrolling, but what you’ll see it doing is some sequential thinking, which is planning out how it’s going to accomplish this task. Then it starts using some tools to grab analytics data from various sources. Then it runs some of these structured analyses against the data it was able to find, and it chooses some keywords to focus on and some pages to optimize. It gives some recommendations for optimizing those pages, and it also recommends creating some new pages on the site.
And I want to stress that some of the recommendations are good, some of them are not. But the value, and I can’t repeat this enough, the value is in the training. Everybody in this room has a decision to make, because you’re gonna try something like this someday. I have a PDF I’ll send you on exactly how to configure it the way we configured it. But when you try it, it’s not going to work. It’s going to give you bad advice. And that is the moment you get to decide: am I going to train this thing or not? If you train it, it’ll work like a dream. When it works, it’s like freaking magic. If you don’t train it, because maybe you’re like, ah, this thing’s shit, it just told me to keyword stuff my articles or something stupid, then you just won’t get any value from it.
Here’s a recommendation it gave. It’s kind of mediocre. It was basically telling us that we don’t have a Boston SEO agency page on our website, and we’re in Boston, so why don’t we have this page? I actually think it’s a good recommendation, but it’s not mind-blowing. So, controversial opinion, and I think I’ve already alluded to this. Pausing for effect.
Don’t get distracted by the quality of your SEO agent’s work. It’s going to fail. It’s going to give you stupid, stupid opinions and recommendations. That is not the point. As I said, the point is whether you’re willing to train it or not. Full stop. If you’re not happy with the recommendations (here I said this recommendation is too vague, because it told me to strengthen the jobs subdomain’s authority), I say: write instructions that would allow you to give me a better, more detailed recommendation next time. It writes the instruction, the guideline, and I put it in the guidelines file for its training so it always has that for next time. And then we’re dealing with errors. Sometimes there are technical errors. What I love about it is that, for the most part, it’ll fail because it’ll try to make a GA report using filters that don’t exist, because it made them up. When it does that, it’s like, oh man, that filter might not exist, let me try something else. It usually has to try one or two more times, and it’ll eventually figure it out. Oh, and I forgot about my muffin. This is the muffin, and this is the dog. I put this up here as a public service announcement to avoid any tragic muffin-baking accidents. I used to work on computer vision, and this was the kind of image we had a lot of trouble with. I intended to highlight the point that AI can get stuff wrong, so you really do need humans to make sure the recommendations it’s giving you make any sense whatsoever. We’re not automating SEO here.
Back to dealing with errors. When it has a technical error, like here where it’s trying to access a Search Console property that doesn’t exist because it has to put “domain:” in front of it for it to work, it’ll try again and get it right the second time, which is awesome, because it doesn’t really matter that it failed. But what I’ll do, and let’s see if I have this on the next slide, is at the end give it a prompt: give me some guidelines to help you avoid the errors you experienced during this analysis. Then I’ll put those guidelines in as a rule in my rules file for the agent. This is how I train it going forward. So this one was: always use the list properties tool before calling any other Search Console tool, to ensure that you have the property ID correct.
Great, that’s a nice idea. So you can see how we’re building up this wisdom and these guidelines over time, and eventually it freaking works. Oh, and here’s the slide: generate instructions to avoid the error experienced in this thread. And I put that in the guidelines. So, a little about training SEO agents: ask it to do something, watch it struggle, update its rules. This is your new job; I’m sorry. As for the skills you’ll need, you’ll see this when you grab my slides, but you don’t need to be a coder to set this stuff up nowadays, which is awesome. You do need to know a little bit about a few things, though.
Bianca Anderson, I just took a picture of her presentation because she gave us a framework, the heavy hitter framework, for doing a structured analysis. This is the type of thing these agents are wonderful at, because you can tell them exactly how to do it, they’ll get it right, and you can automate it. I’m gonna train my agent on this heavy hitter framework as soon as I get back to my office, because it’s perfect. So, in the beginning, you’re gonna do a lot of training and less evaluation of the outputs. Once you’ve got that guidelines file built out and the training in place, you’ll spend much more of your time just evaluating the outputs to make sure it’s not making mistakes, rather than actually doing the training.
This is my job description; you guys will have this in the slides. I put together a job description for the SEO strategist I want to hire: somebody whose key responsibilities are to build, test, and document SEO workflows, structured analyses, and frameworks. This person would train my AI agents on these SEO tasks and manage a team of AI agents. Based on the numbers we’ve seen measuring productivity across our AI deployment at fire&spark, this person would probably be five times as productive as we were at SEO analyses back in 2022. This is the job description for the person I need. I probably can’t hire this person, but I’ve got some teammates who will transform into this person.
But this is a jagged frontier. So you might be asking me, what should I be using my SEO agents for? And the answer is, I don’t know. There’s some stuff it will be good at, some stuff it won’t be very good at, and we kind of have to try things to figure that out. When you get the slides, there’s a bunch more here in terms of final thoughts about what I think the difference will be going forward between working with an SEO agent and working with a human to do your SEO work. The TL;DR is that we’re always going to need humans to run this thing, to train it, to evaluate its outputs. But the big thing is to communicate to clients or to other departments. This AI, once I have it trained up, is good at doing structured SEO analyses, but it can’t explain what that means to the PR team. It can’t explain to the developer what they need to do, or tell my content strategist how to update the content strategy based on what I learned. That’s what we need our human strategists for.
And that is my time. We’ve built out guides on how to set this stuff up, and a video, and all that. Grab this one, it’s one of the guides, but email me at Dale@fireandspark.com to get everything, because we have a bunch of other how-to-set-up guides that we put together for Windsurf just to make this work. And you don’t need to be a coder to make it work; that’s why I’m excited about it. Thank you for your time today.
Watch every SEO Week 2025 presentation and discover what the next chapter of search entails.
Sign up for the Rank Report — the weekly iPullRank newsletter. We unpack industry news, updates, and best practices in the world of SEO, content, and generative AI.
iPullRank is a pioneering content marketing and enterprise SEO agency leading the way in Relevance Engineering, Audience-Focused SEO, and Content Strategy. People-first in our approach, we’ve delivered $4B+ in organic search results for our clients.
AI is reshaping search. The Rank Report gives you signal through the noise, so your brand doesn’t just keep up, it leads.