The Rankable Podcast

AI Integrated Into Your Professional Life

Featuring Christian Ward

Don’t forget to SUBSCRIBE on YouTube if you enjoy the episode.

Episode Time Stamps

  • [0:00] Intro

  • [1:07] How do you use AI in your day-to-day?

  • [3:30] Will AI change user behavior?

  • [7:30] The exponential advancement of AI and the frictionless abstraction layer

  • [11:15] Will brands and digital ecosystems be siloed in the future?

  • [14:50] Implications of AI on marketers – monologues vs. dialogues

  • [18:15] Data privacy in the age of AI

  • [21:40] Thoughts on IP copyrights – the NYTimes lawsuit against OpenAI

  • [24:31] Rapid Fire Rankings

In episode 130, Christian Ward joins Garrett for a deep dive into the application and impact of AI in both personal and professional spheres. Christian is bullish on AI in daily operations, highlighting its power to enhance learning, decision-making, and efficiency through real-world applications. He discusses leveraging AI for everything from understanding data partnerships to automating tasks and analyses, underscoring the importance of practical usage over theoretical knowledge to unlock AI’s potential.

Christian and Garrett explore the evolving nature of digital ecosystems, predicting a future where conversational AI breaks down the barriers of traditional user interfaces, making technology more accessible and intuitive.

Christian foresees a significant shift in how marketers and businesses approach customer engagement, moving from a reliance on monologues to embracing dialogues facilitated by AI. This change, he argues, will revolutionize personalization, making interactions more meaningful and customer-centric, thereby enhancing the consumer experience in unprecedented ways.

The conversation touches on the broader implications of AI on privacy, content creation, and the digital landscape, suggesting a future where AI not only simplifies but also democratizes access to technology. Christian believes personal AI assistants will manage our interactions with brands and services before long, ensuring privacy and efficiency.

Both agree that the integration of AI into daily life and business is inevitable, urging everyone to embrace this transformation proactively. 

Listen on your favorite platform

THE RANKABLE PODCAST

This Week's Guest

Christian Ward

Title: Chief Data Officer of Yext

Bio:

Christian Ward is the Chief Data Officer at Yext and a data strategist focused on the proper data approach to AI. He is the author of the Amazon #1 bestseller “Data Leverage: Unlocking the Surprising Growth Potential of Data Partnerships.”
🔥 RAPID FIRE RANKINGS 🔥

Top 5 Changes to Marketing

  • Cookie Deprivation

  • Two Choices – Tracking or ZPD

  • SEO Content Generation & Tools

  • From Monologue to Dialogues

  • CMO Metrics Overhaul

Top 5 AI Expectations

  • AI Efficiency at Work

  • Prompt Inversion

  • AI Model Convergence

  • Conversational UI’s Explode

  • Personal AI Usage

Top 5 Tools to Try to Learn

  • ChatGPT: Best all-around tool to learn what’s possible

  • Opus: Brilliant for cutting videos and creating AI posts

  • Swell: Video or Audio AI tool for posts

  • Reflect: Note-taking application and Personal Knowledge Graph

  • Anthropic’s Claude: Excellent alternative to GPT

  • (Shameless Plug): Yext

Top 5 People to Follow for AI

  • Ethan Mollick (Academic & Theory)

  • Rowan Cheung (General AI)

  • Paul Roetzer (Marketing AI)

  • Allie Miller (AI Applied)

  • Cassie Kozyrkov (Decision Science and AI)

 

Transcript:

Garrett Sussman: Welcome back to another episode of the Rankable Podcast. I’m your host, Garrett Sussman of iPullRank, and I am hyped because this is going to be another one of those really fun AI deep dives.

Today, I’m being joined by none other than Christian Ward, the Chief Data Officer at Yext.

This dude has literally written the book on data leverage. I mean, that’s the name of the book: “Data Leverage: Unlocking the Surprising Growth Potential of Data Partnerships.” It came out about five years ago; check it out on Amazon. He and his brother have made so many data partnership sales, and data really is the name of the game when it comes to AI.

So, we’re going to talk everything AI today. What’s percolating in his brain? Christian, how are you doing, man? Thanks for joining me.

Christian Ward: Hey, Garrett. Nice to be here. Thank you so much.

Garrett Sussman: So first off, I have been seeing you have these conversations with Paul Roetzer and your brother, talking about AI, and some of your posts on LinkedIn immediately make my brain explode. So, the place I would even want to start is, how are you using AI in your day-to-day?

Christian Ward: So, one of the best ways I would say is probably everything, everywhere, all at once.

So, literally, I don’t think there’s any way not to use it. I think realistically, I’m trying to leverage it in as many places as I can, and I’m doing that mostly because I’m trying to understand what are the ways it can be applied.

This is not one of those things you can read an article and think, “Oh, I get it. Yeah, that makes sense.”

You can’t do that. You actually have to use it, and I think what’s kind of magical about it is as you use it, it unlocks all these other ways of thinking.

For example, I’m a big fan of the Feynman technique of learning. Richard Feynman, the renowned physicist, was absolutely one of the most brilliant minds of all time, and the Feynman technique is this: if you really want to learn something, choose a topic, explain it to someone else like they’re a 5-year-old, and then continue from there, refining and improving.

So, I was like, “I wonder if I could do that on my morning walk because I have a speech to give later on about marketing and AI.”

So, I pull up ChatGPT while I’m walking, and I tell it, “Hey, I want to engage in the Feynman technique. Can I teach you something, and I want you to just listen, and then after that, I want you to fully interrogate the entire thing, check your work, all those things.”

And it came back with this brilliant analysis of what I taught it clearly and well versus where I really needed work, and it mirrored what I thought. But otherwise, I would have to go grab somebody, a human, and say, “Hey, I need you to go on a walk with me.” Just that sort of thing, where we’re using it as a listener, not just as an alternative search engine. That’s the type of thing where I’m using it as much as possible to see what it’s capable of, and then I take it and apply it in my professional life.

So, I’m really trying to get people to leverage it personally and professionally, because the professional is inevitable; it’s going to happen. But the personal is something you can get ahead of right now, if that makes sense.

Garrett Sussman: Yeah, and I mean, to that point, there’s something natural about the conversational nature of AI that we’re not used to. You hear the phrase “voice search,” and you’re like, “Okay, I can ask Siri,” but Siri can’t do that at this point, and obviously, Apple’s working on it. It speaks to this idea of behavior change. Has it become more natural for you to use the tool the more you use it?

Christian Ward: It’s more natural, but I think we’re thinking about this all wrong. I see people talking about it this way, and it’s reaching frustration levels for me, because you’re not adopting this; you’re not changing your behavior.

In fact, I would argue this is the unshackling of abnormal behavior that we have done for 20 years, which is instead of going on a walk and realizing I need new sneakers and going, “Shoes, Nike, men’s, near me, size nine,” like nobody talks like that.

But if I could just be talking naturally, like, “Hey, you know what? Later on, I need to go by the mall, and I want you to show me exactly where to park. I want to pick up some new Nikes,” and I just talk, that’s not adoption; that’s actually freeing us up.

So when we talk about adoption, humans adopt things behaviorally with technology for very specific reasons.

There’s a theory by Fred Davis from 1989. It was his doctoral dissertation, and it’s absolutely famous. It’s been revised many times since then, but it’s called the Technology Acceptance Model, and in academia, it’s one of the OG theories you point to. It builds on the theory of planned behavior, and it says people adopt technology for two reasons. The first is how useful they perceive it to be, and the second is how easy it is to use.

That’s it. Perceived utility over perceived ease of use.

But with conversational interfaces, the perceived ease of use, the cognitive burden of learning a new tech, is going to zero. You and I have never lived in a world like that. For example, how are you at Excel? Are you like, “I’m okay”?

Garrett Sussman: I’m okay, and I use it frequently. Spreadsheet-wise, I know how to build a spreadsheet and everything else, absolutely, yeah.

Christian Ward: Okay, all right. So when you’re in a meeting, typically, there are certain people... I used to be in investment banking, so Excel was our religion, right? So I’d look around the room, and I’m like, “They can Excel, they can Excel.” There are certain people you can tell, that’s how they think.

But imagine now going in and speaking as, let’s say, a search engine optimization strategist or as a marketer, and saying something to this effect: “Listen, I’d like to build a spreadsheet where you put in all the formulas, but what I’m trying to learn is this: I want to track, based on Google Analytics and then our own dialogue analysis, what topics are working, and I don’t really care about the fat-head topics.

I’m looking for the long tail, and could you tell me what we’re specifically mentioning in those posts that’s working?” And it starts populating the data. Everyone becomes amazing at Excel. So you have to take a step back and go, “Well, if it used to be about how easy it is and how useful it is, and now everything’s easy to use, then the only technologies that will actually be adopted in the future are the ones that have the most utility.” And that’s amazing.

And I just, again, I don’t think people quite grasp just yet how big that is. You’re not adopting this technology. It used to be, I like to say, it used to be that computer-savvy humans had an advantage, but now that the computers are human-savvy, that advantage is being democratized. Anyone can leverage it, and I think that opens up a world of entrepreneurial and business opportunities.

Garrett Sussman: Oh, I mean, to that point, think about the jump from MS-DOS to a graphical user interface with Steve Jobs; that was removing some friction. With what you’re talking about, I think the issue right now is there’s a novelty to the ChatGPTs and the DALL·Es of the world, but it’s not good enough yet for a lot of people. And like you and I know, we’re at the starting point. Even though machine learning’s been around for, you know, 15, 20 years, we’re at a starting point for the mainstream consumer, where it’s not Her, the movie with Joaquin Phoenix, where you can just have that conversation, or talk to the spreadsheet and feel confident that the results are accurate.

Christian Ward: Yeah, I mean, until then, I think so. That’s a great point, right? This is the beginning of it, but on a geometric curve, I would say we’re no longer here; we’re here. That’s the difference in the rate of change: something that took 12 years is now going to take 15 minutes. So I would tell you that I think we’re going to be there very fast.

In fact, everyone probably watched it, or if you didn’t, and you’re not as geeky as Garrett: there’s this company called Rabbit that launched the R1, and it’s a device whereby you can speak to it, and it gets things done for you.

And I want to say the keynote was 22 minutes, and I snipped a 160-second part of it where he explains something called the Large Action Model.

There have been two other similar announcements by other companies since then, but the Large Action Model is this: just as an LLM analyzes how we converse, the LAM analyzes how we interact with user interfaces. And what he’s saying is, you could now say to the Rabbit, and I don’t think it’s there yet, but the idea here is I could say to it, “Hey, can you go to Excel? I have a spreadsheet on the SEO metrics, and I want to graph it, but I want to put it in Canva, so then go to Canva and paste that in, and then make sure you upload it to my blog that’s at Webflow.”

And what it’s doing is because it has analyzed how you click on these interfaces to do these things, you walk away for three minutes, and it manages the interface. See, that’s a really big deal because what he’s saying is, I don’t care if you have an iPhone or an Android or Excel or Google Sheets. I’ve analyzed billions of human interactions with the software, so he’s abstracting that. You don’t need APIs to do this. He can actually mimic the human engagement, and so when that happens, what you’re saying is, hey, it doesn’t really quite get there, and I’m like, yeah, it’s getting there.

It’s going to be very fast. I think Google and I think Apple are going to launch things this year that just blow people’s minds, but the adoption on this, again, it’s not like adopting your first iPhone.

It’s very hard to explain this, but you have to go back to Piaget’s theories of childhood development. We’re talking about, from the age of zero to two, you’re babbling, right? That’s keyword search. That’s babbling, like, “financial adviser, Redmond, New Jersey, near me,” right? It’s gibberish.

But from two to eight, it’s a different level. It’s actually when children start to have full conversational capability. So that means before we could read and write, you will soon be able to talk with and engage with every technology.

That’s a big deal.

There are two stages after that with AI. I think it’s going to get really scary, but I think it’s following childhood development patterns. Again, they’re time-honored, and there are problems with every theory, but if you really want to know where this is going, please understand we’re merely mimicking our own development. And so this is a great way to frame it.

Garrett Sussman: That’s nuts. And it’s funny, because I come in with some questions, and before I even ask, you’re already changing my perspective. I was going to ask you how you anticipate different digital ecosystems like Apple and Google, and you mentioned Amazon; they have their own moats, at least they did. You’re implying that the boundaries will not exist anymore with these AI devices. Do you think that will be the case, or do you think these brands will figure out ways to isolate themselves and require consumers to only use them?

Christian Ward: Yeah, so as for the walled garden theories around data marketplaces and data practitioners, I would say: take a situation like the R1 and the Large Action Model, and that’s just one example, but if you have something where there is an abstraction layer over normal interfaces, and the machine can mimic your mouse movements and clicks, which we have; we’ve been doing this for years, actually. SEOs know this really well, right?

Because there are fake clicks; there are all these things. If you have that layer, then you could see abstraction at that level. I think people will try that first, but I think if Google and Amazon and Apple launch something that really lets you leverage the data you have with them, they’re going to be able to capture a massive portion of the market, and I think that’s what everyone should expect. For example, my nine-year-old son talks to his Amazon device. I won’t say her name because she’ll start talking to us, but he talks to her all day: Greek mythology, you know, how far is the moon from the sun, all these things. Half the time he asks a question, she has to say, “Well, according to Google.” Do you think Amazon wants to say, “According to Google”?

Absolutely not.

They all want their own knowledge. They want their own objective, both branded and unbranded knowledge, and that’s what they’re building. The more they have that layer, the more they can use the data that you’ve already agreed to share with them, your prior purchases, your emails, your schedule, your calendar, all that stuff, to really help you get things done.

And so I think the internet was like this ideal of we can access knowledge. It’s not. The internet is access to content.

What this is doing is, it’s giving us access to knowledge, which gives a causal relationship between what I want to know and what I want to do, the how and the why.

And I think that it won’t be theirs alone, and certainly their business models will change; I think Google, from an advertising perspective, is really in a pinch. But I could see Google coming to you and saying, “Listen, I power everything for you. Your Gmail, your calendar, your this, your that. Pay me 20 bucks a month, and I will make your life infinitely better,” and people will pay, right?

On the other side, the ad model is going to need to change. It’s going to have to switch from ads to offers, which is a whole different discussion. But to me, I think some of these ecosystems are so natural.

Like Apple’s commitment to privacy, I love it, but like you said, they’ve got to get conversational AI working there, and as soon as they do, I think they’re going to reap wonders because they’re paid with the hardware purchase. So we’re really going to see a fundamental shakeup of the way the models work. But where it goes beyond that is there could also be where I have my own R2-D2.

So look, two years from now, where I have a personal AI that talks to those AI or brand AI, and I say, “Hey, R2-D2, can you talk to Disney?” I’m acting like he’s right here, but “Could you talk to Disney and reach out and find out if we can take the Brightline up there and stay for the weekend from where we live here in Boca?”

It will do the entire dossier, and Disney will have no idea who’s asking that question. So the privacy will get abstracted one layer further. That is the natural distribution of these capabilities over time.

Garrett Sussman: That’s a challenge. It’s funny: on the one hand, I want to ask you about privacy and your thoughts there, because that’s really interesting, but I am curious about what you were saying about content, branded and unbranded, and walled gardens.

What are the implications of the ad revenue changing for Google, for content creators, publishers, and businesses? I’d imagine you’ve thought about the implications of that.

Christian Ward: Yeah, I think there are a few things.

Number one, content will still be absolutely vital, but I think we’re going to be living in this world of monologues and dialogues. To some extent, we live in a world of monologues today, and what I mean by that is brands or information services, news agencies, they’re pushing out monologues.

It’s not a dialogue with the consumer; it’s a monologue. The fallacy of the internet for the last 20 years, in my opinion, is that everyone looks at it and goes, “You know what we need? More monologues, because we’ve got to show up in Google Search,” right?

Centralized search ecosystems force the infinite expansion of monologuing, right?

And I think of evil monologues, like villains in movies; that’s what a monologue is there for. It’s their time to tell you why they’re doing what they’re doing. Okay, that is going to start to give way to an actual user experience that is a dialogue. It’s literally where every digital website says, “Hey, how can I help you today?” And I say, “Well, I’m looking for new shoes.” It goes, “Great, what are you into?” “Well, I’m into marathons.” “Great.”

That dialogue, in three smart back-and-forths, will tell them more than what a year and a half of cookie tracking incorrectly made them think they should show me on the homepage.

That’s just wrong, because every person suffers a little bit from dissociative identity disorder, which is to say, I’m a different person on this podcast with you than I will be in an hour when my nine-year-old gets home, right? We change.

And the context is not just the context of the discussion. The context is of the moment, right? And so that in-the-moment dialogue is infinitely better and infinitely more personal than all the crappy half-personalization monologues we’ve been pushing out.

And I think that’s really the change for marketers. We really need to think about this: you’ve still got to push out those monologues, because you’ve got to show up, but you need to start preparing to use dialogue. Not creepy tracking stuff.

Focus on the dialogue, the honest conversation, and the more you do that, it will tell you what new monologues to push out, because you’re like, “Hey, I didn’t know people wondered about that. Wow, let’s talk about that over here, so we attract more of them.”

So it’s a very simple process. Look, the SEO community has been so bright about this for so long. They’re looking at keywords, but ever since “not provided,” Google has basically said, “I’m not going to share that with you.” I know that gives every SEO the willies, like, “Ah!”, get under your desk in the fetal position. “Not provided” was them saying, “I’m not going to tell you this stuff anymore.”

And we need that long-tail stuff because that’s the next conversation to build a good monologue on to attract the next dialogue. So I think that’s the process it has to go through.

Garrett Sussman: Yeah, I mean, to that point, whether you’re talking about like SGE, however that rolls out, or some of the AI tools, you know, there’s not that attribution from the Google perspective. So, you know, SEOs, to your point, for marketers, there’s this fear, this frustration. I do want to tap into the privacy data element, though.

Based on what you’re saying, it feels like if you want to participate in this, you need to give up your data. Yes, it might be abstracted, but you’re going to be in the system. Is that just something we’ll all have to be okay with?

Christian Ward: Yeah, I mean, look, I think we have been trying to get the world to wake up. So, you saw The Social Dilemma and all these documentaries, right? And I think to some extent, they did a really good job, but that didn’t scare you enough to dismantle your Facebook or your Instagram, and let’s be honest, it didn’t, okay?

People are like, “Yeah, I know, my data privacy, my data privacy, but I want to see cat videos,” right?

That trade-off was not obvious enough. And I think to some extent, the reason it didn’t work is that when you have legislative bodies and courts, you’re trying to explain Shoshana Zuboff’s book, Surveillance Capitalism, to them: why, if the product is free, then the product is me, and the depth of a company being able to say, “We’re not selling the data; we’re selling an audience.” It’s totally different. It’s not.

And so the reality is, is that’s too abstract. But I’m going to channel my brother here.

For everyone that doesn’t know, my brother, James Ward, is a data privacy attorney. He’s also just an absolutely brilliant guy. He lives in Europe and works at the intersection of European and American data strategy. And basically, what he would say is that AI is sort of the “oh my gosh” moment for what data privacy was trying to do. And the reason he says that is, you’re going to be able to ask your AI, “Hey, what data did you just share when I did that?” And it’s going to explain it, right? It’s going to open that up.

You could say, “Hey, I want to plan a weekend at Disney, but I don’t want you to share any personal data.” And the companies that try to block that, they’re going to immediately bring down the wrath of the regulators. And here’s why.

Regulators, legislators, courts, they think by analogizing, as Jay would say: it’s as if you put someone in front of your store, and you didn’t train them well, and they ask inappropriate questions, and they’re kind of rude, and they don’t actually give the right information to a consumer.

We’ve had those laws for 400 years. Welcome to agency theory, right? If they don’t act as an agent, you’re at fault as the company. You could get away with this from a data privacy perspective as a big company using crazy technical terms in front of lawyers. You are not going to get away with one AI talking to another AI, with a consumer in the middle, asking, “What did you just share with each other?”

If you don’t transparently show what’s going on, you’re going to have a problem. So I think the wave of privacy really wasn’t big enough to break shore, but I think AI carries it through.

Garrett Sussman: And that’s a very likely outcome. It will take time, right? But it will get there. But it’s complicated. I mean, we saw a couple of weeks ago the start of the New York Times lawsuit against OpenAI, and the fact that these LLMs are already trained on existing data, like Common Crawl, that’s out there, and to your point, you can’t put the toothpaste back in the container.

And there are data poisoning issues. How does all this stay clean and above board? It just seems like a mess.

Christian Ward: So, again, I’ll go back to a debate I was having with my brother. It was fun; our families got to spend the holidays together in Scotland, and this is kind of like Thanksgiving dinner conversation for us. As we were talking, I was asking him about that, because Japan has gone on record as saying, “We are not going to have a problem with you training on any material.”

Japan is taking the stance that it’s no different from a young artist or a young writer learning by reading Hemingway and adopting portions of his style. Now, if you quote specifically, or you say, “Write this in the style of Dr. Seuss,” then you’re at fault, and you should be in trouble for that, but that’s not what’s really happening now.

Jay says the EU will not adopt that strategy, but now you have another tax haven problem, right? Because I can get a lower burden by training in Japan and deploying worldwide.

There’s just a ton of IP questions, and again, I’m way out of my depth here. I’m literally just quoting my brother, so maybe you should speak with him next. But the next step is, you have to look at it like this: if you’re the New York Times, and you put this information out there, or you’re an SEO writing these content strategies, there’s also a little bit of blame on all sides, right?

So if we want to protect journalism, then we have to solve this problem. But if you’re also an SEO, or you’re a journalist saying, “I want my article to show up,” well, you were okay freely sharing it to a centralized search ecosystem.

And so what then happens is, I guarantee by next summer, there will be a billion generated articles on summer gardening tips, okay? They’re all going to be generated by GPT. And in a way, you’re almost handing Google and OpenAI the legal basis to say, “But you’re just generating it anyway, so I’m going to generate it myself.” We have to look at the precedent that we’re setting. If you can tell me that no journalist uses AI for any research, any use case, anything, then you could say it’s fully human. Otherwise, I think you’re going to be on some shaky ground.

Like, they already showed those New York Times articles. I’m not picking on them. I really think they’re doing the right thing of challenging it, but it’s very clear that they are prompting, and it’s almost prompt injection. It’s almost a penetration attempt to get it to give them exactly what they want.

And I understand that they’re like, “No, it’s in here, and we know it’s in there,” but I’m not sure that’s the same thing as saying they owe you for every piece of content you ever created because that means every child that ever read the New York Times in college might owe you something too, and that’s not going to work.

Garrett Sussman: Dude, ah, there are so many big problems that are going to come with this, but there are so many opportunities. I hate to end on a scary note, because there’s so much more that we could geek out over, and obviously, we only have a certain amount of time, so I’ll probably have to have you back on at some point. But I do want to segue to the rapid-fire rankings, which we’re going to do as a special version.

Usually, we do an SEO version, but this is too much fun. Are you cool if we do an AI version of the rapid-fire rankings?

Christian Ward: Yeah, look, I have a ton of respect for how you run this on Rankable. It’s a great idea, but maybe we should stick to data and AI instead.

Garrett Sussman: Okay, well, then we’re going to do it your way, and I’m excited because I just love geeking out. I mean, I’m into this too. So, let’s put the music up, and we are going to get going. First off, Mr. Christian Ward, top five changes to marketing, go.

Christian Ward: I think the first one is cookie deprivation. So, that’s happening no matter what. We’ve all been waiting for it for, like, 27 years, it feels like. So, cookie deprivation is number one.

Number two, every marketer is going to have to make a really tough choice. With cookie deprivation, are you going to keep trying to get around the rules and still use some form of tracking or de-identification scheme, or are you going to embrace zero-party data, where the consumer dialogue is really how you run your personalization? I’m going to give you a hot tip. I would go with the second one, but sure, whatever you’ve got to do.

Third, SEO content generation and tools. The overhaul of SEO will probably be a major theme for marketing for the next several years. There is an academic paper I saw making the rounds on generative engine optimization. Everyone should read that. Take it, print it, highlight it, get to understand what’s happening there. And by the way, we should get away from “search engine optimization.” It should be search experience optimization, because it should be framed from the consumer’s perspective, not from yours. Remember why we do this. It’s not for you; it’s for the consumer.

Fourth, marketers are going to move from monologues to dialogues. So instead of just pushing out content, they’re going to split their effort and use chat, search, reviews, and social to engage in more dialogue, so they can learn the actual customer voice. Every brand says they want the customer voice, but they don’t have the tools for it. Now you do have the tools.

And lastly, and this one’s tough, I think CMOs are in for a really rough ride of rethinking all of the metrics they’ve grown so accustomed to. They’ve been able to see all these metrics, and they have this belief system around ROI that I would argue is probably not accurate. We know there’s a lot of bleed-over from one element to another, but I think if they start to focus on the metrics coming from those dialogues, they’re going to be thrilled with the insights into what people are asking at scale, instead of keywords and none of the long tail. So I think the metrics are going to really change. In other words, you shouldn’t be looking at impressions, right? SEOs and everyone know this, but you should also not just be looking at engagements. You should be looking at meaningful dialogue. It’s a completely different thing, and almost nobody does it today. But those would be my five big changes for marketing.

Garrett Sussman: That’s huge. Oh, so much to unpack there, but I feel like that could be a whole podcast itself. So interesting. Okay, what are your top five AI expectations? That’s fun.

Christian Ward: So, I’m more excited than accurate on any of these things, so take them all with a grain of salt. The first one is I think using AI in our daily work is absolutely happening this year. For those people who aren’t using it, you definitely should start thinking about these tools, because you’re going to be able to use them in your PowerPoint, your Excel, your Google Sheets, all of that, very quickly. Ethan Mollick has a great post on this about being careful: you send me an email, and I just hit the button, and the button rewrites it and sends it back, and then you hit your button, and suddenly we’ve had like 10 emails and agreed to something that no human actually looked at. We’ve got to be careful with that, but that’s going to happen. The big one I’m really excited for, which I don’t think has happened yet but is starting to, is what I call prompt inversion. Everyone got into, “Oh, you’ve got to learn how to prompt. Prompting is the future.” I’m like, “No, it’s not. Stop it. That’s adorable. It’s not the truth.” Prompting is going to invert. Right now, humans are prompting the machine, but in the next six months, the machine will prompt the human. And that opens up AI. It’s exactly what you were bringing up earlier, Garrett: some people don’t know what to do with this. And I’m like, “Yeah, but if it can help you know how to use it, now you’re talking.” And that’s going to happen. Next, I think you’re going to see AI model convergence. Now, there are going to be a lot of people upset about this one, but hear me out. Language asymptotes. Basically, the understanding of English, which is a very difficult language. Our idiomatic use is appalling. When I say you and I are on the same page, or if I say chili pepper, AI is like, “You mean like a cold pepper? What do you mean?” It’s very tough to understand.
So that’s all going to asymptote in the next nine months, meaning from a language perspective, I think you’ll see that almost all the models can understand language equally well, and that starts the decline in price. The real breakouts in the technology will be around multimodality. Can this AI see better than another? Can it hear better than another? Can it feel better than another? That’s where I think this goes next, because language will asymptote very, very quickly. The next one: I think conversational UIs just absolutely explode. They’re going to be everywhere. I already have them on my lock screen for GPT and my note-taking app, which is Reflect. They’re amazing. You’re on a walk, you hit a button, you talk to it, it transcribes. Those UIs are going to proliferate. And then further out, 24 months, you get into a real personal R2-D2 AI. And again, if you’re a marketer, that means you’ve got to stop worrying about showing up just in centralized search. You need to start understanding what dialogues people are having and whether you can get some of that data, because otherwise you’re going to lose touch with what’s really important very quickly.

Garrett Sussman: I mean, that’s why community matters so much. You see so many of these, like, community-led marketing initiatives. Okay, so top five, this is fun. Top five tools to try to learn. There are so many tools out there, dude. What are your top five?

Christian Ward: Yeah, I mean, there are a lot of tools, but like, so you remember maybe it was like four months ago, GPT went down for like nine hours?

Garrett Sussman: Yeah.

Christian Ward: And I posted something funny, and I got a lot of great responses. It was like you could almost hear the billions of venture capital dollars that were saved from not being invested during those nine hours. Because a lot of these tools are just the Scooby-Doo episode where they pull off the mask, and it’s just GPT underneath. So look, the first one: they’re still at the cutting edge, so everyone should get a ChatGPT account. Look, you pay $20 for Disney, $40 for Prime, $30 for Hulu. That’s storytelling and escapism. Or you could pay $20 and have a conversation with the smartest being you’ve ever experienced, and learn how to engage with it for the future of your health, wealth, and career. I think it’s time. Get it. The next one that I love is Opus. If you do videos or keynote speeches or podcasts, Opus is this brilliant idea, and the same goes for the third tool, Swell: you can give it a video, and it will actually analyze it and go, “Hey, this part was boring, but this part, Garrett really nailed it,” and it will actually clip it. It’ll give you the transcript, and then you can go from there. It’s very helpful, because you’re all producing great content, especially brands and their clients, but you’re not scaling it well, and the AI is quite good at that. So that’s Opus and Swell. The fourth one, and I am absolutely in love with it, is from a couple of the people who built People.AI, which was a phenomenal data company. It’s called Reflect. You have to sign up on desktop first, and then you can download the app. It’s a knowledge graph, and obviously, I work at Yext; we’re all about knowledge graphs. So it’s a knowledge graph of all of my notes, but now it processes audio.
So I can actually just speak my notes, and it will give me my to-dos. It will connect the to-dos to the other elements in the graph. As you can tell, I think a lot in terms of academic theories and books and authors, and it connects it all. I can literally click on Daniel Kahneman and see every time I’ve ever talked about him in any conversation, what he’s done, all the papers, because he’s one of the most brilliant thinkers I’ve ever had the joy of reading, and that’s the type of thing you want to focus on. So Reflect is amazing. Anthropic’s Claude, you should just try it as an alternative to GPT. And I’m going to throw in a shameless plug for Yext. Everything we do at Yext has AI ingrained in it. We improve knowledge graphs. We do reviews. We do content generation. We do all those things, but a little differently: we’re merging a knowledge graph with a large language model in a RAG construct, Retrieval-Augmented Generation. Anyone who knows iPullRank, this was probably one of my favorite posts by Mike. He did the entire thing, and I’m laughing as I’m reading it, going, “That is literally what we’re doing.” There are many of us who see this very similarly. History doesn’t repeat, but it rhymes, and there’s a lot of rhyming on that effort right now. But those would be the tools I would focus on.
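The RAG construct Christian describes pairs a retriever over structured knowledge with a language model: relevant facts are pulled from the knowledge graph and prepended to the prompt so the model answers from grounded data. Here is a rough illustrative sketch, not Yext’s actual implementation; the knowledge-graph entries and the word-overlap scorer are toy assumptions standing in for a real graph and embedding-based search:

```python
import re

# Toy "knowledge graph": a few structured facts about a business.
KNOWLEDGE_GRAPH = {
    "hours": "Store hours: Mon-Sat 9am to 6pm, closed Sunday.",
    "returns": "Returns accepted within 30 days with receipt.",
    "parking": "Free parking is available behind the store.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set; a crude stand-in for embedding similarity."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the facts whose wording overlaps the question the most."""
    q = tokens(question)
    scored = sorted(
        KNOWLEDGE_GRAPH.values(),
        key=lambda fact: len(q & tokens(fact)),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Ground the model: retrieved facts first, then the user's question."""
    facts = "\n".join(retrieve(question))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("What are your store hours?"))
```

In a real system the overlap scorer would be replaced by vector similarity over the graph, and the assembled prompt would be sent to an LLM API rather than printed; the shape of the pipeline (retrieve, then generate) is the same.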

Garrett Sussman: And that’s so cool because you guys go way back; you’ve known each other for over a decade at this point.

Christian Ward: Yeah, yes, yeah. And again, he’s a content mind, right? He’s a creator before anything else, but then he’s technical, and I love that overlap. The way he creatively explains things is almost technical with application. Any of you who have not read that post, I’m sure Garrett can link to it, but it is phenomenal. It’s like Feynman, right? Being able to explain things in terms of what people are actually interested in. And you do it as well. It’s very fun to listen to you speak about these ideas because you’re so excited.

Garrett Sussman: Speaking of excitement, let’s share some other great people. Who are the top five people to follow for AI?

Christian Ward: So, the number one that I usually point people to is Ethan Mollick. He is a Wharton professor of entrepreneurship who did his PhD at MIT. Just absolutely brilliant, but he’s also one of the most generous. He is reading and analyzing papers all day and writing for academic, peer-reviewed journals, but he’ll also really quickly say, “Hey, this just came out. Here’s what it means. You should pay attention to it.” Got to follow Ethan. The next one, if you don’t follow it, is The Rundown, from Rowan Cheung, and he’s awesome. He’s way ahead of every news agency on what’s happening every single day. Outstanding. He’s on LinkedIn and Twitter; I’m more of a LinkedIn guy, but he’s phenomenal. Next is Paul Roetzer, whom you’ve mentioned already. He runs the Marketing AI Institute, along with Mike Kaput and the whole team there, Cathy McPhillips, I think, really, really bright people. They also throw an amazing conference, and they’re specifically focused on marketing AI, which is really great. And then there are two more. Allie Miller has had a phenomenal career, and she’s my favorite for the application of AI. She talks a lot about what it means for careers, jobs, people. It’s second-order thinking, and I find it really refreshing. And last, a lot of people probably know Cassie Kozyrkov. She was the Chief Decision Scientist at Google; she’s now on her own, which has really freed her up. When you look at it, AI is really the promise of the internet finally coming true: it’s decision assistance versus content access. One was the card catalog at the library; the other is the smartest PhD research assistant you’ve ever known. That’s a huge change, and she frames it with phenomenal posts and lectures.
But I think all five of them are outstanding.

Garrett Sussman: So many great resources, man. Like, you know, as ChatGPT would say, “Game changer.” No, but finally, I got to ask you, what is your generative AI hot take?

Christian Ward: Well, define “hot take” for me. Like, worst point of view, or best?

Garrett Sussman: What do you think the mainstream is not talking about, even in your circles? What is your contrarian perspective?

Christian Ward: Contrarian-wise, for generative AI, I would say that this is so much more impactful than people understand. There’s something called Amara’s Law, coined years ago: Roy Amara wrote that we tend to overestimate the effect of a technology in the short run but underestimate the effect in the long run. I think that’s this. People are very much misunderstanding it, because every time I hear someone ask, “Oh, are you adopting this?” I’m like, “You’re really thinking about this wrong.” This is such a paradigm shift, and again, that’s the hype cycle. You know what’s kind of funny about the hype cycle, where there’s hype and then it drops? That’s the same curve as the Dunning-Kruger curve: the more people learn about something, at first they go, “Oh, I totally get this,” but as they learn more, they go, “Oh my God, I don’t understand this at all,” and then they come out at the end. That’s this. And if you’re not applying it to everything you’re doing, my hot take is you’re doing it wrong. You need to see that this literally touches every avenue of your life, personally and professionally. And it’s bigger than gen AI. I know we say gen AI, but what I’m talking about is the unlock that conversational AI brings to all other things. It goes back to that Technology Acceptance Model: if the cognitive burden of learning your software goes to zero, then I’m potentially going to use 100% of its use cases. Do you know that McKinsey and others say people only use 25 to 30% of every software package they buy? Are you kidding me? So what do they do? They buy more. We call it a martech stack, implying a stack of crap on your desk that’s going to tip over because you’re not using all those tools. So that’s the beauty of gen AI. It’s not the thing itself; it’s what it unlocks with no cognitive burden. That’s probably my biggest take.

Garrett Sussman: Ah, inspiring, man. It makes me want to go have a conversation, use the Feynman technique, do some dictation, produce all of this with ChatGPT, play with Runway, create video, do all that fun stuff. Thank you so much for joining me. This has been an awesome conversation.

Christian Ward: Absolutely, Garrett. I really appreciate it, and I’ll definitely look forward to the next time.

Garrett Sussman: Excellent. My name is Garrett Sussman of iPullRank. We’ll catch you later for another Rankable episode. See you then.

Host: Garrett Sussman

Title: Demand Generation Manager

Garrett loves SEO like the 90s loves slap bracelets.

Each week he interviews the most interesting people in the world (of SEO). When he’s not crafting content, he’s scouting the perfect ice coffee, devouring the newest graphic novels, and concocting a new recipe in the kitchen.

Get insights, stories, and strategies from a range of practitioners and executives leading the charge in SEO.

Enjoy this podcast? Check out Garrett’s video show round-up of everything search engine optimization: The SEO Weekly


Considering AI Content?

AI generative content has gone mainstream.

Discover what that means for your business and why AI generation can be your competitive advantage in the world of content and SEO.

SGE is Coming...

Wed, April 17th, 2024 12:00 PM ET

Discover practical strategies for adapting to Google's imminent Search Generative Experience (SGE). Join Mike King and Garrett Sussman of iPullRank for this critical presentation on preparing your content for a better chance at visibility in the AI snapshot.