Episode 14: We Are (Not) The Robots

Louanne  Welcome to “Covering the Spread, Magazine Design for the Next Age,” a monthly discussion of all things related to our favorite medium, magazines.

Scott  Whether you’re a seasoned designer, an aspiring creative, an editor or publisher, or just someone who appreciates the art of storytelling through visuals, this is the place for you.

Louanne  I’m your host, Louanne Welgoss from LTD Creative, a graphic design firm located in Frederick, Maryland, and I’ve been working on publications for thirty-two years. You can see our work at LTDCreative.com.

Scott  And I’m Scott Oldham from Quarto Creative, and I’ve been making magazines for twenty-five years. You can see my work at QuartoCreative.com. And on this podcast, we’ll chat with industry experts, designers, editors, and production pros to uncover the secrets of all things magazine.

Louanne  It’s time to turn the page on what you thought you knew and reimagine the future of publishing.

Scott  Welcome to season two of Covering the Spread. We have a really special episode today on a subject that we have talked about for almost the entire first year of this podcast: AI. And we have two experts with us. We have Derek Newton, who is the founder of Verify My Writing, which he will explain to us in just a moment. And Michelle Jackson, who is the Chief Strategy Officer for Back Pocket Agency, a content marketing agency headquartered here, near Chicago. But we find Michelle in the wilds of Massachusetts. Welcome, Derek and Michelle.

Derek  Thank you. Great to be here.

Michelle  Thanks, Scott.

Scott  So, Derek, let’s start with you, if we could. Can you just explain to us what the nature of Verify My Writing is: what it does and why you started it?

Derek  As we know, the emergence of AI has upended a lot in the space of writing, content creation, publishing, editing — the entire universe. Not too long after AI came on board, maybe six months later, the first AI detection tools came out. These early models weren’t great. There weren’t very many of them, and they weren’t very reliable. But over the course of the past several years, as the AI has gotten better, the AI detection has also gotten much, much better. Now a handful of technology platforms are really, really accurate at detecting AI text — accurate to a one-in-10,000, or in some cases one-in-25,000, error rate. As a journalist who covered academic, workplace, and credential fraud, and as a writer, I had a special interest in what was happening with the ability to create content that may not be what it claimed to be, and in how this might be used for nefarious purposes, and I paid very close attention to it. And it occurred to me pretty early on, as the detection technology got better, that, in my opinion, it was being used backwards: Most people wanted to use the technology to be police officers in a sense — to be traffic cops. They wanted it to sit on their dashboard, or to have access to it, and whatever came across their intersection as a magazine publisher or a research editor, whatever the case may be, they could check it and start issuing speeding tickets or “Get off the road” tickets. And that works. The problem is that the people who have these jobs don’t want to do that job. They don’t want to be police. They don’t want to have difficult conversations with people about questioning the credibility of their work. And we found pretty early on that they weren’t going to do it, even when the tools were provided. There wasn’t a whole lot of motivation — incentive — for people to use the tools that way. So they don’t.
What Verify My Writing does is this: We take these top-end detection tools and we use them sort of backwards. Rather than giving them to publishers or editors to use as radar guns to issue tickets, we encourage writers — people who produce the content, who are closer to it — to get their work scanned ahead of time, get a score from us, and let us certify, as the writer, that the writing is human and authentic and not AI-created. This made sense to me. Writing is hard, and I was proud to have done it. If anybody wanted me to certify that I banged my head on the keyboard for three straight nights before producing this material, I’d be happy to do it. So it just seemed to me a more elegant solution to a very, very complicated and big problem. That’s why we started the company. What I buried in there is that we let writers get work certified by us as being human-created and not AI, which, we think, will help (and has proven so far to help) editors, publishers, agents, movie producers — people who want to know this about material but don’t necessarily want to be in the position of holding a radar gun up to every piece that crosses their desk.

Scott  Michelle, can you just give us a little background on what Back Pocket does and how, if in any way generative AI has changed your workflow as an agency.

Michelle  Sure thing. So, as you mentioned, I’m the Chief Strategy Officer at Back Pocket Agency. We’re a content marketing agency out of Chicago. We serve mostly associations, nonprofits, healthcare organizations, and other mission-driven B2B organizations. We establish multi-channel content programs for them, whether print, magazine, or digital — both paid and organic. I think our clients aren’t necessarily bringing us concerns about AI-generated content per se; it’s more of an education that we’re doing for our clients in terms of where AI can fit more effectively, more strategically, and identifying some of the risks that come with fully AI-generated content. One of the more intangible risks we see for clients who are establishing content programs for their audiences is this: You are certainly not differentiating yourself when the content you’re pushing out to your audience is fully AI-generated. What one brand generates through AI is very similar to what another brand is generating. So I think, for us, it’s an education for clients as to why this human-led content experience is still really important, and how you can stand out as a brand by not farming out your content team to an AI bot.

Louanne  Derek, there’s an interesting history about where the AI models originally got their content. Can you tell us that story?

Derek  These machine-learning algorithms sit on large language models, because that’s what they do: They train algorithms to go into a very, very deep corpus — billions of words of material — and pick the words that make the most sense to the model and put them out, in order. And when they were building their large language models, they started with books that were in English. They started with books that were long, and they started with books that they didn’t have to pay rights for — books that were in the public domain, or whose rights had expired. The biggest corpus of those books was English-language Victorian writing from the 19th century. And there are still fingerprints of that early training in the models, if you know how to look for them. They will occasionally default to British spellings of words like neighbourhood, or they’ll occasionally come up with very predictable hundred-year-old British names for male leads in dramas. And then, eventually, the AI companies realized that even this entire library of information wasn’t big enough to really get the depth they needed. So they went to other places. They started taking in online content, they started taking in news content, and they started taking in more modern books, which has gotten them in trouble. I’m sure you probably saw that Anthropic, one of the big AI providers, entered into a $1.5 billion (with a B) settlement with authors to pay them for having used their material without permission. So these models are constantly ingesting and absorbing more and more text to get better and better, and in some ways faster and faster. Where they are pushing the envelope to ingest more material is in understanding how, say, chemists talk to each other — very niche, very specific, very technical language.
To pick up the slang that people use inside their professions, the model makers have to get into the subreddits that are online, and they have to get into research papers written by optometrists and chemists. We know now that three of the top referral sources in AI-generated answers are Reddit, LinkedIn, and YouTube, although I’m a glass-half-full person about it.

Michelle  So I would say, if your basis is Victorian novels, at least we started from a good foundation. I don’t know what we’ve done to it since, but there’s worse things you could start from, I suppose.

Louanne  So AI is pulling some of their information from Reddit?

Michelle  Marketers are losing their minds over Reddit because it’s a true community channel. And yeah, Redditors will push you out if you try to market to them or sell to them. But brands understand that what’s being said on Reddit is fueling some of these LLMs.

Derek  So OpenAI signed a big deal with Reddit to get their content, because Reddit was one of these providers that had an incalculable amount of data that people had contributed over decades of being active in the community. The model makers needed this different sort of way of putting communications together in text. And OpenAI signed one of the earlier big deals to get access to it.

Michelle  It’s interesting because the human in the loop is an important element when you’re talking about generated content. Does a human fact check it? Does a human review it? One of the biggest problems with generative AI content is it may not be truthful, it may not be accurate. And Reddit is not established to be accurate. It’s a community. It’s conversations happening. It’s not fact-checked.

Louanne  So that was why I brought it up.

Michelle  I think that brings home the point of: We do need humans involved in this content creation and approval process. We do need to make sure we’re doing our third-party fact checking, because otherwise we risk putting out, not just AI slop, but just factually-incorrect AI slop.

Scott  The favorite parlor game online right now is: Was this generated by AI? And looking for all the fingerprints. You mentioned a couple of them, Derek. What do you, Derek and Michelle, see in the writing that you experience and are suspicious of?

Michelle  The LinkedIn community is taking a “Death to the em dash” stance. I would say, definitively, I have been overusing em dashes in my writing for fifteen years — long before AI was a thought in its parents’ head. But em dashes are, to me, the top spotted tell. The other thing I see is those quippy one-word questions, followed by three words to answer them. And the last one I’ll mention is “It’s this, not that.” So those are the three. And they’re all a little bit overused in marketing content, more so than in long-form thought leadership content.

Derek  AI models tend to overuse the em dash, and I had to learn this when I asked one of the AI engineers about it, because I didn’t really believe it. I thought it was just a made-up urban legend about em dashes. And I said, “They don’t really. Come on.” But he said, “No, AI models use em dashes 230 times more often than normal humans do.” It isn’t a slight tell. It’s a salt-and-pepper dusting of em dashes through the writing. One of the things I look for is subheaders: three paragraphs and a subheader, three paragraphs and a subheader, three paragraphs and a subheader — very blocky writing that visually stands out as being constructed like a building or a house. And then, inevitably, I see that the third or fourth graf has four to five bullet points. So to me, it isn’t the words I’m focusing on, it’s the structure of the piece. And when I go to check — because we have access to these systems — when I see a piece that is four to five subheaded paragraphs with one of them having bullet points, probably 75% of the time I’m right that it’s AI-generated.

Louanne  That’s interesting that you should say that that’s AI-generated, because a designer would be thrilled to see content structured that way. For the most part, how often do we get content that is just a sea of text, without subheaders, without bullet points? And we desperately want that. And now I think people might be afraid of it.

Scott  Which begs the question: Why? Of all the tics that generative AI could have come up with, why is it things like em dashes and the “It’s not this, it’s that” kind of patterns? Are there any theories?

Derek  The theory that I’ve heard and subscribe to is that the first model — the biggest one, OpenAI’s ChatGPT — was not trained to write. Its original objective was to converse. It was built and conceived of as a conversational assistant, and so its core function is to engage with a user in a conversational style. And when you’re writing conversationally, that’s em dashes, that’s short responses, this sort of thing. It’s quippy. If you were doing academic writing or serious investigative journalism, you wouldn’t want that. And the AI can do that as the models get better. It’s very capable, but you have to tell it, “This is what I want.” If you don’t give it the parameters, it defaults to a conversational, very casual, friendly, “Hey, what’s going on?” sort of demeanor.

Scott  So for people who are frightened right now listening to this, who happen to have those tics in their writing style, what are the odds of a false positive in a system like yours, Derek?

Derek  It depends on the length of the document. The people who build the systems are confident that their accuracy is very good on anything 150 words or more. I’m a little uncomfortable at 150 words; personally, I just don’t think there’s enough material to test. But anytime you get up over 500 words, the accuracy is astounding. So for a book, for example, a novel or a lengthy investigative journalism piece that’s several thousand words, the odds of a mistake are going to be about one in 20,000. But it’s important to say that that isn’t a mistake on the document. That would be a mistake on a sentence or a paragraph. So to completely flag an entire document incorrectly, the system would have to make that one-in-20,000 error 20 times in a row, or 50 times in a row, depending on how long the piece is. Statistically, that is possible. It can happen, right? People do win Powerball. But the odds of it happening with a good system are very, very low. Now, if you were just to go to Google and look up “AI detector,” 75 to 80 percent of the ones you find are not good systems. They’re bad. They’re not accurate. They are, in many cases, designed to mislead you. If you write something yourself and put it into one of these systems you find on Google, it’s very likely to come back and tell you it’s 80% AI, and you’re going to get upset and frustrated and think that AI detection doesn’t work. These systems do that on purpose, because they want to sell you what’s called humanizing software. Like, “Oh, this is 80% AI. It’s going to get flagged, or you’re going to get in trouble. For five dollars, we can humanize it and you can avoid the AI detection.” These are not honest businesses. They are selling product, not advancing a complex conversation.
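[Editor’s note: A quick back-of-the-envelope sketch of the compound odds Derek describes. The one-in-20,000 figure is his; the assumption that each paragraph-level check fails independently is ours, added for illustration.]

```python
# Back-of-the-envelope odds for AI-detection false positives,
# assuming (hypothetically) each paragraph-level check is independent
# and each has a one-in-20,000 chance of being wrongly flagged.

per_check_error = 1 / 20_000   # per-paragraph false-positive rate (Derek's figure)
paragraphs = 20                # checks needed to flag a whole piece

# Probability that EVERY check errs, i.e. the whole document is wrongly flagged
whole_doc_error = per_check_error ** paragraphs

# Probability that at least one paragraph is wrongly flagged somewhere
any_flag = 1 - (1 - per_check_error) ** paragraphs

print(f"whole-document false positive: {whole_doc_error:.3e}")
print(f"at least one flagged paragraph: {any_flag:.4%}")
```

Under these assumptions the whole-document error is astronomically small (roughly the Powerball scenario Derek mentions), while the chance of a single stray flagged paragraph stays around a tenth of a percent.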

Scott  And so presumably, that humanizing task is handled by an AI.

Derek  Correct.

Scott  So it’s an AI, making your AI writing not sound like AI. We’ve got robots correcting robots.

Derek  Yeah, exactly. Right. And these are a big problem in academia (by that I mean schools), where students are taking the traditional five- or seven-paragraph essay, having AI write it, then taking it to a humanizer and having the humanizer rewrite it so as to avoid the detection technology the school or the professor may be using. If you do go and check your work on your own and you happen to land on a company that sells a humanizer… The example I use is: if you own an apartment or a home or whatever, and somebody knocks on your door one day and says, “I’ll give you a free window inspection. I’ll check every window for the seal and everything to make sure it’s fine,” what are they going to find? There’s no chance they’re going to come back and say, “Your windows are great.” They’re going to tell you, “You need to replace every single one of your windows,” but they happen to have a sale today.

Disclaimer  The views, thoughts, and opinions expressed in this podcast are those of the hosts and guests and do not necessarily reflect the official policy or position of any organization, employer, or company they may be affiliated with. Covering the Spread is intended for informational and educational purposes only. While we explore topics such as design trends, industry practices, and future predictions, the content shared should not be interpreted as professional advice or a definitive guide. Listeners are encouraged to conduct their own research before making decisions related to magazine design, publishing, or business strategy. We may reference or discuss third-party content, technologies, or companies. These mentions are for context and commentary purposes and do not imply endorsement or affiliation unless explicitly stated. Additionally, given the ever-evolving nature of media and technology, some discussions may become outdated. We strive for accuracy, but we make no representations or warranties about the completeness or reliability of any information shared. Thanks for tuning in, and enjoy the spread.

Michelle  So I learned the other day from one of our developers that, at least in WordPress, if you’re copying and pasting AI-generated text into, let’s say, a blog post, there is code attached to that text that you can’t see with your eyes but can see in the code view: markup that, essentially, says this is AI-generated content. So it gets me thinking about what the purpose of that is and how it may be used in the future. Are these AI tools going to prioritize or de-prioritize websites that have this code attached to their pages? I don’t know. Derek, do you see that? And what do you think about it?

Derek  I wasn’t familiar with that specific application, but yes, I think search engines in general — the tools we use to access the internet — are going to start sorting for us, or at least marking content for us in some large way. I think that’s fraught with peril, honestly, because I think a lot of people are going to complain very strenuously if their AI-modified or AI-edited or human-written content winds up on the dark side of Google, right? We’ve seen this before. Google delists you: You’re out of business. And it almost didn’t matter why you were delisted. It could have been a mistake. Somebody had a bad day. But getting it reinstated is exceptionally, exceptionally difficult. The AI makers, all of them, can do what they call watermarking their text. They can put signals in the text to make it very clearly identifiable at 100% accuracy. None of the AI makers have done this, although they’re capable of doing it. And this goes back to what I was saying a little bit earlier about how impossible it is to label AI material. There’s a reason they’re not doing it, and it’s because they don’t want people to be able to tell AI content from human content so easily. It really diminishes their business model of producing AI. But yes, long answer to your question: Yes, I think search engines might value and score AI-created text differently. I know there are some browser extensions in development that will grey out or darken AI-created content when they come across it on the internet. I think there is going to be a prioritization of that, but I think it’s going to be a while yet, and I think it’s going to be sloppy and make a lot of people angry, at least in the short term, even though I would support it — even though, philosophically, I think it’s a good thing, because I’m on the disclosure side of the coin. I just think it’s going to be very sloppy for a while.

Louanne  It’s interesting, because when we were growing up we didn’t have AI, so we all had to research the old-fashioned way. And in a way, we all kind of did the same thing they’re doing today and hoped not to get caught. How many times did you copy an exact sentence or paragraph and then maybe rearrange a few words, or swap out a few words, and hope that the teacher wouldn’t catch you? Just so that you could fill up that seven-page, double-spaced book report you had to write. So it’s got to be tempting for a lot of people to run their work through all these things so they don’t get caught. But it’s a lot easier to get caught now than it was back then.

Michelle  It’s sort of like, there are no new problems. They’re just slightly rephrased problems. Students had that challenge before. They’re having this challenge now. There’s probably no such thing as truly new content either. Like it’s been done. We’ve been creating content for centuries. But yeah, that’s an interesting point. That was just a slightly different risk or temptation.

Louanne  That brings up my next question: As we go on and on through this, everything is kind of borrowed from something. Nothing is new. I mean, I’ve thought to myself, “Good God, if they did this with graphic designers and ran every check to see whether a design was originally created by you, I’d be terrified, because we all kind of borrow from somebody in some way, shape, or form.” So how do you prove that something is truly yours?

Michelle  The type of content that we create for our clients, or that we encourage our clients to allow us to create for them, is more first-hand stories. The only thing that is original — that’s not influenced by something else that’s already been done — is your personal story. For example, we work with a lot of associations, encouraging member stories to be told through the magazine or through their digital platform. I think that’s the one area that just can’t be AI-generated. Your own story is going to be the only thing that’s unique. Maybe not the only thing, but truly unique.

Louanne  Something that nobody else would know about, right?

Michelle  Right.

Louanne  It’s not already online.

Michelle  And at the same time, it’s the type of stories that we, as humans, want to hear. That’s how we’re building connection now in this AI-generated world: through these stories that connect us, and we see commonalities. So I think there’s so many reasons to lean into that type of content in 2026.

Scott  So Derek, how are your clients — the writers who are using your system and company — deploying that certification once they achieve it? What are they doing with it?

Derek  There are two ways that our certifications get used. One is what I call an internal certification. If you have a work that you are passing along to your agent or editor, or you’re hoping to find an agent or a movie producer, you know they’re going to ask, right? Everybody asks; everybody’s reading with the question, “How much does this feel like AI?” So that’s a document that can go along with the material, and it has data about it: when it was scanned, what the score was, obviously how many words are in it. So it’s more tightly linked to the underlying document. The other one is more public-facing. It’s more like a social media badge, frankly, but it has a QR code. Anybody can scan the QR code and it’ll go to the certification for that document, where they can see all that other information, but it’s shorter. And that one is meant to be in magazines. It’s meant to accompany the material when it’s published, whether in a magazine or an academic journal or even a book. We’ve had several writers put our certifications — the badge one — on either the front or back cover of their books. It’s just a way of saying, “Even if you don’t care, as a writer, I care. And so I took this extra step to demonstrate that I care, and that, to the standard of the art today, I got a clean bill of health that I can share with you.”

Louanne  Michelle, do you think that your clients, at some point down the road, might ask for you to verify that the content that you’re giving to them is, in fact, verified as your own?

Derek  Michelle, I’ll give you a heck of a deal.

Michelle  I’ll take it. I’ll take it. You know, I think that eventually that will be a reality. I attended a webinar last week whose topic was, “Are your contracts with your clients setting you up for risk because of how you are or are not incorporating AI in them?” What I mean by this, as an agency: Most contracts are going to state that the content, at the end of the day, is owned by the client, whether it’s going into a magazine or onto a digital platform. And if that content is fully AI-generated, legally, where does that ownership lie? It was an attorney doing the presentation, and she noted that there’s a whole lot of gray area there and not a ton of precedent at this point. So I could see this being a great tool for agencies to show, “Hey, you do retain that ownership. It’s clear that we created it for you, on your behalf, and you have the ownership of it.” So I definitely see a future where, potentially, there’s a clear need and a clearer demand.

Louanne  Yeah. I have to wonder if websites or magazines that are published might be putting a seal of approval on their website or their publication to assure readers that this is not AI-generated content.

Derek  Several magazines are going to start doing this. I can’t name them yet, but I know it’s coming. Now, there are basically two buckets. There’s information that is factually driven — what happened: news, essentially, or the weather, or stock market indices. The audience probably doesn’t care much at all whether that’s AI-generated. The fact is the fact. There should be a human in the loop to make sure it’s not made up or misleading in some way, but newsletters are the best example: If you’re receiving a newsletter that says what happened in the stock market today, you probably don’t care much whether it’s AI-generated. You care that it arrived on time and that it’s accurate. But the other bucket of content — magazines or newsletters, or any content creator whose value is based on expertise, voice, context, explaining what happened, maybe providing new insights or connecting things — this sort of content, I think, is going to be a lot more sensitive to AI creation, because what I’m getting in that newsletter is this expert’s view of what happened in the market today. That’s the value I’m seeking to get out of it. That second bucket is really the place where we’re going to focus on AI content creation, because what those creators are promising to deliver is different.

Louanne  That gets back to what Michelle said earlier about member experiences as well. So it could be related right back to the association world.

Michelle  You know, within our space, some of our clients are in higher-trust industries, and I think what you’re saying, Derek, means even more for them. When we’re thinking about healthcare organizations, it’s important that the perspective of your doctor is actually coming from your doctor. We talk about stock market updates, but financial advisory is another high-trust space where I really need to know that the person giving me this information is, in fact, the person giving me this information. So there’s a spectrum, I think, even within that other bucket you’re talking about, where there’s an even greater need for authenticity and accuracy and expert-led, human-led content.

Derek  Most people accept that the phrase “human in the loop” for AI content creation describes something good and necessary and beneficial. History, however, doesn’t leave us in a good place on this. The biggest example was in banking. The machines started calculating mortgage interest rates and whether you got a mortgage — and they did it with a very complex formula — and, in theory, the mortgage lenders were allowed to override the machines and say, “No, I disagree. I’m going to go ahead and give the mortgage.” The problem, in a lot of studies going back decades, is that that never happened. The mortgage lender or the banker deferred to what the machine said, because they didn’t have the confidence to take the risk and override the algorithm. And there are lots of other examples showing that simply having a human in the loop isn’t enough unless that human is an expert who is really confident in their decision-making and is not going to be fired for overriding the machine. Simply having a human at the desk, we’ve seen in a lot of contexts, probably won’t be sufficient in this case, because we’re talking about asking a human to rewrite AI content, or to flag it and not publish it. That’s tough. And I’m not necessarily skeptical; I’m just a little worried that, in some ways, we’ve seen this before.

Michelle  I think one of the ways we use generative AI — and this is a little less generative and more data interpretation — is this: We’ve got a long-form piece of content that we’ve created. Maybe it’s a report, maybe it’s a multi-source interview piece, and let’s say we want to promote it on social and in email. We might create a closed-loop custom bot and say, “Hey, take this content and give me a few effective subject lines for the email I’m going to promote this content in.” So we’re asking AI to interpret the data — the data being the long-form piece of content — rather than saying, “Write me this 2,000-word expert-led piece featuring these five experts, and make up some quotes for them, by the way.”

Derek  Yeah, totally. Totally agree. Maybe it’s too late, but I want to try to say I’m not anti AI, right? I’m really not. I call myself AI-agnostic. I recognize that there are lots of places — many places, thousands of places — where it is value-add, where it can take a lot of the work away from people who previously had to do it, where it can open up things we never even thought about before. And I have nothing at all against any of that.

Louanne  How does Grammarly fit into all of this? If I were to be reviewing, say, an article that somebody wrote and Grammarly is assisting me in fixing basic commas and things like that, but it also sometimes wants to change the tone, or it wants to rewrite a sentence or even a paragraph, how does that play into all this?

Michelle  I’m hearing that it’s giving more and more of those suggestions that are almost making it sound more AI-driven. And so our team is having to reject more suggestions from Grammarly than it did in the past. “I just want to know if there’s mistakes, if there’s misspellings, if there’s grammar issues. I just need you to make sure my content is clean. I don’t want you to rewrite it.”

Louanne  I mean, Google will even offer to rewrite the emails you write for you. And it’s at a point now where, if you were to actually have a live conversation with somebody, I wonder whether you would be as fluent and eloquent as you are in your email.

Michelle  That’s a good question. Probably not.

Scott  I read a statistic the other day that I, personally at least, found alarming: They now estimate that better than 50% of the content that comprises the internet is, as of today, AI-generated, which suggests that these large language models are now feeding on themselves more than on human contributors. What does that mean for generative AI, at least as a textual tool, going forward?

Derek  We’ve seen for a while that large language models are starting to ingest their own content. The smart people who pay attention to these things are worried about this, because it could lead to what they call model collapse, where the models stop functioning properly, or a model gets really over-indexed on a very specific sort of response, because it sees it over and over and over again and has a hard time breaking out of that. Those things can be adjusted by the programmers who control those large language models and the AI algorithms that run them. Meanwhile, the public is starting to develop a general sort of filter: most consumers now ask themselves at some point, while they’re engaging with content, whether it’s AI and how much confidence they have in it. And this, to me, is another part of the problem, and why I like our approach: you can’t label AI content. There’s too much of it. The people who are producing it don’t want it labeled, so they’re not going to participate in a system to have it labeled. They want it to look like everything else. Even if you could force them into a labeling system, the economic incentives are to contest that system at every turn. There really isn’t a good business model, or even a plausible approach, for labeling AI content at mass scale. One piece here, a photo there, a video clip there — yes, we can do that, and we should. But having some sort of system where half the internet carries a designation that it’s AI-created — if that was ever possible, that’s gone.
And that’s why I continue to believe that there’s going to be a second lane, if you will, for content that is created by a human, represents context and understanding and experience and view and voice, and that, if we can assure people that that’s what this second type of content is, it’ll have a very prosperous market, because I think there’s enough people out there who want that — who will respond to it and reward it with their time and, whatever their financial contribution is, whether it’s seeing ads or subscribing or however they engage.

Michelle  I don’t know that the pendulum has swung all the way in the other direction, but it’s headed that way. Audiences are wanting that human-led content. Again, this is the storytelling — the first-hand experiences, the true thought leadership perspective. In an AI-generated world that is informed by generative AI content, everything sounds like way more AI slop: the content that you don’t connect with, that sounds the same in every channel and from every brand that’s delivering it. I think that’s going to push the pendulum a little further in the direction you’re talking about, Derek. Which is a good thing for all involved.

Scott  Do either of you think that the medium has a huge impact, or an impact at all, on whether people are initially suspicious of the origin of the content? Do things that are printed, for example, have more veracity intrinsically given to them — deserved or not — versus what people find on social media or other online content?

Michelle  The longer it took to put something together, the higher trust you have in it — the magazine that went through all the designers and the editors and the writers and the publisher and the printer… there’s a lot that goes into that that’s human-driven. With the shorter form, quicker content, like what you see on social — there’s a high distrust of LinkedIn content that the LinkedIn community will tell you about. The level of effort that went into a product, whether it was a social post or a print magazine, affects the trust that people inherently have in it. Certainly, I find myself trusting printed content more for that reason, I think, without even really thinking about why I’m feeling that way.

Louanne  I think people generally feel like print is human just because it always was, whereas the internet has never really been human. So it’s just a mindset thing.

Derek  When a company such as Amazon allows people to publish fully AI-generated books on its book platform without any sort of designation, you get into a very confusing space quickly. “Is this a book that I think of as printed, and therefore has high trust value? Or is it a digital piece of content that was made 20 minutes ago?” And if I can’t necessarily trust that content while it’s on Amazon, what does that mean now that it’s on my Kindle? Do I trust that? I think, in certain places, even that designation isn’t as clear as I would certainly like it to be.

Scott  Thinking about the future of the game, is there anything more, Derek or Michelle, from your perspective, on what’s next and what people should expect?

Michelle  Very few things are black and white, yes or no, good or bad. What we try to encourage our clients to do in the agency setting is ask: if long form AI-generated content isn’t the answer, where are the more strategic points where we can work in AI — points that are going to bring that value, that are going to make us more efficient while bringing additional value? So things like data analysis, like I mentioned, or taking long-form content and using AI to create some more snippet-type, short form content from it. It’s identifying, within the content workflow, where we can use AI if we’re not going to use it for long form generation — and mapping that out, documenting it so that it’s repeatable. Because if we just say no AI and have a strict no-AI policy, we are going to fall behind. But we want to do it responsibly. We want to do it strategically, and I think putting a lot of thought into what that looks like is really where organizations can have success and do it in a way that they can be proud of and own as well.

Derek  It’s building awareness — building and establishing a brand that people know: what it is, what it means, and that they can put trust in it. That’s a lot easier said than done. It takes time, so we’re on the long build of doing that. You’re talking about the future. Magazine publishing isn’t my specific area of expertise, but I have seen many people mention, as it relates to magazine publishing specifically, that the most valuable currency going forward is going to be reader trust. Establishing long-term relationships with your readers, almost at a one-on-one level — where they know what they’re going to get, where they trust it, where they come to rely on it in some fashion — is going to be maybe the only thing that survives this AI transition. And if that’s true, it, to me, doubles down on everything you can do to amplify the uniqueness of your voice. Give it credibility. Invest in it, right? How you build reader trust is different than how you build subscriptions.

Michelle  Yeah. Yes, yes.

Derek  And certainly different than how you build page views or clicks. The market is undergoing a shift, the likes of which, probably, none of us have ever seen — maybe not since the internet itself came on the scene. I don’t know that I, or anyone, knows how this is all going to play out yet, but I certainly think it’s fascinating, and I’m maybe more of an optimist than some might think.

Michelle  I’m optimistic, too, that that’s where we’re headed. I’m hopeful marketers are feeling the same way — that it’s trust that builds that long-term connection. Whether you’re a magazine publisher or an association trying to build more lifelong members and reduce your drop-off rate, it’s the connection and the trust that you feel with that content. And purely AI-generated content certainly isn’t going to give that to you. I’m in your camp. I love that, and I think you said it beautifully.

Derek  Well thank you. I clearly overshot the mark.

Scott  All right. So Michelle, can you tell folks who are curious how to get in touch with you and Back Pocket?

Michelle  Yeah. So the best way to find me is going to be on LinkedIn. I’m on LinkedIn at Michelle Bowles Jackson. And you can find Back Pocket at backpocketagency.com.

Scott  And Derek, how do people get in touch with Verify My Writing?

Derek  That one’s easy as well: verifymywriting.com.

Louanne  Okay. Thank you.

Scott  Well, thank you both. Fascinating discussion. I feel like we’ve just touched the surface, but we would love to have you both back, maybe in this kind of forum, or maybe with even more people. That’ll be the goal for season two.

Louanne  Come back in 30 years to do this.

Scott  Yeah.

Michelle  None of us will be here. It’ll be our robot versions.

Derek  No, that’s probably right.

Michelle  It’s definitely not. This was really fun. Thank you guys.
