On the Topic of AI Blog Posts: When Agencies Don’t Care

This week I had the pleasure of onboarding a new local client who had recently separated from their high-end digital marketing agency but, despite the disagreements that led to the split, was reasonably pleased with the work the agency had done. The client eagerly showed me their blog, which was full of months of weekly posts, each in the ballpark of 800-1,000 words.

A weekly schedule at that word count is already a steep pace for any business to keep, especially when the posts are topical, relevant and come directly from the business. It’s why I’ve opted not to stick to a schedule at all and simply write whenever a topic interests me, catches my eye, or gives me enough material for more than a draft. That was the first red flag.

The second red flag came when I took a look at the blog posts themselves. They were all clipped and clinical, lacking any real tone, consistency or sense of authenticity, and they all followed the fairly uniform list format common to ChatGPT-generated writing. We went beyond red-flag territory and approached air-raid-siren status when I sincerely asked the client whether AI had been used to write these posts.

The client seemed thrown by this. The agency had told her it worked with “fulltime content strategists and marketers,” so she assumed the answer was no. In other words, the client had simply taken it on faith that the posts were written by humans; she was never told otherwise.

I popped open ZeroGPT, pasted the most recent blog post into the checker, and presto: the post came back as 97.6% AI-detected content. The agency hadn’t even bothered to change the wording around in an attempt to hide the AI-generated nature of the content (which rarely works anyway, but that’s a separate issue). I did the same with another post, and another; every single one came back with AI-detected content north of 90%. One went as far as a perfect 100%, with zero original content from the author of any kind.
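(If you ever need to run that kind of spot check across a whole archive rather than one post at a time, it’s easy enough to script. The sketch below assumes a hypothetical detector endpoint and response format; the URL and field names are placeholders, not ZeroGPT’s actual API.)

    # A minimal sketch of the spot check described above, scripted in Python.
    # The detector URL, request fields and response shape are hypothetical
    # placeholders, not ZeroGPT's actual API.
    import requests

    DETECTOR_URL = "https://example-detector.invalid/api/detect"  # hypothetical endpoint

    def ai_detected_percentage(post_text: str) -> float:
        """Send one blog post to the detector and return the reported AI percentage."""
        response = requests.post(DETECTOR_URL, json={"text": post_text}, timeout=30)
        response.raise_for_status()
        # Assumed response shape: {"ai_percentage": 97.6}
        return float(response.json()["ai_percentage"])

    if __name__ == "__main__":
        posts = ["Paste the text of each blog post here..."]
        for number, post in enumerate(posts, start=1):
            print(f"Post {number}: {ai_detected_percentage(post):.1f}% AI-detected")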

The client was aghast and mortified. She admitted that this hadn’t factored into the split with the agency, but that it certainly wasn’t prompting any second thoughts or regrets about it now. I was far less surprised, given how unfortunately often this comes up.

I already wrote back in 2023 about my initial impressions of ChatGPT and other chatbots, where I facetiously asked ChatGPT itself to explain why using AI-written blog posts for personal or, especially, business use is a bad idea. At the time I said that the glamour of the initially bedazzling technology quickly wore off once you discovered its errors, typos, limitations and penchant for misinformation, and what I said still holds up.

Look, I also get it: writing is hard. Everybody assumes that writing is easy simply by virtue of being physically capable of reading and writing. It’s the baseline-skillset fallacy; it would be like assuming that because you can add and subtract numbers, you’re capable of being a financier or an accountant. There’s a difference between knowing how to write and doing the work of writing. I wrote a separate post on this very topic, on how content development is much, much harder than it appears at a glance.

That’s still not an excuse for what’s been happening, and there are two problems I take issue with here. One is concerning; the other is massively unethical.

The first is one I’ve already alluded to. Self-styled marketers, agencies and content strategists have gone beyond using ChatGPT or other AI programs as an “assist” tool and now have them do the work in lieu of the employees themselves.

On its own, this is technically fair enough, in the sense that Google doesn’t necessarily penalize AI content as such. They have a detailed guidance page on AI-generated content, which makes it clear that they reward original, high-quality content regardless of whether it was written by a human or an AI. Google notes that about 10 years ago there was a rise in mass-produced human-generated content, but that it never would have considered outright banning all human-generated content in response. I would submit that content produced by computers is vastly different from content mass-developed by actual human beings, but I do get the logic of Google’s position here.

The issue is that it becomes ethically questionable for agencies once you notice some of the nuggets Google drops. See if you can spot them:

  Google’s ranking systems aim to reward original, high-quality content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness. We share more about this in our How Search Works site.

  Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high quality results to users for years.

Notice how Google emphasizes the quality of content and E-E-A-T: expertise, experience, authoritativeness, and trustworthiness.

ChatGPT and other AI tools can generate surface-level content at best. It’s why AI-generated blogs start to look so similar fairly quickly. Unedited AI “writing” tends to be so generic and thin that reworking it into something meaningful is the equivalent of tearing a house down and rebuilding it from the ground up.

The lack of E-E-A-T is also a huge problem. The expertise and experience that Google rewards can simply never be replicated by a computer algorithm. Remember that ChatGPT isn’t intelligent; it’s artificially intelligent, doing nothing but drawing on existing content from more reliable sources. ChatGPT will never write legal briefs with the expertise, consistency and style of a lawyer (there’s an actual example of that one). It will never understand a business as intricately as the business owner does, regardless of the industry.

The problem should be apparent by now. AI-generated content may not earn you an outright penalty in search results, but mass-produced slop churned out by a computer algorithm is not going to increase your standing with Google either; in all likelihood, Google will simply ignore it. Even if, as a client, you’ve signed off on ChatGPT blogs, you are essentially throwing time or money into a void, and if you’re an agency, you are delivering subpar work to your client. And even if you’re not strictly developing content to rank higher, why are you farming valuable writing work out to a computer program? If you can’t be bothered to do your own content work, what are you even doing on behalf of clients?

The second, much more problematic issue is the one I already spelled out in the story that provided the lead-in for this article: the lack of disclosure.

I can all but guarantee this agency deliberately did not disclose to this client that AI-generated content was being used, which leaves one of two very troubling scenarios:

  1. They have a rogue content strategist who is quietly farming their own work out to AI.
  2. The agency is selling content development without disclosing the AI-generated nature of it.

Unfortunately, based on my own experience, #2 is worryingly common and will continue to be. It’s also dangerously close to misrepresentation. It would be like hiring a bakery to create a wedding cake and having them show up with a store-bought one from the grocery store up the road: that’s the gap between expectation and reality here.

“But they’re not doing anything illegal or breaking anything in the contract!” That’s beside the point; from an ethical standpoint, you simply do not lie by omission like this. It’s like claiming you built a website when all you did was download a pre-existing WordPress template off ThemeForest and upload it without modifications, without even changing the stock photos. At that point you are deliberately misleading people about the amount and quality of the work they are paying for, especially when the agency in question has a financial incentive to produce more content faster.

It doesn’t matter whether this would legally constitute misrepresentation or fraud. It is wrong, immoral and utterly unethical, and if you don’t want to “risk” it, you write blog posts by, y’know, WRITING them.

What frustrates me most about this is how little appreciation these sorts of agencies have for the work they do. Being able to write on behalf of clients may not be exciting for you, but it’s something a lot of people wish they could do. It’s clean, safe office work in a profitable if very crowded industry. Despite my own concerns about the recent decline of content marketing and its effectiveness, it’s still a field that many people strive to break into.

I love what I do, and incorporating content into websites is a big part of that. The day I stop being incredibly enthusiastic about this work is the day I stop doing it, which is why it’s so immensely disheartening to see agencies and self-proclaimed “digital marketers” so apathetic that they’ll farm out to a computer algorithm a job many people would line up to apply for. If you don’t care about what you make for your clients, in whatever creative role you find yourself, step aside and let other people fill the void, because this industry is already crowded.

Writing should be fun. That’s why I do it, and it’s why I ramble here whenever a topic interests me. If it’s not of interest to you as an agency, fine, but find somebody it does interest. You are not using ChatGPT as an “assist tool.” You are using it as a self-writing quill à la Harry Potter, and those were banned from Hogwarts examinations for a reason.