Looks like Google's sick of AI-generated sludge filling up its search results, too.
The tech giant announced a substantial overhaul of its search spam policies this week, introducing changes that it estimates will reduce the prevalence of "low-quality, unoriginal content in search results" by a considerable 40 percent.
What's more, the massive undertaking strongly appears to be a response to the rising tide of mass-produced AI-generated content that's quickly filling the open web, polluting and eroding the quality of its search results.
Managing AI within its search results has proven to be a challenge for the search giant. Over the past year, we've watched as AI-generated images have climbed to the top of Google rankings, replacing real images of real figures and confusing historical facts. Elsewhere, a recent 404 Media report revealed that AI-generated content was creeping into Google News — ahead of a deeply consequential election, no less. Google has meanwhile been forced to increasingly play Whac-a-Mole with AI-generated spam, the likes of which range from fake news hits to the proliferation of phony obituary sites.
Per Google's amended spam policy, the crackdown appears to focus heavily on AI content.
One area of particular concern for the tech company is "scaled content abuse," or websites and creators that churn out high quantities of low-quality material designed to rake in lucrative clicks. As Google Director of Search Elizabeth Tucker acknowledged in Google's announcement blog post yesterday, the use of automation to power these kinds of spammy operations is nothing new. But generative AI-powered chatbots like OpenAI's ChatGPT — and Google's own Gemini-formerly-Bard — have made it cheaper and easier than ever to mass-produce content designed to game SEO rather than provide anything genuinely helpful or useful.
According to Google, it's also becoming increasingly difficult to judge whether today's "more sophisticated" scaled content operations are entirely automated, which in turn makes it much harder for Google's search algorithms to sift through the mass-produced digital muck.
Google is "strengthening our policy to focus on this abusive behavior — producing content at scale to boost search ranking," Tucker writes in the blog post, "whether automation, humans or a combination are involved."
When we asked Google what it believes would make a piece of AI-generated content truly helpful, Tucker provided a few examples of "positive applications": non-native English speakers using generative AI to "create content for their local business' website," for instance, or content creators using AI as an "editor" to make their work better, not worse.
It's worth noting that those examples are very narrow. Nowhere did she mention grinding out entire articles with AI, which we've seen publications ranging from CNET to Gizmodo try, and fail, to pull off this year.
"Generally, a hallmark of higher quality AI-generated or AI-assisted content is that it involves people producing original, value-added content," said Tucker, "and AI is used to power additional levels of creativity or insight." And "even with the rise in interest in generative AI," she continued, "we can say that the level of spam in Search has remained very low, and has remained steady."
Tucker also noted that "like any tool," AI can be "misused." That illustrates a tension at the heart of Google's new counter-AI efforts: it's actively building generative AI products, and is even working to integrate content-creating AI into search — at the same time that its search team is dealing with all the garbage that the tech's generating online.
Another area where Google is concentrating its spam overhaul is "site reputation abuse." In its amended policy, Google defines this content as third-party pages "published with little or no first-party oversight or involvement" and designed to "manipulate search rankings by taking advantage of the first-party site's ranking signals." In other words: seeking to leverage its solid SEO standing for some extra clicks, a well-known publisher might allow a third-party contractor to publish content — which may or may not be related to the website's beat — under its name-brand title.
In its updated policies, Google gives a few different examples of what this type of third-party material might look like. One of its hypotheticals, though, is extra familiar:
"A sports site hosting a page written by a third-party about 'workout supplements reviews,' where the sports site's editorial staff had little to no involvement in the content and the main purpose of hosting the page is to manipulate search rankings."
If that rings a bell, it's because this is exactly the kind of lousy third-party material that we caught Sports Illustrated — you know, of "sports site" fame — publishing under the bylines of entirely fake writers with AI-generated profile pictures. And as for why Google finds this content so offensive? According to Tucker's blog post, "such content ranking highly on Search can confuse or mislead visitors who may have vastly different expectations for the content on a given website." Extremely fair!
Google's also cracking down on "expired domain abuse," a familiar con in which spammers buy up old domains with an established search presence and use them to churn out material. This is a practice that's been around for a while, though we've certainly seen many of these operations crop up across our now-AI-laden internet, sometimes with bizarre consequences. Last year, for example, we found a website filled with pages of AI-fabricated quotes attributed to entirely real people, all leeching off the domain authority of a since-abandoned URL.
These are just a few of Google's many changes, and overall, they feel like a very big deal. Previously, much to the money-eyed glee of SEO spammers, Google mostly seemed to be avoiding the issue of AI sludge filling its results pages. But as of today, according to SEO experts buzzing online, Google is already taking action against the newly designated spam offenders.
"I'm seeing AI spam sites getting fully deindexed left and right right now," Gael Breton, the cofounder of an SEO firm called Authority Hacker, wrote in an X-formerly-Twitter post this morning. "It's going to get interesting."
Ultimately, the full impact of these changes remains to be seen. And don't get us wrong: they do feel like positive steps forward in the quest to protect the web from succumbing to a pile of toxic, automated slime. Still, while we're glad to see it adapt to a new online landscape, it's important to remember that Google itself continues to play an outsized role in pushing forward the very AI it's fighting in its search results.
More on Google and AI: Google Quietly Paying Journalists to Generate Articles Using Unreleased AI