By Ellie Glover
Even if you’re not an avid user of mainstream social media, you’ve certainly seen AI content at some point. From LinkedIn, most recently flooded with AI action figures, to the rest of your feed, AI content has leached into our daily doomscrolling habits more than we realise. But the problem with AI content runs far deeper.
Prominent American YouTuber James Pumphrey (aka Speeed) recently exposed the growing avalanche of AI “slop” swamping the car enthusiast space on YouTube. Pages and pages of AI-generated cover images appeared in his search results, announcing (re)launches of seemingly impossible feats of automotive engineering. He delved into why these channels have grown so parasitically: most are created not to inform or entertain but to generate revenue from ads or affiliate marketing, while others are used for propaganda and misinformation.
Using the content stat tracker Social Blade, he suggests that a single channel might earn between $1k and $10k a year on its AI-generated content. This might not sound huge until you consider that such operators likely own dozens of channels, each churning out up to 10 videos a day across multiple outlets; scale that across, say, twenty channels and the upper end runs to six figures. These operations have become known as “content farms” and work at a pace no human could possibly match.
The inherent problem fuelling this growth is YouTube’s algorithm, which rewards channels for consistency and volume, pushing them up in the search results and compounding the issue. Volume over substance. Output over originality.
Content creator Gerard (aka Visual Venture) explored the prevalence of content farms in children’s content in particular – easy to mass-produce, since kids like consistency and familiarity. Looking at some of these channels, I was horrified by the overstimulating, bright, and loud videos, often spouting gobbledegook with no real benefit to child development. Not to mention the bombardment of pop-ups within the videos suggesting that liking or subscribing will change the storyline, which ultimately boosts the content to other vulnerable viewers.
This brings us to a personal loathing of mine – AI search results. Google’s deal to license Reddit’s data, for example, has led to some wildly inaccurate AI-generated search results, often based on a tongue-in-cheek Reddit thread or comment from years ago. Gerard mentions results suggesting that Elmer’s glue can be used to thicken pasta sauce, or, more ludicrously, that “geologists recommend eating one small rock per day”, a headline written by satirical media outlet The Onion.
YouTuber Drew Gooden called this a “degradation of the internet as a resource database”, one that feeds the “Dead Internet Theory”: as more content is produced by AI, AI tools end up training on other AI-generated content, and large parts of the internet are ultimately consumed by nonsense.
One emerging theory suggests that Spotify is using AI to generate original songs and covers to populate its playlists. Futurism explored how, upon digging, bands with tens or hundreds of thousands of streams had absolutely no other social media presence and were represented by a label with an expired domain name and equally inactive socials. The leading theory is that this avoids paying royalties to real artists.
Kate Knibbs of Wired points out just how much ‘reputable’ media content has been created by AI, noting that ESPN was recently criticised for using AI-generated articles (hello, 2014 Buzzfeed) and associating the practice with “lower quality and less reliable media”.
Futurism also investigated the shady use of AI content in the media with an exposé about AdVon Commerce. In a training video shared by a former employee, the trainer generated an article titled “Best Bicycles for Kids” in seconds, along with a series of Amazon product links. If an output doesn’t make sense, such as conflicting pros and cons for a product, the trainer tells workers to simply generate a new version. “Just keep regenerating,” she says, “until you’ve got something you can work with.”
According to Futurism, AdVon struck deals with surprisingly prominent publishers – Mashable, Good Housekeeping, InStyle, Sports Illustrated, and Better Homes & Gardens, to name just a few – intended to suggest that these are real recommendations from real people, when in fact “journalist Damon Ward’s” profile picture can be found on a site that sells AI-generated headshots. “It blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like ‘is this writer a real person?’ fuzzier,” says Futurism’s Maggie Harrison Dupré.
AI usage has been an extremely contentious topic of late, and one I don’t venture into lightly. My original intention had been to shine a spotlight on AI abuse within the social media space, but the further I dug into this topic, the more horrified and dazed I became. It is clear that AI has quietly been fuelling misinformation and “slop” for many more years than previously thought.
The Drum suggested that “early data paints an uneven picture”: AI influencers have around 1.5% higher engagement rates than humans, but their follower growth is flat, suggesting that AI influencers “intrigue audiences” while the human element may win out in the long term.
Equally, with brands paying for advertising space in the same vicinity as fake content, engaged with only by fake users, will they take their money elsewhere? And when will the house of cards fall?