Social media is increasingly a nauseating cesspool (and AI is to blame)
AI-generated digital garbage is infesting social media, and platforms are failing miserably to properly label this type of content.
Social media has never been a particularly clean place, but since the resounding arrival of AI it has become overflowing with digital garbage, and the stench has reached unbearable levels.
According to a recent report by AI Forensics, so-called "AI slop," the digital garbage churned out by AI, is infesting social platforms, which are also failing miserably to properly label this type of content, allowing opportunists to push "fake" content that can potentially reach millions of users.
In its research, AI Forensics focused specifically on AI-generated videos spreading across social media, which combine a disturbing blend of surrealism and realism. These videos, classified as "AI slop," are mass-produced and often uploaded by automated accounts with the ultimate goal of dominating search results (usually tied to popular hashtags) and fueling disinformation. Their popularity poses a major problem for the marketing and advertising industry, as it inevitably crowds out genuine content created by real humans.
The AI Forensics report reveals that 25% of the top 30 pieces of content ranking for searches associated with hashtags like #Trump or #History on TikTok are "AI slop." On Instagram, this proportion is considerably lower, dropping to just 2%.
Notably, approximately 80% of the AI-generated videos spreading through social media are so deceptively realistic that users find it difficult to identify them as fake.
Content disguised as real but actually completely "fake"
These videos are particularly problematic when they masquerade as citizen journalism, depicting supposed explosions or on-the-ground interviews that are entirely fabricated.
By disseminating this type of content on their platforms, social networks would be violating not only the provisions of the Digital Services Act (DSA) but also the EU AI Act. Both regulations stipulate that AI-generated content must be labeled as such. However, both TikTok and Instagram rely on creators to take responsibility for applying such labels, something they rarely do.
Only about half of the "AI slop" videos on TikTok carry such labels, while on Instagram this proportion drops to 23%. It is also worth noting that these labels are often difficult to find and, on Instagram in particular, are sometimes not visible on desktop.
The widespread dissemination of "AI slop" videos on social media is driven primarily by automated accounts. On TikTok, 80% of this digital trash originates from profiles that use AI tools to automate content creation and test the platform's algorithms. The goal is simply to manipulate these algorithms so that the videos ultimately achieve widespread reach.
AI Forensics also warns that the creation and dissemination of "AI slop" content could be carried out entirely by AI agents in the future, making it even more difficult to control. For this reason, AI Forensics urges social media platforms to urgently regulate these types of accounts.
The boom in "AI slop" content inevitably puts brands in a difficult position, as they must contend with a new and dangerous competitor for users' attention in social media ecosystems. Furthermore, content mass-produced with the help of AI could end up poisoning and bending social media algorithms to such an extent that genuine content from brands and influencers is doomed to irrelevance on these platforms.
Source: www.marketingdirecto.com