Since the advent of generative artificial intelligence, AI-generated content has proliferated across the web. These fake sites, articles, and videos are everywhere, often where no one expects them. As the New York Times reported, when Google suggests "adding non-toxic glue to make cheese stick to a pizza," or when posts seem to pop out of nowhere in your news feed, you are likely looking at AI-generated content.
The term "slop," which first appeared on online forums, refers to "low-quality content, fake news, or fake videos and images generated by artificial intelligence (AI), which can be found on social networks, in art or literature, and, increasingly, in search results," summarizes NSS magazine. Popularized in particular by British programmer Simon Willison, the phenomenon is becoming increasingly worrying.
Unlike the output of a conversational bot like ChatGPT, which announces itself as such, this AI-created content passes itself off as human-made. It relies on clickbait (luring users to mediocre content with catchy, often misleading titles) to generate ad revenue and game search engine algorithms.
Like spam before it, slop promises to become ubiquitous in our browsing habits. In fact, the phenomenon is already well established, but Internet users still struggle to identify it as such. The problem is that since the rollout of Gemini in May 2024, Google's conversational bot, which draws on web resources to build its answers, has been heavily fed by slop content, claiming for example that astronauts had found cats on the Moon. And this is the danger of the phenomenon: without any human supervision or verification, the bot cannot tell the difference between an original article and artificially generated content.
On May 30, Google announced plans to scale back some Gemini features in order to address errors and inaccuracies, acknowledging the shortcomings of its chatbot when it comes to verifying information.