Having spent years in the world of computer science, I have watched the term “AI” (Artificial Intelligence) morph as a concept. In my earlier days it simply meant attempting to get computers to “reason” or “think” about things, and perhaps automate a lot of mundane tasks. AI now seems to be a catchphrase for a plethora of weird and wonderful concepts.
Examples of AI Slop:
- AI for automation still exists, and is helping in a lot of areas where large-scale data analysis needs to be automated.
- AI Slop, for lack of a better term, is what is appearing on your social media. Many of the “photos” you see are AI-generated.
- Endless quote images with fake attributions: AI-generated inspirational quotes using stock photos and misattributed authors (e.g., “Einstein once said: ‘Follow your dreams.’”).
- Rewritten Wikipedia content: Articles repackaged with slightly modified grammar to evade plagiarism filters, offering no new insight or verification.
- Emotion-bait threads on X (Twitter): AI-generated threads that pose moral dilemmas or false “heartwarming” stories designed to trigger engagement without factual backing.
- Fake product reviews or summaries: AI-generated comments on Amazon or Reddit summarizing items or media the author never interacted with.
- Deepfake news summaries: AI-written headlines or summaries that distort source content, often with misleading conclusions.
The Danger of AI Slop in Social Media:
The proliferation of AI Slop undermines information integrity and public trust in digital communication. On platforms where engagement rewards visibility, AI-generated junk content often outcompetes legitimate sources due to its sheer volume and emotional manipulation tactics. This crowds out meaningful human voices and nuanced discourse. According to The Atlantic, the internet is facing a potential collapse under the weight of this “garbage flood,” as algorithmic incentives prioritize quantity over quality.
Social media’s virality mechanisms make it particularly vulnerable to this type of content. In a 2024 MIT Technology Review article, researchers highlighted how AI Slop erodes users’ ability to distinguish between genuine and manipulated content. This degrades public discourse and facilitates the spread of misinformation, especially in politically sensitive or crisis-related situations. Worse, AI Slop can be repurposed by malicious actors to amplify propaganda or disinformation campaigns at scale, making it a national security concern as well as a cultural one.
Shortcomings of AI
AI, like all programming and writing, has biases. Whoever built it will infuse their own biases, no matter how hard they try not to. You can tell my bias easily in this article. What do I mean by AI biases?
- Programs built to “guess” how people will think are often trained on data skewed toward “young white folk”. Without a diverse data set to build from, financial AI, and even medical AI, will fail to take into account differing cultural ideas and even genetic makeup.
- In finance, building more risk into the decision-making process than most people would want. What could this lead to? If you have a crowd of high-risk trading programs moving stock, a sudden market drop can easily happen, causing catastrophic financial results.
- In control systems, simply not understanding the possible risks in the system being monitored and controlled.

AI Slop For Profit?
This continues to be a theme out there which is really starting to get under my skin. As you can tell, I am writing this myself, with some help with spelling and grammar. The new trend I am seeing is using AI to “write” material and then passing it off as your own to make money. We used to have a term for this: plagiarism. AI is not “writing” anything; it is simply finding existing work on the topic and possibly rewording it a little. It is nothing new.
If I copied an article from Wired and then put my own ads into it, I am sure I would get a call from Wired’s lawyers. Some might argue Wired is already using AI to write, but that is for someone else to argue.
The worst “make money fast” scheme I have seen is the following scenario.
- Go to your favorite AI and have it “write” a book on a topic that you want to try to make money on. Let’s say, “How to make money from AI” (there is a fun bit of recursion).
- Maybe look it over, and make sure it actually makes sense, and also hasn’t added some excerpts from “Mein Kampf”.
- Find another piece of AI that will turn this text into an audiobook. Yes, that is out there, and it doesn’t sound like the late Stephen Hawking’s voice simulator; it actually does a nice job.
- “Publish” this book on Audible, and then “… rake in the royalty cheques”.
You haven’t created something of value; you are simply scheming to make a quick buck. Conveniently, that is the very theme of the book, so at least you will be hitting your target audience.
AI Slop Profit Conclusions
I have no doubt AI is already far too ingrained in our systems to be removed, but it needs to be closely monitored. Can “SkyNet” happen? Maybe, but this “get rich quick” twaddle is really a slap in the face of real content creators. I don’t count myself in that group, but there are folk who really write the English language well, and they are the victims of the “AI Plagiarism” game. The same is true for skilled programmers (and there aren’t that many out there).
I am confident that the AI at the search engines will push this little tidbit into oblivion, but it might be funny if it ever appears in an “AI plagiarised” article later on.