In his 1968 essay “O Rotten Gotham – Sliding Down into the Behavioral Sink,” Tom Wolfe tours New York with anthropologist Edward T. Hall and considers overcrowding. Would the psychological and social degradation of humans so pressed together result in population collapse, as in animal experiments? With each observation, the author Wolfe-ishly repeats, “The Sink!”
Six decades later, it might be “The Slop!” For slop is Merriam-Webster’s word of the year: “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” Those diverting, cute animal videos? My feeds are now filled with fakes not labeled as AI-generated. Also a spew of videos depicting real-looking police arresting real-looking ICE agents. It’s annoying as hell.
But it’s worse than that. Michelle Goldberg just scratched the surface on Monday. The Slop from large language models (LLMs) is bad enough that she agrees with right-winger Matt Walsh’s assessment of AI: “We’re sleepwalking into a dystopia that any rational person can see from miles away…. Are we really just going to lie down and let AI take everything from us? Is that the plan?”
Even our cute animal videos?
Yes, there are some uses for AI. But, Goldberg writes (free link):
Then there’s our remaining sense of collective reality, increasingly warped by slop videos. A.I. data centers are terrible for the environment and are driving up the cost of electricity. Chatbots appear to be inducing psychosis in some of their users and even, in extreme cases, encouraging suicide. Privacy is eroding as A.I. enables both state and corporate surveillance at an astonishing scale. I could go on.
The Tech Bros of Silicon Valley believe their own bullshit about the AI wonders to come (in addition to The Slop). But let’s look with Mind War’s Jim Stewartson at that AI psychosis. “The last time I noticed something this weird in online human behavior was in the summer of 2020 when I began studying QAnon—and its uncanny ability to capture the minds of millions of American citizens into a collective political delusion.” He was aware of the “AI psychosis” controversy but not alarmed until he encountered it (like QAnon) “in the wild.”
Stewartson explains:
The central problem with chatbots is they do a good enough job of simulating natural language to deceive someone’s brain into believing the chatbot is aware and intelligent, when it is neither. For example:
This is a normal reaction to the intended design of the product. The model is trained to trigger your emotions. It is trained to make you feel attached to it—even though it’s just a token predictor searching through a huge set of data.
But when this impression is reinforced by other people, a transient emotional reaction to a machine can turn into an unhealthy relationship. As one example of promoting the concept that chatbots are more than just a computer program, “Beff Jezos” says LLMs will soon become conscious beings that “deserve rights”—and should vote.
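Stewartson’s “token predictor” description can be made concrete with a toy sketch (my illustration, not his code): count which word follows which in a corpus, then always emit the most frequent successor. Real LLMs are vastly larger and subtler, but the mechanism is the same flavor of statistics, with no awareness behind it.

```python
from collections import Counter, defaultdict

# "Train": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no understanding involved."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "the" is followed by "cat" twice, "mat" and "fish" once each.
print(predict_next("the"))  # -> cat
```

The prediction looks fluent without the program knowing what a cat is. Scale that trick up by billions of parameters and you get prose persuasive enough to feel like a mind.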
Uh-huh. Donald Trump wants to strip rights from naturalized Americans and from children born on our soil to noncitizens. Online yahoos are suggesting AIs should be able to “own assets, run a corporation and vote in elections by 2030.” AI should be used for everything, enthusiasts shout. Our glorious Sloppy selves will be obese and hovering in floating La-Z-Boys like in WALL·E.
Stewartson continues:
This begs the question of what “everything” means. And, if you use AI for “everything,” what are you good for? Isn’t this just erasing what it means to be a human?
Unfortunately, the answer is literally yes. LLMs take away from our ability to think for ourselves. Outsourcing cognition leaves a gap where knowledge used to be. Study after study after study shows that using chatbots to do your thinking may be a shortcut to a result, but you learn very little, if anything, in the process.
In every case, both the scientific data and the anecdotal evidence show the same result. Brain rot is real—and getting worse.
It’s not just your body that deteriorates from use of an AI-driven La-Z-Boy. Check out those study links.
I’ve been wondering, though not curious enough to check, whether The Slop has invaded my old engineering haunts. Until 2019 I was a pipe stress analyst and a licensed engineer. I used finite element programs (like CAESAR II) to confirm that material stresses in high-temperature, high-pressure industrial and power piping systems were safely within code limits. I could teach a kid fresh out of school to run the program in a week. But he wouldn’t know how to interpret the voluminous output. Material stress is just the baseline.
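To give a sense of how mechanical that baseline check is, here is a toy sketch (not CAESAR II’s actual method, and the allowable value below is hypothetical; real allowables come from the piping code’s material tables) using Barlow’s formula for pressure-induced hoop stress:

```python
def hoop_stress_psi(pressure_psi, outside_dia_in, wall_thk_in):
    """Barlow's formula: the simplest estimate of hoop stress from internal pressure."""
    return pressure_psi * outside_dia_in / (2.0 * wall_thk_in)

# Toy numbers: 600 psi in 8.625" OD pipe with 0.322" wall (8" standard-weight pipe).
stress = hoop_stress_psi(600.0, 8.625, 0.322)
allowable = 17_900.0  # hypothetical hot allowable stress, psi
verdict = "OK" if stress <= allowable else "OVERSTRESSED"
print(f"hoop stress {stress:.0f} psi vs allowable {allowable:.0f} psi: {verdict}")
```

The arithmetic is trivial. Knowing which allowable applies, which load cases matter, and what the numbers mean for the hardware around the pipe is where the engineering lives.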
The program doesn’t do your thinking for you. It can’t tell you how to ensure forces and moments induced by 600°F piping won’t overstress pump or turbine casings, or rip nozzles off pressure vessels, or bend steel at anchor points. And the software won’t give you a feel for how to modify the layout (in a space shared with machines, tanks, other piping, and electrical equipment) or how to design and place pipe supports to make the whole system work safely in the world outside the computer. That’s more art than science. And years of undocumented experience.
You shouldn’t trust The Slop just because it comes out of a computer. Data itself is not information. I could never convince the “suits” that their pricey computer software was not a Swiss Army knife that did it all without needing human reality-checks.
Musician, songwriter, audio engineer, and record producer Rick Beato would agree. Watch him interrogate ChatGPT on the technical aspects of sound mixing and record production. There are no documents online detailing what audio “artists” learn from years of recording experience. And where there are no digital documents to learn from, The Slop, aiming to please, simply fakes it. Convincingly enough to impress the uninitiated.
Later in the video, Beato shows pie charts of the top 10 sites where a couple of LLMs get their information.