Welcome to the next episode in my blog series Breaking Up with Big Tech. This is more of a bonus post, because I have already talked about AI from both of my primary perspectives (programming and publishing). My thoughts on Artificial Intelligence are well known at this point. I’m not a fan, but I did give it a chance.
My first post was written just to make sense of the madness. I expressed both concern and optimism, then ended with a warning about the inevitable disruption. My follow-up post explored the emerging parallels with the dot-com bubble and my waning fear of AI replacing creative pursuits. Disruptive, yes, but not the death of creativity.
But now, after several years of watching LLMs cannibalize the internet, I have formulated a new opinion. This technology is a pox on humanity, far worse than social media.

I searched with LLMs. It horrified me.
Most people start their AI journeys with LLM searches, effectively using them to replace standard indexed search. Why go through the hassle of vetting information when the LLM can just summarize it for you, right? Intriguing on paper, but a nightmare in practice.
As a quick clarity break, LLM stands for Large Language Model. Common examples include ChatGPT, Gemini, Claude, and Copilot. The public uses LLM and AI interchangeably, but LLMs are not really AI. They are sophisticated prediction engines that learn how to produce plausible language, with little regard for accuracy. And in that realm, they are great at replicating human speech, often with unsettling results.
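If you want to see the core mechanic for yourself, here is a minimal sketch I put together for illustration: a toy bigram model that learns which word tends to follow which, then generates text by sampling a plausible next word. The corpus and code are my own invention, and real LLMs are enormously more sophisticated, but the underlying principle of next-token prediction is the same.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which, then generate
# by sampling the next word in proportion to how often it was seen.
# Nothing here checks whether the output is TRUE, only whether it is
# statistically likely given the training text.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Record every observed follower of every word (duplicates preserved,
# so sampling is automatically frequency-weighted).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausibility, not accuracy
    return " ".join(words)

print(generate("the"))  # most likely: "the moon is made of cheese ."
```

Notice that "cheese" appears twice in the training text and "rock" only once, so the model usually declares the moon to be made of cheese. It isn't lying; it has no concept of truth at all. It is just predicting.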
But here’s the sticking point: they only sound accurate. Most users accept whatever they spit out because it’s magically presented in a crisp and digestible way. It’s like talking to your best friend about any given topic. As long as they deliver a confident opinion, you trust them. And this is where the entire apparatus fails.
LLMs hallucinate. They just make stuff up in order to craft the best-sounding reply. And since humans trust human-like responses, it creates a black hole of misinformation. Law firms have submitted AI-written briefs in which the cited cases didn't exist. And if lawyers can't be bothered to verify information, then the average user is beyond screwed.
I have seen this happen countless times. During my own dives into LLMs, I quickly learned to ask basic questions before giving them the hard stuff. And even then they often got it wrong. To make matters worse, different LLMs gave different wrong answers, but every answer was presented with the same brash confidence.
Here’s something we have all experienced:
LLM: “Here’s the answer.”
You: “That’s wrong.”
LLM: “You’re right. Sorry, here’s the correct answer.”
You: “That’s also wrong.”
LLM: “You’re right. Sorry again, here’s the totally correct answer.”
You: “Wrong again.”
LLM: “You’re right. Thanks for bearing with me. Here’s the actual totally correct answer.”
You: (staring at another wrong answer) Hello, darkness, my old friend.
This is devastating to an informed public. These tools are shockingly bad at what they are designed, packaged, and sold to do. The creators lied through their teeth to collect billions of investment dollars, which they reinvest amongst themselves to keep the market inflated. This is the only reason we aren’t in a major recession right now. And once that bubble pops, it will lead to an economic calamity.
In the wake of this realization, I decided to abandon LLM searches altogether. I would much rather invest time in verification than rely on a schizophrenic algorithm. Thus, I have returned to indexed searches. Much like walking through a library, I want to be able to vet a source, not have some crazy bridge troll summarize what I need from the parking lot.
It should also go without saying, but please use a private search engine, i.e. something that respects your autonomy and doesn’t harvest personal data. There are many good options to choose from. My favorite is DuckDuckGo, which I always set as the browser default.
I created media with LLMs. It horrified me.
Remember when we laughed at the “Will Smith eating spaghetti” AI-generated videos? They were funny because they were over-the-top stupid, like a toddler trying to build a computer. But nowadays, AI can create fake videos in seconds that are indistinguishable from reality. It’s unsettling to an existential degree.
I played with image and video generators out of sheer curiosity, and they were admittedly astounding. I can absolutely see the benefits from an educational perspective, because visuals are extremely powerful when trying to explain complex ideas.
I think most people would happily support the development of AI if it were laser-focused on scientific advancement. I want doctors to have better tools for diagnostics. I want physicists to have better tools for analysis. I want chemists to have better tools for calculation. I want us to leverage AI for the betterment of humanity.
But we’re not doing that. We’re creating media for a new era of fraud. Scammers are using AI to create sophisticated schemes. Criminals are using AI for highly advanced extortion rackets. Extremists are using AI to sow political turmoil. These companies decided to give a loaded gun to every person, then have the gall to question why the public despises them.
On top of that, many people assume that LLMs create new media. They don't. They just steal copyrighted material from the past and repackage it for modern consumption. I was horrified to learn that some people are trying to copyright art that they generated with AI. This is a level of cognitive dissonance that I didn't think possible. That would be like applying a photo filter to the Mona Lisa and expecting the same recognition as da Vinci. Utter insanity.
As an author, I had thought that AI could help me workshop ideas. But after watching it steal and repackage visual media, I realized that every idea it “generated” belonged to someone else. When I pointed out the obvious plagiarism, you can probably guess what the response was. “You’re right. Sorry. Shall I create a new one?” Irony was dead.
It made me wonder how much LLMs have directly stolen from me. Much of my work is freely accessible, so I asked an LLM to summarize a random book. Sure enough, it puked out a book report (with some glaring errors, but mostly correct). That was the breaking point. I swore off AI forever, at least as an author.
I coded with LLMs. It horrified me.
I may have sworn off AI as an author, but I was still willing to use it as a developer. This made the most sense to me, because efficiency is baked into coding. It doesn't matter how knowledgeable you are if you always miss a deadline. Any tool that increases speed is worth considering.
This gave rise to one of the most asinine concepts in the history of programming: vibe coding. This is where newbie devs use LLMs to write all of their code via chat prompts. It kinda worked in a controlled environment, which spawned a cult-like devotion to the method. Every tech bro touted it as the next great evolution in development.
But it wasn’t. In reality, vibe coding created quagmires of nonsensical code that horrified the senior devs. The apps were bulky, insecure, and unscalable. Why? Because LLMs have no idea what “good” code is. They just mash examples together to create semi-functional apps. Then, when you ask for simple changes, they have a terrible habit of rewriting the entire code base (even when you specifically tell them not to). This is nightmare fuel for source control.
And this isn’t knocking newbie devs. We were all there at one point. I gave vibe coding a solid go, because I have many ideas for useful apps and the thought of speed-running them through development is very appealing. But even with highly detailed prompts several pages in length, the LLMs still made the most basic coding mistakes. They don’t care about optimization, a skill honed by experience. They only care about satisfying the prompt.
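To show the kind of mistake I mean, here is a hypothetical but representative example. The task and both snippets are mine, not verbatim LLM output: checking which user IDs in a log also appear in a blocklist.

```python
# The pattern LLMs kept producing: technically correct, satisfies the
# prompt, ignores performance entirely.
def flag_blocked_naive(log_ids, blocklist):
    flagged = []
    for uid in log_ids:
        if uid in blocklist:  # list lookup: O(m) per check, O(n*m) total
            flagged.append(uid)
    return flagged

# What an experienced dev writes: same result, but set membership is
# O(1), so the whole pass is O(n + m) instead of O(n*m).
def flag_blocked(log_ids, blocklist):
    blocked = set(blocklist)
    return [uid for uid in log_ids if uid in blocked]
```

On a toy input, nobody notices the difference. On a production log with millions of entries, the naive version falls over. That is exactly the gap between satisfying the prompt and actually caring about optimization.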
Competent coders create stability. LLMs create chaos.
This became painfully apparent when MIT released a study showing that 95% of enterprise AI pilots fail to deliver measurable returns. They just break shit. Hell, there are many reports of LLMs wiping hard drives (and then apologizing like idiot children). Companies panicked and hurried back to their senior devs, who have hard-won experience that can't be replicated.
I was one of those senior devs. I was forced to debug chaos code, which gave me a front-row seat to the programmatic failures of AI. Once again, I swore off LLMs forever, this time as an exhausted coder.
LLMs are a societal cancer.
We were sold a bill of goods that never arrived. LLMs are a godsend to crooks, but a plague upon the general public. We now understand that social media has rewired our brains for the worse, and we are finally taking steps to reverse the damage. I can only hope that we do the same with AI before it’s too late.
I am doing my part. I refuse to use LLMs for anything in my life. As prevalent as they are right now, you can take simple steps to avoid them. Browsing the web? Use private search. Working on a document? Disable the assistant. Need customer service? Insist on a real person. No real people? Stop doing business with companies who replace workers with AI. Whack those moles with extreme prejudice.
Avoiding AI has the added benefit of keeping your mind sharp. LLMs poison your ability to think by offering unverified answers with undue confidence. These companies have flushed integrity down the toilet to sell you an illusion. AI has created far more problems than it ever hopes to solve. It needs to die. And the fastest way to dig the grave is to stop using it.