In a previous post, I mused on the impacts of AI from professional and creative perspectives. As a programmer and an author, I have felt the pressures of AI from every direction. However, what I had assumed would be a massive upheaval has revealed itself as a mild irritation. That’s not to discount the societal shock, but I am much less fearful of it now.
Why? Because I lived through the dot-com crash of the early 2000s, and the current AI market feels eerily similar. I remember the hype. And most importantly, I remember the hype men and the fantastic cope when it all came crashing down.

First, a brief history. Back in the 90s, investors saw the rising power of the Internet and decided to fund tech companies. It was a good bet. But investors, being shortsighted morons with cash to burn, saw that slapping “.com” on a business name increased its market value. They threw money at garbage companies, which created a giant tech bubble. Consumers wised up to the bullshit, investors panicked, and the bubble popped. The tech sector imploded and countless developers, including myself, lost their jobs.
I was still a young coder at the time and was able to pivot. I had no experience dealing with a volatile market, nor was I aware of any red flags. But looking back on it years later, I can see that many of the flags were obvious. A big one was a sudden hype shift.
At the peak of the bubble, everybody was all-in. Every consumer, producer, and investor was on the same page. The hype was real and the cash was flowing. But when cracks started to form, consumers questioned the hype. This spooked investors, which resulted in a hype shift to producers, who doubled down because their market shares depended on it. That was the giant red flag I missed, right before the market collapsed.
As the saying goes, “History does not repeat itself, but it often rhymes.”
Fast forward to 2023, when LLMs (Large Language Models) got really good. GPT-4 blew everyone’s mind and investors went all-in. A torrent of new AI companies sprouted up and their valuations shot through the roof. The swell created a new hype that everyone got behind: the promise of AGI (Artificial General Intelligence). Every producer touted it while cashing huge investor checks.
And then the cracks started to form.
The biggest one for me was the rise and fall of “vibe coding.” This is where rookie devs tell AI what to code, using chat prompts. They literally “vibe” an app into existence without writing a single line of code. It was supposed to be a game-changer, but it proved so troublesome that it doubled the work of competent coders who were forced to debug the mess. (I speak from experience.) Vibe coders are not coders. They’re cosplayers. And their systems knowledge is so deficient that they cause more harm than good. Companies realized that “vibe” apps couldn’t scale, so they pulled the plug.
Corporate users started to complain about hallucinations, i.e., the tendency of LLMs to make stuff up. When a company relies on AI at an enterprise level, hallucinations are an existential threat. Imagine a law firm using AI to draft critical briefs, only for it to invent legal precedent out of thin air. (This has actually happened.)
And then, out of nowhere, China released DeepSeek, a comparable AI built for a tiny fraction of the expected cost. In response, the hype for AGI intensified to keep the market inflated. Then MIT released a study showing that 95% of enterprise AI pilots fail to produce measurable results. It was a nuclear-level report that put consumers on edge.
Cue the hype shift.
Like a choir of greed-fueled demons, every player in the AI industry started promising AGI in the near future. “It’s right around the corner,” I kept hearing. “2-3 years, max,” others said. And do you know who doesn’t parrot any of these talking points? The consumers. They were led to believe that AI was a technological leap, the birth of a new era, and all they got was a tool that writes essays … badly. There is now a huge shift back to indexed searching, as AI has proven to be comically unreliable at basic tasks.
Funnily enough, new versions of LLMs are actually worse than previous ones, and I suspect a few key reasons why. First, they were trained on the open Internet. That’s a LOT of data. But the thing is, that data has already been ingested. There are no more Internets to consume. It’s a finite resource. And second, AI is now producing a flood of new data that gets added to the same pool. It’s polluting its own waters and making the Internet worse. To quote an old IT adage, “Garbage in, garbage out.” Or to use the current lingo: AI slop.
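If you want to see why that feedback loop is corrosive, here’s a toy sketch in Python. A uniform distribution stands in for a trained model, which is a deliberately crude assumption on my part, not how real LLM training works: each generation fits a model to the previous generation’s output and resamples from the fit, and the data’s spread can only shrink.

```python
# Toy "garbage in, garbage out" loop: fit a model to data, sample synthetic
# data from the fit, then refit on the synthetic data. The "model" here is
# just a uniform distribution fit by min/max, so each generation's support
# sits inside the previous one's and the pool steadily loses diversity.
import numpy as np

rng = np.random.default_rng(42)
data = rng.uniform(-1.0, 1.0, size=50)  # generation 0: "human-made" data

for gen in range(15):
    low, high = data.min(), data.max()      # "train" on the current pool
    print(f"gen {gen:2d}: data spread = {high - low:.3f}")
    data = rng.uniform(low, high, size=50)  # next generation: model output only
```

Every run prints a spread that shrinks, generation after generation. Real model collapse is far messier than this, but the direction of travel is the same: recycle your own output long enough and the tails of the original distribution disappear.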
And that’s before we even talk about the colossal stress on the electric grid. Massive data centers are sucking up huge amounts of electricity, the cost of which, of course, gets passed on to consumers because state reps want Big Tech to create jobs. The irony is that data centers are basically giant robots with minimal staff. The tech companies enjoy huge tax breaks while the people bear the brunt, like always. That’s why electric bills have skyrocketed across the country. Some states, like Virginia, are already allocating nearly 40% of their electricity to data centers. And for what? Crappy AI videos?
LLMs stress the grid. Crypto-miners stress the grid. Now imagine the power sink of a true AGI. Going from an LLM to super-intelligence is like going from a tricycle to teleportation. We simply do not have the capacity. It would require a full rethink of our relationship with power. And yet, the tech bros keep insisting that AGI is imminent.
So I’m watching this as a programmer and thinking, “This is the dot-com crisis all over again.” The red flags are everywhere. When the bubble pops, and I think it will, I hope it doesn’t cause a catastrophic implosion of the financial market. After all, the only reason we’re not in a deep recession right now is the overinflated AI market. (Not-so-fun fact: a recent report estimated it to be 17 times larger than the dot-com bubble.) Given the current political chaos, a bubble burst could turn into an economic calamity.
And then there’s the author side of me, which is strangely optimistic.
I am watching the general public reject AI because they realize that it’s a giant plagiarism machine. It doesn’t create. It just steals from the past and recycles copyrighted material for new consumption. People don’t want AI music. They don’t want AI movies. And much to my relief, they don’t want AI books. It doesn’t matter if AI can write the next great novel. It will never be loved. That love is reserved for the next great author.
Human art is still deeply valued, and that gives me a lot of hope.
I was quite fearful that AI would lead to my creative death, that readers would seek trends and polish regardless of source. But that’s not true at all. They want authenticity. Who cares if an AI can write a book in seconds? That’s boring. They want to read a story and know that the author bled for its publication.
To paraphrase Rorschach, “They will look up and say, ‘Read my AI book.’ And I will look down and whisper, ‘No.’”
So yeah, the current state of AI has put me in a weird mental place. My creative fear has been replaced by an economic dread. I can still put words on paper, but now I wonder who will be comfortable enough to purchase them. I can only hope that our desire to support artists isn’t replaced by a desire to keep the lights on.



