AI Robot Friend

The recent release of GPT-5 has spun the media up again on AI and LLMs, and prompted a new wave of questions from all my non-technical friends about where this is all headed. I work as a programmer at a company actively integrating AI tools into our products, so I have some perspective on all this. In no particular order, here are the thoughts I return to again and again in these conversations:

1. The People Most Excited About AI Tools Are Already Established In Their Fields

What I mean by this is that experienced coders, writers, marketers, and others are now able to use LLMs to significantly enhance their productivity. This is particularly true for underlying research, since traditional search is dead (more on that shortly), as well as for idea generation and as a sounding board. And I get it: I've used LLMs daily for almost two years now, and they are wonderful at certain tasks. But here's the thing: I know what I'm doing!

I can write a prompt on a coding topic and evaluate the response for accuracy. I can also ask an AI agent to write even more code and credibly review it, because I've written that kind of code from scratch myself in the past. In these cases, LLMs are amazing: I can truly fly, paired with a tireless researcher who can also anticipate my needs and offer feedback.

But this does not describe most people! The majority of users are not experts in a given field and are blindly trusting these tools. Or they are students and newcomers who are now tasked with learning how to write, do math, code, or do science when LLMs offer a CliffsNotes version of everything at their fingertips. Why stretch your mind and work anything out from first principles when you can have AI do a half-decent job of it? And when your overworked teachers and professors are using the tools, too?

This is a major issue. It will be harder than ever for newcomers to truly master their craft, and therefore to fully leverage AI. Instead, I fear, they will be led down rabbit holes and learn things that aren't quite true, simply because that's what their LLM told them.

So, to any fellow coder who thinks AI will take over: ask a junior developer to use AI for a simple task and look over their shoulder as they do it. I would wager you will be shocked at how quickly they get completely lost and how difficult it is for them to reason about any of the output. The same holds true for writing, marketing, and so on.

For people with existing expertise, LLMs are amazing at speeding up what we already know how to do. For everyone else, they can be an endless maze that promises insight and knowledge that just isn't there.

2. AI Is An Excuse To Fire People

During the COVID years, many large tech companies hired aggressively, in some instances doubling their headcount within a few years. In hindsight, much of that optimism was misplaced, and now management has to adjust. What do you think is easier to explain to your board and public shareholders: that you made a mistake and overhired a few years ago, or that AI means you need fewer people?

3. Traditional Search is Dead

Even Google has given up on search and is all-in on AI-generated responses. If you use Google.com these days, you are met first with an AI answer, then YouTube links (owned by Google), followed by a series of ads, and, possibly, near the bottom of the page, an organic link that was likely written by AI as well.

Now that traditional search is dead, what is the incentive for a person or a company to invest in quality writing and websites? Very little. That leaves just video for discovery.

4. Video is Increasingly AI

If you spend time on YouTube, Instagram, and TikTok, you’ll quickly notice that many videos are clearly AI-generated. There’s the same AI voice, quick cuts, and likely variations on the same meme/joke that was funny the first time, but is now being recreated by everyone chasing algorithmic views. It is a depressing cycle.

That said, video is one of the few places where you can genuinely gain new followers and views. But the production costs required to do so are far greater than most people realize: in addition to having something to say, you have to know how to film, record and mix audio, and edit.

5. Text is Still King

Even as traditional websites are dying and video is taking over, we're all using LLMs, which are, wait for it, text. So text hasn't gone away at all; it has just been stripped of the actual human beings who provided (potentially) accurate facts, and of a business model.

“AI slop” is a bigger problem than ever, but so is the fact that current and future generations will take today's degraded baseline for granted. What's the benefit of writing well if readers can't appreciate it?

6. LLMs Appear To Be Leveling Off

OpenAI published a famous paper on scaling laws back in 2020. For a time, roughly GPT-2 (2019) to GPT-3 (2020), these laws held: ever-increasing model size and compute reliably translated into real progress. The only real constraint was the hardware budget.
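
The headline result of that paper is a power law: loss falls predictably, but only polynomially, as model size grows. Roughly, for parameter count N (constants as reported in the paper; an empirical fit, not a law of nature):

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}$$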

In recent years, though, the returns appear to be diminishing, even as models run to hundreds of billions of parameters. And the training data these models need is topping out, now that effectively all of the English text available online has already been ingested.

The results over the last year for GPT-4/5-era models point to a clear plateau, where 10x increases in size no longer yield proportional gains. And this is compounded by underlying architectural issues, such as hallucinations, that remain as far as ever from a clear solution.
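
The power law above makes "no longer proportional" easy to quantify. Here is a back-of-envelope sketch in Python, using the Kaplan et al. exponent for model size; the numbers illustrate the shape of the curve, not a forecast for any specific model:

```python
# Diminishing returns under a power-law scaling curve: loss ~ (N_c / N) ** alpha.
# alpha is the Kaplan et al. (2020) fit for model size; illustrative only.
ALPHA_N = 0.076

def loss_ratio(scale_factor: float, alpha: float = ALPHA_N) -> float:
    """Relative loss after multiplying the parameter count by scale_factor."""
    return scale_factor ** (-alpha)

for scale in (10, 100, 1000):
    print(f"{scale:>5}x params -> loss falls to {loss_ratio(scale):.1%} of baseline")

# Output:
#    10x params -> loss falls to 83.9% of baseline
#   100x params -> loss falls to 70.5% of baseline
#  1000x params -> loss falls to 59.2% of baseline
```

Each 10x in parameters buys the same modest multiplicative improvement, while the hardware bill grows by the full 10x. That's the plateau in one loop.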

We appear to be on the flattening, far-right portion of the familiar S-curve of technology trends, at least with current LLM approaches.

7. Open Source Models Are Growing

Even as a few companies focus on “frontier models” that ingest the entire internet and train on it, at a cost of hundreds of millions of dollars, there is a growing list of smaller models, freely available, that can be run directly on your own computer. These models are getting better and better, and they sidestep the privacy concerns around the big hosted models, which may reuse your data for training purposes.
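
If you're curious what "running a model locally" actually looks like, here is a minimal sketch using the Hugging Face transformers library. The model name is just an example of a small open-weights model; swap in whatever fits your hardware:

```python
# Minimal local inference sketch with Hugging Face transformers.
# The model name is an example; any small open-weights chat model will do.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# Everything runs on your own machine: no API key, and the prompt
# never leaves your computer, which is the privacy argument in a nutshell.
result = generator("Explain scaling laws in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])
```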

It is also the case that roughly 90% of most AI companies' spending goes to inference, aka hosting the models in the cloud. Even for a company like OpenAI, it can cost upwards of $100 million to train a frontier model, but billions each year to serve it to customers, and serving it is the business model. There are no quick fixes for lowering inference costs, so any cloud-hosted AI service is, almost by definition, going to need to charge a lot more than $20/month to break even.
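
To see why $20/month doesn't pencil out, here is the back-of-envelope version. Every figure below is an assumption for illustration, not a disclosed number:

```python
# Hypothetical break-even math for a subscription-funded frontier model.
# All figures are illustrative assumptions, not any company's actual costs.
TRAINING_COST = 100e6           # one-time training run: ~$100M (assumed)
INFERENCE_COST_PER_YEAR = 2e9   # annual serving bill: ~$2B (assumed)
PRICE_PER_MONTH = 20            # typical consumer subscription

annual_revenue_per_user = PRICE_PER_MONTH * 12
users_to_cover_inference = INFERENCE_COST_PER_YEAR / annual_revenue_per_user
users_to_cover_training = TRAINING_COST / annual_revenue_per_user

print(f"Users to cover one year of inference: {users_to_cover_inference:,.0f}")
print(f"Users to amortize training in a year: {users_to_cover_training:,.0f}")
# ~8,333,333 and ~416,667 respectively: serving the model, not training it,
# dominates, and that's before salaries or the enormous free tier.
```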

8. Ads Are Coming

OpenAI, the maker of ChatGPT, is losing billions of dollars a year. So are its competitors. Even if model training costs start to subside, the inference costs of providing these models to companies and consumers via APIs and websites are only going to grow.

The only way to make these tools available to most people, who are unwilling to spend hundreds or thousands of dollars a month, is through advertising. OpenAI has even said as much publicly.

Part of why LLMs are so great right now is that they mimic the early days of Google, when the focus was on search quality, not on ads. But that was a long time ago. Cory Doctorow's theory of Enshittification (https://en.wikipedia.org/wiki/Enshittification) feels apt here. How much will people like ChatGPT when it's filled with ads and costs more? And what happens when, instead of trying to generate the best response, AI companies optimize for engagement, aka keeping you around to serve more ads, just like the social networks before them?

9. Trust Is the New Currency

Amidst an ever-growing wave of AI slop on the internet, actual humans you can trust have never been more valuable. When you can't rely on search engines, certainly not on videos, and definitely not on AI/LLMs, finding a human with real expertise who is willing to share it is worth a great deal.

I suspect a handful of people in each domain will emerge as the “thought leaders” as most content turns into garbage. This is good for them, and good for quality content, but bad for the larger ecosystem of people sharing their expertise. If someone already has 1M followers on YouTube for a topic, it will be much easier for them to scale to 2M than for a newcomer to stand out and gain enough traction to make the economics work.

This is not really a new challenge, but tech constantly amplifies already-existing dynamics into another stratosphere.

Where To Go From Here?

My personal takeaway is that, yes, AI is here to stay and is already changing the world. No, it will not take everyone's jobs away at once, but it will make existing practitioners in most fields even more productive, which makes it enticing for owners and managers to fire a lot of people. And it ironically makes learning harder than before, as we all have imperfect shortcuts at our fingertips, used by learners and teachers alike.

We are also in the early stages of Enshittification for LLMs, and things will likely get worse, not better. Will LLMs become a tool for the elite who can pay their enormous costs? Or will models become commoditized, with all of us running them locally on our phones and computers?

Entirely new business models and companies will likely emerge, but, as usual, with many existing jobs as casualties along the way.