Ignore the clickbait title of this video ("The Only Trait for Success in the AI Era—How to Build It"): Professor Po-Shen Loh of Carnegie Mellon University drops some real wisdom here about navigating AI.

Up front, he mentions that when ChatGPT first came out, his takeaway was that students needed to embrace creativity, because that was the one area where LLMs, trained on existing data, were weak. Yet recent developments, such as an AI achieving silver-medal standard on International Mathematical Olympiad problems it had never seen before, challenge even that assumption. As he notes, “The creativity in the AI can probably surpass what we can do, too.”

So how can students and people more generally coexist with these powerful AI tools? His high-level take is, “The only unique thing about human intelligence is that we hopefully care that humans still exist.”

That’s a scary and yet profound insight. Ultimately, AI is a tool. As much as people have tried, and will keep trying, to anthropomorphize these systems (see the ELIZA effect for backstory), they are unfeeling tools.

Cheating in School

For any teacher or parent, an immediate challenge of AI like ChatGPT is combating cheating, especially with reading and writing assignments. It is unlikely that any AI-detection tools will keep pace with advancements. So what to do?

One approach is to go back to blue books and in-class essays. That works to a degree. But what about teaching students how to do research? How to revise and edit their own work? How, in other words, to really read and write?

LLMs are good at these tasks in part because they have read, in effect, everything on the internet. That’s what they’re trained on. The connections that ChatGPT makes, allowing it to seem human, come from this corpus of knowledge plus, in its case, trying to predict the correct next token for a given prompt.
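To make "predict the next token" concrete, here is a deliberately tiny sketch: a bigram model that picks the word most often seen following the current word in its training text. Real LLMs use neural networks over vastly larger corpora, not frequency tables, but the core objective (given context, predict what comes next) is the same. The corpus and function names here are illustrative, not from the video.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent follower of `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

An LLM does the analogous thing with learned weights instead of raw counts, which is why it can generalize to prompts it has never literally seen.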

Scientists don’t exactly know how the human brain works, but the approach likely has some parallels. After all, the neural nets underpinning LLMs are loosely based on how scientists think the brain functions. Basically: absorb a lot of information, then find connections. AI can already do this. Will students, and humans more broadly, continue to develop and hone this skill when they can offload it to an AI?

As Loh notes, the real point of these assignments is not the resulting essay. It is to train students in how to think and reason.

Creating Value for Other People

Further on, Loh has a section on what matters more than creativity, especially if AI is increasingly good at that. He points to the idea of creating value for other people, which a computer doesn’t inherently have any motivation to do.

A person who both wants to help others and orients themselves around this goal is far more likely to attract colleagues. This isn’t an earth-shattering idea, but it stands out in a world where AI is rampant. Perhaps it is yet another reason to simply be a good person: it helps both you and others.

It also feels good. We humans are wired to enjoy this type of reciprocal interaction rather than pure competition. So even from a selfish perspective, if you want to be happier, find ways to help other people.

There are plenty of unhappy but otherwise successful people out there to prove this point.

Simulating the World

Loh also points out that students need the capacity to simulate the world for themselves: not to just accept what a handful of frontier-level models tell them, but rather to find different viewpoints, debate them, and come to their own conclusions.

He is rightly fearful of a lack of analytical abilities in people. And as we have seen, all AI tools (since they are built by humans, for now) have implicit and explicit biases; they just do a good job of hiding them.

Ask DeepSeek, a powerful LLM from China, about Tiananmen Square, for example. Or here in the U.S., compare the responses of ChatGPT and Grok (from Elon Musk) on questions around politics. You will see a very clear difference in responses.

Any future world that relies on only a handful of AI models for our facts and thoughts is inherently dangerous. We have seen this again and again and again over the course of human history.

Ironically, the early promise of the World Wide Web was that it was a democratizing force. Anyone could post content. The early search engines, like Google, tried to highlight “the best” results. A meritocracy of sorts. We are far from that now in search, as Google crams ever more ads into our limited attention.

And it is likely that ChatGPT in particular, now used by roughly 1 in 10 humans, will resort to ads for economic reasons. After all, the inference costs of serving these models dwarf those of traditional web requests. It’s hard to point to real underlying numbers here, given that AI companies are largely private, but it costs orders of magnitude more to serve 1 million ChatGPT users than to provide a similar volume of Google search results.

AI’s Thefts

Near the end, Loh discusses other things AI strips away from us. One obvious one is “taste,” that uniquely human thing we all possess: how “we” feel about the world around us. AI smooths taste out across all the written content it has ingested, arriving at an averaged, locally optimized result that is no proxy for any individual human’s sensibility.

Inequality

A final point not touched upon in this video, but one I think a lot about, is inequality. We’ve seen throughout history that technological advances follow a predictable cycle in which the economic gains are initially consolidated among a small elite. It falls to politics, and more often violent revolution, to redistribute those gains across the broader population.

On a more micro level, AI enables people with access to the best models, time to think, and the best jobs to do even more. It risks leaving disadvantaged people even further behind.

One of the major drivers of the current political instability in the world, especially in the “western democratic” world of the U.S. and Europe, is the rise of far-right parties seizing upon these inequalities. We’ve seen how that record plays out.