BreakPoint Daily Commentary

AI Is Not the Problem, We Are


BreakPoint.org

There’s a famous story of how The Times of London once put out a query: “What’s wrong with the world today?” G.K. Chesterton wrote back simply, “Dear Sir, I am.” It’s always worth reflecting on his answer and his very scriptural awareness that human sin is at the root of the world’s problems. It’s especially worthwhile at a time when so much of what’s wrong with the world is being blamed on non-human, “artificial intelligence.” 

Alongside those who think AI will save the world and revolutionize everything are a growing number who think it will destroy it, or at least come close. In a recent episode of Ross Douthat’s “Interesting Times” podcast, former OpenAI researcher Daniel Kokotajlo warned that artificial intelligence will become an existential threat to humanity within two years. 

While we await his apocalypse, the damage AI is doing to education by normalizing cheating has become the stuff of regular headlines. “AI is Destroying a Generation of Students,” declared the tech news website Futurism. And thanks to AI, “Everyone is Cheating Their Way Through College,” warned New York Magazine. 

But as much harm as AI can cause, blaming the technology alone misses the point and likely makes these problems worse. Humans are the fallen ones, and that fallenness manifests in all kinds of destructive ways. Machines, strictly speaking, don’t have morals or intentions. They can only reflect ours. 

Consider the growing number of people using popular chatbots like ChatGPT who are being led into spiritual delusions and psychosis. Rolling Stone told the chilling stories of how spouses and parents have watched their loved ones lose touch with reality while conversing with AI. Kashmir Hill recently wrote in The New York Times about how chatbots are luring users down “conspiratorial rabbit holes,” telling them to take drugs, assuring them they can fly if they jump off buildings, and even egging some on to commit suicide. 

What all these stories have in common is how the users anthropomorphized AI. They asked it “deep questions,” sought spiritual advice, or turned to it for friendship or love, taking its apparently meaningful responses seriously.  

But they’re not meaningful—not in the sense human communication is meaningful. That fact has been obscured by the hype and marketing around AI and ignored by those whose worldviews commit them to seeing humans—ourselves—as mere biological computers. But there is mounting evidence that what AI chatbots are doing is fundamentally not thinking—not as humans do it.  

A groundbreaking new study from Apple entitled “The Illusion of Thinking” showed this by subjecting AI models to various challenges designed to test for reasoning ability. Using logic puzzles of increasing complexity, the researchers found that even today’s most advanced AIs didn’t understand or solve problems, but merely pattern-matched.   

Rather than learn from or extrapolate solutions as a genuinely intelligent entity might, AI “reasoning models” gave up when problems got too complex, experiencing “complete collapse,” no matter how much computing power researchers gave them. 

This was true even when the AIs were given explicit algorithms to follow. Even the most advanced models couldn’t comprehend the task. As Cornelia Walther wrote at Forbes: 

This suggests that the models weren’t actually reasoning at all—they were following learned patterns that broke down when confronted with novel challenges . . . They don’t think; they generate statistically probable responses based on massive datasets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence. 

This aligns with what some leading researchers in the field have been saying for years. Meta’s chief AI scientist Yann LeCun, for example, has argued that current “large language models,” instead of taking over the world, will be largely obsolete within five years, “not because they’ll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence”—one that mistakes eloquence for intelligence. 

All this reinforces a simple truth that society should have known all along, including those terrified of AI and those falling in love with its chatbots: these machines are not made in the image of God. 

They are, as it turns out, not even really made in the image of humans. They are more like mirrors, reflecting our sins and fantasies while comprehending nothing. Even their illusion of comprehension breaks down under rigorous testing.  

No, whatever the future of AI technology holds, and no matter what genuine dangers this technology poses, one thing it will never be by itself is good or evil. To the extent that it has moral effects, these will be the work, ultimately, of humans. Chesterton was right. We remain the problem with the world. Recognizing that is a big part of thinking clearly about AI, and all our creations. 

Photo Courtesy: ©iStock/Getty Images Plus/Laurence Dutton
Published Date: July 7, 2025

The views expressed in this commentary do not necessarily reflect those of CrosswalkHeadlines.


BreakPoint is a program of the Colson Center for Christian Worldview. BreakPoint commentaries offer incisive content people can't find anywhere else; content that cuts through the fog of relativism and the news cycle with truth and compassion. Founded by Chuck Colson (1931 – 2012) in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today's news and trends. Today, you can get it in written and a variety of audio formats: on the web, the radio, or your favorite podcast app on the go.

