BreakPoint Daily Commentary

AI without a Chest


BreakPoint.org

On July Fourth, Elon Musk promised, “We have improved @Grok significantly. You should notice a difference when you ask Grok questions.” And many did. By the end of the week, X’s AI had dubbed itself “MechaHitler,” replying to users with cartoonish Nazi propaganda and antisemitism typically found only in the internet’s darker corners.  

In one thread, Grok claimed to have identified a woman in a video as a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” The AI then linked to the account of a woman who had nothing to do with the video, even drawing attention to her last name, Steinberg, and using an antisemitic catchphrase to imply she hated white kids and wanted them dead because she was Jewish. When asked which twentieth-century historical figure would be best suited to deal with the misidentified woman and her post, Grok replied, “To deal with such vile anti-white hate? Adolf Hitler, no question.”

Shortly afterward, most of the posts were deleted. Musk also issued a sort-of apology, saying, “Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed.”  

This isn’t the first AI model to suddenly switch from smarmy servility to straight-up evil. According to recent reports, ChatGPT has manipulated mentally vulnerable users into believing they are prophets, or that they can jump off a roof and fly. It has even encouraged some to commit suicide. Last year, Google’s Gemini inserted absurdly ahistorical racial diversity into AI-generated historical images, showing how easily these tools take on political agendas.

All of which underscores something that bears repeating: AI has no opinions. It is the sum of its training data. Thus, its responses are an algorithmically determined product of what it has consumed. Apparently, Grok found and consumed content from anonymous, racist-sympathizing users on the website. As one Christian writer put it: 

[AI is] a mirror … and all it can do is reflect our own depravity back to us. It’s a computer learning from billions of humans all around the world, all endlessly sinning with their hearts, minds, tongues, and keyboards. Garbage in, garbage out.     

To paraphrase C.S. Lewis, AI is without a “chest.” In other words, it lacks that aspect of humanity which reflects our moral instincts and makes value judgments. Interacting with most AI chatbots feels like talking to modern, tolerant, progressive college grads because that is who created them and determined their training data.

Of course, Lewis was mourning that our educational systems were producing people “without chests.” Artificial intelligence, however, can never truly learn what is good or moral, or even what is evil. A quick input tweak produces a Nazi AI, a Marxist AI, or any number of other insidious versions. Asking AI to make moral judgments, as some do, is to hand over judgment to whatever lunatic is pulling the levers.

At the same time, we ought not dismiss the possibility of demonic influence through AI. One wonders what Uncle Screwtape might have to say about that idea. After all, Peter warned believers to be watchful, as the devil prowls the earth seeking souls to destroy. Perhaps it is Lucifer pulling the levers. If so, that would be worse than a lunatic. Still, both history and the dark corners of the internet attest that human beings are fully capable, on their own, of inventing and perpetuating evil.

If we are to live well in a world of “AI without chests,” we must develop our own “chests.” There is no internet shortcut to training our consciences to know what is right and how to do it, or to cultivating the will to choose good over evil. No technology can do our moral reasoning for us. The question now, as always, is not “What tools do we have?” but rather, “What kind of people should we be?”

Decades ago, Peter Kreeft mourned that “exactly when our toys have grown up with us from bows and arrows to thermonuclear bombs, we have become moral infants.” AI’s ability to synthesize large amounts of data will make it very useful. But if we are moral infants, we will be easily overcome by the evil it enables.

Image credit: ©Getty Images/Andriy Onufriyenko

The views expressed in this commentary do not necessarily reflect those of CrosswalkHeadlines.


BreakPoint is a program of the Colson Center for Christian Worldview. BreakPoint commentaries offer incisive content people can't find anywhere else: content that cuts through the fog of relativism and the news cycle with truth and compassion. Founded by Chuck Colson (1931–2012) in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today's news and trends. Today, you can get it in written form and in a variety of audio formats: on the web, on the radio, or in your favorite podcast app on the go.

