
It’s not uncommon to hear artificial intelligence described as a new “tool” that extends and expands our technological capabilities. Already, people are using AI in thousands of ways. All tools help accomplish a task more easily or efficiently. Some tools, however, have the potential to change the task at a fundamental level.

This is among the challenges presented by AI. If, in the end, it is not clear what AI is helping us to achieve more efficiently, this emerging technology will be easily abused. AI’s potential impact on education is a prime example.  

Since the days of Socrates, the goal of education has been not only for students to gain knowledge but also to gain the wisdom and experience to use that knowledge well. Whether the class texts appeared on scrolls or on screens mattered little. Learning remained the goal, regardless of the tools used.

In a recent article at The Hill, English professor Mark Massaro described a “wave” of chatbot cheating that is making it nearly impossible to grade assignments or even to know whether students complete them. He has received essays written entirely by AI, complete with fake citations and statistics but meticulously formatted to appear legitimate. The cheating hurts the dishonest students, who aren’t learning anything. But attempts to flag AI-generated assignments, a process often itself powered by AI, can also yield false positives that bring honest students under suspicion.

Some professors are attempting to make peace with the technology, encouraging students to use AI-generated “scaffolding” to construct their essays. However, this is kind of like legalizing drugs: There’s little evidence it will cut down on abuse.   

Consider also the recent flood of fake news produced by AI. In an article in The Washington Post, Pranshu Verma reported that “since May, websites hosting AI-created false articles have increased by more than 1,000 percent.” According to one AI researcher, “Some of these sites are generating hundreds if not thousands of articles a day. … This is why we call it the next great misinformation superspreader.”

Sometimes, this faux journalism appears among otherwise legitimate articles. Often, publications use the technology to cut corners and feed the content machine. But it can also have more sinister consequences.

A recent AI-generated story alleged that Israeli Prime Minister Benjamin Netanyahu had murdered his psychiatrist. The fact that this psychiatrist never existed didn’t stop the story from circulating on TV, news sites, and social media in several languages. When confronted, the owners of the site claimed they had merely republished a piece of “satire,” but the incident demonstrates that fake content of this kind is produced at a volume that would be nearly impossible to police.

Of course, there’s no sense in trying to put the AI genie back in the bottle. For better or worse, the technology is here to stay. We must develop the ability to distinguish its legitimate uses from its illegitimate ones. In other words, we must know what AI is for before experimenting with what it can do.

That will require first knowing what human beings are for. For example, Genesis is clear (and research confirms) that human beings were made to work. After the fall, toil “by the sweat of your brow” became part of work. The best human inventions throughout history have been the tools that reduce needless toil, blunt the effects of the curse, and restore some dignity to those who work.

We should ask whether a given application of AI helps achieve worthy human goals—for instance, teaching students or accurately reporting news—or whether it offers shady shortcuts and clickbait instead. Does it restore dignity to human work, or will it leave us like the squashy passengers of the ship in Pixar’s WALL-E—coddled, fed, entertained, and utterly useless?

Perhaps most importantly, we must govern what AI is doing to our relationships. Even our most impressive inventions—such as the printing press, the telephone, and the internet—facilitated more rapid and accurate human communication, but they also left us more isolated and disconnected from those closest to us. Obviously, artificial intelligence carries an even greater capacity to replace human communication and relationships (for example, chatbots and AI girlfriends).

In a sense, the most important questions as we enter the age of AI are not new. We must ask, what are humans for? And how can we love one another well? These questions won’t easily untangle every ethical dilemma, but they can help distinguish between tools designed to fulfill the creation mandate and technologies designed to rewrite it. 

This BreakPoint was co-authored by Shane Morris. For more resources to live like a Christian in this cultural moment, go to breakpoint.org.


Publish Date: January 8, 2024

The views expressed in this commentary do not necessarily reflect those of Crosswalk Headlines.


BreakPoint is a program of the Colson Center for Christian Worldview. BreakPoint commentaries offer incisive content people can’t find anywhere else: content that cuts through the fog of relativism and the news cycle with truth and compassion. Founded by Chuck Colson (1931–2012) in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today’s news and trends. Today, you can get it in written form and in a variety of audio formats: on the web, the radio, or your favorite podcast app on the go.

John Stonestreet is President of the Colson Center for Christian Worldview, and radio host of BreakPoint, a daily national radio program providing thought-provoking commentaries on current events and life issues from a biblical worldview. John holds degrees from Trinity Evangelical Divinity School (IL) and Bryan College (TN), and is the co-author of Making Sense of Your World: A Biblical Worldview.