AI and the Educator’s IOU

Dyanne Martin, Ph.D., Assistant Professor of English and Education

Richard Gibson’s three-day CACE seminar on ChatGPT provided me not only with a comprehensive history of the origin and development of artificial intelligence (AI), but also with a clear sense of what this AI will mean for me as a professor of literature and writing. As we learned, AI has existed in some form or another since the 1940s and has always had advocates and detractors, but this sophisticated new incarnation creates fresh complexities for educators, especially for those of us charged with teaching college students how to master the research and writing processes. While varying opinions on the use of ChatGPT in the classroom abound, on one thing we can all agree: writing teachers, arguably more than any other kind of educator, must make a paradigm shift to prevent students from falling prey to the temptation to turn their brains over to much-hyped algorithms.

Where promoters and journalists see intelligence in these algorithms, and some even debate whether the silicon will one day achieve “consciousness,” this seminar showed me that current models of AI are largely “large language models”: networks of resemblance built on what programmers might call a “type-ahead” feature. As Dr. Gibson remarked, “ChatGPT is auto-correct on steroids.” Although his wry understatement helps us place the AI in context, his point is an important one when we consider ChatGPT’s potential to undermine students’ ability to engage in critical thinking. At the most basic level, students, trusting the algorithms to think for them, already place unwarranted faith in the grammar- and spell-check features of their word processors. I like to remind students that if the computer were so smart, every student would produce essays free of sentence-level errors one hundred percent of the time. The computer lacks the intelligence to think: it merely appears to think because of sophisticated coding that amounts to a limited set of pattern-matching strategies. These predictive language models are certainly not intelligence in any robust sense of the word, and the quality of their output is subject to the data fed into them.
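
To see Dr. Gibson’s quip in miniature, consider the sketch below: a toy “type-ahead” predictor in Python, offered only as an illustration of the principle (the corpus is made up, and real large language models are vastly more sophisticated, but the predict-the-next-word idea is the same). It produces plausible-looking text purely by pattern matching, and its output can only ever be as good as the text it was trained on.

```python
import random
from collections import defaultdict

# Toy "type-ahead" predictor: learn which word tends to follow which,
# then generate text by pattern matching. Nothing here "understands"
# anything; it only replays statistics gathered from the training text.
corpus = (
    "the quality of the output depends on the quality of the data "
    "because the model only repeats the patterns in the data"
)

# Count observed continuations: for each word, the words seen after it.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def predict_next(word):
    """Sample a next word from the continuations observed in training."""
    options = follows.get(word)
    return random.choice(options) if options else None

# "Auto-correct on steroids," minus the steroids: chain the predictions.
word, sentence = "the", ["the"]
for _ in range(10):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

The toy can only recombine what its corpus contains; feed it error-ridden text and it will confidently predict errors, which is the point about data quality writ small.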

Therein lies the crux of our dilemma with ChatGPT. The program responds to queries or prompts by drawing on pre-existing data, amalgamating the information available to it, and reconstituting it in new forms; it cannot create ex nihilo. It is also subject to error. In the same way that my students need to know the stylistic and mechanical conventions of writing in order to understand when to listen to the computer, when to ignore it, and when to break the rules, they also need to know the conventions and mechanics of good research in order to recognize when ChatGPT is in error. One of the current hallmarks of AI-generated papers is incorrect attributions, though we expect the algorithms to improve even in this area over time. Despite its failings, especially those present in the free version of ChatGPT, the AI still poses a formidable challenge to our age-old quest to have students engage in critical thinking. Accordingly, we writing teachers must recalibrate our assignments both to demystify the AI for our students and to make space for the necessary work of original thinking that self-generated writing entails.

In different academic disciplines, demystification necessarily takes on different forms. One colleague wondered whether there are assignments we should farm out to ChatGPT. Another had already experimented with having students use the AI to polish their business papers. For me as a writing teacher, however, such an assignment would contravene the point of the instruction: in a writing class, students must demonstrate the ability to revise and edit their own papers, to wrestle with what they mean to say, and to refine what they say until they say it in the most effective way possible. Good writers know that different syntactic strings have different rhetorical effects and that there is a corresponding art to saying things well. Offloading to AI the skills they would cultivate in such exercises is not only cheating; it also deprives students of the rewards they might gain as they wrestle with the agon of knowledge. From the ancients onward, understanding and truth have been hard won. No technology will shortcut that process. Some metamorphoses require struggle before we can fly.

Although I am still contemplating what my AI-sensitive composition class will look like in the future, I know I will assign several seminal articles that I encountered in this workshop in order to promote dialogue and debate among my students about this very issue. I might even have students correct AI-generated essays according to the principles of good writing they will learn in class in order to demonstrate the technology’s limitations. Whatever I decide for my class, I cannot imagine an assignment, research-based or otherwise, wherein I would want students to shortcut the vital work of original thinking by offloading the task to ChatGPT. Would it be convenient for them? Sure. Might it make my grading life much less torturous? Probably. But will it make students lazy thinkers? Without a doubt—and the last thing our rudderless society needs is more lazy thinkers.

As participants in the seminar tossed around ideas, fears, and hopes regarding this new technology, Dr. Gibson pointed out that we are not going to win this technological arms race. No matter what programmers invent to detect AI-generated text in students’ work, other coders will likely be one or more steps ahead of that software, offering users “undetectable” unoriginal papers. So what does this mean for educators and students at a Christian liberal arts institution? First and foremost, it should remind us that we are cultivating souls, not just educating minds, and that those souls are responsible for walking with integrity before God. We can ground our students in that reality by reminding them of the spiritual principles that should govern their academic work and, indeed, all of their lives. We are called to teach students the value of trust and honesty, as well as the consequences of violating those values.

Educators may never be able to prevent access to software that enables students to cheat, nor are conscientious programmers likely to create software that fully detects the cheating. But educators certainly can make the software irrelevant in that regard by teaching students why cheating violates God’s precepts and destroys trust in their human relationships. One colleague in our workshop astutely cited Melvin Kranzberg’s first law of technology: “Technology is neither good nor bad; nor is it neutral.” It is the non-neutrality of ChatGPT that calls for intervention from a spiritual standpoint as we help students discern when to use the program responsibly and when to use their own brains. The dangers to critical thinking are not inconsiderable. After establishing a solid foundation in the ethics of academic work, educators must find new ways to design assignments that break important intellectual tasks into discrete skills students can master in class and then reassemble into larger units of discourse, without resorting to the technological Shangri-La that ChatGPT is advertised to be. In the end, ChatGPT puts students in a position they have never quite been in before. They now face a Faustian bargain that offers them passable papers free of charge at the press of a button. All they give in exchange is their minds.
