

The Answer to Generative AI is Pedagogy Not Policy

Richard Gibson, Ph.D., Professor of English

Earlier this spring, my provost asked for my thoughts on a sister school’s decision to name ChatGPT among the malefactors in a campus-wide syllabus statement on plagiarism. Should we follow suit? I was consulted because I had just convened a three-day faculty seminar on artificial intelligence and liberal arts education, which included conversation partners hailing from biology, studio art, education, mathematics, economics, sociology, computer science, theology, and philosophy, among other disciplines. That experience determined my answer to the provost: Wait and see; let guidelines bubble up from the faculty as they discover what the new tools can and cannot do. A syllabus statement might be useful at some point, I argued, but anything we craft now will likely seem outdated in short order. More importantly, the seminar suggested that policies, even well-designed ones, will not address the deeper issues posed by the new technologies.

I urged caution for three reasons. First, while my colleagues all recognized that the new tools could be used to cheat on assignments in their classes (and some shared stories of how students had already done so), no one suggested that we outlaw AI entirely. Some voiced optimism about generative AI’s promise for education, others skepticism. One participant had already integrated Midjourney into his creative practice. Most participants, though, had little or no direct exposure prior to our meetings; the seminar was their first chance to test drive ChatGPT, DALL-E, and company, and, in turn, to think aloud about generative AI’s appropriate uses in their classes. With such a range of opinion and experience, a consensus about campus policy was impossible to imagine. We agreed, however, that in the months ahead we would need to reassess not just the nuts and bolts of assignments but also the rationales driving our assignments. If ChatGPT can ace an exercise, one participant wondered, do students really need to do it? Our students are going to ask this question; so should we.

Second, a statement that targets current brands such as ChatGPT would approach the problem too narrowly. Even prohibiting generative AI (as a larger category) may quickly prove problematic, given how many applications will soon receive an AI injection. With the unveiling of the Wordcraft Writers Workshop last year, for example, Google previewed how a large language model can be integrated into a word processing application as an editor, brainstorming partner, and ghostwriter. In the very near future, then, students won’t need to pay covert visits to ChatGPT; the AI assistant will be built into Microsoft Word, Google Docs, and whatever comes next. As a result, tools that now look like cheatware may, to our future students, appear to be ordinary parts of word processing.

Finally, singling out ChatGPT in our syllabuses would send the wrong signal. We must not delude ourselves: Generative AI cannot be dealt with simply by putting the right regulations in place. Figuring out AI’s role in higher education is a pedagogical problem, not an administrative one, and the problem is likely to find diverse solutions across the disciplines, perhaps even within academic departments. Our classrooms are now laboratories in which AI trials are going to be conducted—whether by us or, if we fail to take the lead, by our students without our input.

Let me be clear that I am not advocating the immediate adoption of generative AI throughout the land. Far from it. The faculty seminar showed me that while there are many possible reactions to the new status quo (including banishing digital technology from the classroom), all of us must adjust. My colleagues wondered whether some “vulnerable” homework assignments would now need to occur in the classroom both to prevent cheating and to ensure thinking. A computer scientist tolled “the death knell” for take-home exams after discovering that ChatGPT would score 98/100 on an exam that requires knowledge of “a somewhat obscure programming language.” Several participants considered allocating more time to in-class writing because they want their students to “think from scratch.” One colleague caught our attention as he related a previous semester’s experiment with a “reading lab” in which students completed assigned reading in a quiet, phone-free space. Multiple participants discussed how, early in the semester, we might demystify the technology somewhat, such as by teaching students about large language models, and, later, clarify how such tools do not fully replicate the human crafts of writing and reasoning.

The seminar revealed that external rules—particularly the kind that can be easily folded into a syllabus statement or student handbook—will do little to sustain the robust learning environment that educators labor, individually and corporately, to provide. Relying on that sort of top-down regulation, as one participant noted, leads to an arms race against the tech companies that we cannot win.

So how should educators proceed? In her Rules: A Short History of What We Live By (2022), historian of science Lorraine Daston recalls another sort of “rule” that we might apply. That is the ancient understanding of “rule”—kanon in Greek and regula in Latin—as “model” or “paradigm,” preserved in English in a formula such as “The Rule of St. Benedict.” Daston argues that “paradigmatic rules” are especially useful because, unlike laws, they do not demand special dispensations (or litigation) when exceptions arise or drastic revisions when circumstances change. “Rules-as-models are the most supple, nimble rules of all,” Daston writes, “as supple and nimble as human learning.”

The model may be to some extent codified, but Daston stresses that this kind of rule’s enactment depends on the relationship between those who provide the model and those who imitate it. In The Rule of St. Benedict, the abbot holds his post because he doesn’t simply know the rule; he embodies it. The abbot, in turn, exercises discretion over the application of the code to promote the good of the members of and visitors to the community. Those under the rule strive to emulate this model person—not just “the rules”—in their own lives. “Whether the model was the abbot of a monastery or the artwork of a master or even the paradigmatic problem in a mathematics textbook, it could be endlessly adapted as circumstances demanded,” Daston observes. The ultimate goal of such a rule is not to police specific jurisdictions; it is to form people so that they can carry general principles, as well as the rule’s animating spirit, into new settings, projects, and problems. We need a new Rule of Education—one that grants educators discretion and is invested in students’ formation—for the age of AI.

The new wave of loquacious technologies is already affecting how our students approach their studies, whether we like it or not. One response is to impose regulations from above. That approach will fail. Better, I propose, to retool our classes to safeguard what we absolutely should not farm out to machines while also creating opportunities to learn together, by trial and error, where AI can help us.

Emerging adults need to see, as one of my colleagues put it, “the benefits of the struggle” in their own lives as well as their instructors’. The work before us—to preserve old practices and to implement new ones—provides the ideal occasion not only to talk with our students about the intellectual goods offered by our fields that we want them to experience, but also to share how we have been shaped for the better by the slow, often arduous work of joining a discipline. Our students need more than technology, and the answer cannot simply come in the form of another list of dos and don’ts in the already crowded space of a syllabus. They need models for how to navigate these new realities—paradigms for their practice, life models. AI’s foremost challenge for higher education is to think afresh about forming humans.

 
