Bryan McGraw

AI & Liberal Arts

Bryan T. McGraw, Ph.D., Associate Professor of Politics, Dean of Social Sciences

I occasionally teach an advanced seminar on the intersection of war and justice, taking students through the history of philosophical and theological reflection on war and then grappling with discrete issues like nuclear weapons, terrorism, and the like. After reading about and playing with the ChatGPT Large Language Model (LLM), I decided it would be worthwhile to try to integrate it into one of my assignments. For one of their four papers in the class, students crafted a prompt and then tried to get ChatGPT to write a 1,250-word essay in response to it. Each student then wrote an accompanying paper reflecting on the strengths and weaknesses of ChatGPT’s attempt.

Some of its weaknesses were notable, but not especially interesting. The LLM made mistakes in sketching out the contours of Just War Theory, misdescribing or simply missing some of its core principles. It resisted writing anything over 750 words, and its writing was rather boring. But those sorts of problems aren’t very telling, since LLMs will no doubt continue to get better and solve these more technical kinds of shortcomings.

Where it really had difficulty, though, was in an area at the heart of the class. I have designed the class to do much more than just produce students who can rattle off all of the jus ad bellum or jus in bello principles or even tell you what Grotius or Vitoria added to the tradition. Students should (hopefully) be able to do those things, but do them in the service of something more important: getting good at thinking through the moral conundrums that inevitably attend a decision to go to war (or not to) and making solid, defensible judgments about those decisions. My hope is that students who take this class improve their capacity for good moral judgments about war, whether or not they are ever in a position to actually exercise that capacity. (I confess that I probably pay a bit more attention to the ROTC students who take the class precisely because they are indeed more likely to be in that position.)

That is the thing ChatGPT could not do, or at least could not do consistently well. If you asked it to generate an argument defending the bombing of Hiroshima, it could produce two or three good arguments to that effect and even set them against the arguments opposing the atomic bombing. But if you asked it whether the U.S. should have dropped the atomic bombs on Hiroshima and Nagasaki, it would merely list off the arguments for and against and then give some mealy-mouthed conclusion that usually refused to take a side. And even if you did force it to choose, it was entirely unclear why it chose one way or another, except that you had forced it to.

The upshot is that while the LLM is quite good at assembling text and offering arguments, it isn’t that good at the thing I care most about for my students: making solid, defensible moral judgments. Why might this be the case? Well, LLMs are “predictive”: they operate by getting good at predicting, given the large corpus of data on which they are trained, how we should answer questions. That is to say, they merely (if in very sophisticated ways) give back what they have been force-fed. They have been “educated” in the sense of being quite good (maybe in some respects better than most humans) at giving us the answers we have already given them. But while the practice of moral judgment does indeed rely on past experiences, good moral judgments are certainly not always continuous with those experiences. Indeed, sometimes good moral judgment requires significant, even revolutionary, shifts, especially when faced with novel dilemmas. No one, after all, had been forced to answer the question of whether to use nuclear weapons until 1945.

One of the core things a Christian liberal arts education ought to do is to help students improve their capacity for making good moral judgments, especially in situations where there seem to be genuine dilemmas. LLMs can assemble a tremendous amount of information and sketch out what others have argued (maybe even making up a couple of fictional claims along the way). But they are not—or certainly do not seem to be—much use in making moral judgments. That requires the cultivation of virtue, training in reasoning, and that ineffable—and very fallibly human—capacity to think anew about our most challenging questions. LLMs are not, then, really a threat to the Christian liberal arts, unless, that is, we set aside this core goal and allow them to be.