

What Kind of Person Do You Want to Be?

Denise Daniels, Ph.D., Hudson T. Harrison Professor of Entrepreneurship

“Technology is neither good nor bad. Neither is it neutral.” – Kranzberg’s First Law

Once when I was at my wits’ end in an altercation with one of my then-adolescent children I asked in desperation, “Is this the kind of person you want to be?!” The immediate response was a sheepish, “No,” and like that, the altercation was over.

I have thought about that moment and that question many times since. Most of the time I think about it in my own life: Am I acting in a way that is consistent with the kind of person I want to be? But I have also started to think of it on behalf of my students. What kind of people do they want to be? What are we trying to shape them toward? What kind of society will they inherit? What kind of people will they need to be in order to get a job, succeed at a career, flourish in a family, serve others, and accomplish God’s purposes?

These have always been among the questions that those in Christian higher education have asked, but the stakes seem higher now. Our context is changing, and the demand for any given skill set is changing along with it. At this moment it seems likely that artificial intelligence (AI) will exacerbate the speed of that change.

A (Very) Brief History of AI

In its early iterations – the first wave of AI beginning in the middle of the 20th century – the goal was to get machines to imitate humans. It was thought that if we provided enough rules and information, the machine could deduce its way into acting human. The famous Turing test was an assessment of the success of those efforts. But for decades we weren’t able to provide enough rules, and the machines weren’t able to deduce their way to an imitation of humanity.

The second wave of AI recognized that just as babies use inductive approaches to learn about the world and their role in it, machines too would need a ground-up approach to learning. But there was not enough data or computing power available to feed the machines what they needed to approximate human responses. Not enough, that is, until the development of the internet and “big data,” along with exponentially increasing computer speeds. By the mid-1990s, machines began to learn. Statistical models allowed them to determine the probability of the next word or phrase in a sentence, or the color of the next pixel, or the frequency and duration of the next note. Given enough data, enough computing power, and enough time, the recent models of AI would likely pass the Turing test. And within the past few months, various generative AI models have been released to the public, making them accessible to anyone with a smartphone or computer.
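For readers curious what “determining the probability of the next word” looks like in practice, here is a minimal, purely illustrative sketch in Python, not drawn from any particular AI system: it counts which words follow which in a tiny sample text and turns those counts into probabilities. Today’s generative models are vastly more sophisticated, but the statistical intuition is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text data modern models learn from.
corpus = "the machine learns the pattern and the machine predicts the next word".split()

# Count how often each word follows each preceding word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def next_word_probabilities(word):
    """Estimate the probability of each word that might come next."""
    counts = next_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'machine': 0.5, 'pattern': 0.25, 'next': 0.25}
```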

But somewhere along the way, the goal of having a machine that acts like a human was set aside in favor of a machine that performs even better than a human. Within days of being introduced, ChatGPT could write papers that were indistinguishable from high-caliber college students’ efforts, and could do so in seconds. Generative AI could create art that won competitions. And only months after ChatGPT’s debut, GPT-4 could pass the bar exam.

Implications for Humans

Any rational actor should be asking whether it makes sense for us to continue to do the work we’ve always done – particularly the tedious bits – if a computer can do it faster and better. We don’t have any reluctance about using a calculator to solve arithmetic problems, or an engineering program to draw up blueprints. Of course, there are questions about whether it is cheating if we are working in collaboration with a computer to write a paper or create a work of art. Yet this same concern has been raised with other technological advances. I remember a high school chemistry teacher shaking his head and begrudgingly allowing us to bring our calculators to class. But he told us it was a slippery slope: Someday a student was going to bring a computer to class, and he wasn’t sure what he would do then.

Whether there is integrity in using generative AI to create something we present as our own is not a trivial question. But perhaps the more important issue at stake with AI is exploring how it impacts our understanding of ourselves as humans, made in the image of God. Two questions come readily to mind.

First, our engagement with generative AI may raise the question of whether humans are simply inferior computers. If we understand ourselves as doing what computers do (but in many cases not as well), this question is a legitimate one. For decades, cognitive psychology has modeled its understanding of the brain on the computer, and the current Common Model of Cognition assumes architectural similarities between natural and artificial “minds.” Yet humans are more than just “brains-on-a-stick,” as James K.A. Smith memorably put it. We are embodied in a physical world and do more than simply think. In this sense we are qualitatively different from machines that can learn.

A second question raised by generative AI is how much we can offload to a computer before diminishing our humanity. In Robert Heinlein’s science fiction novel “Time Enough for Love,” the character Lazarus Long articulates a long list of diverse tasks humans should be able to do, concluding that “Specialization is for insects.” Perhaps today we would conclude that specialization is for computers. Certainly, computers can do many specific tasks better and faster than we can, and it makes sense to offload some of our work to them. But at what point do our efforts at offloading tedium begin to diminish our own skill set, abilities, or character? Will using generative AI diminish our humanity?

Implications for Teaching

The questions raised above require us to think more deeply about how we approach the teaching task. In some cases, that means doubling down on what we are already doing. In others, it means thinking fundamentally differently about how we engage our students and the teaching process. Below are three implications for the teaching task as we consider the challenges and opportunities provided by generative AI.

To What End? Determining whether using AI will diminish our humanity – or that of our students – has to begin with our teleological assumptions: What kind of people do we want to be? What kind of people do we want our students to become? To what end are we teaching the disciplinary content we require? What skills do we want our students to master and why? What habits do we want them to form? What loves do we want to encourage?

Many of the parents I talk to want their children to go to college so they can be career ready. They fear that the societal transformation we are undergoing may preclude good job opportunities for their young adult offspring. They are worried about the future. And they are particularly worried about how much the education we provide will cost. These are good and reasonable concerns, and we have to take them seriously. But they are not ultimate concerns. These same parents also have ultimate concerns. Just as we do, they want their children to love God and neighbor. They want them to engage in healthy relationships and cultivate community. They want them more and more to reflect the imago Dei.

While it doesn’t sound like it has anything to do with machine learning, our first response to generative AI should be to focus on what is true and good and beautiful. We need to think carefully about how we inspire a love of learning rather than simply valuing the right answer.

Old Ways of Learning. Engaging our students as humans and interacting with them in real time and space is another way of responding to the opportunities and challenges created by generative AI. We are not simply inferior computers, but rather embodied creatures who live in the material world that God made. Our teaching should reflect this truth. And this may require us at a very pragmatic level to think about how to engage old ways of learning and knowing in order to accomplish our ultimate purposes.

More things that used to be homework may need to be done in class. We might want to begin requiring handwritten notes or textual annotations in lieu of a before-class reading quiz. In-class handwritten exams and oral assessments may need to take the place of computer-based or take-home exams. One CACE participant has begun providing optional TA-led reading labs in lieu of a reading quiz. Allowed only a book, pen, and paper, students in these labs spend the time reading quietly together.

New Ways of Learning. Finally, the technological advances that have brought us generative AI have opportunities embedded in them. We should not reject these opportunities even while we are focusing on teleological goals or old-fashioned ways of learning. How might we think about engaging the new ways of learning and knowing provided by tools like ChatGPT, DALL-E 2, or any number of other generative AI programs? What kinds of guidelines should we create? These questions require more space than I have and will require ongoing conversation in community. But we may want to think about productive uses of AI in the classroom. Can it be used to create reading quiz questions? In-class exercises? Assessment rubrics? As we continue to ask teleological questions about our ultimate ends in educating students, we will be better able to answer these practical questions about how we might use the tools provided by generative AI.

 
