I was recently at the 2024 Faculty Development Conference of the International Alliance for Christian Education and went to a splendid panel discussion about the potential and pitfalls of AI in college education. I have to admit that I have been a little “slow on the uptake” to explore AI, so the session was enormously helpful as an introduction to AI software.
The session prompted me to do some initial forays on ChatGPT, in particular. I never want to be the sort of “old guy” who has no clue about the latest technologies. The panel also assured me that while AI creates obvious new problems for educators (AI-generated papers, most immediately), it also has some potential value for research, and even some potentially legitimate uses in humanities courses.
One of the first things I did was to see how AI generates a student paper. I typed in “three page paper on the Second Great Awakening” and it spat out said paper in about three seconds. On a quick skim, I thought the paper was very well-organized, if pretty bland. If I had gotten it from a college freshman, I would have given it an A-. What to do about the potential for cheating with AI is not my focus here, and plenty has already been written on the issue.
Of greater interest to me is what one could actually do with AI software that would be valuable for history research. My conclusion is that using ChatGPT is sort of like doing turbo-charged research on Wikipedia. It is very good for finding out obscure but non-debatable information. For example, I just opened the ChatGPT app on my phone and asked what year President Benjamin Harrison was born (1833). This is information that almost no one would know off the top of their head, but it is not subject to debate or interpretation.
You can get similar results from Siri or other digital assistants, but as soon as your query gets more interpretive than (for example) a birth date, Siri starts giving you a list of websites rather than giving you answers. This is what I mean by ChatGPT “turbo-charging” the process - it eliminates the step of sorting through web results.
The problem - and this is a huge problem if you don’t know what you’re doing - is that ChatGPT will sometimes give totally incorrect or even imaginary answers to your questions. For example, I asked ChatGPT who was the most prominent Baptist evangelist of the 1830s. It told me Charles Finney, which any expert will know is wrong because Finney was a Presbyterian. I told ChatGPT it was wrong, and it agreed (better than arguing with me about it, I suppose).
Then I asked ChatGPT who the most prominent Baptist evangelist of the 1840s was. It said William Miller, which is a far better answer than Finney because Miller at least started out as a Baptist before becoming a founder of the Adventist movement. Still, one could imagine a better answer than Miller. Siri listed Jesse Mercer as one of its website options, but he’s not ideal since he died in 1841. Maybe the best answer would be Jacob Knapp, who appears in Siri’s search results, but only after some scrolling.
The IACE panel also introduced me to the concept of AI “hallucination,” meaning that the software sometimes creates fake answers (for technical reasons I don’t completely understand). For example, I recently asked ChatGPT about books on “Two Kingdoms” theology, and overall the answers seemed good. I even ordered copies of a couple of the books it suggested. (This topic is not a specialty of mine, but I know enough to know some of the major writers in the field.) As I looked through the options, however, I sought more information on a certain book in the list, by an author whose work I knew. The book simply did not exist. I told ChatGPT so, and it agreed - there is no such book.
So if ChatGPT is capable of making howling errors (claiming Finney was a Baptist) or hallucinating books that do not exist, what’s the point? Admittedly, the software does have significant limitations that would keep me (as of now) from leaning heavily on it for research. But like some other non-scholarly resources on the internet, AI can be helpful if you know what it is and how to treat the results you get.
Probably the most helpful thing ChatGPT has done for me in recent weeks is instantly explaining an arcane medical term. In my reading I came across the term “course of salivation” as a treatment for “liver complaint.” I didn’t know what this meant, so I asked ChatGPT, and it immediately gave me what seemed to be a clear and concise description of the treatment - inducing salivation in order to draw out toxins. The problem was that one of the most common ways to induce such salivation was to treat the patient with mercury, which doctors tragically failed to realize was poisonous at any dose.
I had a harder time obtaining the same information quickly on a Google search. I am sure that I could have found it on Google eventually, but something about the way I asked ChatGPT gave me exactly what I wanted, in much less time than Google would have taken. Would that every ChatGPT search were so effective!
My primary takeaway is that it is worth every teacher’s and researcher’s time to open up ChatGPT (or the AI software of their choice) and become familiar with what it can do. It is hard to say just how central AI will become to research and teaching in the coming years, but it seems certain that it is not going away.