The Pietist Schoolman

How Not to Approach AI

Advice for fellow educators

Chris Gehrz
Jun 10, 2025
One of my few mantras in teaching writing to students — or in keeping this blog — is that we often have to write in order to determine what we think. That’s particularly true when you’re not sure what you think about something that seems to threaten the whole project of teaching writing: generative AI.

My summer break started with the New York Times reporting that “OpenAI, the maker of ChatGPT, has a plan to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life,” and it will continue with me teaching a World War I class whose online exams and essays seem to invite the illicit use of tools like ChatGPT. So before that course launches next week, I’m going to use today’s post to think through three principles for how not to approach AI within the context of education.

If I start to sound like I’m accusing someone of something, know that my finger is pointed at me. There’s nothing that I’m about to warn against that I haven’t either done or contemplated doing.

Don’t Ignore AI — and Don’t Surrender to It

All things being equal, I would rather not think about AI at all. But wishing away a technological innovation that's so obviously revolutionary won't solve the problem. College professors and other teachers need to understand what generative AI is and what it can and can't do, how it's being commodified for the use of students, teachers, and administrators (and who is making money in the process), and the problems it entails — environmental, ethical, and legal, not just the fact that it enables cheating (more below) and is often unreliable.

To put it mildly, AI and its implications are complicated. So it’s good to consult experts, like my colleague in computer science who spent an hour this spring unpacking AI theory and practice for students in my higher education seminar. But beware another kind of expert: those who attach words like “inevitability” to AI as they offer to help individuals and institutions accommodate themselves to the unavoidable revolution.

This seems like a rare good time to make use of Substack’s AI-powered “Generate image” tool. Above is what it suggested for “artificial intelligence inevitable.”

As long as teachers have had students, technological changes have come along to disrupt the educational status quo. Such progress has sometimes improved the quality of education, or at least access to it, and it has often promised to reduce the work that humans have to put into teaching and learning. But just as often, it has come packaged in claims of inevitable change that haven't panned out. Centuries after the inventions of the codex and the printing press, I'm still leading discussions like Socrates and giving lectures like Thomas Aquinas. It wasn't so long ago that the Internet and the World Wide Web, personal computers, and smartphones seemed destined to disrupt the very idea of face-to-face instruction, yet MOOCs have been no more revolutionary than correspondence or radio courses were decades earlier, and the experience of the COVID lockdown tempered earlier enthusiasm for the promise of purely online education.

I’m neither a technophobe nor a technophile. I’m trying to keep abreast of advances in AI, while not surrendering in advance to a revolution that’s unlikely to fulfill its supposed potential.

Don’t Blame AI — or the Students Who Use It — for Larger Problems

As I’ve planned my summer course, I’ve already started to imagine what I’ll do if a student turns in an essay that our university’s AI detection software flags as likely having been machine-written. So long as it’s reasonably clear that the student has violated our academic honesty policy,1 I’m obligated not to ignore what’s happened. I might give a warning if it’s a first offense, but there’s a process I’m supposed to go through, and I’ll try not to shirk it.

(Once more, with feeling: we can neither ignore AI nor surrender to its inevitability, neither in the existential abstract nor in this kind of mundane, particular case.)

But we also shouldn't stop at policing a policy. In the handful of cases where I've felt I had no choice but to give zero credit and submit a report to our deans, I've first met with the student. What strikes me in each conversation — and what bothers me in reading some of the more polemical writing on this topic — is that AI isn't itself the problem. Its use (or abuse) is symptomatic of deeper problems that even the most creative educator can't solve on their own.
