
CEED Blog

To Teach or Not To Teach With AI? Perhaps That Is Not the Right Question.

by Drew Dunphy on 2025-11-18T12:52:19-05:00 | 0 Comments

I hope many of you were able to attend Dr. Torrey Trust’s excellent presentations about artificial intelligence at our most recent Professional Day.  If not, I hope you’ll find time to flip through her slide decks (which you can find here and here).  Dr. Trust provided a wealth of information about the current state of AI – some of it, honestly, slightly terrifying – all of which I found deeply relevant to our work.

In the Teaching & Learning Collaborative this semester, we are discussing Teaching With AI by Jose Bowen and C. Edward Watson.  The book makes an interesting case for integrating AI into our work as educators, and it delves into the pitfalls both of teaching with AI and of not teaching our students to use AI responsibly.  As with so many issues in American life, however, I think the topic of AI in education is sometimes presented as a false dichotomy:  either we embrace AI and integrate it into our teaching, or we ban it from our classrooms.  There are other ways to think about the challenges AI presents, of course, one of which I have come to refer to as being “AI aware.”  We don’t have to teach with AI, but we do have to grapple with the fact that AI is woven into what happens in our classes, no matter how hard we try to keep it out, and often in ways we don’t see. 

With that in mind, I wanted to offer a few reflections from Dr. Trust’s presentations in the hope they might spark more conversation about this topic on campus:

  • ALL of us need to be talking with our students about AI.  If you think your students aren’t using AI, you are almost certainly – with all due respect – wrong.  Dr. Trust discussed recent studies suggesting that 90% or more of college students, at all types of institutions, are using AI.  And if you think you have “AI-proofed” your assignments, you are also – again, respectfully – probably wrong.  Even in the last year, AI tools have become vastly more sophisticated.  Gemini and Copilot are available to rewrite text at the click of a button, and ChatGPT is now connected to the Internet, meaning it can complete all sorts of assignments it previously couldn’t.  Given this reality, we have to talk with our students about what uses of technology are and are not appropriate in our classes, as well as the value of doing the work we assign without the use of AI.

  • Companies are aggressively marketing AI tools to our students.  The rollout of AI in private industry seems to be a mixed bag, but tech companies are “all in” on marketing AI to college students.  Social media influencers in particular are trumpeting AI tools as the easy path to a perfect GPA.  Students who consume a lot of social media may be affected by this marketing pressure, imagining that “everybody’s doing it” so they should use the tools as well.  At the same time, they hear warnings from their professors about the dire consequences of using AI – so pressure is coming at students from multiple directions. 

  • Bias is a huge problem in using – and detecting – AI.  Dr. Trust presented some alarming examples of the biases inherent in AI tools.  Since LLMs (large language models) like ChatGPT are trained using data from the Internet, they replicate – and even amplify – the biases found online.  In one experiment, researchers asked ChatGPT to grade five essays.  The essays were identical except for the names of the students, and the essays bearing Caucasian-sounding names were given higher grades.  In terms of detecting AI use, students of color are more likely to be wrongly accused of using AI, and Stanford University has produced studies showing that writing by non-native English speakers is more likely to be wrongly flagged by AI detectors than writing by native speakers.  So even using AI detectors (which are, of course, a type of AI) to stop the use of AI is a thorny issue that requires serious discussion.

  • There are useful, ethical ways for students to use AI tools. Check out Google Gemini’s Storybook feature, which will create and narrate picture books on almost any topic.  This strikes me as a highly useful tool for students struggling to understand material. (It certainly helped my ten-year-old, who was having trouble preparing for a quiz on the water cycle.)  If a student is struggling to grasp a key concept from a textbook or lecture, the tool can create a storybook that explains the concept in a simpler way.  Unfortunately, I think few of our students are aware of these types of tools or applications of AI.  If most of their exposure to AI tools comes from social media, they might only see AI as a tool for cheating.   

  • Students often struggle to distinguish between ethical and unethical use of technology.  Students know they shouldn’t copy and paste an essay from ChatGPT.  But given the mixed messages they hear about technology, the speed with which the tech is changing – and the fact that AI is now built into all kinds of other tools they use – there is clearly a gray area and confusion about ethical use.  I experienced that first-hand in my composition classes this semester.  Early on, several students asked me if it was OK to use ChatGPT for brainstorming.  I think there are ethical ways to do that (I use ChatGPT that way myself sometimes), but it quickly became clear to me that my students had a much different idea than I do about what “brainstorming” means.  That led to some difficult conversations and some disappointed students when they learned I wouldn’t accept the papers they’d turned in.

  • We need to prioritize what AI and other tech cannot do.  Whatever our feelings or policies about AI might be, we have to acknowledge that our students will be entering a workplace where AI plays a big role.  Many tasks will be automated by AI, and students will be expected to use AI effectively in their future jobs.  We need to equip our students with skills and knowledge that cannot be automated: interpersonal skills, humanistic values, and knowledge that can’t simply be looked up online.  If that requires re-thinking our teaching or curriculum, we need to do it, because AI is not going away any time soon.

Did you have other takeaways from Torrey Trust’s presentations on Professional Day?  Or insights about AI from your recent work with students?  If so, please send them to me and I’ll add them in the comments below.  This is a conversation we need to be having!
