Teaching & Learning with ChatGPT: Opportunity or Quagmire? Part I
Welcoming Generative AI into Our Classrooms
The recent launch of generative artificial intelligence models like ChatGPT is eliciting an energetic variety of responses from instructors everywhere, ranging from consternation to cautious optimism. We are likely witnessing a novel and permanent disruption of classroom activities in higher education. While it will take several months (years?) to fully assess the extent of this still-unfolding disruption, we are faced, in a few short weeks, with the beginning of a new semester. During the past three years, though, we’ve certainly learned how to pay attention to new challenges and how to pivot to meet them directly.
The inescapable reality is that ChatGPT and other AI writers are here, and students are going to use them. Trying to prevent the use of these new tools is likely to be a losing battle. We may be dismayed by students who simply use these platforms to earn an acceptable grade without actually engaging in original thought or work. We may sympathize, to some degree, with students who use these tools to complete more assignments in less time. Consider, too, our dawning awareness that current plagiarism detection software cannot reliably identify AI-generated material. Further, consider that AI writers will likely improve as rapidly as (if not more rapidly than) any detection software. The appeal of this new resource, whether put to wholesome or shady use, is undeniable.
Even a few minutes’ thought about the use of AI writers quickly gives rise to sweeping questions of (nearly) existential scope. Is plagiarism our prime concern as we view the proliferation of AI writers among our student populations? Should we stop students from using these easily accessible resources? Is that goal remotely achievable? Where do we draw the line in identifying someone’s original work amid the array of commonly accepted digital technologies already available? Who among us has not relied on the ubiquitous “auto-correct” while typing, or accepted wording suggestions in our text messages? What attention should we give to the visible flaws of AI writers? How concerned should we be about their evident lack of inclusivity?
How do we describe our evaluation of quality? Are there differences in degree among possible methods for producing, say, a loaf of bread? Consider some possibilities:
- A world-class baker employing her years of experience and skill, working with locally sourced ingredients
- A home baker preparing bread ‘from scratch’
- A home baker using a boxed mix and a bread machine
- An automated factory assembly line producing Wonder Bread
Do these scenarios constitute differences only in degree, or do the varying circumstances result in substantive differences? An analogy, perhaps, as we seek to understand our own and our students’ engagement with AI writers.
While everyone’s entanglement with AI writers is unavoidable, this situation also represents a unique and hopeful moment for deeply refining our approach to classroom work of all kinds. Though it will (of course!) create additional work for instructors, it is a prime opportunity to highlight our ongoing care for student learning, academic well-being, and the authenticity and validity of our learning outcomes. Over the next few weeks of IAP, and by way of introduction, we will examine some broader questions here. These questions are especially critical and relevant as we all seek to establish a workable foundation for engaging in the long term with AI technologies across MIT classrooms and learning spaces.
Broader Questions & Upcoming Posts
Part II. How can we use these AI tools to support and enhance student learning?
- Can these tools help us to more effectively meet existing desired goals for learning outcomes?
- How might these tools prompt us to reconsider goals for student learning?
- Are there levels of higher-order thinking that we’d like students to achieve, and if so, can AI tools help them get there?
- Does the technology enable students to engage more meaningfully and authentically with the course material and/or the discipline overall?
- How can we redesign our assignments and assessments to leverage these tools to better support authentic and meaningful student learning?
Part III. Academic Integrity | Student Data Privacy | Accessibility & Equity
- We will outline a few issues to consider and address before the semester begins.
Undertaking an initial survey of these questions will no doubt raise other questions and concerns and, we hope, demonstrate additional cause for optimism and creativity. And, given the ongoing proliferation of articles and blog posts, we expect no shortage of material to offer for your reflection, comment, and use.
In the meantime, if you’d like to read some particularly thoughtful pieces, we recommend the following:
- Schiappa, Edward & Montfort, Nicholas (2023). Advice Concerning the Increase in AI-Assisted Writing, Klopfer, Eric & Reich, J. (2023) and Calculating the Future of Writing in the Face of AI. Comparative Media Studies & Writing @ MIT.
- D’Agostino, Susan (2023). ChatGPT Advice Academics Can Use Now. Inside Higher Ed.
- Bruff, Derek (2022). Three Things to Know about AI Tools and Teaching. Agile Learning Blog.
- Brake, Josh (2022). Education in the World of ChatGPT. The Absent-Minded Professor Blog.
A more complete list of resources is included at the end of this post.
Are you interested in leveraging AI writers in your assignments and coursework?
Contact us (TLL@mit.edu) with your suggestions, questions, and ideas. What are your strategies for engaging with this new reality? We are happy to collaborate with you on the development of effective approaches!
Resources
Higher Ed
- Brake, Josh (2022). Education in the World of ChatGPT. The Absent-Minded Professor Blog.
- Bruff, Derek (2022). Three Things to Know about AI Tools and Teaching. Agile Learning Blog.
- D’Agostino, Susan (2023). ChatGPT Advice Academics Can Use Now. Inside Higher Ed.
- Fyfe, Paul (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society.
- Gleason, Nancy (2022). ChatGPT and the rise of AI writers: how should higher education respond? Times Higher Education.
- Klopfer, Eric & Reich, J. (2023). Calculating the Future of Writing in the Face of AI. Comparative Media Studies & Writing @ MIT.
- McKnight, Lucinda (2022). Eight ways to engage with AI writers in higher education. Times Higher Education.
- McMurtrie, Beth (2023). Teaching: Will ChatGPT Change the Way You Teach? Chronicle of Higher Education.
- Mollick, Ethan R. & Mollick, Lilach (2022). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. Available on SSRN.
- Mondschein, Ken (2022). Avoiding Cheating by AI: Lessons from Medieval History. Medievalists.net.
- Schiappa, Edward & Montfort, Nicholas (2023). Advice Concerning the Increase in AI-Assisted Writing. Comparative Media Studies & Writing @ MIT.
- Stokel-Walker, Chris (2022). AI bot ChatGPT writes smart essays — should professors worry? Nature.
- Watkins, Marc (2022). AI Will Augment, Not Replace [Writing]. Inside Higher Ed.
- Comparative Media Studies & Writing @ MIT Schiappa, Edward & Montfort, Nicholas (2023). Advice Concerning the Increase in AI-Assisted Writing, Klopfer, Eric & Reich, J. (2023) and Calculating the Future of Writing in the Face of AI.
- University of Michigan’s Center for Research on Learning & Teaching (2023). ChatGPT: Implications for Teaching and Student Learning.
General
- Bogost, Ian (2022). ChatGPT Is Dumber Than You Think. The Atlantic.
- Roose, Kevin (2022). The Brilliance and Weirdness of ChatGPT. The New York Times.