The Role of Generative AI at Middlebury College: Insights from a Recent Survey
Over 80% of Middlebury College students are integrating generative AI into their coursework, according to a survey I conducted with my colleague, economist Zara Contractor. That adoption rate is striking, roughly double the 40% rate among U.S. adults, and it was reached less than two years after the public launch of ChatGPT.
Understanding the Survey
To understand how Middlebury students are using artificial intelligence, we surveyed 634 students, about 20% of the student body, between December 2024 and February 2025. While our focus was on one institution, our findings resonate with broader trends observed in similar studies across higher education, painting a more comprehensive picture of AI's role in learning environments.
Shifting the Narrative: AI as an Educational Tool
Amid sensational headlines that claim “ChatGPT has unraveled the entire academic project” and “AI Cheating Is Getting Worse,” our research suggests a different narrative. Rather than solely being a tool for circumventing academic effort, students predominantly use AI to enhance their learning experience.
In our survey, we presented students with ten distinct academic uses of AI, ranging from explaining concepts and summarizing readings to proofreading and generating programming code. Notably, the most common use was for explaining complex topics, positioning AI as an “on-demand tutor.” Many students expressed that this resource was especially beneficial when traditional academic support, such as office hours, was unavailable.
Categorizing AI Usage
We categorized AI applications into two main types:
- Augmentation – Enhancing learning experiences.
- Automation – Completing tasks with minimal effort.
The findings revealed that 61% of student respondents use AI primarily for augmentation, while 42% use it for automation, such as drafting essays or writing code (the categories overlap, since many students reported both kinds of use). Interestingly, even when AI was employed for automation, students did not use it indiscriminately. Many indicated that they turned to automation during periods of peak academic stress, such as exams, or for less critical tasks, rather than relying on it for significant assignments.
Broader Implications Across Institutions
While Middlebury is a small liberal arts college with a specific demographic, we expanded our analysis to include data from more than 130 universities across more than 50 countries. Encouragingly, these findings mirrored those from Middlebury: globally, students who engage with AI tend to focus on enhancing their coursework rather than simply automating their tasks.
However, a pertinent question arises: how reliable is students' self-reported AI use? One concern is the potential underreporting of controversial uses, such as essay writing. To validate our findings, we compared our survey data with actual usage patterns published by the AI company Anthropic, which examined interactions with its chatbot, Claude.
Anthropic's findings corroborated our survey results, indicating that students are indeed using AI for technical explanations and for tasks like creating practice questions, editing essays, and summarizing materials.
The Significance of This Study
Writer and academic Hua Hsu recently highlighted the lack of reliable data on AI usage among American students, noting that coverage often emphasizes startling anecdotes rather than comprehensive data. These sensational stories, such as one about a Columbia student allegedly using AI to cheat on assignments, can lead to misunderstandings that equate widespread AI adoption with universal cheating.
Our research indicates that while AI use is pervasive, it serves primarily as a tool for academic enhancement. This distinction matters: it counters the damaging narrative that all AI use signifies dishonesty, and it reassures students who use AI responsibly that adhering to academic integrity is not naïve.
Moreover, this skewed perspective has broader implications for university administrators. Accurate information about real student AI usage is essential for crafting effective policies, and alarmist narratives can distort understanding and prompt ineffective responses.
Future Directions: Finding the Balance
Given our findings, extreme measures at either end, whether blanket bans on AI or entirely unrestricted use, pose significant risks. Prohibitive policies may disadvantage students who benefit greatly from AI's tutoring features, while permissive policies could inadvertently encourage the kinds of automation that undermine engagement and learning outcomes.
Rather than a one-size-fits-all policy, institutions should concentrate on teaching students how to differentiate beneficial AI applications from those that might be detrimental. Unfortunately, research on the actual impact of AI on learning outcomes remains in its infancy. Currently, no comprehensive studies systematically assess how different uses of AI affect student learning, or whether AI impacts might differ markedly among students.
In the absence of robust evidence, those engaged in education must employ their best judgment to determine how AI can be leveraged as a powerful ally in the learning process.

