Survey gathers data, short responses from students on AI views and behaviors
The following is the second installment in a series by The Bachelor focusing on artificial intelligence at Wabash College.
While artificial intelligence impacts all of higher education, no group within a college or university is more shaped by AI than students. From the moment ChatGPT arrived, debate broke out over how AI would affect the central mission of higher education: preparing young people for their lives and careers. Yet despite how pressing the topic is to them, students and their perspectives remain underrepresented in debates on artificial intelligence in higher education.
Wabash regularly collects student data and even employs institutional research professionals. However, there has been little research on student views and behaviors regarding AI. Director of Institutional Research David Dalenberg hinted that students and AI will be researched further.
“Although we have collected little AI-related data on students in the past, AI-focused questions might become more prevalent in future surveys,” said Dalenberg.
“I’d love to know how students are using it beyond the classroom,” said Associate Dean of Students Marc Welch ’99. “In many ways, I suspect that our students could teach us a thing or two about it.”
With this in mind, The Bachelor conducted a survey of the Wabash student body to investigate how students use and think about AI. The survey was open to all students and kept responses anonymous to encourage truthful answers on controversial topics such as academic honesty.
All class years were represented roughly equally in the 166 responses, with 19% of the student body participating in the survey. This article will break down the survey data, which included several opportunities for students to anonymously share their candid views on AI.

“What do you use AI for?”
Unsurprisingly, schoolwork was the most common use of AI among students. Still, applications varied widely even within a strictly academic context. Students cited study guides, in-depth explanations, annotation, research, brainstorming and exam prep as constructive uses of AI.
“I use it to study for quizzes and exams, plugging homework into AI and getting a practice exam from it,” said one student.
Beyond help with coursework, students also reported using AI in other contexts, such as for professional development, coding, creative projects or simply searching for general knowledge.
“I use tools like Perplexity as an alternative to Google, as they cite all of their information to specific sources,” said one student.
Many students also mentioned personal planning as a key use of AI, with some specifically citing “self help” or “life questions.” As large language models (LLMs) become more reliable and lifelike, they are increasingly trusted with life advice.
“What best describes your attitude towards AI and its growing role in society?”
Student attitudes toward AI are diverse but fairly evenly distributed. Most students described their views as neutral or only moderately leaning one way. Extremely negative and extremely hopeful views on the future of AI appeared at comparable rates of roughly 10–12%. This is broadly representative of college students nationwide.
“The numbers don’t surprise me,” said Dalenberg. “There was an even mix of attitudes in the survey, and it seems like this lines up with national trends.”
Additionally, Bachelor staff placed this data in the context of class year. As the stacked column chart indicates, there is a slight but noticeable trend toward AI skepticism as age increases. Aside from a small outlier of very pessimistic freshmen, pessimism tends to become more common and more pronounced among older Wabash students, with over 50% of seniors reporting at least somewhat negative expectations for AI. Conversely, younger students tend to report more hope for AI. While no one knows exactly why these trends occur, a dean’s experience may provide some insight.
“I’ve heard that some seniors in job interviews are being asked about AI: their usage, knowledge of and comfort with it,” said Welch. “If a senior doesn’t have great experience with it, they might not feel as confident in the interview nor prepared for the job.”
“What makes you anxious and/or hopeful about AI?”
Students also gave thoughtful reasons for their optimistic or pessimistic views on AI. A major concern among students is the economic impact of AI, particularly how it renders many entry-level jobs obsolete and worsens an already grim job market.
“I am anxious that more people will lose their jobs based on the role AI takes in society,” said one student. “Growing unemployment will lead to great civil distress and a more divided nation.”

Another recurring concern among students is the environmental damage caused by resource-intensive AI infrastructure. Several students were specifically concerned by large data centers demanding vast quantities of water.
Students also share a major concern with many faculty members: they worry that overreliance on AI in daily life will cognitively stunt many individuals.
“I think it makes people stupid and lazy,” said one blunt student.
However, students reported hope for a great number of benefits offered by powerful LLMs. A major advantage that AI provides is sheer efficiency for tedious, unenjoyable tasks.
“I am optimistic that [AI] will minimize the time spent on mundane tasks that are non-essential, busy work essentially,” said one student.
Of course, AI is not used only for busy work. LLMs’ speed and breadth can dramatically increase the efficiency and efficacy of the right user. Such a powerful tool opens many doors, which excites some students.
“Properly applied, it is a significant ‘multiplier’ on an individual’s or group’s ability to execute, which I think will be broadly beneficial to society,” said one student.
Ultimately, many students grasp that the potential of AI rests in the hands of those who use it. Even as some students resent the proliferation of AI art, for example, that resentment does not necessarily preclude more positive applications.
“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes,” said one student.
“Should Wabash incorporate AI into the curriculum?”
All Wabash students must grapple with a future shaped by AI. This unavoidable reality raises the question: how should Wabash prepare these young men for the AI economy while still delivering an authentic liberal arts education?
Students weighed in on whether AI should be part of the Wabash curriculum. A decisive majority, almost 68% of those polled, favored some form of AI education. Many students suggested that the College offer an introductory course focused on all things AI. Such an “AI-101” course could serve general audiences and help students judge between appropriate and inappropriate AI use.

“There should be classes or sessions on how to use AI effectively without cheating,” said one student. “It is a great resource when used correctly, but using it incorrectly can have massive consequences on your learning.”
Another student emphasized greater training on AI use among the faculty.
“Most professors do not know how to use LLMs properly,” said one student. “I think education intended specifically for faculty would be helpful.”
While most polled students support integrating AI into Wabash coursework in some form, a vocal minority strongly opposes AI in the classroom. Some dissenters view AI as still too underdeveloped to serve as a teaching resource.
“Wabash should absolutely not incorporate AI into the curriculum,” said one student. “AI is still in development, makes occasional (and sometimes frequent) errors, and is not an effective teaching tool.”
Other students view AI-oriented coursework as antithetical to the liberal arts ethos Wabash prides itself on.
“Dialogue between individuals generates more insight than any AI could,” said another student. “Not because the AI won’t provide accurate information, but because of how our brain processes information differently.”
Final thoughts
With data on students’ perspectives, the College has a clearer picture of where students stand. Anecdotal evidence can roughly sketch student views, but measurable data is a major step toward improving dialogue about AI on campus.
One survey is not fully representative. One hundred sixty-six is a large number of students, but it is not even 20% of the student body. And on academic AI use in particular, the survey may not yield fully accurate data: students have repeatedly been caught generating whole writing assignments with AI, yet not one respondent admitted to using an LLM to write an essay.
Furthermore, the optional nature of the survey may skew the data. The students who answered may be less likely to engage in problematic AI use in the first place.
“It’s difficult to have a sense of the exact degree of self-selection bias in a survey,” said Dalenberg. “It’s reasonable to guess that, as a group, students that use AI consistently were at least slightly less likely to fill out the survey.”
However, while no poll is ever perfect, the possibility of a respondent lying or the survey missing some information should not prevent us from trusting the data. Anyone who doubts the ability of current Wabash students to think critically can look at these students’ thoughtful answers and see that AI is not preventing Wabash men from doing what they do best.
GRAPHICS BY NATHAN ELLENBERGER ’26
