The AAC&U and Elon University survey of 1,057 faculty finds nearly universal concern that AI is making students overly dependent on technology. Ninety percent said AI will reduce critical thinking and 83% said it will shorten attention spans. Most faculty reported increases in cheating and doubts about graduates' readiness to use AI in the workplace. Authors urge AI literacy, clearer norms and leadership to protect academic integrity and learning.
95% Of Faculty Say AI Is Making Students Overly Dependent, Raising Integrity And Skills Concerns

A new survey from the American Association of Colleges and Universities (AAC&U) and Elon University’s Imagining the Digital Future Center found widespread faculty concern that generative artificial intelligence is undermining students’ learning, judgment and academic integrity.
Key Findings
The non-scientific survey polled 1,057 college and university faculty late last year. Respondents reported several consistent worries about AI’s effects on student learning and on higher education more broadly:
- Overreliance: 95% of faculty said AI will cause students to rely too heavily on the technology.
- Critical Thinking and Attention: 90% said AI will decrease students’ critical thinking abilities, and 83% said it will shorten students’ attention spans.
- Academic Integrity: 78% reported that cheating on their campus has increased since AI became readily available, and 57% described the increase as significant. Seventy-three percent have personally addressed academic integrity problems tied to student use of AI.
- Research Skills: Faculty were split on research outcomes: 48% said students’ research skills have deteriorated because of AI, while 20% said those skills have improved.
- Value Of Credentials: 74% believe AI tool use will erode the integrity and value of academic degrees, including 36% who expect a significant decline; just 8% see a net positive effect on degree value.
- Workplace Readiness: 63% said graduates last spring were not very prepared, or not prepared at all, to use AI in the workplace; 37% judged them very or somewhat prepared.
Voices From The Report
"These faculty are divided about the use of generative AI itself. Some are innovating and eager to do more; a notable share are strongly resistant; and many are grappling with how to proceed," said Lee Rainie, director of Elon University’s Imagining the Digital Future Center. "Without clear values, shared norms and serious investment in AI literacy, we risk trading compelling teaching, deep learning, human judgment and students’ intellectual independence for convenience and a perilous, automated future."
Eddie Watson, co-author and vice president for digital innovation at AAC&U, said the findings "do not call for abandoning AI, but for intentional leadership — rethinking teaching models, assessment practices, and academic integrity so that human judgment, inquiry, and learning remain central."
What This Means
Faculty responses signal an inflection point for higher education: institutions must balance the potential benefits of AI with clear policies, stronger AI literacy programs, updated assessment strategies and renewed emphasis on academic integrity. The report recommends leadership and urgent action so AI can strengthen — rather than undermine — student learning and the credibility of degrees.
Methodology note: The survey is described as non-scientific and was conducted using a faculty list compiled by AAC&U and Elon University across a range of titles and disciplines.