Faculty Uncertainty About AI in Teaching and Learning

Reading the Digital Education Council faculty survey and its valuable insights

Last year, we covered the insightful report on student use of AI published by the Digital Education Council (DEC). Last month, they followed up with a new report on faculty use. While many surveys explore faculty and student approaches to AI, most cover the same ground and add little to our understanding. The DEC reports stand out not just in size and scope, with 1,681 faculty responses from 52 institutions across 28 countries, but also in their ability to ask thought-provoking questions that address critical issues.

It’s all about the content

One of the most frustrating aspects of AI implementation in existing EdTech tools is the overwhelming focus on content creation, while other critical functions, such as feedback and assessment, are too often overlooked. The survey data reflects this trend, mirroring what we see in the tools themselves.

The key question is whether vendors are simply responding to faculty demand by prioritizing content creation, or if faculty are focusing on it because that’s where AI integration has been most heavily developed.

Chart showing top AI use case is creation of content

There’s a strong argument that AI-powered content creation tools are a valuable addition to higher education, as this area has been largely neglected and is ripe for innovation. I would agree with this argument if more of these tools actually guided faculty toward better instructional design, even in fundamental ways, such as chunking content effectively or incorporating reflection prompts.

However, many of the AI-driven content tools I’ve seen prioritize generating material, like drafting a syllabus on the fly, rather than designing meaningful learning experiences. Until I see more emphasis on the latter, I’ll remain frustrated that AI is primarily being used for lower-order tasks like content generation and automation, while higher-impact applications such as fostering engagement and enhancing feedback are still being underutilized.

Lingering ambivalence in faculty approaches to AI

The survey shows that a majority of faculty are positive about AI.

Chart showing faculty sentiment on AI divided

This is encouraging news and also somewhat surprising. More than half of faculty have resisted the relentless narrative of Sturm und Drang surrounding AI and cheating. However, a closer look at the data reveals a significant degree of lingering ambivalence (which, coincidentally, is also the name of the modern jazz quartet I’m about to start).

For instance, the 30% of faculty categorized as "neutral" would be more accurately described as ambivalent. The chart itself labels them as “uncertain” or having mixed feelings.

Examining additional data further clarifies the nuances of this ambivalence.

Chart showing time and resources are top barriers to AI use

While some responses reflect negative perspectives, most indicate uncertainty or a lack of perceived benefit. I would categorize the response about not having enough time in this group as well. To me, it suggests a deeper issue: a lack of knowledge. Faculty may not know where to start with AI, nor do they have a clear sense of its potential return on investment, making them reluctant to commit time to exploring it.

Moving towards effective usage

The survey reveals that faculty face numerous obstacles to adopting and effectively using AI, but these challenges largely boil down to two key issues: guidelines and training.

Faculty report that institutional guidelines on AI are either insufficient or poorly communicated, leaving them without clear direction on how to integrate AI into their teaching.

Chart showing 80% of faculty do not find institutional guidelines comprehensive

To use AI more in their teaching, faculty say they would like access to tools and more training.

Chart showing resources, training, and best practices as enablers of AI integration

My hesitations

I don’t believe either of these findings should serve as a clear guide for institutional policy moving forward. In fact, I suspect faculty may be asking for things they don’t actually want.

When it comes to guidelines, perhaps I’ve spent too much time in large research-intensive institutions in the US, but I can’t imagine most faculty reacting well to anything resembling comprehensive AI teaching guidelines, especially since any attempt at such guidelines would quickly become outdated. More detailed policies aren’t the solution.

Instead, we need to conduct further research, through interviews and focus groups, to better understand what faculty feel is missing in existing guidelines. Even then, institutions should tread carefully. AI-related guidelines should remain flexible and evolve organically as the technology and its applications shift. It’s also possible that what faculty are seeking from guidelines might actually be better addressed elsewhere, which brings us to the issue of resources and training.

The call for more access to tools and resources is broad and somewhat vague, so we need to dig deeper into what faculty mean by this. Many campuses already provide access to AI tools, so what exactly do faculty feel is missing?

I’m particularly skeptical about the push for more training, especially when framed as developing literacy. Maybe that’s just my lingering scars from working in academic technology, where faculty resistance to changing practices was a constant challenge.

Instead, I believe the real keys lie in two other areas highlighted in the survey: access to best practices and use cases, as well as fostering an environment that encourages innovation and tolerates failure. However, even these strategies will only be effective once we gain a clearer understanding of what’s lacking in current guidelines and resources, and find practical ways to address those gaps.

The need for understanding and action

Overall, the report presents a now-familiar picture: faculty remain divided on AI, with some enthusiastic adopters, others less so, and a majority feeling ambivalent. This hesitation is driven, in part, by a lack of knowledge or fluency with AI tools and their potential role in teaching. Even among those who view AI positively, uncertainty persists. Without a strong foundation of understanding, faculty struggle to articulate what support or resources they need to use AI more effectively and confidently.

This highlights a critical need for institutions and organizations (such as DEC) to conduct research, much of it likely qualitative, into the specific gaps in faculty knowledge, the most effective ways to educate faculty about AI in teaching, and the key elements they seek in guidelines. These insights should then serve as a blueprint for action that is based on understanding root causes.

Without such groundwork, major investments in AI for higher education, such as the one announced today between OpenAI and the California State University System, are unlikely to achieve meaningful improvements in teaching.

Parting thoughts

This is a good survey report, and I have only scratched the surface here. Every time I read through it, I come away with something new to think about. I encourage you to do the same.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.