Interesting Reads This Week

AI, tough economic times and inflated grades - higher ed in a nutshell

Was this forwarded to you by a friend? Sign up and get your own copy of the news that matters sent to your inbox every week. Sign up for the On EdTech newsletter. Interested in additional analysis? Try our 30-day free trial and upgrade to the On EdTech+ newsletter.

It has been a busy week in higher education. During hectic times like these, I often feel like an EdTech archaeologist, carefully sifting through the torrent of updates to uncover the pieces that lay bare the deeper trends and issues. So, what did I discover in my sifting this week?

Generative AI in higher ed: Progress, pitfalls, and the perpetual Stage 2

The Association of Pacific Rim Universities (APRU) has released a white paper on Generative AI in higher education. While it’s a solid and useful resource, I wouldn’t echo some of the overly effusive praise I’ve seen on LinkedIn (I know, when it comes to white papers on AI in higher education, it’s a low bar). The paper stems from a series of meetings held by APRU members throughout 2024, culminating in an in-person workshop at the Hong Kong University of Science and Technology.

The paper does a commendable job of outlining the challenges higher education faces in grappling with generative AI. Many institutions lack staff with the expertise to effectively implement and manage AI systems. Additionally, some instructors and leaders view AI as an existential threat, fearing it could undermine their authority or even take their jobs. This fear often leads to a narrow focus on AI’s potential risks—such as threats to academic integrity—without considering longer-term, more nuanced responses.

As a result, many member institutions have taken a “cautious and piecemeal” approach to AI. To address this, the white paper introduces a framework developed during the workshop, structured around five key concepts: Culture, Rules, Access, Familiarity, and Trust (CRAFT).

The core of the paper is a set of rubrics for each CRAFT concept, illustrating how various stakeholders might approach these issues as their understanding and implementation of AI mature. These rubrics are a practical tool for institutions to benchmark their current practices and envision what more advanced approaches could look like.

Interestingly, these rubrics are essentially maturity models in disguise, though they avoid at least some of the pitfalls typical of such models. I appreciate them more than traditional maturity models, partly because they don’t place heavy emphasis on advancing through the stages. Additionally, the stages aren’t numbered, sidestepping what my friend Tony refers to as the inevitable fate of all maturity models: everyone perpetually being stuck at "Stage 2"!
