Update on Coursera’s AI and Academic Integrity Play

A welcome interview, but questions remain


Last month, Coursera announced a new suite of AI-driven academic integrity tools, although press accounts were confusing as to what that actually meant. CEO Jeff Maggioncalda’s official blog post was itself vague.

Today, we’re excited to announce several new genAI-powered features designed to scale assessment creation and grading, strengthen academic integrity, and enhance learning and evaluation. These features, including AI-Assisted Grading, Proctoring and lockdown browser, and AI-based Viva Exams will help campuses deliver authentic learning to students while increasing the value of online assessments, courses, and certificates. [snip]

The features we launch today illustrate how new technologies can support a more authentic, verified learning experience for students, educators, and employers.

Are these tools meant for the broader market and do they represent a change in corporate strategy, or are they just intended for Coursera courses? Coursera already has partnerships that provide similar tools (Honorlock, Turnitin, Examity, ProctorU, etc.) – does this mean that Coursera is cutting them out and going direct? There was little in the way of answers, not just on details but on the overall scope of the initiative. The initial press coverage mentioned pilot programs in India – are these new tools only meant for certain geographies?

At Inside Higher Ed, Phil was quoted about his skepticism.

Phil Hill, a market analyst and ed-tech consultant with Phil Hill and Associates, said Coursera’s announcement left more questions than answers and called the entire rollout “poorly thought out.”

“I think they have been very strategic in the past—they do OPM work, consumer, enterprise, all organized around the same theme of synchronous online courses,” Hill said. “They’ve made a lot of smart moves in the past and this feels different to me, because it’s not clear what they’re trying to do or why they’re trying to do it.”

Follow-up

A Coursera spokesperson followed up and arranged an interview for us with Chief Content Officer Marni Baker Stein. It was refreshing to talk with Stein openly, with no scripting and no interference from communications staff – kudos for that approach.

While we still have questions about some of the assumptions behind the AI approach to academic integrity, we at least understand the initial scope and justification. In a nutshell, this set of AI tools is primarily intended for enterprise sales (think campus, corporate, and government upskilling), providing a lower-cost way to include academic integrity features. It is not intended for the general EdTech market in competition with partners, and (at least initially) it is not even intended for consumer sales (open MOOCs).

The basics

The announcement and blog post mentioned a bunch of tools, which felt like a laundry list, but it helps to group them together. The tools fall into two related categories: those focused on assessments and those designed to prevent cheating.

The assessment tools include options such as:

  • Using AI to devise mathematical, text, and multiple-choice questions more quickly and effectively;

  • Enhancing question banks to provide multiple variants of questions, making copying more difficult (see the sketch after this list); and

  • AI-assisted grading that suggests scores and feedback.
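To make the question-bank idea concrete, here is a minimal sketch of how per-learner question variants might be generated. This is our own illustration of the general technique, not Coursera's implementation; the template, numbers, and function names are assumptions for the example.

```python
import random

# Hypothetical example: numeric fields are re-randomized per learner so that
# copying a neighbor's answer (or one posted online) is less useful.
def make_variant(rng: random.Random) -> dict:
    principal = rng.randrange(1000, 10001, 500)
    rate = rng.choice([0.03, 0.04, 0.05, 0.06])
    years = rng.randint(2, 6)
    answer = round(principal * (1 + rate) ** years, 2)
    return {
        "prompt": (
            f"An account holds ${principal} at {rate:.0%} annual interest. "
            f"What is the balance after {years} years of compounding?"
        ),
        "answer": answer,
    }

# Each learner gets a distinct but reproducible variant, seeded by learner ID.
def variant_for_learner(learner_id: str) -> dict:
    return make_variant(random.Random(learner_id))

if __name__ == "__main__":
    print(variant_for_learner("learner-001")["prompt"])
    print(variant_for_learner("learner-002")["prompt"])
```

The point is simply that when every learner sees different values, sharing answers becomes much less useful; a production system would presumably also vary wording and answer choices on top of this.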

The cheating prevention tools include:

  • Locking graded items;

  • Locking down the learner’s browser;

  • Limiting time and attempts on exams;

  • AI-driven proctoring and identity checks before and during the exam, with flags raised for human review when something unusual is detected;

  • Providing plagiarism detection that identifies content similar to previous submissions, deters plagiarism, and educates students on independent thinking and originality (a rough sketch follows this list); and

  • Generating AI-based Viva Exams, in which AI poses questions about content the student has produced to test knowledge of the material and, potentially, the authenticity of the submission.
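For the similarity-to-previous-submissions piece, the basic mechanic is well understood. The sketch below is a rough illustration using TF-IDF and cosine similarity, not a description of Coursera's actual method; the similarity threshold is an arbitrary assumption.

```python
# A rough sketch of flagging a submission that closely matches prior ones.
# Requires scikit-learn; the 0.8 threshold is an arbitrary assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar(new_submission, prior_submissions, threshold=0.8):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(prior_submissions + [new_submission])
    # Compare the new submission (last row) against every prior submission.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [
        (prior_submissions[i], round(float(score), 2))
        for i, score in enumerate(scores)
        if score >= threshold
    ]

if __name__ == "__main__":
    prior = [
        "Photosynthesis converts light energy into chemical energy in plants.",
        "The French Revolution began in 1789 and reshaped European politics.",
    ]
    new = "In plants, photosynthesis converts light energy into chemical energy."
    print(flag_similar(new, prior, threshold=0.5))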

Coursera is currently piloting these tools with several universities in India and Malaysia as well as select institutions in the US.

What about existing solutions?

Coursera has partnerships with several academic integrity vendors, including Honorlock and Turnitin. According to Stein, these partnerships will remain in place. The new tools Coursera is developing will be used internally on its platform, within the Coursera for Campus solutions.

Some of the open questions stem from the third-party tools already on the market. Several AI-driven proctoring tools and lockdown browsers are well established, with Respondus being a major player. Similarly, Turnitin increasingly dominates the plagiarism detection space, having absorbed a number of competitors. Further, new tools are emerging to help instructors make cheating less likely within assessments by improving question banks and question formulation. A few other tools provide AI-assisted grading, such as Turnitin's Gradescope, Graide, and Markr.

With these tools, and even existing partnerships, already available, the question is why Coursera would change its approach.

A new assessment approach

What we haven’t seen on the market (at least among Coursera’s partners) is a new approach to viva exams along the lines of the new Coursera tools. For background, the OxfordEssays blog has a good description of the origin of this type of assessment.

A viva voce examination, widely known as the viva, is an oral examination at the culmination of your PhD. It is comprised of a committee of both internal and external examiners who look through your work and, essentially, decide whether you pass or fail your PhD. It is an interview and there are a number of different ways that a viva can be conducted. In some cases, the viva is open to the public, which means that anyone who is interested can attend. In other cases, they are closed, which means that it is just you and a panel of examiners.

In either case, the examiners generally include internal reviewers (someone in your department who has interests in your subject area but who has not helped you with the PhD) and external reviewers (usually 2-4 people who have expertise in your discipline, but who do not teach/work at the university that you are affiliated with).

Coursera is planning to use generative AI to facilitate viva-like exams by posing questions based on submitted content. This builds on chatbot functionality like Coursera's Coach, and it is a novel approach to expanding access to this type of assessment. If Coursera is successful, we would not be surprised to see this approach become a feature of other assessment platforms soon.
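We have not seen Coursera's implementation, but the basic mechanic is easy to picture. The sketch below assumes an OpenAI-style chat completions API; the model name, prompt wording, and helper function are our own placeholders, not anything Coursera has described.

```python
# A speculative sketch of generating viva-style questions from a student's
# submission with an LLM. Assumes the OpenAI Python SDK with an API key in
# OPENAI_API_KEY; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def viva_questions(submission: str, n_questions: int = 3) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice, not Coursera's
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an oral examiner. Given a student's submitted work, "
                    f"ask {n_questions} probing questions that test whether the "
                    "student understands, and actually produced, the work."
                ),
            },
            {"role": "user", "content": submission},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    essay = "My project analyzed city bus ridership data to predict peak demand."
    print(viva_questions(essay))
```

Generating the questions is the easy part; evaluating the learner's answers and deciding when to escalate to a human reviewer is where the real difficulty, and the real cost, lies.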

Why do this?

Stein described this development as a way to leverage AI to address scalability challenges for Coursera, especially in growing markets like India. Here, cost constraints are paramount, academic integrity is a major concern, and online learning is rapidly expanding despite past skepticism.

On some levels, AI seems like a good functional fit for this situation. While proctoring costs have decreased, they remain significant. Plagiarism detection solutions, on the other hand, tend to become more expensive as features increase and the market consolidates.

The other issue beyond pure cost is dealing with geographies and market niches that do not have a history of using a variety of EdTech tools; in other words, markets where an EdTech ecosystem does not really exist.

However, we are not convinced of the long-term benefits of this approach. As one proctoring CEO pointed out, proctoring can be deceptively simple. This, he argued, is why "many unqualified individuals" set up shop with just a webcam. In practice, there are all kinds of complex challenges in doing AI-driven proctoring well. These include high rates of false positives, the difficulty AI systems have in reliably recognizing students from non-Caucasian ethnic backgrounds, and the constant emergence of methods to bypass AI-driven proctoring (e.g., virtual cameras). Additionally, these tools face heavy government scrutiny (such as Illinois's biometric privacy law), raise privacy and security concerns because of the PII they collect, and can be challenging to support. Academic integrity tools are also wildly unpopular with students.

Further, generative AI tools are expensive to provide, which somewhat runs counter to the argument for taking this approach to address cost and access.

Is Coursera really positioned to manage these issues and run specialized assessment and proctoring systems? Couldn't the company instead leverage its substantial size and market influence to negotiate favorable prices with vendors that specialize in these systems? We are not claiming to have simple answers, but it is not clear that Coursera does either, at least based on public information.

The strategic view of Coursera’s move

Alternatively, one could argue that this is just the beginning of a major shift for Coursera, as it rethinks its platform in light of new possibilities from generative AI. Assessment has long been the Achilles heel of massive learning platforms like Coursera. While Coursera has successfully scaled content and administrative functions, assessment and the associated guarantee of academic integrity are much more challenging to scale effectively.

Coursera's CEO, Jeff Maggioncalda, is a strong proponent of AI, even developing his own course on the topic for CEOs. If AI-driven assessment is to become a key part of Coursera's platform, enabling it to expand more reliably into paid training and education, then in-house development in parallel with third-party partnerships might make sense. Bringing together all the different aspects of the assessment challenge is something that other vendors haven't achieved (although Turnitin, through acquisitions, is coming close). There is an argument not only that broad academic integrity provision is a central platform function, but also that partnering with multiple vendors offering partial solutions would be less efficient.

Parting thoughts

Whether Coursera's AI approach succeeds or turns into a quagmire, we think the move is worth following. Coursera's size in online learning means others are likely to follow, particularly given its brand recognition among university leaders. If Coursera is successful, this could introduce new, more holistic ways of handling academic integrity and impact a sector that is consolidating after its pandemic expansion.

If you are undecided about the wisdom of Coursera's move, so are we. Either way, the move has implications for EdTech.

  • It could shift the focus away from point solutions and encourage a more holistic approach to academic integrity, one that relies on a suite of related solutions and design choices.

  • It could also help push higher education toward the much-needed work of rethinking assessment as critical both to scaling online learning and to addressing academic integrity concerns.

We appreciate the further clarity provided by Stein.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.