On Flexibility in Teaching, Learning, and EdTech
And also on finger-pointing

Was this forwarded to you by a friend? Sign up for the On EdTech newsletter and get your own copy of the news that matters sent to your inbox every week. Interested in additional analysis? Upgrade to the On EdTech+ newsletter with our 30-day free trial.

A central source of uncertainty and resistance to the use of AI in higher education is assessment: how we know things and how we prove we know them when the nature of knowledge is undergoing radical change.
I often dodge the “how should we do assessment now?” question with a tired joke about sociologists and light bulbs (it’s not the bulb; it’s the system that needs changing). A recent article by Thomas Corbin, Margaret Bearman, David Boud, and Philip Dawson may finally let me retire that joke. They argue that neat technological or policy “solutions” to the challenge of AI in assessment (GenAI-assessment) are the wrong approach. Instead, assessment in the age of AI is a classic wicked problem, so one-size-fits-all fixes will fail. Instructors need to experiment, compromise, and iterate. And institutions need to grant permission to do so.
I wrote about the piece in my Interesting Reads This Week post last Saturday, but I want to unpack it further. I find the wicked-problem lens powerful and agree with much of the conclusion. However, the paper implicitly positions faculty as flexible experimenters in opposition to rigid administrators, a framing that doesn’t fully match what I’ve seen. Full disclosure: I used to be a mid-level university administrator and am married to a more senior one.
Faculty can be inflexible, especially around assessment. Administrators aren’t the only culprits. To move forward, and to embrace a wicked-problem approach to assessment, we need to acknowledge this additional source of rigidity. Only then can we design systems that support the imperfect but more flexible practices the authors advocate.
Assessment in the age of generative AI & wicked problems
We are all familiar with the pressures generative AI places on conventional assessment. In a world where students can easily find answers to homework sets or short-cut to polished text, how do we design rigorous, valid assessments where cheating is not an ever-present threat?
Corbin et al. argue that GenAI-era assessment exhibits the classic characteristics of wicked problems, originally described by the design and planning theorists Rittel and Webber.
| Wicked problem characteristic | What it means | How it appears in GenAI-assessment |
|---|---|---|
| No definitive formulation; no single problem to be solved | Stakeholders see different problems, pulling solutions in conflicting directions and preventing a single cohesive response. | Instructors can’t decide whether the core problem is (1) preparing students to use the technologies they will need in the workforce or (2) preventing cheating. |
| No stopping rule | There are no clear criteria for knowing when you have reached a solution. | Instructors are uncertain whether they have solved the cheating problem; new challenges keep appearing. |
| Solutions aren’t true/false, only good/bad | Unlike technical problems, whose solutions are either right or wrong, solutions to wicked problems fall on a spectrum and involve trade-offs. | Every GenAI-assessment solution involves trade-offs, sacrificing time, rigor, validity, or coverage of the full desired range of capabilities. |
| No reliable way to test the solution | There are no clear metrics for testing whether solutions have succeeded. | Instructors expressed doubts about whether they could tell if AI had been used. |
| Trial and error is not possible due to high stakes | You can’t find solutions by experimenting, because every attempt has real consequences. | Assessments affect students’ grades and even things like enrollments. |
| Endless possible approaches | There are endless possible approaches and no way to tell whether all options have been considered. | Instructors are trying all sorts of approaches (blue books, question banks, oral exams), and new ones are constantly being developed. |
| Each case is essentially unique | Best practices don’t work because each problem is unique in its nature and context. | Assessment approaches that work in one context fail in others. |
| Problems are symptoms of other problems | Wicked problems are interconnected with other systemic issues. | Instructors believe the problems of GenAI-assessment are symptoms of other systemic issues, such as underlying business models and students’ lack of engagement. |
| Problem framing determines solutions | The way a problem is framed shapes which solutions seem feasible and which do not. | An integrity framing leads to control/proctoring solutions; a professional-tool framing leads to integration solutions; a framing of GenAI as an existential threat leads to solutions emphasizing fundamental rethinking. |
| No right to be wrong | Decision-makers bear full responsibility for the consequences of their choices. | Faculty feel vulnerable to the judgement of administrators over the nature and success of their assessment practices. |
Where the article points the finger
It was this last point that got me questioning some of the assumptions built into Corbin et al.’s analysis. Embedded throughout the argument is an implicit suggestion that it is administrators who are preventing faculty from adopting the practices suggested by a wicked-problem frame. At times this becomes more explicit, as in this faculty member quote:
I feel very, very vulnerable within the university running assessments like this because I know that there are pockets of the university management who would really like to just see us do traditional, detached, academic assessments that don’t threaten to push students
This assumption, that administrative inflexibility is the primary barrier, weakens the overall argument. Corbin et al. say this about the wicked-problem framing:
So, what can be created if we reframe the GenAI-assessment challenge as a wicked problem? First, it lifts the impossible burden on teachers and institutions to immediately get things right once and for all.
I agree that it has that potential. To move forward, the authors offer several recommendations:
- Adopt a wicked-problem framing that prioritizes “assuring learning” over chasing perfect “solutions” or defaulting to policing.
- Grant three core permissions to assessment designers:
  - Permission to compromise: make trade-offs explicit (e.g., authenticity vs. workload); accept that some approaches will fail and treat failures as evidence.
  - Permission to diverge: allow discipline-, cohort-, and context-specific designs; replace uniformity with fitness for purpose.
  - Permission to iterate: build revision cycles into plans and workload; expect assessments to evolve each term; support rapid adaptation rather than penalize change.
But it’s not just administrators: faculty can be rigid too
I love the wicked-problem framing, but the analysis often casts administrators as the drivers of one-size-fits-all fixes while positioning instructors as experimenters at the coalface. In practice, it’s more complicated. There are certainly rule-bound administrators, but I’ve seen many faculty insist on rigid tech solutions, especially in assessment.
During the pandemic, I covered proctoring technologies and spoke with what felt like hundreds of institutions worldwide as assessment moved online. The stories were striking: students taking proctored exams in cars in campus parking lots; “no-mask” rules colliding with public or semi-public testing spaces like library breakout rooms; student pushback over mandatory room scans, even for PhD dissertation defenses. Whenever I suggested relaxing rules or dropping blanket requirements, the strongest resistance typically came from faculty, not administrators.
Sometimes this was a desire to run proctoring “out of the box,” rather than craft flexible, context-aware strategies. Other times, faculty didn’t see a need to adapt practices to the realities in front of them. Some departments pushed for extra “certainty,” such as two cameras instead of one, even as students struggled to buy hardware amid supply-chain shortages.
There’s another irony: faculty were ultimately responsible for reviewing and enforcing proctoring findings, yet seldom actually reviewed the results or interpreted them with anything remotely resembling flexibility. If the algorithm said the student was cheating, then it must be true. Only about 11% of test sessions tagged for suspicious activity by AI tools in ProctorU data were reviewed by instructors. Similarly, data from the University of Iowa found just 14% of flagged sessions were reviewed by faculty. In short, strict surveillance up front wasn’t matched by thoughtful, case-by-case review and flexibility downstream.
If assessment is a wicked problem, we have to name this dynamic: rigidity isn’t only administrative. When faculty default to inflexible, tool-first solutions, they undermine the adaptability the moment requires.
Two caveats, and some additional applications of the wicked-problem frame
In addition to my concern with the analysis that casts faculty as fighting for flexibility against a rule-bound administration, Corbin et al.’s argument has another weakness.
The “focus on learning” isn’t a trump card. The authors urge us to reorient assessment toward learning rather than policing. Traditional assessment advocates would say that has always been the point: to provide evidence that learning occurred, which is hard whether you treat assessment in traditional terms or as a wicked problem.
That said, I like the wicked-problem lens and can imagine applying it across EdTech contexts.
Student success is one obvious area, though I may be biased, having just started a newsletter on the topic. Unless you get very specific (say, retention of first-time, full-time freshmen), definitions are fuzzy, there’s no clear “solved” point, approaches are endless, and the stakes are high: classic wickedness.
Even something as mundane as implementing a new LMS/VLE benefits from this framing. No rollout is perfect, so institutions should make trade-offs, and make them explicit. Divergent rollout strategies should match context: a medical school, an online/professional unit, and a largely undergraduate college will need different, fit-for-purpose approaches. And implementation is never “done”; it requires iteration, follow-up, continuous improvement, and adding missing features, even if not starting from scratch.
Which brings me to a second caveat: wickedness predates generative AI. The definition is broad enough that most pre-GenAI assessment, and frankly many EdTech projects, qualify as wicked, as the examples above suggest.
I don’t think Corbin et al. make the case that generative AI creates uniquely wicked problems. It clearly amplifies them; it doesn’t uniquely create them.
Parting thoughts
Perhaps we should reframe the wicked-problem lens as applying to most issues in learning and EdTech. Given that generative AI isn’t going away, and will keep amplifying the inherent wickedness of teaching, learning, and technology, we need to stop the finger-pointing and cultivate the habits Corbin et al. identify across both administrators and faculty.
Higher education has a choice: double down on solutionism, with all its predictable pitfalls, in a futile attempt to “fix” a wicked problem; or embrace the wicked-problem framing and use the flexibility it permits to do the harder work of redesigning areas like assessment from the ground up: more authentic, harder to short-circuit, and ultimately more about evidencing learning than policing it.
The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.
Thanks for being a subscriber.