
At ASU+GSV this week the AI counterrevolution went mainstream. ASU leaders and others pushed back hard against Big Tech’s influence in education, voicing real concerns about cognitive load, the value of struggle, and what gets lost when students offload thinking to tools. That pushback deserves to be taken seriously.

Yet something more interesting—and potentially more lasting—also happened. In panels that drifted off script, people started talking seriously about what learning actually is and what it is for. Not as abstract philosophy. As an urgent operational question. The trigger, almost always, was AI.

AI may be giving us permission to have more honest conversations about learning outcomes and educational mission, even if it did not create the underlying problems.

The counterrevolution has a targeting problem

The pushback on AI in education is producing some legitimate thinking, and the concerns behind it are real. But the counterrevolution also requires honesty about what it is targeting.

A report released this week by Yale’s Committee on Trust in Higher Education highlights how deep these problems run. It details a decades-long erosion of public trust, driven by pervasive grade inflation that has undermined credential integrity, by intellectual monocultures on campus, by affordability pressures, and by debt burdens at professional schools that outrun expected earnings. The committee places significant responsibility on higher education institutions themselves for allowing these issues to accumulate over many years.

None of those problems were created by AI. Grade inflation was accelerating before ChatGPT existed. The debt-to-earnings mismatch was documented long before generative models could write a student essay. Intellectual insularity on campus is not a technology problem. These are choices institutions made, or allowed to accumulate, over a long period.

What ties them together is the long, quiet push to make education transactional. By “transactional,” I do not mean simply that students care about jobs or that institutions care about outcomes. I mean that higher education increasingly operates as a managed exchange rather than a formative experience: students accumulate credits, credentials, and services, while institutions reduce friction, protect revenue, and move students through the system. In that environment, too many decisions get optimized for completion, satisfaction, and administrative self-protection rather than for rigor, intellectual development, or a coherent educational mission.

And when the underlying goal becomes the piece of paper more than the learning it is supposed to represent, a cheating culture has room to flourish. Academic dishonesty becomes easier to rationalize when students see themselves as completing transactions rather than undergoing formation, and when institutions themselves send signals that progress toward the credential matters more than the integrity of the process.

Cognitive offloading to AI is simply the latest symptom of that same transactional mindset.

Which makes the counterrevolution’s targeting problem clear. The concern about offloading thinking to AI is legitimate. But if that concern does not extend backward—to the assessment regimes that reduced learning to completion metrics, to the credential inflation that substituted accumulation of credentials for demonstrated capability, to the affordability spiral that made the transactional degree a rational response to irrational costs—then it is reacting to a symptom while leaving the disease unexamined.

What AI is doing, at its most useful, is forcing the question: what are institutions actually for, and are they doing it? That question was available before. Apparently, the permission to ask it urgently required a technological disruption large enough that ignoring it was no longer comfortable.

The binary is the enemy

The most useful voices at ASU+GSV this week—and there were several—were the ones refusing to choose sides. Not pro-AI, not anti-AI. Pro-learning, with AI as one variable among many. Lev Gonick’s focus on measuring and supporting learning rather than debating the technology is the right frame. Leadership in this moment looks like holding that position under pressure from both directions—the vendor floor pushing adoption and the counterrevolution pushing resistance.

The binary resistance mentality may feel satisfying in the moment, but it is unlikely to change the practical integration of AI the way its loudest advocates hope.

What the workforce conversations got right

The most grounded thinking at the conference often came from workforce sessions, and not by accident. Workforce has a reality check built in that pure pedagogy conversations lack—employers, hiring data, economic mobility. When workforce panelists talked about what graduates actually need, the conversation had to stay tethered to evidence. That discipline is a model for how the broader learning conversation should work, and it is worth naming as a contrast to the sessions that remained aspirational.

The permission question

One of the more refreshing conversations at ASU+GSV was the simple reminder that real learning usually involves struggle. Not pointless friction or bureaucratic hoops, but the kind of effort that comes from reading a full book closely enough to wrestle with an argument, sitting with confusion long enough for understanding to develop, or working through an idea in a process that can be frustrating precisely because it is formative. That matters because education loses something essential when learning is reduced to checkboxes, task completion, or the fastest path past difficulty. AI did not create that tension, but it is helping force a more serious conversation about it.

If AI anxiety is what finally opens serious conversations about learning quality, institutional purpose, and the thirty-year drift toward transaction, that is worth something regardless of how the technology itself plays out. The risk is that the industry—vendors, investors, institutions—uses AI as a focusing mechanism without reckoning with the structural damage that preceded it. Permission to have the conversation is not the same as having it honestly. ASU+GSV showed both the opening and the risk in the same week.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.
