AI Is Not Advancing Linearly or Exponentially
It might be better to look at the advances as inflection points separating radically different phases, and that has implications for EdTech

Last week I shared a post looking at The Locality of Online Education, using AI tools (mostly Claude Code) to generate custom graphics based on three separate but connected datasets.
The locality analysis required something that was genuinely beyond what I could reliably do in December: combining multiple independent datasets—VOL survey reports across 14 years, NC-SARA institutional enrollment data, and IPEDS nationwide figures—into a coherent analytical framework with charts precise enough to publish. The December attempt with NotebookLM collapsed on a simpler version of that problem. Last week it worked.
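To make the scale of that task concrete, here is a minimal sketch of the kind of multi-source merge involved. This is an illustration, not the actual analysis: the column names, identifiers, and values below are hypothetical stand-ins, and the real VOL, NC-SARA, and IPEDS data are far larger and messier.

```python
import pandas as pd

# Hypothetical miniatures of the three sources; every identifier and value
# here is invented for illustration. The real datasets use different
# structures and need substantial cleaning before they can be joined.
vol = pd.DataFrame({"inst_id": ["A001", "B002"], "year": [2023, 2023],
                    "vol_online_pct": [38.0, 52.5]})
sara = pd.DataFrame({"inst_id": ["A001", "B002"], "year": [2023, 2023],
                     "sara_enrollment": [1200, 8400]})
ipeds = pd.DataFrame({"inst_id": ["A001", "B002"], "year": [2023, 2023],
                      "ipeds_de_enrollment": [1350, 9100]})

# The mechanical merge is the easy part; the analytical work is making sure
# an "online student" means the same thing in all three sources first.
merged = (
    vol.merge(sara, on=["inst_id", "year"], how="inner")
       .merge(ipeds, on=["inst_id", "year"], how="inner")
)
print(merged)
```

The point is not that any single step is hard. It is that an agentic tool can now run the whole chain of cleaning, joining, validating, and charting reliably enough to publish from.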

This experience points to what I consider a better framing for understanding AI and its implications for education in general and for EdTech specifically. Rather than viewing AI as one thing whose capabilities are increasing quickly, or even exponentially, what we're really seeing is a set of distinct capability phases separated by inflection points. It's not just a matter of doing certain tasks better over time; it's a matter of discovering the ability to do tasks that simply could not be done, at least in a reliable, cost-effective manner, just months ago.
This distinction matters because most strategy and policy discussions still assume steady improvement, not phase changes.
AI Phase Changes
This observation is not new, and it even predates the LLM era of AI. Vasant Dhar of NYU described the paradigm shifts in AI development from the late 1960s through the introduction of general-purpose LLMs in late 2022: from expert systems to machine learning to deep learning to general-purpose computing.
To understand the state of the art of AI and where it is heading, it is important to understand its scientific history, including the bottlenecks that stalled progress in each paradigm and the degree to which they were addressed by each paradigm shift.
What has happened in the past few years extends these phase changes, but at a much faster pace. Ben Thompson has described the AI era in terms of three inflection points.
Chat with GPT-3.5 in Nov 2022 - The basis for modern AI dates to 2017, but the GPT-3.5 release made the world see the possibilities. This phase, however, was marked by wrong answers and even hallucinations when the AI tool didn't know the answer.
Reasoning with OpenAI o1 in Sep 2024 - This phase meant that AI would get answers right much more often, but more significantly, it could reason about its responses, re-evaluate its approach to a question, and iteratively improve.
Agents with Claude Code/Opus 4.5 and OpenAI Codex/GPT 5.2 in Dec 2025 - The foundation models Opus 4.5 and GPT 5.2 were big improvements, but the combination with non-LLM capabilities in Claude Code and Codex changed the game. The generic concept of an agent changed, and the combined probabilistic/deterministic capabilities meant that real tasks could be accomplished, providing a whole new level of value.
Each of these inflection points changed not just what AI could do, but what it made practical to do. It is that third phase change that I have experienced in the past few months.
Moving Past Evolution in Institutions and Vendors
The problem is that much of current strategy—both institutional and vendor—is built on a model that no longer fits reality.
Most colleges and universities have responded to AI in good faith, developing policies, principles, and task forces at a rapid pace. But those efforts are largely grounded in assumptions from the previous phase of AI capability. The focus is on human-in-the-loop oversight, on mitigating hallucinations, on defining appropriate versus inappropriate use cases in relatively bounded terms. Those are not wrong concerns. They are incomplete.
They assume a category of tool that is fundamentally assistive—something that helps a student write a paper or helps a faculty member generate content, but remains unreliable enough to require constant supervision. That was a reasonable assumption in late 2022 and even into 2023. It is less reliable today, and it will be even less so going forward.
If AI capabilities are evolving through phase changes, then policies built on prior-phase assumptions risk being out of date before they are fully implemented. The issue is not that institutions are behind. It is that they are calibrating to a moving target as if it were stable.
A similar pattern shows up on the vendor side, but in a different form.
Most EdTech product development I see today treats AI as a feature layer—an enhancement to existing workflows. Add a chatbot to the LMS. Improve content generation in courseware. Automate pieces of advising or student support. Again, none of this is wrong. In many cases, these are useful improvements.
If the underlying capabilities are going through phase changes, that feature-layer assumption deserves more scrutiny. When tools can not only assist with tasks but also execute multi-step workflows, check their own work, and produce reproducible outputs, the boundary between "tool" and "system" starts to blur. In that context, improving existing features may be less important than rethinking what the product is actually for.
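As a generic illustration of that blurred boundary, the sketch below pairs a probabilistic generation step with a deterministic validation gate and a retry loop. Every function name here is invented; this is a pattern sketch, not any vendor's actual API.

```python
# A generic agent pattern: a model proposes work, deterministic checks gate
# the output, and failures feed back into a retry. All names are invented.

def llm_generate(prompt: str) -> str:
    """Stand-in for a model call; a real system would invoke an LLM here."""
    return f"draft for: {prompt}"

def validate(draft: str) -> tuple[bool, str]:
    """Stand-in for deterministic checks such as tests or schema validation."""
    ok = draft.startswith("draft")
    return ok, "" if ok else "output missing expected structure"

def run_task(task: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        draft = llm_generate(task)        # probabilistic step
        ok, errors = validate(draft)      # deterministic step
        if ok:
            return draft                  # output gated by explicit checks
        task = f"{task}\nFix: {errors}"   # feed findings back and retry
    raise RuntimeError("task failed validation after retries")

print(run_task("summarize enrollment trends"))
```

The design point is the pairing: the probabilistic step supplies flexibility, the deterministic gate supplies reliability, and the loop is what turns an assistant into something closer to a system.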
This is not an argument that current policies or product strategies are misguided. It is an argument that many of them are anchored in a prior phase of AI capability, even as the ground is shifting underneath them. Implicitly, this assumes a smooth continuum of improvement rather than a shift in what the system is.
The practical challenge, then, is not simply adopting AI. It is recognizing when the category of what is being adopted has changed.
Moving Past Evolution in Public Policy
The same dynamic shows up even more clearly in public policy, particularly at the state level and in the EU.
Most current AI policy frameworks are grounded in a version of AI that is already fading—systems that are brittle, narrowly scoped, and primarily risky because they produce incorrect outputs or embed bias. As a result, the policy focus centers on transparency, explainability, human oversight, and risk categorization. Again, none of this is wrong. But it reflects a continuity assumption: that we are regulating a stable class of tools getting incrementally better.
That assumption is increasingly strained.
When AI systems begin to cross phase thresholds—moving from generating outputs to executing workflows, from requiring constant supervision to performing bounded tasks with internal validation—the nature of both risk and impact changes. The policy question shifts from “how do we constrain unreliable tools?” to “how do we govern systems that are becoming operational actors?”
Many current frameworks, particularly in the EU, are highly structured around predefined risk categories and compliance regimes: frameworks built around disclosure, risk tiers, and mandated human oversight. The problem is not that these are too strict; it's that they are too static. They assume that capabilities evolve gradually enough for categories to remain meaningful over time. Phase changes break that assumption.
The result is a growing mismatch: policies designed for yesterday’s AI are being applied to today’s systems and will likely be enforced against tomorrow’s. That doesn’t just create compliance burdens—it risks missing the most important changes in how AI is actually being used.
The challenge for policymakers is similar to that facing institutions and vendors: not simply to regulate AI, but to recognize when the object of regulation has fundamentally changed.
The Addition That Matters
This is what the phase change looks like in practice. I might have been able to produce the locality analysis in mid-2025 with enough Excel manipulation, Tableau work, manual data collation, and AI assistance on the margins. But I probably wouldn't have. The opportunity cost would have been prohibitive—a week of grinding work for a newsletter post. In the end, I would have reported the VOL numbers, noted the methodology caveats, and moved on.
What's changed in this case is not that AI made me faster at work I was already doing. It's that AI has made certain analytical projects feasible that would otherwise have remained in the "worth doing but won't get done" category. That distinction—expanding what gets done at all—is more important than speeding up what already gets done.
The practical challenge, then, is not simply adopting AI. It is recognizing when the category of what is being adopted has changed—and then acting as if that recognition actually matters. Most institutions, vendors, and policymakers are not doing that yet. They are calibrating carefully to a target that has already moved, and will move again.
The phase changes are not waiting for the strategies to catch up.
In future posts, I’ll explore what this means for institutional strategy and policy assumptions that may already be out of date.
The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.
Thanks for being a subscriber.