The State University of New York's Board of Trustees executive committee approved a Systemwide AI Policy on April 30. The policy is worth reading not because it is unusually bad or unusually bold, but because it is so representative of where higher education AI policy currently sits. It creates a responsible-use framework, recognizes real institutional risks, points campuses toward local implementation, and says many of the right words about privacy, accountability, equity, shared governance, and training.

It also largely misses where AI is heading and how students are affected.

This is not just a SUNY issue. Across higher education, too many AI policies still read as if the core question is how to manage ChatGPT usage without upsetting existing academic processes. Should students be allowed to use AI? How should faculty disclose expectations? How do we protect privacy? How do we avoid bias? How do we preserve academic integrity? These are all legitimate questions, but they are no longer sufficient questions.

Two strategic failures stand out. The policy is governing yesterday's AI. And the policy explicitly endorses the inconsistency students are reporting in their day-to-day experience moving through courses. Neither is an accident, and the explanation in both cases is the same—a policy designed to fit comfortably alongside everything that already exists is a policy that will not change what institutions actually do.

The policy is protective, by design or otherwise

The unifying logic of the document is protective. It acknowledges AI prominently enough to satisfy a board vote, a press release, and a regulatory landscape that increasingly expects institutions to have something on file. It does so without constraining any process, governance body, or faculty assumption that currently exists. Existing institutional frameworks are not challenged; they are encouraged to "extend" to AI. Campuses must publish or update policies, but the structure of the work is to weave AI into the existing policy landscape rather than force a reconsideration of that landscape.

Existing procurement practices remain intact, with new language to be added "to the extent possible." Existing shared governance arrangements are explicitly preserved as the right venue for pedagogical decisions, with course-level discretion protected by name. The risk-based governance requirement defers categorization to each campus, which means in practice that whatever each campus is currently doing will likely qualify.

This is not an accusation of bad faith. There are legitimate reasons to write a protective AI policy. Faculty governance norms are real. Operational disruption is costly. The technology is moving faster than institutional processes can absorb. And the alternative—a policy that addresses difficult questions and constrains current practice—would have triggered fights the system office might reasonably want to avoid.

But the consequence is the same regardless of intent. Institutional policies that avoid the real but challenging problems faced by students and faculty will fail to make a difference. The two strategic failures that follow are not separate from this design choice.

The policy is governing yesterday's AI

Read the AI System definition carefully. It describes a system that uses "model inference to formulate options for information or action." Formulate options. That is the pre-2025 framing—AI as recommendation engine, with a human-reviewable surface between output and institutional action. It is not language that contemplates systems that take action themselves, chain tool calls, execute code, or operate inside institutional systems autonomously. The entire document is structured around the chat-era assumption that there is always a human in the loop to review what the AI proposed.

A few weeks ago I described the uneven nature of AI developments, with inflection points and phase changes.

Most colleges and universities have responded to AI in good faith, developing policies, principles, and task forces at a rapid pace. But those efforts are largely grounded in assumptions from the previous phase of AI capability. The focus is on human-in-the-loop oversight, on mitigating hallucinations, on defining appropriate versus inappropriate use cases in relatively bounded terms. Those are not wrong concerns. They are incomplete.

They assume a category of tool that is fundamentally assistive—something that helps a student write a paper or helps a faculty member generate content, but remains unreliable enough to require constant supervision. That was a reasonable assumption in late 2022 and even into 2023. It is less tenable today, and it will be even less so going forward.

If AI capabilities are evolving through phase changes, then policies built on prior-phase assumptions risk being out of date before they are fully implemented. The issue is not that institutions are behind. It is that they are calibrating to a moving target as if it were stable.

SUNY’s policy is a case in point.

The accountability principle has the same vintage problem. "The ultimate accountability for work completed and actions made by, or in conjunction with, AI systems must rest with human beings." True in the abstract. Operationally meaningless when applied to systems that generate hundreds or thousands of micro-decisions inside workflows no human reviews in real time. The policy does not distinguish human-in-the-loop from human-on-the-loop—the difference between a person reviewing each consequential output before action and a person supervising a system that is already operating. This is the inflection point from late 2025. A governance framework that cannot see the distinction cannot assign accountability for agentic action; it can only restate the principle that someone is responsible.
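
To make the distinction concrete, here is a minimal sketch in Python. Every name in it is hypothetical; nothing below comes from SUNY's policy or any vendor's actual interface. The structural difference is where the person sits relative to execution:

```python
# Minimal sketch of the oversight distinction. All names are hypothetical;
# nothing here reflects SUNY's policy language or any real vendor API.

def human_in_the_loop(proposed_actions, approve):
    """Chat-era assumption: a person reviews each output before it acts."""
    for action in proposed_actions:
        if approve(action):               # review happens before execution
            print(f"executed: {action}")
        else:
            print(f"blocked:  {action}")

def human_on_the_loop(agent_actions, audit):
    """Agentic reality: the system acts; a person supervises a running log."""
    log = []
    for action in agent_actions:
        print(f"executed: {action}")      # execution happens as generated
        log.append(action)
    audit(log)                            # oversight is retrospective

actions = ["draft advising email", "send advising email", "update student record"]
human_in_the_loop(actions, approve=lambda a: "record" not in a)
human_on_the_loop(actions, audit=lambda log: print(f"audited {len(log)} actions"))
```

In the first loop, nothing happens until a person says yes. In the second, the person is auditing a log of things that have already happened. The policy's accountability language describes only the first.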

Procurement is the next casualty. The requirement to "preserve SUNY's decision-making authority" was written for a product landscape in which decision-making was discrete and reviewable. Current vendor offerings routinely include agentic capabilities that take multi-step actions inside institutional systems—drafting and sending communications, modifying records, executing code against institutional data, navigating administrative workflows. The policy has no vocabulary for any of this. It cannot ask vendors the questions that need to be asked, because the framework was not written from a posture that knows those questions exist.

The deepest version of this blindness is in program outcomes. The policy treats AI exclusively as something used inside SUNY. It does not treat AI as something reshaping the labor markets SUNY credentials are supposed to prepare graduates to enter.

That distinction matters. If agentic systems compress entry-level work in software, finance, legal services, marketing, customer support, and business analysis, then AI policy is also workforce policy and curriculum policy whether the document admits it or not. The Education principle addresses AI literacy. Literacy is necessary but not sufficient. The harder question—whether the programs students are paying to complete still lead to the labor markets the institution implies they do—is not in the document.

No policy can solve every problem, but a systemwide AI policy that bills itself as providing a framework should at least frame the big questions that need to be addressed.

The policy reinforces the inconsistency students are reporting

If the first failure is about what the policy cannot see, the second is about what the policy refuses to coordinate.

Surveys of students on AI policy tend to raise the same issue, perhaps best expressed by the Cal State Student Association earlier this year.

One of the most repeated themes among students is the absence of a consistent, transparent classroom policy on AI use. This inconsistency is reflected in student survey data, where 61.4% of students generally disagree that their professors encourage the use of AI in coursework, despite 64% of students generally agreeing that AI has positively affected their learning at the university. Students described situations where some professors encourage AI literacy while others penalize any perceived use of it, creating confusion, fear, and mistrust. This contradiction, students recognizing learning benefits from AI while receiving little to no consistent instructional support, highlights the misalignment between faculty practices and students’ experiences and outcomes. Students believe that the conflicting faculty approaches have left students unsure what constitutes “acceptable use.” That uncertainty is further reinforced by the fact that 66.8% of students generally disagree that their professors teach them how to use AI effectively, suggesting that discouragement often occurs without guidance.

In the SUNY policy, the third guiding philosophy states that pedagogical AI policies should be developed through shared governance and "should take care not to limit creativity and experimentation in teaching, research, and learning." Read in context, that sentence does the opposite of what students would want. It pushes pedagogical AI decisions down to shared governance bodies—which in practice means departments, then individual faculty—and explicitly protects course-level discretion as a value the policy will not constrain.

That is the language of preserving faculty autonomy. It is not the language of cross-course or cross-section coherence for the students who actually move through SUNY's curricula. Students are encountering different AI rules in adjacent courses in the same program, or different rules across sections of the same course, with no coherent expectation about what they are supposed to learn to do—or not do—with the technology by the time they graduate. The SUNY policy does not engage that finding. The Roles and Responsibilities requirement is institutional—management and oversight—not pedagogical. The Education principle covers AI literacy curricula, not students' day-to-day experience navigating a patchwork of course-level rules. In practice, that amounts to endorsing variability as the price of preserving faculty autonomy.

A policy that took student consistency seriously would have to constrain faculty discretion in some defined way—a minimum syllabus disclosure standard, a baseline taxonomy of permitted, restricted, and required AI uses, or a program-level coherence requirement that departments articulate consistent expectations across the courses a student actually takes. Each of those creates friction with shared governance. SUNY's policy resolves the tension by deferring entirely to faculty-level autonomy and treating the resulting variability as a feature. That is the protective posture in its sharpest form, and it is the position that produces exactly the experience students keep reporting.
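
As an illustration of what even the lightest of those options could look like, here is a hypothetical sketch of a baseline taxonomy. The three category names come from the paragraph above; the structure and examples are invented for illustration, not drawn from SUNY or any existing standard.

```python
# Hypothetical sketch of a minimum syllabus disclosure standard.
# The three categories come from the paragraph above; everything else
# (names, activities) is illustrative, not drawn from any real policy.
from enum import Enum

class AIUse(Enum):
    PERMITTED = "permitted"      # students may use AI for this activity
    RESTRICTED = "restricted"    # AI use allowed only under stated conditions
    REQUIRED = "required"        # the assignment assumes AI use

# A per-course declaration a program could be required to publish and
# keep consistent across the courses a student actually takes.
syllabus_disclosure = {
    "brainstorming": AIUse.PERMITTED,
    "first drafts": AIUse.RESTRICTED,
    "data analysis exercises": AIUse.REQUIRED,
}

for activity, rule in syllabus_disclosure.items():
    print(f"{activity}: {rule.value}")
```

The point is not the implementation; it is that even a standard this small would constrain course-level discretion more than the policy is willing to.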

The inconsistency students experience is the visible surface of a deeper question the policy declines to engage: assessment integrity. This is what faculty are most often grappling with right now—not whether AI is broadly useful or broadly threatening, but what counts as a student's own work in a course where AI tools are accessible, capable, and in many cases permitted by the institution itself. The SUNY document does not contain the words cheating, plagiarism, integrity, assessment, or authorship. The closest it comes is a single phrase distinguishing pedagogical AI policy from policy "relating to student evaluation and grading," and that distinction is used only to push assessment governance further down—to the same course-by-course discretion that produces the inconsistency in the first place.

The result is that students and faculty are experiencing the same structural failure from opposite sides. Students see a patchwork of rules across courses with no coherent expectation about what they should learn to do with AI by graduation. Faculty are left to manage, individually and without institutional backing, the harder question underneath: what a student's submitted work actually demonstrates, and by extension what a SUNY credential is now supposed to mean. Those are not separable problems. Inconsistent AI rules across courses are not just an inconvenience for students; they are the form a system-level integrity vacuum takes when it is pushed down to the syllabus level. A policy that declines to engage either side is, in practice, a policy that has decided this question is too costly to take a position on.

The pattern, not the document

These are not separate critiques, and they are not specific to SUNY. The policy that miscategorizes the technology and the policy that declines to coordinate the student experience are the same policy, and the design logic that produces both is showing up across the systems and institutions writing AI policies right now. Acknowledge AI prominently. Constrain nothing currently in place. Defer the operational questions to local discretion or to a future review cycle. The result is a document trustees can approve, press offices can announce, and the field can quietly continue to ignore in practice.

A policy that does not threaten existing assumptions cannot govern a technology that is already overturning them. The questions these policies decline to engage will not wait for the next review cycle. Students are already living inside the gap.
