The Concept vs. The Reality of Earnings Accountability

A useful visual on why the results of the new earnings-based accountability metric will surprise many higher ed leaders


Over the next few years, many higher education leaders are going to confront a surprising policy outcome. Programs that clearly increase student earnings will fail federal accountability metrics.

When that happens, many leaders will initially assume the data must be wrong. In most cases, however, the data will be doing exactly what the policy requires. The real issue is simpler: the concept people believe the policy measures is not the same as the reality of what the metric actually measures.

I have written about this problem before, but two recent podcast conversations helped clarify the issue, and a new Urban Institute report provides a useful visual illustration of the gap.

The Concept People Believe the Policy Measures

In last year’s One Big Beautiful Bill (OB3), Congress implemented many higher education reforms in a surprising amount of detail. The accountability concept behind the new federal framework is intuitive and broadly supported on a bipartisan basis: did this program increase students’ earnings compared to what they would have earned otherwise? If not, don’t provide federal student loans for students in that program.

That is the concept policymakers and the public are responding to when they talk about earnings-based accountability that will become official in the coming months. And it’s a concept that many people—including me, for what it’s worth—support.

In an Illumination by Modern Campus podcast episode released today, I described the scope of this change and referenced that many programs that fail OB3 metrics might have to be shut down [lightly edited].

And what you have with the One Big Beautiful Bill (OB3) is that it's now statutory language that, at the program level, institutions have to look at, quote unquote, what's the return on investment for the graduates of this program. And if you fail two out of three years, you lose access to federal financial aid loans and potentially even Pell grants.

So there are a lot of details behind it, but the point is that now nearly every program is going to be evaluated this way. And the risk could be existential: can this program survive if we lose access to Title IV loans?

The Reality of the Earnings Premium Metric

The problem is that there is no reliable nationwide data to measure that comparison for the students in an academic program. You can't observe the counterfactual in which these same students never enrolled, so the benchmark must come from some other data source. The OB3 benchmark is the state-level median earnings of high school graduates (with no college) aged 25-34.
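As a rough sketch of how that check operates (the function names, dollar figures, and simplified pass/fail logic here are all illustrative assumptions, not the statutory formula):

```python
from statistics import median

def program_passes(completer_earnings, state_hs_median):
    """One program-year passes if median completer earnings exceed the
    state-level median earnings of high school graduates (no college), 25-34."""
    return median(completer_earnings) > state_hs_median

def loses_title_iv_access(yearly_results):
    """Failing two out of three years triggers loss of federal loan access."""
    return sum(1 for passed in yearly_results if not passed) >= 2

# Illustrative numbers: a regional program measured against a statewide benchmark
completers = [24_000, 26_000, 28_000, 29_000, 31_000]
state_benchmark = 30_000  # statewide median, pulled up by large metro areas

year_one = program_passes(completers, state_benchmark)  # fails: 28,000 < 30,000
```

The key point is in the comparison itself: completers are measured against a statewide figure, not against what those same students earned before enrolling.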

That metric is something fundamentally different. It ignores regional income differences within each state, gender wage gaps, and other demographic factors, and it applies the same benchmark to an undergraduate workforce certificate program as to a Stanford economics program. I described this issue on a recent episode of The Rant podcast with Eloy Oakley [lightly edited].

The example I use: let's say you're a woman in McAllen, Texas, and you see that your nominal income as a high school graduate would be about $20,000. Let's say you take some program and it boosts your earnings. That's what we're trying to go for.

But because the comparison group includes Austin, Dallas, Houston, it's a statewide median. The program I'm taking might be doing me a lot of good, but it might fail simply because of where I am, ignoring the demographics.

That's the fundamental argument: it's a poorly designed metric that misses the biggest variations. And once we see what happens . . . I can already tell you it's going to be predominantly the open-access publics and institutions in low-income areas that suffer the most. And I'm not trying to say they all should get a pass because they're low income, but smart public policy should take these things into account. So that's my fundamental argument here.

The metric does not ask whether a program increased its students' earnings. It asks whether completers' earnings clear a statewide benchmark.

A Compelling Visual Case

Thanks to the CSPEN conference, I found a recent Urban Institute report in which researchers evaluated the 2023 Gainful Employment rules using data from two real institutions, based on 2014–16 completions. The report looked at GE metrics (which differ slightly from OB3, though the concept and the reality are largely the same), and the researchers checked student earnings both before the program and three years after completion.

The metrics are not identical, but the visual results are compelling and show the concept vs. reality disconnect in real terms. I have annotated figure 1, which is based on a for-profit institution in Texas.

The easiest way to understand this gap is to look at actual program data.

[Figure 1 from the Urban Institute report, annotated by Phil Hill]

Under the concept of earnings-based accountability, the dental support program should be viewed as a success: it nearly doubled students' earnings (a 98% increase). The reality is that completer earnings remain below the statewide benchmark. The program added value and increased earnings, yet it would likely fail the OB3 metrics because of the concept vs. reality gap.
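To make the arithmetic concrete (the dollar figures below are assumed for illustration, not taken from the Urban Institute report), a 98% earnings increase can still land below a statewide benchmark:

```python
# Hypothetical figures illustrating the dental support example:
# earnings nearly double, yet the program still fails the statewide test.
pre_enrollment = 14_000                  # assumed median earnings before enrolling
post_completion = pre_enrollment * 1.98  # the 98% increase noted in the report
state_benchmark = 30_000                 # assumed statewide HS-graduate median

value_added = post_completion > pre_enrollment    # the concept: earnings went up
fails_metric = post_completion < state_benchmark  # the reality: below benchmark
```

Both flags come out true at once, which is exactly the disconnect the annotated chart shows: a program can deliver a large earnings gain for its students and still fail the benchmark test.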

The Coming Surprise

For many programs, this will be a non-issue as earnings will be much higher than the benchmark levels. But at the margins there will be many workforce-oriented, shorter programs and a growing number of discipline-specific degree programs where the issue will matter.

Higher education leaders often assume that policy debates hinge on ideology. In this case, the bigger issue may simply be how the metric works.

Over the next several years, we are likely to see programs that clearly improve students’ lives still flagged as failures under the federal accountability system.

When that happens, this chart may become one of the simplest ways to explain what went wrong.

And it’s why understanding the difference between the concept and the reality of the metric will matter so much in the years ahead.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.