Excellent Research Confirms Flaw in Earnings Premium
Exploring ROI and the differences in state vs. local earnings metrics, with recommendations for the upcoming NegReg

US higher education is about to face a whole new world of accountability: starting in July, academic programs in all sectors will be subject to new regulations based on graduate earnings. The Financial Value Transparency & Gainful Employment (FVT & GE) regulations were enacted in 2023, with the first data submission six weeks ago, and the OBBBA law was signed in July, with the associated regulations entering negotiated rulemaking (NegReg) in early December. All of this means that each program will be judged primarily by an Earnings Premium metric, where failure would eventually lead to the loss of participation in federal student loan programs.
A new report released this week by three researchers at the University of Wisconsin-Madison looks at a seemingly esoteric issue that could have a big and undesired impact on regional and access-oriented institutions. That issue hinges on how one of the three metrics in the report - Earnings Premium - is defined.
About the Report
One of the leading efforts comes via the Postsecondary Value Commission, which in 2021 developed three core metrics quantifying the economic return on investment (ROI) for every college in the United States: Minimum Economic Return, Earnings Premium, and Economic Mobility. In this report, we explore how sensitive these three metrics are to different assumptions. Specifically, what happens when we recalculate these metrics using local as opposed to statewide earnings data?
The broad look at measuring ROI is valuable (read the whole report), but I want to focus on Earnings Premium in particular, as that is the metric hitting higher ed in reality.
One problem described in the report, confirming On EdTech reporting, is that state-level comparison groups are not appropriate for measuring earnings gains at the majority of colleges and universities - they create a bias.
Most colleges operate in regional markets defined not by state or national boundaries but by much smaller geographic areas.
Geography also plays a major role in shaping earnings, cost of living, and other economic outcomes in the U.S. For example, rural communities often have lower earnings than urban or suburban areas in their own states. North Carolina’s Research Triangle, home to three top tier research institutions and a robust science and technology sector, is a prime example. This region fares better in most economic measures compared to the rural Appalachian regions of the state. Likewise, the Seattle metro area, headquarters to several of the world’s largest and most profitable technology companies, has higher median incomes, higher cost of living, and lower unemployment rates compared to the state’s rural regions.
Notably, the new Postsecondary Value Commission (PVC) report uses a better geographic region definition than I have used, and it introduces valuable visualizations and a summary of how many institutions might be impacted.
Disclosure: I have provided a declaration in a lawsuit against the FVT & GE regulations on this and similar topics (gender is another problem), and I have advised schools figuring out how to navigate the new and upcoming rules.
I should note up front that the report does not use the same Earnings Premium as implemented in GE & FVT or OBBBA. The age group definitions differ, as does the report's institutional rather than program-based view. But these differences do not change the underlying logic.
A Better Geographic Region Definition
In my own analysis, I have used Public Use Microdata Areas (PUMAs), which are admittedly too small and do not cross state lines. The PVC researchers instead use commuting zones, of which there are 625 in the US.
Commuting zones are somewhere in the middle—they are smaller than states, yet larger than a single city—and they are especially relevant in the context of higher education and across the social sciences where researchers commonly use commuting zones to study how employment, earnings, health, upward mobility, poverty, and education vary by where one lives. Commuting zones are statistically-derived measures of local labor markets based on the U.S. Census Bureau’s journey-to-work data.

This is a better approach, and I plan to use it for future coverage of this topic.
Useful Graphics
I have used graphics to essentially prove the point that state-level Earnings Premium comparison metrics are flawed and should be replaced with regional metrics, and that this is feasible with existing data. But the PVC report goes further, analyzing the majority of US higher education institutions to show how each one fares on the state vs. local issue.
For example, the PVC report shows its institutional Earnings Premium metric at the state level (vertical axis, above $0 is passing) vs. the commuting zone local level (horizontal axis, right of $0 is passing), with this example for the state of Kentucky. Note that “institutions benefitting from using local earnings are orange, institutions benefiting from using state earnings are blue.”

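As a rough guide to reading that chart, here is a minimal matplotlib sketch that reproduces the quadrant layout with randomly generated, hypothetical institutions - the axes, $0 thresholds, and color convention mirror the report's figure, but none of the data comes from it.

```python
# Sketch of the state-vs.-local Earnings Premium quadrant chart described
# above, using randomly generated hypothetical institutions rather than
# the PVC report's Kentucky data.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Hypothetical Earnings Premiums vs. the local (commuting zone) median,
# and vs. the state median, which in low-income regions is often lower.
ep_local = rng.normal(5_000, 12_000, n)
ep_state = ep_local - rng.normal(6_000, 5_000, n)

# Orange: institutions that do better under local earnings comparisons;
# blue: institutions that do better under state earnings comparisons.
colors = np.where(ep_local > ep_state, "tab:orange", "tab:blue")

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(ep_local, ep_state, c=colors)
ax.axhline(0, color="gray", lw=1)  # above $0 on vertical axis = passes state test
ax.axvline(0, color="gray", lw=1)  # right of $0 on horizontal axis = passes local test
ax.set_xlabel("Earnings Premium vs. local (commuting zone) median ($)")
ax.set_ylabel("Earnings Premium vs. state median ($)")
ax.set_title("Hypothetical institutions: state vs. local Earnings Premium")
plt.show()
```

The lower-right quadrant is the interesting one: institutions there pass the local test but fail the state test.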
The PVC report also shows, per state, the number of institutions that would benefit from using the commuting zone local level: 51 colleges and universities in Texas, 37 in California, and 441 in total for Earnings Premium (and 754 across all three metrics in the report).

The Key Findings
It is all well and good to look at economic returns to hold institutions accountable, but flawed metrics will have flawed consequences.
And this finding should be the most important takeaway [emphasis added].
In total, 754 unduplicated institutions in 47 states are positively affected by using local earnings when calculating the Postsecondary Value Commission’s ROI metrics. These 754 institutions would have failed to pass a given ROI metric had we only used state-level earnings. But when we use local earnings, which account for differences in cost-of-living and capture local contexts, these 754 institutions now pass. This represents about 16% of all colleges and universities in the U.S., suggesting one seemingly small tweak—using local rather than state-level earnings—can impact a nontrivial share of the nation’s colleges and universities.
And what is the unsurprising commonality of these institutions that would benefit from a local metric (in other words, those penalized by the chosen state-level metric)? [emphasis added]
Using local earnings in these calculations not only captures important statistical variation, it also has a greater effect on institutions that are most deeply tied to their local regions and economies. For instance, we find institutions benefiting most from local measures tend to be public, have broad-access missions, and serve disproportionately high shares of Pell grant recipients. Approximately one in five public institutions would pass the Postsecondary Value Commission’s ROI metrics had local earnings been used in the calculation. But using state-level earnings makes these institutions fail. In addition to the impacts on the public sector of higher education, using local earnings also tends to benefit institutions located in rural places and places with lower incomes, higher child poverty rates, and in many cases higher shares of people of color. More research is needed to fully understand these patterns, but we are finding evidence that geographic variation in earnings can disproportionately affect certain institutions and communities, meaning it should be accounted for when calculating economic ROI metrics.
The problem is not that more institutions should pass; it is that colleges and universities in low-income areas are disadvantaged and may fail even when their graduates see a boost in income.
Alignment
A recurring theme at On EdTech is how the chosen metric, as defined by the federal government, has a fundamental flaw - one that I mentioned again just yesterday.
The policy idea behind ROI is to hold institutions accountable for the economic gains of program completers. A graduate of an undergraduate program should make more than a high school graduate, and those completing graduate programs should have better earnings than undergraduate completers. The problem, however, is that completer earnings are aggregated at a program cohort level (e.g., for the 40 students graduating in 2022 and 2023 combined) but the comparison groups are aggregated at a completely different level (e.g., for an entire state or metropolitan region). This difference blurs demographic distinctions such as different wage levels in different geographic areas, and different wage levels for women and men. See Al Essa’s coverage for additional analysis.
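To make that mismatch concrete, here is a minimal sketch of the Earnings Premium arithmetic. All numbers are hypothetical, invented for illustration, and the function is a simplification of the actual FVT & GE and OBBBA calculations.

```python
# Minimal sketch of the Earnings Premium (EP) comparison-group problem.
# All numbers are hypothetical illustrations, not data from the PVC report
# or the FVT & GE / OBBBA implementations.

from statistics import median

# Earnings for a small program cohort at a hypothetical rural,
# access-oriented college (completers pooled across two years).
cohort_earnings = [31_000, 33_500, 34_000, 36_000, 38_500, 41_000]

# Median earnings of the comparison group (high school graduates) at two
# different aggregation levels. In low-income regions, the local median
# typically sits well below the statewide median.
state_hs_median = 38_000   # statewide high school graduate median
local_hs_median = 29_000   # commuting zone high school graduate median

def earnings_premium(cohort, comparison_median):
    """EP = median completer earnings minus the comparison-group median.
    A premium above $0 passes; at or below $0 fails."""
    return median(cohort) - comparison_median

ep_state = earnings_premium(cohort_earnings, state_hs_median)
ep_local = earnings_premium(cohort_earnings, local_hs_median)

print(f"EP vs. state median: ${ep_state:,.0f} -> {'pass' if ep_state > 0 else 'fail'}")
print(f"EP vs. local median: ${ep_local:,.0f} -> {'pass' if ep_local > 0 else 'fail'}")
# EP vs. state median: $-3,000 -> fail
# EP vs. local median: $6,000 -> pass
```

The completers in this sketch out-earn their local peers by a comfortable margin, yet the program fails the state-level test - exactly the pattern the PVC report finds concentrated among public, broad-access institutions in low-income regions.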
I have been describing this problem for over a year, and recently Al Essa in his newsletter did his own analysis, coming to similar conclusions.
The new PVC report is even more important. While Al Essa is a serious researcher who shows his data and methods, who knows - maybe my sunny disposition influenced him.

The new report was done completely independently and carries the implied endorsement of BMGF, IHEP, and many other commission members.
Recommendations
The FVT & GE regulations could be improved, and the Department of Education (ED) is holding NegReg sessions that include this topic starting in early December. The PVC report has clear recommendations [emphasis added].
Using local-level (rather than state-level) earnings data when reporting EP and D/E rates in the FVT framework would provide a more accurate and contextualized statistic for current or future students. For instance, if the goal is to provide students with actionable information, then a student attending a rural college who plans to live in a rural place after college might find it irrelevant to compare earnings against the statewide median. Instead, they may want to see how a program’s earnings compare to programs at other rural-located institutions or other nearby places. The same argument can be made for urban and suburban locations, or those crossing state lines. Adjusting measures for geographic contexts provides more useful information at relatively low cost and could be incorporated into the FVT framework during future negotiations and updates.
One problem, as I have noted, is that while ED could modify the FVT & GE regulations on its own, the OBBBA metric is written into law, and changing the state-level language of the statute would take an act of Congress. But even here, the PVC report has a very useful recommendation [emphasis added].
While the Postsecondary Value Commission’s metrics are not exactly the same as OBBBA’s, the Earnings Premium metric is most similar to this new law. As a result, we anticipate many public community colleges, institutions serving lower-income students, and located in lower-income communities will be disproportionately affected by comparing earnings to state-level medians rather than to local earnings. The U.S. Department of Education is undergoing negotiated rulemaking in the winter of 2025 and early 2026, where they will develop an appeals process for programs failing the new test. Perhaps programs failing the state-level test can use local-level earnings upon appeal; those passing the test with local earnings could be reinstated or otherwise not penalized.
Amen, and amen. And kudos to PVC and the research team for such a well-done report.
