Success Porn in EdTech

Exaggerating reality, declaring success, and promoting endlessly

Was this forwarded to you by a friend? Sign up for the On EdTech newsletter, and get your own copy of the news that matters sent to your inbox every week. Interested in additional analysis? Try the On EdTech+ newsletter with our 30-day free trial.

We have a success problem in EdTech, we really do. And no, I don’t mean that there is too much winning.

The problem is that we are addicted to stories and examples of success. I have come to refer to this tendency as success porn, based on the following common characteristics.

  1. It is a caricature of reality. Bodies and actions are exaggerated far beyond what they actually are or look like, and any imperfections or problems are airbrushed away.

  2. People are indulging in pornography rather than, you know, the real thing. At least according to media reports.

  3. Like Justice Potter Stewart, we don’t need a definition of pornography; we know it when we see it.

Looking further at the way we approach success in EdTech, I note the following tendencies:

  1. Taking a project that has good outcomes but exaggerating its success, or minimizing its failures, problems, or challenges to the point where they are practically invisible.

  2. The widespread propensity to declare something a success and present it as such, regardless of whether it is true or only partially true. Success porn happens when we equate doing an EdTech project with it being a success, worthy of emulation elsewhere.

  3. Promoting those projects endlessly in the EdTech media as the solution to all our problems, the answer to all our prayers.

Why do these tendencies represent a problem?

Exaggerating reality & creating unreasonable expectations

Too often in EdTech we seize on the success of some projects but ignore or shut out the downsides and limitations. The promising small-N results are exaggerated to make it look like they were achieved at scale, and the challenges associated with implementing the project are minimized or hidden from view.

This was and continues to be my gripe with the CUNY ASAP project. It has produced some really good and promising results. But these results are on a small scale, not the giant impact that the endless promotion would suggest. Most accounts of the initiative don’t cover the limitations of ASAP (its self-fulfilling nature, the cost, etc.) or the challenges associated with scaling it up to have a meaningful impact on outcomes. You need to dig into the findings of the CCRC to find these challenges; they are just not in the picture.

In EdTech we have been through this before, many times. For example, staying with student success, we saw the same sort of thing play out with the Purdue Signals project. The small-N results were embellished, and the problems in the analysis were ignored. And it turns out that there are all kinds of other challenges associated with using early alerts to identify at-risk students. But we don’t pay much attention to those, because we have an ideal in mind based on these successes.

All projects have challenges and limitations, and understanding those factors makes the projects stronger and makes it possible for other institutions to adopt and adapt them. By leaving out the ugly parts, we do everyone, including ourselves, a disservice.

Too much of it happening and getting in the way of real progress

In EdTech we hear a lot about successes and not enough about the messy but inventive projects with mixed yet real results.

Because we exaggerate the results, ignore the shortcomings, and promote the heck out of a few projects, they become the exemplars to which other institutions aspire. So those other institutions try to do the same sorts of projects but often find themselves running into reality.

This ties in with my frustration with EdTech’s love of best practices. Folks see the awesome results of the examples touted by think tanks and the trade press and want some of that action. But not only do these best-practice examples lack fidelity to reality, they may also not be appropriate for a different setting, with different people. And so we see the same sorts of projects repeatedly.

You see some of this reflected in my own writing. I tend to use the same examples quite often. Part of the reason is that I really don’t like picking on folks and so I try to use examples I’ve used before instead of highlighting some new problem.

But as a sector we have relatively few examples to call upon, because the same ones are trumpeted repeatedly, and we end up with a bit of a boring monoculture. We need more experimentation and failure: people trying out different things, sometimes failing, and being honest about why. This is how progress will happen, rather than through endless efforts at repeating the success of, say, an Arizona State at online learning or a Georgia State at student success.

We know it when we see it

Those of us who have spent a long time in EdTech instinctively know when something is being promoted as a success but the reality underneath is quite different.

I have had that feeling countless times in my career, but two instances in particular stand out.

Once at a conference presentation, a senior administrator described the success of a project that I (and my neighbor who worked at the same institution) knew had been plagued by problems, largely because of the immature technology underlying it. But the project had been implemented and therefore was, by definition, a success. All we needed to do was exchange glances to acknowledge that we were hearing and feeling the same thing.

Similarly, going back to the Purdue Signals project mentioned above, I remember being at a learning analytics conference with Ellen Wagner years ago when the project was very popular. She asked someone associated with the project about the number of students involved and it was small. Really small. Ellen’s eyes widened and again we exchanged glances.

Why is all this a problem?

This endless promotion and exaggeration of success is a problem in EdTech because it gets in the way of real progress. Embroidering results, declaring success, and not dealing squarely with why something didn’t work mean that our successes stay at a superficial level. We seldom grapple with the problems of implementation and scale, and so we never break through to the next level. It also means that we often abandon experiments too early.

For example, at one point there was quite a lot of hype about helping students understand which courses and career pathways might be productive for them. Software tools were developed, for example at Austin Peay (later sold to D2L), that offered the promise of helping students navigate the tough questions that stop them from making timely progress through their programs and graduating on time. But it turns out that reality was a tad more complicated than the original hype suggested, and slowly but surely these projects were abandoned.

As described above, the way that success gets promoted results in a kind of monoculture in EdTech, with too many people trying to do the same kinds of things. The rest of us recognize that we are looking at success porn and grow cynical; that cynicism is corrosive and, in its own way, further harms progress in EdTech.

Parting thoughts

We need to change our approach to dealing with success in EdTech. Good results should be celebrated, but the obsession with inflated results does us no favors. Let’s be open and honest about what worked and what didn’t. Embracing this culture will help us make real progress, instead of gawking at the same airbrushed results while knowing there is a different reality underneath.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.