Interesting Reads This Week
No easy solutions

Reminder: prices for On EdTech+ subscriptions change on March 1st. Monthly subscribers can lock in today’s pricing by switching to an annual plan.
Was this forwarded to you by a friend? Sign up for the On EdTech newsletter and get your own copy of the news that matters sent to your inbox every week. Interested in additional analysis? Upgrade to the On EdTech+ newsletter.
What all happened in EdTech, and what did I read this week?
Everything pointed to the same uncomfortable truth: there are no easy solutions in the complex system that is higher education. Tools, practices, and jobs exist within larger systems, and changing them without understanding that complexity is likely to unleash a lot of unintended consequences.
Tools depend on teaching
New research from folks at the University of Toronto, Stanford, and Khan Academy sheds some interesting light on the impact of tutoring platforms, such as Khan Academy itself.
It is a dense piece of research to parse, but it offers several findings that confirm what we already know about online tutoring (or computer-assisted learning, CAL), alongside others that raise new questions.
It is also a strong piece of empirical work, drawing on a large dataset and a quasi-experimental design, a welcome change from the endless randomized controlled trials in this space and their often questionable findings. Much of the existing evidence on tools like Khan Academy comes from idealized experiments, leaving open the question of whether they work under messy, everyday conditions.
As I mentioned, it is a complex argument, so I will let the authors describe their method.
We assemble a three-year panel dataset including 490,000 observations, 214,000 students, 11,000 teachers, and 1,000 schools across the United States. Leveraging within-teacher variation in CAL usage over time, we estimate the incremental average treatment effect of each additional CAL hour per school year. Rather than estimate how a student’s own CAL use affects their math learning, a relationship prone to hidden confounders, we estimate the impact of CAL usage based on variation in teacher use over time.
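To make that identification strategy concrete, here is a minimal sketch of a within-teacher fixed-effects regression on synthetic data. This is not the authors’ code, and the variable names (cal_hours, math_score, teacher_id) and effect sizes are my own illustrative assumptions; the point is simply that the teacher dummies absorb fixed teacher quality, so the CAL-hours coefficient is estimated from changes in a teacher’s usage across years rather than from differences between students.

```python
# Hedged sketch of a within-teacher fixed-effects panel regression.
# All names and numbers are illustrative; the data is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, n_years, n_students = 200, 3, 10

rows = []
for t in range(n_teachers):
    teacher_effect = rng.normal(0, 5)      # unobserved, time-invariant teacher quality
    for y in range(n_years):
        cal_policy = rng.uniform(0, 20)    # this teacher's CAL hours in this year
        for _ in range(n_students):
            cal_hours = max(cal_policy + rng.normal(0, 2), 0)
            # assumed "true" incremental effect: +0.5 score points per CAL hour
            score = 50 + teacher_effect + 0.5 * cal_hours + rng.normal(0, 10)
            rows.append((t, y, cal_hours, score))

df = pd.DataFrame(rows, columns=["teacher_id", "year", "cal_hours", "math_score"])

# Teacher fixed effects soak up fixed teacher quality, so the cal_hours coefficient
# is identified from variation in usage *within* each teacher over time.
model = smf.ols("math_score ~ cal_hours + C(teacher_id) + C(year)", data=df).fit()
print(model.params["cal_hours"])  # recovers roughly the assumed 0.5
```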
The good news is that they find the tool improves math scores. Even at fairly low levels of use—about 11 minutes per week, compared with the recommended 30—students’ scores improve. They also find a roughly linear relationship between usage and learning: more use leads to more improvement. And, as we have come to expect from many studies of EdTech, stronger students tend to benefit more than weaker ones.
One intriguing, but ultimately intensely frustrating, aspect of the research is the authors’ use of the concept of teacher efficiency. Dig into the report, though, and that concept gets to the heart of the matter, so bear with me.