Interesting Reads This Week
AI is changing how work gets done—we’re responding to everything else

Here on the Wasatch Front, everyone thinks it's spring. Clearly they never read T.S. Eliot.

The things I read this week all circle around a similar problem. AI is changing how work actually gets done. But many of our responses—whether in pedagogy, markets, or institutional structure—are focused on what it looks like on the surface rather than on the underlying work itself.
How the work gets done is changing
I read the many-authored HBS paper on the three modes of human work with AI a while back, and I keep coming back to it. I think it is a deceptively important paper, especially for higher education. There are distinct ways of using AI, and each has different implications for knowledge and understanding. As we design learning experiences, we should bear this in mind and seek to foster some approaches rather than others.
The authors—of whom there are many, though Ethan Mollick will be the best known to the OET audience—studied the workflow of 244 consultants from Boston Consulting Group, aiming to understand what the consultants actually do when they use AI.
Through exploration of the full problem-solving workflow, we reveal hidden patterns in how they structure, delegate, and in some instances, verify their collaboration with the system. What's striking is that although every consultant had access to the same tools and the same task, their choices about when to engage GenAI and how much authority to give it differed dramatically.
They identify three distinct types of interaction, which they term Cyborg, Centaur, and Self-Automater. These are not rigid categories, and the sample is narrow, but they are useful as archetypes for thinking about how people work with AI.
Cyborgs. This was the largest group, about 60% of the sample. They engaged in “fused co-creation” with AI and used it continuously throughout all stages of their work.
Their collaboration unfolded as an iterative dialogue: probing AI outputs, extending ideas, and validating results in a seamless rhythm of joint problem-solving.
Centaurs. This was the smallest group in the sample (about 14%), and they used AI selectively for specific subtasks while maintaining overall control of the process.