Research Background Prior To Gates Foundation Response
In a recent post on the Gates Foundation-funded CourseGateway initiative, I challenged the claim that adoption of courseware is a key component of improving the performance of underrepresented student groups in gateway courses.
Really? Back up that claim if you want us to believe that this initiative will produce any results. The Tyton report funded by the Gates Foundation suggests that product review of courseware might not be the answer, at least based on the input from real educators teaching real students. And there is no other research or, you know, evidence presented that suggests that courseware adoption as an intervention will improve the DFWI trends of underrepresented students in gateway courses, aka “the problem to be solved.”
To her credit, Alison Pendergast of the Gates Foundation replied to the post.
Hi Phil – I think you know that I am the Senior Program Officer working on these courseware-related initiatives. I welcome the opportunity to speak with you about our goals and answer any questions you have. At the core, we are committed to helping Black, Latino, and Indigenous students, and students experiencing poverty equitably achieve a quality credential or degree. One (of many) focus areas is working with institutions, organizations, associations, and people who are developing innovations in equity-centered teaching practices and digital learning tools and curricula. I would encourage you to dig into the good work of Every Learner Everywhere. With specific regard to Coursegateway, it is simply a tool to help faculty discover, evaluate, select, and implement quality digital learning tools and implementation best practices. This is an extension of the work we did together on the Next Gen Courseware Challenge and courseware-in-context framework. The Tyton research you reference above is the longitudinal research we support and make available to the field to see trends in teaching and learning. The EBT research is one way to look at the impact of digital learning curriculum tools like courseware and how it supports faculty practice. The use of EBTs is not a measure of the ultimate efficacy of courseware in closing gaps. We also continue to invest in educational research to help build the evidence base for what works in what context with our focus students. Hope this helps clarify. Feel free to give me a shout if you want to discuss this further. We deeply value your insights and expertise…
I replied to Alison:
Hi Alison, good to hear from you and thanks for the comment. I appreciate the offer to answer any questions I have, as well as the description of goals in the comment. I think it would benefit the community to do this in public, however. Would you be open to an email Q&A that would then be collected into a blog post? You could also provide an introduction (perhaps your comment above) that would be included with the Q&A. Let me know.
This interaction led to a breakfast meeting last week with Alison and Rahim Rajan, Deputy Director of Postsecondary Success and previously our program officer during e-Literate TV days. Alison and Rahim agreed to respond to my questions in writing within the next week or two, which I appreciate.
Based on multiple conversations I had with readers of the blog, I realized that I should lay out more of the evidence base that I’m aware of regarding courseware adoption, particularly for Gates Foundation-funded research. This means a wonky post with a good deal of history – so let the reader be forewarned.
Context
As I have described in the third post of the series, the Gates Foundation has funded various adaptive courseware initiatives for more than a decade. Further, the foundation has consistently focused on lower division, mostly gateway courses and the performance of Pell recipients and underrepresented minority (URM) student groups within those courses. That is the problem to be solved – improving overall and URM student performance in gateway courses.
The intervention promoted by the Gates Foundation has been the adoption of high-quality adaptive courseware along with professional development, course redesign, and support services, almost always with courseware as the central EdTech innovation. The question is the effectiveness of that multi-dimensional intervention in addressing the problem; our focus is on student outcomes.
The Gates Foundation has not just funded the adoption of courseware and associated services; it has also funded multiple research initiatives to build an evidence base to support future work. While there are other research results around courseware, they tend to be very narrow in scope, covering specific implementations in specific disciplines or courses. The foundation has sought to expand the scope of research.
Much of the research was contracted with SRI Education (through 2018), and then with Digital Promise after the lead researcher Barbara Means changed organizations.
The First Five Years
The first report worth noting was SRI’s (2014) “Lessons from Five Years of Funding Digital Courseware.” That report looked at “12 major postsecondary courseware-related projects” and provided an independent synthesis of findings to date. Much of the report was descriptive in nature, showing what data was being collected and what evidence base existed to perform any evaluation. The report then shared a meta-analysis to provide a quantitative synthesis of impacts on course completion and student learning. When looking at course completion rates:
As shown in Exhibit 3, the Postsecondary Success projects as a whole had a moderately positive effect on course completion rates. The overall average effect size of .37 standard deviation units is equivalent to almost doubling the likelihood that a student who starts a course will finish it. However, this average effect estimate was influenced strongly by a single project with a large positive impact (Pathways). If the average Postsecondary Success courseware impact on course completion is estimated with the Pathways data excluded, the average effect estimate is very close to 0. As seen in Exhibit 3, five of the six projects with effect estimates for course completion had effect sizes that were close to 0. Certainly the observed impacts on course completion have been much more modest than the dramatic improvement envisioned in early articulations of the Postsecondary Success strategy.
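As a side note for readers who want to sanity-check the “almost doubling” framing: the report does not show its arithmetic, but one common way to translate a standardized effect size into an odds ratio is the Cox/Hasselblad-Hedges logistic conversion. The sketch below is my own reconstruction under that assumption, not a calculation taken from the report.

```python
import math

# Reconstruction (not from the SRI report): the Cox / Hasselblad-Hedges
# conversion treats a standardized mean difference d as a log odds ratio
# scaled by pi / sqrt(3).
def d_to_odds_ratio(d: float) -> float:
    """Approximate odds ratio implied by an effect size of d standard deviations."""
    return math.exp(d * math.pi / math.sqrt(3))

print(round(d_to_odds_ratio(0.37), 2))  # ~1.96 -- roughly double the odds of completing
```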
The report then made recommendations to the foundation for their Postsecondary Success efforts (and kudos to the foundation for asking), with this core finding.
The course completion data suggest that the Postsecondary Success strategy was overly optimistic about the power of circa 2010 learning technology to improve course completion rates by itself.
There are many other recommendations, including a key one that “Funding decisions and evaluation activities should be tightly coupled.” In other words, make sure your funded initiatives include a reasonable design of evaluation so that an evidence base can be built over time.
Adaptive Learning Market Accelerator Program (ALMAP) Evaluation
The second report of note was SRI’s (2016) “Lessons Learned From Early Implementations of Adaptive Courseware,” based on a grant program with 14 institutions “to incorporate nine adaptive learning products into 23 courses and to conduct quasi-experiments (side-by-side comparisons with comparison groups) to measure their effects on student outcomes, and to gather data on cost impacts and instructor and student satisfaction.”
Michael Feldstein described the report in an article for the Chronicle of Higher Education titled “Adaptive Learning Earns an Incomplete,” which focused on just how difficult it is to develop meaningful learning science research.
A majority of courses that used adaptive learning had “no discernible impact” on grades, with just four out of 15 that could be assessed resulting in “slightly higher” averages.
SRI found no evidence that adaptive learning had had an effect on course completion in the 16 grantee-provided data sets “appropriate” for estimating that impact.
The study found “minor enhancements of course grades,” on average, but few strong outliers.
Students and instructors in two-year colleges and developmental courses reported high levels of satisfaction with their adaptive-learning experiences.
However, only a third of students in four-year colleges expressed overall satisfaction. The researchers wrote: “It is not clear from the available data whether the qualities of the courseware they were using, the way in which the courseware was implemented, or a sense of loss of instructor personal attention was most responsible for their lack of enthusiasm.”
The SRI report called out three contexts for analysis: Blended Adaptive vs. Lecture format (moving away from a lecture-centered course section to a blended section using adaptive courseware), Online Adaptive vs. Online, and Blended Adaptive vs. Blended. For the Online Adaptive vs. Online summary of results, we see that only one course showed majority positive student outcomes.
In fact, that NC State course section was the only one out of fifteen (one school had two courses redesigned) that showed “majority significantly positive student outcomes.”
Like the 2014 report, the 2016 SRI report stressed the importance of looking beyond courseware alone and of setting up grants with better assessment designs.
Next Generation Courseware Challenge
The next report of note was SRI’s (2018) “Next Generation Courseware Challenge Evaluation Final Report.” This report looked at a grant program in which “seven startup, nonprofit, and academic organizations received three-year grants to develop, iterate, refine, and scale the adoption of adaptive courseware products for higher education.” When looking at student outcomes:
After applying statistical modeling to control for differences between treatment and comparison conditions in terms of student characteristics and prior achievement, analysts found that NGCC courseware effects on student grades varied widely, as shown in Figure ES-1. Students using the NGCC courseware earned significantly higher grades than those in the comparison or business-as-usual versions of the course in 10 of the 28 impact studies (those with boldface labels in the figure).
In seven of these cases, the effects of courseware implementation were sizable enough to have practical consequences. With positive effect sizes of .30 or greater, this quarter of the NGCC courseware implementations had impacts equivalent to moving an average student at the 50th percentile in course performance to the 62nd percentile or higher. On the other hand, 4 of the 28 NGCC course implementation impact studies found a statistically significant negative effect. In the other half of the datasets, student grades in the two conditions were statistically equivalent after controlling for differences in student characteristics, such as full-time enrollment and prior grade point average.
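For readers who want to see where the “62nd percentile” figure comes from, it is the standard normal-distribution translation of an effect size: the percentile reached by shifting an average student up by d standard deviations. A minimal sketch of that arithmetic (my illustration, not code from the report):

```python
from statistics import NormalDist

# Standard translation of an effect size into a percentile shift: a student at
# the 50th percentile of a (roughly normal) grade distribution who moves up by
# d standard deviations lands at the percentile given by the normal CDF at d.
def percentile_after_shift(d: float) -> float:
    return 100 * NormalDist().cdf(d)

print(round(percentile_after_shift(0.30), 1))  # ~61.8 -> about the 62nd percentile
```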
The square for each courseware implementation represents the estimated impact (effect size). The size of the square represents the weighting of the study in the meta-analysis. The length of the horizontal line through the square represents the confidence interval around the impact estimate. The longer the line, the more uncertainty there is around the true impact of the courseware implementation. The diamond at the bottom of the graph represents the average effect for the 28 courseware implementations. Its width represents the 95% confidence interval for the average impact. Squares to the right of the boldface vertical line represent studies in which students in the courseware sections outperformed students in the business-as-usual sections. Squares to the left of the boldface vertical line represent studies in which students in the business-as-usual sections outperformed students in the courseware sections. Only those courseware implementations with boldface labels are statistically significant (i.e., we can be confident that the difference in average grades for students in courseware and business-as-usual course sections did not occur by chance).
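For context on how a forest plot like Figure ES-1 is typically summarized: each study gets an inverse-variance weight, and the “diamond” is the weighted average effect with its 95% confidence interval. The sketch below is a generic fixed-effect pooling illustration with made-up numbers; the report’s actual meta-analytic model may differ (for example, a random-effects model).

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect meta-analysis: inverse-variance weighted mean and 95% CI."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical effect sizes and standard errors for illustration only --
# these are not values from the NGCC report.
print(pooled_effect([0.35, -0.10, 0.05], [0.12, 0.15, 0.08]))
```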
The report also worked to tease out the conditions under which the introduction of courseware might have a positive effect on student outcomes.
Case Studies and Descriptive Work
The foundation’s Postsecondary Success work changed with the introduction of the Every Learner Everywhere network starting in 2018, and there are three other reports of interest, none of which had the same rigorous design as the SRI reports:
Digital Promise (2020) “Every Learner Everywhere & Lighthouse Institutions,” describing 12 institutional adoptions of adaptive courseware, along with a summary of instructor perceptions.
Digital Promise (2022) “Designing Gateway Statistics and Chemistry Courses for Today’s Students: Case Studies of Postsecondary Course Innovations,” analyzing case studies of gateway courses, including one that used adaptive courseware.
Tyton Partners (2022) “Time for Class: The state of digital learning and courseware adoption,” which was quoted in more depth in this post.
Basis for the Questions
All of these are valuable reports, and there are some specific course examples of impressive outcome improvements and areas of promise, but I don’t see a real basis to believe that the intervention of courseware and services will successfully and broadly address the problem of improving student outcomes in gateway courses.
But I may be missing something, and Alison and Rahim indicated that their strategy has evolved in the past few years. This is why I appreciate the Gates Foundation’s offer to answer my questions, which I will post when I receive the response. In the meantime, I hope this post provides better historical context on the question of courseware effectiveness and the evidence base.