The Blind Spot in Responsible AI Guidelines

Avoiding the risk of virtue signaling by dealing with tougher questions such as environmental impact

Was this forwarded to you by a friend? Sign up for the On EdTech newsletter and get your own copy of the news that matters sent to your inbox every week. Interested in additional analysis? Try our 30-day free trial and upgrade to the On EdTech+ newsletter.

Generative AI is undeniably the most significant technology trend in EdTech. Whenever you encounter an AI-enabled EdTech tool, you'll likely find a code of ethics governing its use, whether it comes from a vendor, a state or national education agency, or another organization. These codes of ethics, or guidelines for responsible AI use, often share common features, such as a focus on equity, privacy, and the prevention of bias. However, a glaring omission in most of them is the environmental impact of AI. This silence is not only concerning; it also undermines the role of these principles. Leaving out something as critical as environmental impact risks making them look more like virtue signaling than a real constraint and guide for behavior.

AI, EdTech, and the codes of responsible use

Over the past eighteen months, we've witnessed an arms race in the integration of generative AI capabilities and Large Language Models (LLMs) into EdTech tools. From learning management systems (LMS) to language learning apps, tutoring systems to classroom LCD screens, vendors have raced to add AI features, while instructors and students have grappled with how and when to use them.

As vendors have rolled out AI features in their tools, they have typically developed principles or guidelines for the responsible use of AI with their products. Such lists and frameworks abound: Instructure, D2L, Moodle, Turnitin, Pearson VUE, Coursera, and Grammarly all have them, to name a few.

As an example, Anthology, a provider of a broad set of EdTech and administrative tools including the Blackboard LMS, developed a list of 7 Trustworthy AI Principles governing its use of AI. These principles are broadly representative of the types of guidelines governing AI in EdTech, as seen in the February 2024 launch of the revised principles.

Anthology's 7 Trustworthy AI Principles

You'll find similar ideas across other vendors' principles and frameworks. Occasionally, you'll see some additional or alternative ideas or variations on the broad concepts in Anthology's list. For example:

  • A more explicit focus on equity, such as Instructure's idea that the aim of AI should be to "level the playing field" for all students.

  • An emphasis on using AI to improve educational outcomes.

  • A commitment to further exploring the issues and educating users about the nature of AI.

We see similar sets of commitments among organizations serving education and among education-focused government bodies. For instance, the US Department of Education's Office of Educational Technology's Designing for Education with Artificial Intelligence report identifies a set of ethics themes that mirror those used by the vendors. The trade organization SIIA, through its Principles for AI in Education, echoes similar themes.

What we seldom see, however, is any consideration of the environmental impact of integrating AI. This is surprising for several reasons, including the scale of that impact and the frequent references to the NIST framework, which does open up the environmental topic.

The scale of the impact of AI on the environment

There's a growing awareness of the significant environmental impact of building and supporting AI applications. In its 2024 environmental report, Google revealed that its emissions had increased by nearly 50% compared to 2019, primarily due to increased electricity consumption by data centers supporting AI. Similarly, Microsoft's emissions grew roughly 30% between 2020 and 2024, also driven by growing data center usage.

The electricity usage of major technology companies now rivals that of large and populous countries, as shown in rankings compiled by Statista.

Chart: tech companies' electricity use ranked against countries (Statista)

AI models demand and consume increasing amounts of electricity. Demand is growing so quickly that data center energy consumption could reach as much as 6% of the United States' total electricity usage by 2026.

The emissions produced to power data centers running AI applications don't come only from using the apps (known as inference). Significant energy is also consumed, and emissions produced, during the development of LLMs (known as training), as described in Forbes.

The data centers used to train and operate these models require vast amounts of electricity. GPT-4, for example, required over 50 gigawatt-hours, approximately 0.02% of the electricity California generates in a year, and 50 times the amount it took to train GPT-3, the previous iteration. As AI proliferates across industries, this energy demand will only grow.
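As a rough sanity check on the scale of those quoted figures, here is a minimal back-of-the-envelope calculation. The California generation number (roughly 280,000 GWh per year) is an assumption for illustration, not a figure from the article or the Forbes piece.

    # Rough sanity check of the training-energy figures quoted above.
    # california_annual_gwh is an assumed illustrative figure, not from the article.

    gpt4_training_gwh = 50            # "over 50 gigawatt-hours" (quoted above)
    california_annual_gwh = 280_000   # assumed: approximate annual generation in California

    share = gpt4_training_gwh / california_annual_gwh
    print(f"Share of California's annual generation: {share:.3%}")  # ~0.018%, i.e. about 0.02%

    # The quote also says this is roughly 50x GPT-3's training energy,
    # which would put GPT-3 at about 1 GWh:
    print(f"Implied GPT-3 training energy: {gpt4_training_gwh / 50:.1f} GWh")

The point here is scale rather than precision; published estimates for training energy vary widely, but even conservative assumptions land a single training run in the gigawatt-hour range.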

Other models

Many of the EdTech AI guidelines reference the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which seeks to mitigate the negative effects of AI. That framework explicitly includes harm to the environment among the risks it addresses, but this has not made its way into the education variants of these principles.

Examples of potential harm (from the NIST AI Risk Management Framework)

Why this matters

If AI adoption in education becomes widespread, the environmental impact is likely to be significant. Furthermore, the broader topic of AI energy consumption is likely to become more prominent, and education communities should be prepared for the public discussions.

Yet the silence of most responsible AI guidelines on environmental impact suggests a tendency to deal only with the easy questions, where easy means the ones that make users feel comfortable and safe. The guidelines focus on enabling AI use through positive commitments to privacy, transparency, equity, and human involvement. These are all things we can agree on; they are the motherhood and apple pie of ethics.

Designing trustworthy technology usage principles is tricky, but if we want AI guidelines to be more than virtue signaling, we need to address tougher issues, and environmental impact is a clear blind spot. Should we be using AI at all, and under what circumstances? Should we encourage unlimited AI usage or push for usage efficiency? Interrogating these questions would force designers and users to make difficult decisions and trade-offs, sometimes leading to not using AI, or to sourcing it from more energy-efficient options that provide good-enough solutions rather than the latest and greatest.

I should note that this is not simply an issue of your views on climate change. We are also dealing with economic sustainability and stewardship of resources, and it would be a mistake to view this issue through a partisan lens.

What would a commitment to environmental impact look like?

There are no easy answers to how EdTech vendors should engage with the ethics of environmental impact. But if they are serious about responsible use (and I think they are), and if we want to continue finding use cases for AI in EdTech (as most of us do), we must wrestle with the tough issues and at times make uncomfortable decisions.

Some organizations, like NIST, already include the impact on the environment as one of the risk factors of using AI. Within education, some individual institutions such as Cornell have included it in how they weigh responsible use. But vendors, organizations, and government bodies working in education need to go beyond merely listing consideration of environmental impact as a principle; they also need to start doing more of the following, and doing it transparently:

  • Deciding not to include AI in a product when the use case does not warrant the impact, at least given current technology.

  • Explaining how they are choosing, and will continue to choose and press for, more efficient AI architectures.

  • Showing how they are choosing to work with providers whose data centers have more efficient designs or run on alternative energy sources, even if this means a more expensive product, or holding off on integration until better options become available. There are big variations, for example, in how the big AI vendors power their data centers.

Parting thoughts

It’s hard to argue against responsible AI use. But if codes and principles are going to be meaningful, they need to take on tough issues, including the growing environmental impact of generative AI.

The main On EdTech newsletter is free to share in part or in whole. All we ask is attribution.

Thanks for being a subscriber.