Generative AI

Costs of generative AI

All technologies come with costs. For example, in addition to providing exercise and emissions-free transportation, bicycles can result in theft, accidents, and death. A question to consider with new technology is: What cost is too high? What is a deal breaker for a given technology? This question is considered in medicine, for example, where it is addressed through government regulation: if a drug results in too many adverse reactions, it is withdrawn (or never released to the public in the first place), even if it is otherwise beneficial. In an age of climate crisis, of increasing social and economic inequality and insecurity, and of increasing threats to truth and democracy, what level of harm from generative AI outweighs its benefits? This is a question to consider seriously. When the creators of AI are themselves sounding the alarm that AI has the potential to threaten humanity's continued existence, however small that potential is, is it wise to hastily adopt the use of AI to "keep from falling behind"?

To encourage reflection and caution, this guide begins with an examination of the costs of AI. In reviewing the existing and potential harms of generative AI, you may decide that these harms are a deal breaker for using these tools as they exist in their current, mostly unregulated, state. Or, you may decide to advocate for stronger regulation of AI development and deployment. Or, you may decide to still make use of generative AI, but with restraint and careful consideration of the costs.

The next section presents a list, by Rebecca Sweetman of Queen's University, of harms created by the Large Language Models that enable many current generative AI tools.

Some Harm Considerations of Large Language Models (LLMs)

“Some Harm Considerations of Large Language Models (LLMs)” by Rebecca Sweetman is licensed under CC BY-NC-SA 4.0 International.

 

Some Harm Considerations of Large Language Models (LLMs) text-only

Carbon Footprint

The carbon footprint of LLMs affects not only our environment but also our economy. It is important to recognize that carbon impacts are externalized to the commons: they are neither factored into the operational costs of the platforms nor considered the platforms' responsibility. However, the expense of high energy consumption creates a publicly acceptable case for privatizing and commodifying these platforms, driving them away from public, open-source resources toward subscriber-based models for corporate/investor profit. As such, while the negative carbon impact is unevenly but publicly shared, the benefit and profit of use is privatized.

The carbon footprint spans all three temporalities of design and development, operationalization, and future legacy. 

Extractive Industries

AI has intensive resource demands from extractive industries like mining, oil, and gas. These demands have globalized ecocidal, racist, and socioeconomic impacts. Extractive industries disproportionately continue to burden the Global South through predatory policies and structures of international finance/debt and globalized capitalism.

AI also places extensive demands on water.

Salvage Industries

We also need to consider the afterlife of every technological upgrade. Although not unique to LLMs, the computing demands of AI and the consumer demand LLMs will generate will add to the volume of e-waste created and shipped to countries of the Global South for toxic reclamation of valuable metals. The intergenerational impacts are ecocidal for air, soil, water, and localized ecologies; racist in effects on human health, employment, and geographies; and socioeconomic, disproportionately burdening the poor, particularly from the Global South.

Exploitative Labour to “Train” Datasets

Datasets are “trained” through exploitative, racialized labour practices: for example, workers in the Global South are hired to view traumatic images and flag racist, sexist, and otherwise offensive content.

Training AI datasets also has internal processes that perpetuate gendered harms. In their Excavating AI project, Kate Crawford and Trevor Paglen revealed, "We find an implicit assumption here: only 'male' and 'female' bodies are 'natural'.”

Extractive Labour from Users, Intellectual Property / Copyright & Privacy Issues

Ignoring longstanding calls from AI ethicists, the developers of ChatGPT and other LLMs have released these systems to the public without proactively addressing their harms. Instead, they rely on extractive labour from users to test and improve the platform (like the thumbs up or down feature in ChatGPT). This crowdsourced labour of users, who are unpaid or who even pay for access, adds significant commercial value to these products.

Intellectual property and copyright issues have also been flagged, given that the output from ChatGPT becomes a user's own intellectual property even though that output has been deconstructed and reassembled from many other sources and from the semantic patterns of others' intellectual labour. Copyright concerns have been expressed particularly about AI image generators that turn text-based prompts into new, "original" images, which may, in fact, draw substantially from preexisting images and creative works produced by others without citation or respect for copyright terms.

Privacy issues are also a concern. In their report Ethical and Social Risks of Harm from Language Models, DeepMind authors write, "By providing true information about individuals' personal characteristics, privacy violations may occur. [...] Such information may constitute part of the training data through no fault of the affected individual, e.g. where data leaks occur or where others post private information about them on online networks."

While ChatGPT's developers seem to be taking steps to address these issues, when asked directly about its privacy risks, ChatGPT identified the following:

  • Data breaches: Although ChatGPT does not collect or store personal information, the server or platform that hosts the AI model could be vulnerable to data breaches or hacking attacks, which could expose user data.
  • Unintended disclosure: Chatbots like ChatGPT can sometimes misinterpret user input or provide inaccurate responses, which could inadvertently reveal sensitive or confidential information.
  • Third-party access: ChatGPT may use third-party APIs or services to provide certain functionalities or integrate with other applications, which could potentially expose user data to those third-party providers.

Automation of Low-Wage Work = Increased Precarity/Unemployment for the Marginalized

LLMs present the option of automating a wide spectrum of lower-wage work, including customer service. This harm may cause increased precarity or unemployment for those most marginalized (racialized, disabled, gendered, poor, etc.), who are currently disproportionately employed in lower-wage work.

DeepMind identified negative effects of LLMs on employment: "increasing inequality and negative effects on job quality," "undermining creative economies," and "displacing employees from their roles" leading to "an increase in unemployment."

Designed to Benefit the Most Privileged

LLMs are designed to benefit those who already hold the most power and privilege in the world. This can be seen in their business investment models, their marketing strategies, and their commercialization. Their design and development have not prioritized ethical engagement with historically marginalized communities, and their operationalization does not seek the support of those communities, even though the future legacies of LLMs will disproportionately affect those they further marginalize.

"...most language technology is built to serve the needs of those who already have the most privilege in society. Consider, for example, who is likely to both have the financial resources to purchase a Google Home, Amazon Alexa or an Apple device with Siri installed and comfortably speak a variety of a language which they are prepared to handle. Furthermore, when [LLMs] encode and reinforce hegemonic biases, the harms that follow are most likely to fall on marginalized populations who, even in rich nations, are most likely to experience environmental racism." (Source)

Designs Affirm & Amplify Normative Privilege Bias

Given the design, development, and operationalization premises that prioritize access, use, user experience, and benefit for the privileged, ongoing feedback, redesign, and redevelopment are likely to follow suit.

Access Discrimination

Not all will have equal access to LLM platforms, particularly as they are increasingly privatized and commodified in a globalized world designed to benefit the privileged. There are immediate geopolitical access inequities, including:

  • physical and socioeconomic access to internet, smartphones, or computers
  • unequal access for women and girls
  • accessibility as usability for disabled people
  • access in regions that employ censorship
  • “freemium” or tiered access to quality of services based on socioeconomic means

DeepMind identified the harms of "disparate access to benefits due to hardware, software, [and] skill constraints."

Reproduces & Amplifies Access Barriers

As LLMs continue to progress, they will draw from user data and feedback of initial users, who will disproportionately be privileged. Without an equitable user base to inform this feedback, LLMs will continue to reproduce ableist, gendered, genocidal, racist, and classist barriers to access and inclusion.

Western Ontologically/Epistemically Biased Data

Given the dominant Western, English-language bias of LLM datasets, LLMs run into the problem of bad data in = bad data out. Semantically, ontologically, and epistemically, LLMs are being trained to reproduce knowledge through dominant Western norms. Thus, the harms of LLMs will continue to reify dominant Western norms of oppression.

Biased Data Reproduces & Amplifies Bias, Exclusion

Over time, LLMs will rewrite and/or silence histories, producing the erasures of cultures outside the dominant-biased datasets. This will amplify ableist, gendered, genocidal, racist, and classist harms in society, particularly for knowledge reproduction. In this way, LLMs can be seen as a recolonizing pathway, with technology increasingly turned to as a key feature of societal designs and decision-making that will reproduce Western-biased ontologies and epistemologies.

Toxic Dominant Culture Maintains Norms of Oppression

DeepMind (a Google-owned AI research lab) identified "discrimination, exclusion, and toxicity" as the first of a very long list of harms from LLMs. The same report also identified harms arising from the following malicious uses of LLMs: "making disinformation cheaper and more effective; facilitating fraud, scams and more targeted manipulation; assisting code generation for cyber attacks, weapons, or malicious use; [and] illegitimate surveillance and censorship." They also identified human-computer interaction harms of "creating avenues for exploiting user trust" and "promoting harmful stereotypes by implying gender or ethnic identity" (Weidinger et al., 2021).

LLMs also pose the harm of a (further) move away from a feminist worldview of relationality, reproducing patriarchal hegemony through our day-to-day digital norms.

Ongoing Discrimination, Exclusion, Harms

As LLMs reproduce toxic dominant cultural norms, the effect will be compounding over time, perpetuating ableist, ecocidal, gendered, genocidal, racist, and socioeconomic harms as biased datasets increasingly reaffirm themselves.

Although LLM products may try to limit the harmful outputs produced from their datasets/training, malevolent or nefarious users (and sometimes even benign ones) are still able to craft workarounds that achieve harmful outputs. Given polarized political climates, the vulnerability of democratic institutions due to social media, and other toxic norms already present, the potential nefarious uses of LLMs pose cause for concern.

The real-world impact of reaffirming toxic dominant cultural norms will be seen in public policy, education, employment, housing, dating, and most areas of life where relationships can be mediated through technology. The divides that exist due to the dominant paradigms of settler colonization, capitalism, globalization, racism, and patriarchy will widen.

If and/or how we choose to mitigate LLM harms will speak volumes about our cultural ethics. Will we continue to adopt authoritarian modes of control through a culture of surveillance, policing, and enforcement, using power and hierarchy to suppress, legislate, and regulate? Will we prioritize capitalist realism over human/environmental wellbeing and let economic issues drive the course of action, deferring ethical responsibility to the fallacy of the invisible hand regulating marketplace decisions?

 

"Some Harm Considerations of Large Language Models (LLMs) text-only version" is adapted from "Some Harm Considerations of Large Language Models (LLMs)" by Rebecca Sweetman, used under CC BY-NC-SA 4.0.

Further Reading