AI Must Die!

Critical Perspectives on the State of Artificial Intelligence

Page Description: Cover image consisting of a white cone-shaped security robot in a pool of water being recovered by two security officers in blue shirts while a woman in a black dress observes. Incident 68: Security Robot Drowns Itself in a Fountain. AI Incident Database (AIID) // incidentdatabase.ai

Publication Information

Page Description: QR code linking to zine landing page https://aimustdie.info. Version number v1.4, June 2025. Author contact information: Myke Walton, https://mwalton.me/, hi@mwalton.me, @mwalton.bsky.social. Cam Smith, https://smith.cam, camoverride@proton.me, @camoverride. Researched, written, and designed with GOFHI (Good Old-Fashioned Human Intelligence). This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You are free to copy, print, distribute, transmit, remix, transform & build on this work (or portions of it) under the following conditions: attribution of the authors / artists, not for commercial purposes, indication of what (if any) changes were made, and under the same license as the original. These conditions may be waived in some circumstances, just ask!

Why?

AI is fucking everywhere: at work, at school, in our homes, in our phones, on our streets, in our governments, and at war. We are living through an era of extreme AI hype. In this climate, some people have gotten rich off of the stolen data and stolen labor that fuels these technologies. Others have been surveilled, oppressed, exploited, and killed. This text presents a short, practical guide to the technologies currently called “AI,” the ideologies and actors pouring gasoline on the AI dumpsterfire, and what we can do about it.

“AI” Does Not Exist

AI is a bunch of random tools. Some people describe these tools as “human-like” in order to sell a product

There’s no such thing as “AI” in the way the tech industry promotes it. AI is marketing hype, not “intelligence.” What people call AI is a messy collection of tools that do stuff: ChatGPT extrudes plausible-sounding text, Midjourney spits out synthetic images, Instagram’s algorithm sorts you into demographic categories to annoy you with ads, Amazon’s product recommendation system cyberstalks your every click to sell you stuff you don’t need. We use scare quotes occasionally to remind you that “Artificial Intelligence” is marketing hype and science fiction – it does not exist.

Popular chatbot tools like ChatGPT produce seemingly coherent outputs that may sound like they can “think,” “understand,” or “reason.” They cannot. These models are trained on massive, uncurated datasets of text scraped from the web. All these “Large Language Models” do is predict the next word in a sequence. That’s it. They’re indifferent to truth and are better understood as either “Bullshit Machines” [1] or “Stochastic Parrots” [2].
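
To make that concrete, here’s a toy next-word predictor – our own illustrative sketch in a few lines of Python, not anyone’s actual product code. Real LLMs do the same trick with billions of parameters and an internet’s worth of scraped text, but the training objective is the same flavor: guess a plausible next word.

```python
# Toy "stochastic parrot": learn which word tends to follow which,
# then parrot plausible-sounding sequences back. No understanding,
# no truth, just statistics over whatever text it was fed.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the rat".split()

# Count, for each word, which words follow it in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def babble(start, length=8):
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:  # dead end: no observed continuation
            break
        # Sample the next word in proportion to how often it appeared.
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

print(babble("the"))  # e.g. "the cat ate the rat" -- plausible, not "understood"
```

Note what’s missing: any notion of truth, meaning, or “understanding.” The model only knows which words tended to follow which in its training data.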

Image depicting a chat interface with ChatGPT in which the author asks the chatbot for details on the fictional country "Samaltmanstan" and the chatbot replies as though this were a real country

Comparing AI tools to human intelligence is anthropomorphization: the attribution of human traits to nonhuman things. People do this all the time and it’s usually no big deal: “my car’s name is Betty,” “my Roomba is sad,” “Siri is being a jerk.” The title of this zine is of course tongue-in-cheek: “AI” can only “die” in the same sense that your cellphone “dies.” Problems arise when we get tricked into believing AI tools are a good stand-in for human behaviors: AI grifters are financially incentivized to overstate the ability of their tools to be our friends, lovers, or therapists, or to replace us in the workforce. These sci-fi delusions are useful for selling people AI products. Despite the fact that “AI” doesn’t exist, it’s ruining everything anyway [3].

A Brief History of Hype

AI hype cycles are periods in which attention and capital get diverted toward whatever random computing tools people are calling “AI”

The tools and technologies called “AI” have gone through periods of popularity where they capture the attention of industry, science, and the general public. At other times, called “AI Winters,” these tools have lost attention and become ordinary parts of everyday technology. There have been two winters so far; we’re currently living through the third hype cycle.

Artistic depiction of a black and white Macintosh with a frowning face imposed on the screen. Sad Mac, @dualdflipflop via Flickr

A graphic depicting the evolution of AI hype over time. The graphic begins in 1950 with the definition of the “Turing Test” and “Perceptrons,” peaking in the 1960s with Information Theory and Bayesian Inference before falling into a trough emphasizing the first “AI Winter.” The hype rises again in the 1980s with “Expert Systems” before falling into the “Second AI Winter” from the 1980s–90s. The 1990s see the rise of now-traditional data science tools such as Support Vector Machines and Random Forests, with a second wave of Deep Learning hype in the 2010s. In the current era, Transformer models, diffusion, and “general chaos” reign. The figure then suggests three diverging possible futures: one which accelerates toward “AI God?”, another which plateaus (labeled “sanity”), and a third in which hype collapses, leading to a “third winter.”

In some eras AI was largely what we call statistics today. At other times it was sprawling logic rule-sets called “expert systems.” In others it’s been “neural networks” or “deep learning.” The technical details of these terms don’t matter much for our purposes.

The things we call “AI” are a constantly rotating cast of half-working technologies.

When hype for AI increases, the tools and methods labeled AI get adopted by science, industry, and the military. This leads to a positive feedback cycle: there’s more investment, more research, and more products driving further hype. Eventually expectations exceed the ability of these technologies to deliver on their promises and the hype collapses. This leads to broad criticism and disillusionment with the field: an AI winter.

Blinded by the Hype: AI technologies developed during a hype cycle eventually become “normal.” They’re no longer considered AI and fade into the everyday. Nobody is bothered by the fact that their smartphone uses face detection when snapping a picture: this “AI” tool is old news – the hype for this particular technology is over.
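
To see just how mundane this once-hyped “AI” has become, here’s a sketch using OpenCV’s bundled Viola-Jones face detector, a technique from 2001 that now fits in a few lines. (This assumes the opencv-python package is installed; “snapshot.jpg” is a placeholder for any photo you have lying around.)

```python
# Face detection: 2001-era "AI" now so normal nobody calls it AI.
# Uses the pre-trained Haar cascade classifier that ships with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
img = cv2.imread("snapshot.jpg")  # placeholder: any photo on your machine
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")  # yesterday's hype, today's plumbing
```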

An orange Bitcoin logo, combining the letter “B” with a dollar sign

Like AI, plenty of other tech products go through hype cycles – and they don’t necessarily provide any benefit to humanity.

As we’re currently living in the third AI hype cycle, no one truly knows what socially useful technologies will stick around when the hype eventually blows over.

“Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised.” – The Lighthill Report [4]

Now that we understand AI products are just hyped up tools, we need to recognize that some tools are ripe for abuse.

Automated Violence

The real-life harms caused by AI technologies are well documented. These harms can’t be addressed with more or “better” technology – we need good old-fashioned social progress

The question of whether “AI” causes harm has already been answered: harms have been documented by scientists, journalists, and scholars the world over. This section is a rapid-fire annotated bibliography of exceptional scholarship and reporting that exposes various kinds of automated violence. We encourage curious readers to follow these references for more detail. The interdisciplinary authors of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” [2] accurately predicted many of the environmental and social harms arising from the broad adoption of “stochastic parrots”: a pejorative name for the core technology underlying popular chatbots like OpenAI’s ChatGPT, Google’s Gemini, Meta’s Llama, and X’s Grok. Google tried to suppress the publication of the Stochastic Parrots paper, a clear example of scientific research being at odds with the goals of a for-profit corporation [5].

Graphic depicting deepfake human eyeballs generated by the authors

Mathematician and Data Scientist Cathy O’Neil illustrated how unregulated algorithmic systems perpetuate and scale discrimination – describing AI-based decision systems as “Weapons of Math Destruction” [6]. The work of Computer Scientists Joy Buolamwini and Timnit Gebru analyzed how and why AI facial recognition systems discriminate based on skin color and gender [7]. Scholar and activist Sasha Costanza-Chock demonstrated how TSA body imaging machines disproportionately flag transgender peoples’ bodies for additional screening [8].

Graphic depicting the output of a TSA body scanner illustrating “anomalies” dependent on perceived gender

Disability Studies and Tech Ethics scholar Ashley Shew has highlighted how AI products reflect an implicitly ableist design philosophy and often treat individual experiences of disability as medical impairments rather than challenges arising from social barriers [9]. A ProPublica investigation [10] demonstrated how algorithms that predict criminal re-offense entrench the classist, racist, and xenophobic foundations of the criminal justice system. Political Scientist Virginia Eubanks thoroughly documented the many ways in which algorithmic systems “profile, police and punish the poor” [11]. “Predictive Policing” algorithms, intended to forecast crime (think of the movie Minority Report), only serve to increase surveillance and oppression of poor communities [12], are racially biased, and are notoriously bad at actually predicting crime [13].

Graphic depicting a heat map of officer patrols in parts of Elgin, Illinois. Credit: Geolitica

Deepfake tools have been used to generate non-consensual pornography and revenge porn [14]. Some of the datasets these AI models are trained on contain hateful, violent, and abusive content, including Child Sexual Abuse Material (CSAM) [15]. The Israeli military is field-testing “robot dogs” in Gaza and has deployed a targeting system called “Habsora” (“The Gospel”) to automate target selection for the IDF [16]. The world over, AI systems deployed by the military-industrial complex spy, target, and murder while the corporate developers of these technologies (from Lockheed and Raytheon to Palantir, Anduril, and OpenAI [17, 18] to Amazon [19] and Google [20]) reap massive profits from mass suffering and death.

Image depicting an army-green “robot dog” with a mounted assault rifle. US Army robot dog, Spc. Dean John Kd De Dios / DVIDS, stopkillerrobots.org

Automated violence arises from inequalities in who has the power to design and deploy AI systems. The largely homogenous groups of people developing, peddling, and profiting from AI do not care about the human beings their tools harm – they’re out to make money. Data Scientist Catherine D’Ignazio and Digital Humanities scholar Lauren F. Klein call this inability of AI peddlers to understand or care about the harmful effects of their products the “privilege hazard”: “Those who occupy the most privileged positions among us—people with good educations, respected credentials, and professional accolades—are poorly equipped to recognize instances of oppression in the world” – Data Feminism [21]

Full Page text: You can’t code your way out of a social problem

Labor Exploitation Machines

The ways that new technologies affect our lives are determined by who has ownership, agency, and power

When the world is in an AI hype cycle, as it is now, new tools replace older tools because they’re faster, cheaper, or simply more hyped up. The owners and creators of these tools get rich in the process, while others lose their jobs. There’s nothing new about this process. New tools are developed, jobs change, and the relationship between workers and their work evolves as a result. What matters now is how society will respond to these technological changes: will power be further concentrated in the hands of the tech elite? Will union efforts suffer? Will workers get screwed?

"The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway" – Cory Doctorow

A historical anecdote: new weaving technologies were developed in the United Kingdom during the Industrial Revolution. As factory owners adopted these technologies, many workers took issue with the new machines disrupting their jobs. The Luddites fought back by destroying these new machines. While Luddites have been mischaracterized as being against all technology, they were in fact reacting to shitty working conditions in which they were likely to be maimed or killed by the machines they worked with. Smashing looms was their tactic, not their goal [22].

1812 illustration of Luddites smashing a loom. (Chris Sunde / Wikimedia Commons)

Why do these algorithms create such absurd outcomes? [23] AI has politics [24, 25], and those politics are not neutral. Proponents of AI technology claim that with more of your data, AI tools will produce better outcomes. But who owns the data? Who gets to choose what tools get made from it? Who gets money and power when these tools are adopted by society? Who gets to decide what “better” even means?

Consolidation of wealth and social control in the hands of AI authoritarians is a feature of these systems, not a bug.

This way of thinking refocuses our attention on the systems these tools are a part of, rather than the tools themselves. As with the Luddites, the problem is not the technology itself but the lack of ownership and control. Who has the power to decide what’s designed, how it’s built, and how it’s used?

A black and white line-up of powerful AI actors: Musk, Andreessen, Thiel, Altman, Bezos and Zuckerberg

Sci-fi Fantasies & Tech Ideologies

Tech billionaires are incentivized to support ideologies that emphasize tech’s progress instead of social progress, as social progress can threaten their wealth and power

If you’re a techie type you’ve probably seen people start companies, build cool new products, and get rich. If you’re interested in high technology and enjoy working with these tools, you might be inclined to support the goals of your company and be skeptical of interference and regulation by the government. Libertarian ideologies might be very appealing to you: ‘get the government out of my business and let me make the world a better place’ [26]. This belief system follows naturally from techno-optimism, the idea that we can create social progress by inventing new technologies that free people from their labors. “Techno-optimism” is largely repackaged, sci-fi flavored reactionary politics. For instance, venture capitalist Marc Andreessen’s “Techno-Optimist Manifesto” is no more than old-school elitism for a techbro audience. In this unhinged, rambling text, he vilifies ethicists, academics, regulators, and conservationists in a section titled “The Enemy”. We invite the reader to find and read it for themselves – it’s wild. Techno-optimism doesn’t offer the world anything more than unregulated capitalism, a worship of the rich, and the vague hope that the products and services they create will eventually, pretty please, solve our social ills, even though there’s no evidence that technology alone ever leads to social progress. Techbro ideologies are zero-commitment and zero-accountability – thin veils for the worship of their one true god: profit.

Billionaires think the solutions to problems caused by billionaires are known only to billionaires.

Tech leaders credit themselves with revolutionizing and transforming society for the better, claiming the human race is on the cusp of a utopian age of “human flourishing” or “abundance.” If you’re a techno-optimist in the 2020s, the most fantastical way to do this is Artificial General Intelligence (AGI). AGI is the hyped-up idea that we’re close to creating some kind of super “AI agent” with human-like abilities across a wide variety of domains. Techno-optimists believe that AGI is good, controllable, and imminent: for only $9.99 per month, everyone gets their own digital servant! Not all discourse around AGI is framed optimistically. “Doomers” claim the emergence of AGI will destroy humanity. This is termed “existential risk” or “x-risk.” However, both AGI-optimist and AGI-doomer ways of thinking serve to perpetuate the AI hype cycle and shift focus away from the real harms of AI toward the fictional: sci-fi Terminator scenarios or benevolent machine gods. Some even mix the two, claiming that AGI is inevitable and beneficial, but anyone who resists it will be retroactively punished by the AGI god (see you in hell, Roko’s Basilisk! 🖕). We provide a short overview of these wacko ideas so you can better spot them in the wild (and avoid them). This clusterfuck of AGI fantasies has come to be collectively referred to as the “TESCREAL Bundle” [27]. The acronym, coined by Timnit Gebru and philosopher Émile P. Torres, refers to:

• Transhumanism – the human body is weak and can be transcended with technology, somehow.
• Extropianism – the goal of life is to reverse entropy (impossible), but an AGI god with superpowers can help us do that, hopefully.
• Singularitarianism – a mystical event called The Singularity is nigh and the AGI god will happen – trust us.
• Cosmism – humanity will merge with AGI and roam the stars as a Star Trek Borg-like entity (sounds shitty tbh).
• Rationalism – a deeply weird community of bloggers on the website LessWrong obsessed with the “alignment” of AGI with “human values,” dubiously defined.
• Effective Altruism / Longtermism – the lives and dignity of the humans alive today are vanishingly unimportant in contrast with the value of the “long-term potential” of humanity, whatever that means.

A collage of "Terminally online TESCREAL memes about paperclips, basilisks, shoggoths & waluigi"

It’s a mess. The “rationalists” are not particularly rational, the “altruists” are not altruistic, and the rest blur the lines between pseudoscience and science fiction. What they have been shown to share, however, is alignment with and historical roots in eugenics: the same discriminatory attitudes that animated the eugenicists of the 20th century have evolved to impose elitist definitions of “intelligence” and align the development of machine intelligence with the interests of the powerful (to the detriment of the marginalized), all while evading accountability by framing these activities as “safety research” and “serving the future of humanity” [27]. TESCREAL has polluted the discourse on AI ethics, captured the imaginations of regulators of all political shades – notably Chuck Schumer’s adoption of their p(doom) concept [28] – and paved the path for the ascendance of what has been called “The Nerd Reich” [29]: an emerging US techno-political order captured by oligarchs, robber barons, techbros, and fascists.

Taco carts are more regulated than Artificial Intelligence [30]

The last, and perhaps the most impotent, AI approach is one of liberal passivity: “How can we harness the ‘benefits’ of AI while managing the harms?” Governments and corporations have demonstrated their total unwillingness, incompetence, and incapacity to meaningfully commit to accomplishing this goal. Although there’s no reason to expect that AGI is inevitable, will take a particular form, or is even possible, our regulatory systems have been captured by science-fiction inevitability narratives. So what the fuck can we do about it?

AI is not Inevitable

We, the human race, need to decide how AI is developed and governed. The idea that certain social outcomes are “inevitable” is standard conservative bullshit

TESCREAL ideologies popular among Silicon Valley elites have the convenient side effect of side-stepping critiques of capitalism. They either take capitalism as a given or refocus attention elsewhere, failing to challenge for-profit AI. We should not accept that AI is destined to take a particular form or have a particular effect on society. It’s up to us to take back the power to decide how technology is built, adopted, and governed.

“There is an inevitability narrative that we hear from tech companies – and that is a bid to take our agency.” – Emily Bender, Computational Linguist, Tech Won’t Save Us Podcast

A common perception among the public is that the AI technologies Silicon Valley venture capitalists are peddling are the only way forward: we must adopt and accept their products or we’ll be left behind and lose our jobs. We must accept the surveillance, theft, and poor working conditions they impose and the death and oppression they cause – their way is the only way. This is a classic example of capitalist realism: the belief that not only is capitalism inevitable, but any alternative is unimaginable. This narrative is false. We must imagine alternatives. As per Mark Fisher:

It is easier to imagine the end of the world than the end of capitalism.

Adoption of “AI” technologies and integration of these tools into our social fabrics serve particular social and political agendas. These agendas are not inevitable, they are imposed.

A diagram depicting the supply chains and workflows involved in Amazon’s Alexa. Adapted from Kate Crawford’s Anatomy of an AI System, https://anatomyof.ai/

What is to be Done?

Move the conversation from Fear of Missing Out to Fuck Around & Find Out

Photograph of a Tesla Cybertruck burning outside of a Trump building

If there’s hope for a future in which AI is transformed into a liberatory technology, it cannot be achieved through passive acceptance of whatever snake-oil Silicon Valley is selling: it must be built with care and intention, and be rooted in community [31]. In the meantime, we’re faced with the convergence of AI hype and increasingly fascist governments: Elon’s disastrously failed “D.O.G.E.” project is an obvious example. In this crucial moment, our attitude towards AI should be the same as towards any other oppressive tool owned by the ruling class: skepticism, criticism, obstruction, and resistance. Here’s how we do that.

Ask questions: adopt the “questions first” framework laid out in “The AI Con” [32]. If you’re concerned about the use of an AI system by your government, employer, or another organization, insist from the beginning on asking:

1. What exactly is the task the tool or system claims to automate? Should this task be done at all?
2. What are the inputs and outputs? What is the evidence that the outputs can be derived from the inputs?
3. How is the system evaluated? What was measured? Was the evaluation specific to the intended context? Who conducted the evaluation?
4. Who benefits from the adoption, integration, or use of the AI tool? Who might be harmed or made vulnerable? What recourse do harmed people have?

Red flags:

1. Is the AI system or tool being described as human?
2. How much agency do you feel you have? Is the decision to use an AI tool opt-in or opt-out? Is hesitancy, resistance, or criticism met with retaliation?
3. Were you asked for your consent to be monitored or have your data used in the development of an AI system? Can this consent be revoked?
4. What data are being collected from or about you? How is this data used? Do you feel like you’re protected from uses of this data that you did not consent to?
5. Slop (aka “GenAI Art”) and enshittification [33] – you know it when you see it.

“Solidarity between highly-paid tech workers and their lower-paid counterparts – who vastly outnumber them – is a tech CEO’s nightmare.” – The Exploited Labor Behind Artificial Intelligence [34]

Demand transparency from companies and governments developing AI tools regarding the training data they use, their labor practices, and the environmental impacts of their systems. Insist on enhanced scrutiny of AI corporate actors, enforcement of existing laws [35], and accountability for the harms and injustices they cause or perpetuate. Assume that these organizations cannot be trusted to self-regulate. They can’t.

Disrupt or ignore distracting sci-fi narratives by drawing attention back to real and documented outcomes. Emphasize AI-critical discourse focused on labor, surveillance, alienation, medicalization, marginalization, minoritization, and the environment. Even if you believe that AGI is possible, do you really want tech oligarchs to have it?

Resist the construction of datacenters in your backyard. AI needs massive computing infrastructure. The data centers used to build and run AI tools require unreasonable amounts of energy and water; their pollution poisons our air, contaminates our soil, and depletes our shared resources [36].

Work in solidarity with data workers [37], ghost workers [38], and gig workers [39]. Particularly if you work in the tech industry: advocate, educate, unionize, and build collective power [40].

Imagine alternatives to dystopia and the techno-capitalist status quo [41]. Imagination is resistance. Make art, write solarpunk, never stop dreaming of a future where we are all safe and free. Work backwards from these imagined futures: what can we do today to make them achievable?

Graphic adapted from Ruha Benjamin’s “Imagination: A Manifesto,” with three concentric circles labeled from largest to smallest: 1. Desirable – for communities dreaming and building a truly inclusive and interdependent society; 2. Viable – beyond “sounding good on paper,” could we theoretically sustain it and specifically iterate from it?; 3. Achievable – considering the relative power, strategies, and skills of its stakeholders

How long will this go on? When will the hype cycle end? Is another AI winter coming? Faced with an inhumane tech industry and complicit governments it may ultimately fall to the people to invent new ways to evade [42], refuse, resist [43], sabotage [44, 45] or otherwise destroy [46] AI.

Hammers up! AI Must Die!

Seize the means of computation

References

1. ChatGPT is Bullshit, Ethics and Information Technology, 2024
2. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, Emily M. Bender, Timnit Gebru, Angelina McMillan-Major & Shmargaret Shmitchell, FAccT 2021
3. AI Does Not Exist but It Will Ruin Everything Anyway, Angela Collier, YouTube
4. Artificial Intelligence: A General Survey, Professor Sir James Lighthill, 1972
5. We Read the Paper That Forced Timnit Gebru Out of Google, MIT Technology Review, 2020
6. Weapons of Math Destruction, Cathy O’Neil, 2017
7. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Joy Buolamwini & Timnit Gebru, 2018
8. Design Justice, A.I., and Escape from the Matrix of Domination, Sasha Costanza-Chock, 2018
9. Ableism, Technoableism, and Future AI, Ashley Shew, IEEE Technology & Society, 2020
10. How We Analyzed the COMPAS Recidivism Algorithm, Jeff Larson, Surya Mattu, Lauren Kirchner & Julia Angwin, ProPublica, 2016
11. Automating Inequality, Virginia Eubanks, 2018
12. Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology, Ali Winston, The Verge, 2018
13. Predictive Policing Software Terrible at Predicting Crimes, Aaron Sankin & Surya Mattu, The Markup, 2023
14. What You Need to Know About Non-Consensual Deepfakes, Centre for Research & Education on Violence against Women & Children, www.gbvlearningnetwork.ca/our-work/infographics/nonconsensualsexualdeepfakes/index.html
15. Investigation Finds AI Image Generation Models Trained on Child Abuse, David Thiel, Stanford Cyber Policy Center
16. ‘The Gospel’: How Israel Uses AI to Select Bombing Targets in Gaza, The Guardian, 2023
17. OpenAI Is Working With Anduril to Supply the US Military With AI, WIRED, 2024
18. Introducing OpenAI for Government, https://openai.com/global-affairs/introducing-openai-for-government
19. The Hidden Ties Between Google and Amazon’s Project Nimbus and Israel’s Military, WIRED, 2024
20. What Google’s Return to Defense AI Means, Patrick Tucker, 2025
21. Data Feminism, Catherine D’Ignazio & Lauren F. Klein, 2020
22. Blood in the Machine, Brian Merchant, 2023
23. To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes, Ali Alkhatib, https://www.youtube.com/watch?v=ClGIosevT0Y
24. Do Artifacts Have Politics?, Langdon Winner, 1980
25. Does Deep Learning Have Politics?, Tomo Lazovich, 2021
26. Silicon Valley “Making the World a Better Place”, https://www.youtube.com/watch?v=B8C5sjjhsso&ab_channel=BrianJ.Hall
27. The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence, Timnit Gebru & Émile P. Torres, 2024
28. US Senate AI ‘Insight Forum’ Tracker, techpolicy.press
29. The Nerd Reich, Gil Duran, thenerdreich.com
30. “We Regulate Taco Carts More Than Artificial Intelligence”, Times Union, https://www.pressreader.com/usa/albany-times-union-sunday/20250601/281960318687141
31. Power to the People? Opportunities and Challenges for Participatory AI, Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel & Shakir Mohamed, Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 2022
32. The AI Con, Emily M. Bender & Alex Hanna, 2025
33. Disenshittify or Die! How Hackers Can Seize the Means of Computation, Cory Doctorow, DEF CON 2024
34. The Exploited Labor Behind Artificial Intelligence, Adrienne Williams, Milagros Miceli & Timnit Gebru, NOEMA Magazine
35. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, Lina M. Khan, US Federal Trade Commission, 2023
36. AI Has an Environmental Problem. Here’s What the World Can Do About That, UN Environment Programme
37. Data Workers’ Inquiry, data-workers.org
38. Ghost Work, Mary L. Gray & Siddharth Suri, 2019
39. Turkopticon, turkopticon.net
40. Tech Workers Coalition, techworkerscoalition.org
41. DAIR Zine Library, https://dairzine.vercel.app/
42. Glaze, glaze.cs.uchicago.edu
43. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, Dan McQuillan, 2022
44. Manifesto on “Algorithmic Sabotage”, Algorithmic Sabotage Research Group
45. Nightshade, nightshade.cs.uchicago.edu
46. Destroy AI, Ali Alkhatib, 2024
47. The Internet Con, Cory Doctorow

Page Description: large text reading "PUT DOWN ALL MACHINERY HURTFUL TO COMMONALITY!" and a full-color photograph of Waymo Robotaxis Destroyed during an anti-ICE protest in Los Angeles, 2025

Page Description: a line-drawing cartoon depicting Elon Musk driving a Cybertruck giving a Nazi salute with a distorted Trump face overhead. Illustrated by @at0mivan