The Algorithmic Sublime: Noise, Memory, and AI

Diffusion Glitch, 2024.

This is an adapted transcript of remarks to students in Frank Shephard’s Algorithmic Sublime class at the New School for Social Research in New York City on November 8, 2024. Only my side of the conversation was documented, and subheads are used in lieu of class questions.


In AI training, noise is introduced as information is gradually stripped away from the image, reducing it to basic shapes until only noise remains. This Gaussian noise distribution follows a pattern that the AI learns to reverse-engineer. The AI isn’t designed for perfect recall but to generate. Noise isn’t just the absence of information; it’s an unstructured space that can be reorganized until new possibilities emerge.
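To make that concrete, here is a minimal sketch of the forward "noising" step, assuming the standard closed-form Gaussian schedule used in DDPM-style diffusion models; the function and parameter names are illustrative, not any particular system's code.

# A minimal sketch of forward diffusion: Gaussian noise gradually replaces the image.
# Assumes a simple linear beta schedule; values are illustrative.
import numpy as np

def add_noise(image, step, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Return the image after `step` rounds of noising, using the closed form."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)[step]     # how much of the original signal survives
    noise = np.random.randn(*image.shape)         # Gaussian noise, same shape as the image
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

x0 = np.random.rand(64, 64, 3)      # stand-in for a training image, values in 0..1
x_mid = add_noise(x0, step=250)     # shapes still faintly visible
x_final = add_noise(x0, step=999)   # statistically indistinguishable from pure noise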

As a technical process, it's fascinating. But it is also a way of thinking about a system's logic — and its connection, or disconnection, from the logic of human memory. I say this not because I believe that artificial intelligence systems map to human thought, but because I think they don't, and because I think we can find clarity through the comparison and contrast of these very simple models of human thought—AI systems—with the things that humans can actually do.

I think we can find clarity through the comparison and contrast of these very simple models of human thought—AI systems—with the things that humans can actually do.

At the moment, a space I find fascinating is how systems deal with noise, either responding to it or rejecting it, and the parallel there to our response as human beings. While humans navigate noise in intricate ways, AI deals with noise through structured responses. Where humans perceive the world, AI’s relationship to the world is indirect, as it processes only patterns inferred from a secondhand, mediated presence in the world — us. I sometimes call AI-generated images “infographics” because they don’t directly reference the world; they’re visualizations of data representing the world.

While training a diffusion model, an image has information stripped out over several steps. When finished, we’re left with a JPEG of noise — the complete absence of information about what that image was. A memory of what that image was is stored in the model as coordinates for how noise spread through that image. The model, in a sense, is an atlas, and our prompt is used to find the right page. The model then uses this coordination of data points, the representation of thousands of constellations within millions of photographs, and finds a way to steer back to some abstraction of the words used to describe specific pixel arrangements: that is, our words tell the model what the noise is supposed to be.
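Here is a minimal sketch of that steering, using the open-source diffusers library to run a Stable Diffusion checkpoint; the checkpoint name and prompt are assumptions for the example, and any Stable Diffusion weights would do.

# A minimal sketch: the pipeline starts from random noise and removes it step by
# step, with the prompt telling it what the noise is "supposed to be."
# The checkpoint name below is an assumption; any Stable Diffusion weights work.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-legacy/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a swimmer floating in dark water, archival photograph",
    num_inference_steps=30,   # how many denoising steps to take from pure noise
    guidance_scale=7.5,       # how strongly the words steer each step
).images[0]
image.save("steered_from_noise.png")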

Generation starts with noise, too. With noise, there is an almost infinite range of potential; it is the equivalent of a blank canvas. However, that canvas is somewhat structured, because the noise sets the stage for the image that will be produced. The appearance of random clusters of pixels creates a compositional structure which is then filled in with increasing amounts of detail. It’s like a canvas with paint already on it that has to be incorporated into the composition.

Every step of that process is informed by references learned from training data. I say all this because I'm engaged with this idea of information and memory and what's in that training archive — whose information is it and what does it represent? A lot of this information, which I'll talk about in a second, comes from cultural memory and archives. This transition of the archive into noise, from which to generate new things, is complicated.

Swimming in the Noise

In my art, I explore noise’s dual nature as distraction and potential. For instance, the piece Swim features a noisy background that resembles water, generated by asking the AI to create noise, a request that leads to system glitches.

Over this is archival footage of a swimmer from a 1939 film, slowly disintegrating into static. This film footage, of the actress Nini Shipley, was filmed for the purpose of erotic entertainment. It is a very tame film by today's standards, but that is how it is filed by the University of Chicago. I am thinking about her as a woman occupying a role in this particular media industry at that time. The presence of the male gaze and the camera's analysis of her movement. Now, we don't think about cameras as an apparatus for measuring data. When we're looking at a film, a film of a woman’s body, it is meant to be stared at. She is meant to be observed and studied. We are supposed to look at her body. That is the role of that film: translating her body into media, to be looked at by men. I’m trying to subvert that view, though it’s hard — and I’m a man, a straight man, so I can’t speak so well to this — but I do think that Nini Shipley is also enjoying herself in the water. By slowing this down, separating it from the context of erotic entertainment, I think we shift the kind of pleasure we get from her movements — it is less about her body, I think, and more about movement, freeness, the pleasure of floating. 

Finally, over the course of that nine minutes of slowed-down video, the entire image has information stripped away until, by the final 30 seconds, we are simply looking at static. That final layer is meant to evoke the disintegration of memory, of cultural memory and the documentation of lived experiences, into static, into noise. And to consider for a minute what exactly that is. What exactly is static representing?

There are two ways of looking at noise. The first is that noise is a distraction. Noise is the thing that gets in the way of communication. It needs to be eradicated from a system, whether we're talking about telephone lines or the static that is introduced—static which is, in part, literally residue from the Big Bang. It is the motion of the Big Bang being propelled forward. If you tune an old television between channels, you see that electrical activity in the air, the motion of the birth of the universe. But you're also looking at entropy, the disintegration of the universe.

When we look at noise in a JPEG, we are looking at both the absence of information and the potential of nearly infinite unformed varieties of information. We could find all kinds of paths to follow through the noise. I think about this not just in terms of images or sound but as the human tendency to sort through the noise surrounding us and how we act as filters in our media environments, political environments, and the world at large.

There are at least two relationships we can have with noise. The first is to see it as something we need to run from, a nuisance. But we can also see it as a space for play.

There are at least two relationships we can have with noise. The first is to see it as something we need to run from, a nuisance. But we can also see it as a space for play. Think about a noisy party—that can be energizing. A noisy festival can be energizing, too. But if we restrict possibility to previously existing patterns, a lot of that energy gets sucked away.

When frightened, a human being in a noisy environment will look for patterns in that environment—things they can understand and relate to, for comfort. In a terrifying situation, people cling to patterns, to what is known. That, to me, is the model I see in AI. If we are going to make a human metaphor (which I don’t think we should), it’s as if AI is modeled on the mind of a frightened person.

There are alternatives to this, even within algorithmic systems, like the response of a playful person—someone who enters a noisy environment and sees possibility, who engages and plays with that noise. In image generation, we call this “temperature.” It references the universe's entropy, like the explosion of the Big Bang, which is heat. You get all kinds of variability and random associations at very high temperatures. At a low temperature, the output is highly constrained to the training data, producing recognizable images and patterns that are closely associated with the prompt.
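As a rough illustration of what temperature does at the sampling level, here is a toy sketch: the model's raw preferences are rescaled before a choice is made, so a low temperature clings to the most familiar pattern and a high temperature lets unlikely, noisier choices through. The numbers are made up for illustration.

# A toy sketch of temperature in generative sampling: rescale the model's raw
# preferences (logits) before sampling. Values are illustrative only.
import numpy as np

def sample_with_temperature(logits, temperature, rng=None):
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())    # softmax, numerically stable
    probs /= probs.sum()
    return probs, rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2, -1.0]               # four candidate "patterns"
for t in (0.2, 1.0, 2.5):
    probs, choice = sample_with_temperature(logits, t)
    print(f"temperature={t}: {probs.round(3)}")
# At 0.2 nearly all probability collapses onto the most familiar option;
# at 2.5 the distribution flattens toward random association.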

In some sense, then, noise represents obliteration and annihilation. It represents the fading of memory. Noise in AI becomes a framework of possibility shaped heavily by past patterns yet still bounded by the references embedded within its training data. When I generate an image using the phrase “stereo view,” the AI tends to recreate photography alluding to the U.S. colonization of the Philippines, because that is what is in the archives, reflecting specific historical narratives ingrained in those visual forms.

Moth Glitch

I'm very interested in the system glitch. Here, we have a moth that was found in relay number 70, Panel F, of the Harvard Mark II, a very large, room-sized relay computer, back in 1947.

And this moth was wreaking havoc within the system. It is not why we call things computer bugs; we called things bugs before that. But it is the first literal bug, where the material of computation, the structure of the computer, the ways that information moved through the machine, was directly entangled with the natural system of moths living in dusty places. The moth offers up an intervention of the material world into this sort of abstract world of data and data processing.

One of my works here is a piece called Moth Glitch. The glitch you see behind these shadows of moths is more of that AI-generated glitchiness: asking the system for noise, which it can’t produce, creates these abstract patterns. It’s superimposed with moths, a reference to that moth, the bug. But it’s also a reference to Stan Brakhage, an experimental filmmaker from the 1960s who was really interested in the materiality of film and in transcending the representation that film offered.

He found moths that had died flying toward the light at his cabin. He took their wings and bodies, pasted them to a film strip, and projected it on a wall as the film. What you’re seeing in that projection is the actual moths—the materiality getting in the way of the mediation.

In my work, I’m trying to think through the glitch and what the materiality is of these systems, to see if we can get the system to make a mark on an image. I'm curious about the equivalent of materiality, and to me, it’s a system failing—producing an artifact it isn’t supposed to. While we can never paste a real moth into a generative image file, we are assembling the wings and bodies of thousands or millions of moths because of the ways that data is aggregated, consolidated, and generated from. This was a nod to that; it was a way of thinking through Stan Brakhage's relationship to moths and our relationship to AI and materiality.

The reason is that these systems start with an image of noise. They look for an image of a flower within noise. And when I ask for an image of noise, the system generates abstract patterns, with 100% certainty that the result is noise. This feedback loop produces artifacts of that noise being stripped away, removed, refined toward a direction of clarity, but actually creating an abstraction. This phenomenon happens across various models, systems, and formats, producing distinct noises depending on the model—Stable Diffusion, Midjourney, or a sonic texture.

They're interesting artistically, and maybe that's enough. But as a researcher, I need to know — what am I looking at? Are these representations of noise, something in the training data? Or is it a legitimate glitch, a failure of the system? When these glitches happened, my immediate instinct was to look at the training data and search through the captions associated with the word noise. Nothing looked like these. What I got when I searched for noise was charts. Oftentimes, especially if you were looking for Gaussian noise, you would get charts, the familiar bell curve of a Gaussian distribution. And I didn't see anything that resembled this.
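That kind of caption search can be sketched in a few lines, assuming the dataset's metadata has been downloaded locally; the file name and the "caption" column are assumptions about how a metadata shard might be laid out.

# A sketch of searching training-data captions for the word "noise".
# The parquet file name and the "caption" column name are assumptions about
# how the metadata shard is laid out locally.
import pandas as pd

meta = pd.read_parquet("laion_metadata_shard_0000.parquet")   # hypothetical local shard
hits = meta[meta["caption"].str.contains(r"\bnoise\b", case=False, na=False)]

print(len(hits), "captions mention 'noise'")
print(hits["caption"].head(10).to_list())
# In practice, what surfaces is mostly charts, the bell curve of a Gaussian
# distribution, rather than anything resembling these glitch images.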

There is also real variation in the output of this prompt. I've also encountered image artifacts that are just random blocks of color, like literal squares, which is completely incompatible with anything even associated with a sine wave that might be noise, or with noise in a statistical sense. So there is nothing that I'm seeing these systems point to in the training data. Instead, what I'm seeing when I look at the infrastructure of the system is that tension between the image generator and CLIP, which in Stable Diffusion is responsible for saying, 100% chance that is a picture of a flower, go on, refine it. Occam's razor for me says that what is actually happening is that it generates this first image as it does, which is a noisy image. If you use Midjourney, it shows you that journey from noise to image; that first image is blurry, streaky, arbitrary shapes and patterns coming out of a rough space in the model that corresponds to that closest-to-noise level of shapes and patterns. And then CLIP is supposed to say whether that's the right direction or not. To me, if it is generating a random cluster of arbitrary shapes and patterns, and CLIP is being used to confirm that yes, this is an image of arbitrary shapes and patterns, noise, Gaussian noise, then that feedback loop is failing.
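To show what that confirmation step looks like in isolation, here is a minimal sketch that scores an intermediate image against two captions with an open CLIP model via the transformers library; it illustrates the kind of image-text agreement check described here, not Stable Diffusion's internal code, and the image file name is a placeholder.

# A minimal sketch of the image/text agreement check described above, using open
# CLIP weights through the transformers library. This illustrates the kind of
# feedback loop discussed; it is not Stable Diffusion's internal code.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("first_denoising_pass.png")   # placeholder: a blurry early pass
captions = ["an image of Gaussian noise", "a photograph of a flower"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.softmax(dim=-1)

print(dict(zip(captions, scores[0].tolist())))
# If the arbitrary early pass already scores near 1.0 for "Gaussian noise,"
# the loop has nothing left to refine toward: the failure described above.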

All memory is absent from noise.

Another interesting experiment: if you ask it for certain keywords, you will get noisy versions of those keywords. Hands are a classic example. Hands are a really iconic shape, and image-generation models notoriously don't get them right, for various reasons. Models are designed to extend patterns, not to count how many patterns have occurred and then stop. This comes back to the conceptual idea of noise, which I think also speaks to the subject of the course, the sublime. It is this tension between reference and memory and pattern, and noise as a generative space where actually, everything is gone. All memory is absent from noise, but it is also highly generative, and it is highly possible to generate something new from it.

There's a way of grappling with noise in terms of being overdetermined. The ways that we approach noise as human beings—noisy environments, noisy images, whatever—are a form of filtration. Filtration can be a constraint on possibility. This removal of memory from a system, from an archive, is also concerning because of the ways that ghosts in the archive come back. We think things are erased, but they resurface, as we saw with the images of the Philippines during the period of U.S. colonization. There is a haunting that occurs in that noise.

Archives vs. Datasets: Curating Memory with Care

Here’s where archives and AI intersect. Traditionally, an archive is a curated collection, assembled with intentionality and care, often revealing cultural biases in its selections. But a dataset? Often it’s just data—no human oversight, no curatorial intention. And therein lies the difference: a dataset lacks the “care” that curating implies. “Curatorial” even derives from the Latin word for “care.”

The ethical stakes here are enormous. Without oversight, datasets often perpetuate biases, sometimes even embedding deeply disturbing content. In recent years, several AI training datasets have been found to contain abusive or violent images—content that feels inherently cruel to process as “merely” data. When these images, some showing extreme human vulnerability, are scraped and archived, we risk losing our ability to witness ethically. Instead of honoring these moments as painful parts of history, they’re merely points of reference in a massive AI training machine.

Part of my research history is with the Flickr Foundation. I was thinking about Flickr as an archive and this transition that comes — I share images that have meaning to me. Flickr becomes a way of sharing those memories, but it also becomes an archive of those memories.

The role of the archive is not to remember. An archive does not remember. The archive enables the activation of memory.

An archive is a collection of things that create a context that can activate memory. The role of the archive is not to remember. An archive does not remember. The archive enables the activation of memory. It allows me to go and say, oh, that was when I went swimming with my dog in the summer. It was a great time. That photograph is a way of activating that meaning. The same is true in historical archives. This is a record of injustices that have been done. I can go and activate that memory. It's not always pleasant, but it is often meaningful, important, and relevant. Essentially, you have a collection of data that together form a context that then surfaces in an individual who comes with cultural assumptions, historical knowledge, backgrounds, and relationships and can interpret that.

When you have training data, you have a machine that does not interpret that, cannot interpret that, trying to find meaning and defining meaning in a very different way. I say this mostly because I want to contrast the metaphors. I'm interested in the tension. What we can learn about being human is not by looking at AI as a model for humanity but by contrasting ourselves with it.

The contrast is that an AI system, a diffusion model trained on images in an archive, the photos of me swimming with my dog last summer, doesn't know who I am, who my dog is, what the water is, or what it means. It knows the information and can reduce it—the count of pixels, the clusters of pixels, the likelihood that one set of pixels will be present in a constellation of other sets of pixels—that is then generated. We lose some of that obligation to remember those things when we say, oh, it's training data. I don’t know that we should give it up so easily.

Consider this: what kind of “memory” is created in a dataset compiled without care? It’s not memory in the human sense; it’s more of a plausible “vibe,” an approximation. There’s no reference to specific people, places, or moments—just plausible image patterns. And that’s where AI diverges from us. Human memory is deeply tied to experience, context, and often a sense of purpose or meaning. But machine memory? It’s about prediction, not preservation.

Every AI-generated image is a punch card, a piece of code that tells the machine how to render an image without understanding it. The lack of context and the absence of witnessing make AI a strange and sometimes unsettling tool. As someone who works with these systems, my work often tries to restore some form of witnessing—to bring back the human act of seeing and understanding.

On Data-Mind

My experience of the world is also about not necessarily seeing every tree, saying “it’s a tree,” and ignoring it, but discovering how to see a tree again, despite having seen a thousand trees before.

How do we decide how to structure our lives, and what are we giving away when we automate those decisions? One of the things I find meaningful here is this idea of the transcendence of order. There is a pleasure in being relieved of pre-existing order and structures. Being overdetermined is nobody's idea of a good time.

I don’t want to be assigned to be a thing. I want to be me, and free to discover who I might become. I want to be who I am and operate freely as that. This is an ongoing process, sometimes requiring revision. My relationship to myself might change over time, and I'd like that to be, as much as possible, on my own terms. And so it is constraining to find myself trapped in a definition of someone else's making that I am unable to rewrite.

Just as my relationship to myself may shift, so too may the things I see in the world. I want to be free to escape my own tolerance for the mundane experiences that I have filtered out. That means, for example, seeing every tree, as opposed to saying “it's a tree,” and ignoring it. Rather, I enjoy that I have some capacity for discovering how to see a tree again, as if it is new, despite having seen a thousand trees before. There is a joy, and a critical part of life, that comes from seeing the tree, or another person, in a new way, to break out of the order we have established.

This order, I think, is the flip side of noise. There's a spectrum there, two poles, but it's a circle, and noise is the midpoint between all possible forms of structure. Order can lean into overdetermination, a reliance on categories, systems, and labels that limits possibility. A kind of structural mechanics gets imposed that is resistant not only to change but to the shift of perspective that instigates change. Order requires that, whereas noise challenges it. The thing is that all order deteriorates, whether we love the world we've made or not. Nothing freezes still.

Noise, for me, represents the overwhelming complexity of the world. We filter out much of this complexity to make being alive manageable. But what’s on the other side of that filter? When the complexity and vastness of the world impinges on the filter, we experience something we might call the sublime. Sometimes this is liberatory, and sometimes this is terrifying.

So my experience of the world as a human being is about not necessarily seeing every tree as just a tree, but discovering how to see a tree again. How to move beyond the filter. The filter is “Order,” leaning into categories and systems, structural mechanics, whereas noise challenges that, entering into the system and disturbing those conditions. We seem to cling to order in this current cultural period.

Gen AI came about during Covid, a time of great uncertainty. Lots of structures of order were coming apart, and lots of demands were made for clarity. Lots of trauma went unaddressed, and lots of social unrest emerged. AI was a window into control. We didn't need to talk to other people, who were driving us all mad. We could have a conversation with a machine, positioned as an apolitical machine that did not challenge us, and then we could retreat to social media and find reinforcement for our own identities.

Order is what people crave when the noise is overwhelming, and noise frightens us because it disrupts that order. We say: “constrain that! Please, somebody, come in and limit this!” We want strong borders and clear separation because the world has become so noisy and overwhelming. This is the politics of noise: the noise of the other, the noise we don’t understand, and the desire to force it to conform, to be tamed. 

Order is a fine thing, but clinging too tightly to its rules and rituals can cut off the possibility of, metaphorically, seeing a tree differently. So there's something I notice when I am collecting "data," versus being myself in the world. I've written about this before – I used to go out and photograph nature for GAN training, building models of my own photos. In so doing, I learned what I call "seeing like a dataset."

In the data-mind, we are saying that every tree must be labeled a tree. If I want to collect tree data, I need to know what type of tree it is. Ideally, I have a tree Shazam: I can hold up my phone, and it'll tell me what tree it is. I don't necessarily want to see the tree. I don't want to have an experience of the tree. I want to know what the tree's category and label are. I want, perhaps, to document the tree and gather feedback on it from my social media feeds.

That desire, the desire of data-mindedness, is, I think, potentially a dangerous one, because it inoculates us against the sublime. The sublime must be experienced, not quantified. The sublime depends on not being measurable. But we seem to strive to do so, nonetheless. As direct experience becomes more mediated in the everyday, the sublime seems to slip further away into inaccessibility.

The sublime must be experienced, not quantified. The sublime depends on not being measurable. But we seem to strive to do so, nonetheless. As direct experience becomes more mediated in the every day, the sublime seems to slip further away into inaccessibility.

If we think of static as entropy, the absence of definition when things become noise, it’s life and death. A wall of static moving past us creates and destroys all at once, and of course, this isn't very pleasant. As information flows expand, we drown in a perception of the world out of control. People want clear rules and definitions, not fuzzy spectrums. The experience of the tree is hard.

I'm reminded of Rilke, who said, "For beauty is nothing but the beginning of terror, which we can still barely endure, and while we stand in wonder, it coolly disdains to destroy us." 

What even is entropy? It's that Big Bang. It is the creation of the universe, but it is also the dissolving of the universe. All that is has come from this rolling wall of static moving through the universe, which is simultaneously destroying it. How do we respond to that?

Well, one way is to say I will constantly feel the tingling sense of terror at this possibility of simultaneous creation and destruction. But only the beginning of that. The sublime cannot swallow us completely, nor can all the categories fall away, or else we are left in total disorder. We seek only the transcendence that we can still tolerate. When we see beauty, it's not murdering us or giving birth to us, but it's somewhere in between, somewhere that nags in either direction. 

That allows us to have that experience at all, because we are terrified if it crosses that threshold. We are facing death or our own birth. Depending on where you are in the cycle, both are arguably terrifying. If everything ordered fell away, we’d lose all sense of ourselves. But that also allows us to change or adapt to complexity. I'd be terrified to witness my own birth, terrified to confront the reality that strips me of all illusions.

It involves an ego death — we have to dissolve to become something else, and if we cannot cling even to our own conception of ourselves, then what is there to stand upon?

But there is also excitement around the birth of new things, so long as they fit into a space of integration rather than destruction. So, this, I think, also comes to this deeper level of this navigation of noise—the simultaneous possibilities of that rolling wall of static and entropy. 

How do we react to that fear of death and renewal, all packed into this wall of noise? Well, we either hang out there and linger in it, or we close ourselves off. We narrow the spectrum. AI is a good way of handling the complexity without confronting it, without having to adjust ourselves. We cling to order while the navigation of complexity is filtered for us, into the order we are so eager to preserve.

But if you want to eradicate death from your life, you also end up eliminating the possibility of real beauty, because you're trying not to experience anything that comes close to it. If you preserve order at all costs, you never confront change.

If you want to eradicate death from your life, you also end up eliminating the possibility of real beauty, because you’re trying not to experience anything that comes close to it. Death is one of the few universal realities that binds us as humans.

Death is one of the few universal realities that binds us as humans. We are alone in it together, so to speak. But perhaps we seek to build a narrower world out of fear. We see the label of the tree, or we filter out the tree, and we are unable to really see it, or god forbid, to see other people. AI art is excellent for this because it keeps art within a set of narrow parameters. There is no real risk of accidentally tripping into the sublime. The boundaries are already well-defined.

To me, this is an essential part of how we navigate noise, how we identify noise, what noise means to us, and how we process that—again, politically, aesthetically, personally, and artistically. AI is a mechanism for stabilizing the world according to a prescribed order, but it is an order built from data-mind, not the human aspiration toward growth and freedom.

So I am wary, I suppose, of consuming the results of algorithmic measurement as a model of the world. At least, I worry that it is too often, particularly in its commercial and political applications, tightly calibrated toward limits and constraint rather than new sight, or new possibility.

Q: Compression and the Art of Losing Detail

One crucial aspect of AI generation is compression—reducing the information in an image, only to regenerate it later. Compression means losing details, stripping the image to something manageable, and then rebuilding it. But what gets lost? In AI, this process often sacrifices nuance, leaving only the most essential or recognizable traits.

As an artist, exploring compression in my work is about questioning what we discard in our pursuit of simplicity. When an AI system reconstructs an image from compressed data, it’s not truly recalling the original. It’s making educated guesses, filling in gaps with plausible—yet always inaccurate—details. Historically, interpolation comes from, I think, the Latin: people would insert fake pages into books to make it look like the book was saying something different from what it originally said.
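As a small illustration of what compression and interpolation do to an image, here is a sketch that throws most of the pixels away and then rebuilds the frame by interpolating the gaps; the file names are placeholders.

# A small sketch of lossy compression followed by interpolation: discard most of
# the pixels, then rebuild the image by guessing the missing values.
# File names are placeholders.
from PIL import Image

original = Image.open("swimmer_frame.png")                      # placeholder source image
w, h = original.size

compressed = original.resize((w // 8, h // 8), Image.LANCZOS)   # keep roughly 1.5% of the pixels
rebuilt = compressed.resize((w, h), Image.BICUBIC)              # interpolate the rest back

rebuilt.save("swimmer_frame_rebuilt.png")
# The rebuilt frame looks plausible, but every interpolated pixel is an educated
# guess; the nuance that was in the original is gone.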

Q: Witnessing in the Age of Big Data

When I look at LAION-5B, it’s 5 billion images collected without any kind of curatorial intervention or process. And so researchers have found a lot of problematic content in the training data. A lot of abusive content. I have found images from Abu Ghraib, of prisoners being tortured, in the data set. Not to even speak of the misogyny and violence and racism. A lot of that stuff is just there. Take the word hero. If you search for the keyword hero, a lot of what is associated with it comes from archives of people awarded medals for heroism by Nazi Germany. So you have pictures of swastikas associated with the word hero.

I think that this idea of witnessing, you know, is of some importance to images. It is important to recognize other people's experiences, something we lose when we say we won't even think about them. When there is someone, a child, in a moment of profound vulnerability, who has been documented, photographed, and shared online for the pleasure of their perpetrators, it is an exceptional act of cruelty, I think, to say, “Oh, that's data.” To me, that perpetuates the cruelty involved in taking that image and circulating those images in the first place. It feels to me like a really distasteful element of these systems.

LAION-5B has gone through an audit. They were dragged kicking and screaming into doing it, and complained that people found this content and didn’t tell them privately, as if blaming the researchers who did their jobs for them. But they did one. And now there is a training data set, Re-LAION, that has 5 billion images, and none of them are child abuse content. Sadly, that counts as a victory right now — that we aren’t using child abuse imagery to build stuff.

What does it mean to be a witness to a data set?

But yeah, what does it mean to be a witness to a data set? I mean, I don't want to go and find that stuff. That's not something I want to go looking for. Nobody does. So, this act of witnessing is not necessarily literal. I don't need to be in the room where child abuse has taken place. But we should acknowledge that, right? We should witness the feeling, the pain of that, in using those images and extending that abuse. We can witness that pain, speak to that, and address it. I feel visceral disgust about that original data set and using models built on that content. 

When we abstract things that have happened in the world, documentation, photographs from the world, and we say, well, we're predicting pixels... There's a kind of mechanistic logic to it that is opposed to the humanist value system.

The distinction between an archive and a dataset is our relationship with them. A dataset is an archive in a structural sense. It is a collection of images, and that's essentially what an archive is: a photo archive is a collection of photographs. However, the dataset version is not designed to be a memorial. It is not intended to be cared for. But there are still consequences to the decision to segment it off and say it's not an archive, it is a data set, and therefore we don't have an obligation to its contents. If that is the case, then we should not be surprised by what I've referenced before, the ghosts in the system that come back. To say that the data set is not an archive creates a categorical error, and we overlook responsible decisions at our peril.

Q: AI as a Political Artifact: Systems Aesthetics and Ethical Complexity

Around the time we discovered some of the stuff in LAION, I had a physical revulsion to even the idea of looking at an image from that dataset. This is part of the incentive to figure this out—how do I make something that doesn't reference the dataset? Which is where the noise investigations started.

But I can speak more about me, my role, and my sense of complicity in these things. I don't know; in terms of thinking more structurally, there is the mitigation of harm. There's this idea of being vegetarian versus vegan in AI, in the sense that you can go vegan, saying: we're not going to use these tools, we're not going to help build these tools. They are fundamentally immoral. The system is too structurally flawed.

Then there's this sort of vegetarian approach, which says there is a system, and we have to navigate that system ethically. We must minimize harm, to mitigate damage. The question is, will this system collapse if I stop eating cheese? Probably not. But can I go and buy organic cheese? Can I work with things that are not complicit or complicit in a different way, navigating that complicity with a sense of responsibility?

Within me, there are two wolves. There is the radical Luddite: we don’t need technology because we lived without it before, so we can just shut the whole thing down and nothing will matter. No harm will come to this planet if we don't have a new picture-making machine. On the other hand, I find that unlikely to happen. I don't think we will get a mass movement saying, no, we’ve all agreed, none of us will use this thing. And there are different responses, at different levels of engagement. I might have one lens for talking to digital artists but another for policymakers, another for designers of the systems, and another for the people pondering whether to sell an archive for training data.

Then the question becomes, well, what do we do within the system? What are the leverage points? To some, that looks like complicity. To me, it looks like harm mitigation. That may just be unresolvable. They may, in fact, be the same thing.

Here’s an example. I've been involved in events like the DEFCON hacker convention, where they had a red teaming exercise. In information security, red teaming allows a window for hackers to try to hack the system, aiming to identify vulnerabilities to improve security. We were invited, as part of a group we call the Algorithmic Resistance Research Group (ARRG!), to present work in this environment. But then criticisms emerged, questioning why companies rely on unpaid volunteers to crack their systems instead of paying people. Although they do pay some, the effort sometimes appears more about protecting the system from being recognized as failing rather than addressing harms it might induce in specific communities or contexts.

For instance, if someone were to say, from their point of view as a Native American, that the generated content is offensive, bringing a cultural perspective on harm that these companies probably don’t have — well, that feedback wasn’t part of the task. The task was to report rule-breaking issues, like a game. I was there, navigating this idea of complicity—am I contributing to building legally defensible systems or helping to surface real risks? What I’ve concluded, which may not suit everyone, is that by working with AI, you are complicit in a system and will take some shit for it. It’s inevitable.

We were there to present a different viewpoint on the harms of these systems. We showed work we made with glitches, using AI to point to broader concerns. That’s okay, I think, but I also accept that, I understand that criticism because I am working with these systems. It’s just that I’m also trying to understand and talk about what these systems do.

I cannot understand and discuss the impacts and the changes in our relationship to creativity and communication without using these systems. I have to get in there and use them. That’s just me. There's a similar incentive for those who dislike these systems but need to figure out how to protect people and make the systems better. It may not be "vegan," but it’s about steering these systems in better directions.

Everyone is complicit unless they don’t use or touch them at all. That's the morally correct stance in many ways, but I don't think a refusal to engage helps me address the things I am concerned with and situated to address. So I'm guilty as charged. I acknowledge my complicity in these systems but also recognize that I have to be.

I also think about Jack Burnham's work on "Systems Esthetics" from the late 1960s, which discusses the artist's role in a systems context. At that time, artists began confronting political environments and broader systems of power. Burnham argues that the work an artist makes within systems cannot be disentangled from those systems. With AI, this feels particularly relevant; there is no AI art that isn’t about AI, and there’s no engaging with AI without engaging its politics. The aesthetic experience is never non-political. For instance, I can't remove that theorist part of my mind to engage with an image from Midjourney. I’d argue that we shouldn't aim for an experience divorced from politics. Instead, that “aesthetic experience” should surface its logic, our complicity, and its corresponding discomfort.

Some writers argue that the political lens shouldn’t be applied to AI aesthetics — I mean, they would say there is no AI aesthetics — but I feel the opposite. We need to look at politics because AI is a manifestation of political decisions and power structures. An AI system is made up of many systems, as illustrated in Kate Crawford and Vladan Joler’s work, "Anatomy of an AI System," and their more recent work, "Calculating Empires," which maps the epistemological, power, political, and environmental structures behind AI. I think that is great work that imagines against the imagination of AI. As an artist, I usually aim to imagine possibilities, but with AI, I’m focused on stripping my imagination away to look at what is there. This may stray from pure aesthetic imagination, but hopefully, it’s valuable in a different way.

