Critical Topics: AI Images,
Class Ten

Do Great Artists Steal?

Often when we talk about AI art, we hear that it is made by combining the names of artists. Then we hear the phrase, “Good artists borrow, great artists steal,” variously attributed to Steve Jobs, Salvador Dalí, William Faulkner, or Igor Stravinsky. The actual quote is from T.S. Eliot, published in 1920 in The Sacred Wood: Essays on Poetry and Criticism. This is what he said:

One of the surest of tests is the way in which a poet borrows. Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion. A good poet will usually borrow from authors remote in time, or alien in language, or diverse in interest.

Critics of AI art write that the images are cheap knockoffs of great works: the art critic Jerry Saltz said in an interview that most AI art was “crapola illustration,” and that most of the work he’d seen had been done hundreds of millions of times by humans throughout history. Saltz didn’t talk about datasets, but datasets do offer one explanation for this: if you are drawing on the images in the dataset to make art, you are drawing on statistical averages of millions of pieces of art. This includes historical paintings, illustrations, comic books, video game screen grabs, dental records, nature drawings, charts and diagrams, selfies... the entire range of images is in these datasets, but they are all things that have already been done. In order to make something profoundly new with these images, you would need to be profoundly creative with how you use them. This is a question I find really interesting with AI images.

If you want to make something new, or your own, how do you make it with something that contains everything that has already been done? If you want to change the way people see the world, how do you do that with a dataset of everything they’ve already seen? 

In our last class we took a look under the hood of the datasets that drive many Diffusion models, uncovering some of the problematic ways that the data has categorized people. This week, we’re continuing to look at the datasets, from a different angle. We’ll look at what the dataset is with regard to artists, but we’ll also look at what you can do with this dataset. What are the limits on what you can make, what are the ethics around making generated images, and what are the effects that these images might have? 

In this class, we’re going to talk a lot about the idea of consent and datasets. Who gets to decide what’s included in a dataset, and who gets to opt out? We’ll look at the relationship between the images we get from Stable Diffusion and Midjourney and the ways they reflect — and don’t reflect — the images made by living artists. 

We’ll ask a broader question about art and power, looking at a history of artists who deliberately used appropriation to make their work. What distinguishes an artist who appropriates as a form of art and commentary from an artist who simply copies or plagiarizes?

Part One: Artists in the Dataset

It’s important to note that there are a lot of ways to address the question of artists in the dataset. There are legal obligations, but there are also ethical commitments — the difference between breaking the law and just being a jerk.

First, there’s the legal question. A number of lawsuits have been filed against various AI image generation tools, for a number of reasons. One is a class action lawsuit on behalf of artists whose work was included in the training data without permission — arguing that because people can generate AI artworks based on their styles, without their consent, they are owed financial damages — or at the very least, that companies profiting from these technologies need to stop. Another comes from the stock photography company Getty Images, which is suing Stability AI — the company that runs Stable Diffusion — for using its images without permission. Use of copyrighted material without permission is eligible for statutory damages of up to $150,000 per work, and Getty believes Stability AI used 12 million of its images. If you don’t want to do the math, yes, that works out to a $1.8 trillion lawsuit, a sum larger than the annual economic output of most countries.
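If you do want to do the math, it’s simple multiplication. Here’s the arithmetic as a quick Python sketch (the figures are Getty’s claims as reported above, not a legal calculation):

# Claimed statutory maximum per infringed work, and the number of
# images Getty says Stability AI used (both figures from the text above).
damages_per_work = 150_000
works = 12_000_000

total = damages_per_work * works
print(f"${total:,}")  # $1,800,000,000,000, i.e., $1.8 trillion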

The class action lawsuit is about the use of artists’ expression in the dataset: the styles, color choices, and other aspects of art that make up artistic expression. The Getty lawsuit is about copyright: the protection of specific works of art and the right to choose who makes copies and derivative works from things like photographs or movies. Getty is saying that by looking at the images, breaking them down, and creating models based on those patterns, Stability AI made a derivative work. It’s important to note that these lawsuits make very different legal arguments and draw on different sets of precedents. We will go over some aspects of the law in a later class, but today we want to look at the data as it relates to artists and artistic expression. We’re going to dig into the datasets and see what we find — and what we think.

So that’s the legal frame for this question. There’s also the ethical frame, and here the question is the same one that surrounds datasets of any kind: the idea of consent. Who gets to decide what data is in the dataset? If an artist uploads an image to DeviantArt with the intent of sharing it with humans, does that imply that they are sharing that image with deep learning systems too? Maybe you believe that there’s no difference between a robot and a person looking at and being inspired by a work of art that you made. But do you think that artists should be able to decide what happens to their own work? These are the ethical questions around participation and informed consent: do people know what they are agreeing to when they share data? Are people expected to anticipate every possible use of the pictures they post online, and how much control do they have over their data once it’s shared?

Artists may upload work for the public to look at - it may be a way of getting noticed, of finding commissions, or of finding community. At no point did these artists expect that doing so would result in their work being used to train a massive artificial intelligence system. Now the concern is that the images in the dataset contribute to art that looks like their illustrations, which means that more art in their style is being generated, which brings down the value of their artwork. Arguably, this is true even if users prompt the model without using a specific artist’s name.

Magritte’s The Treachery of Images: A pipe, with cursive writing that says (in French) “This is not a pipe.”

Let’s define style for a minute. What is an “artist’s style?” In concrete terms, it’s a way of creating something that sets it apart, and it often carries across multiple images or other creations. With images, we can think of it as a visual signature: the result of a single artist’s choices about things such as color, line thickness, and how things are re-created on paper or screens. Some artists have bold, distinct styles — and aim for that. Others aim to create realism, or to replicate the decisions of a certain tradition. Nonetheless, when we make something, some part of ourselves is imprinted into it. It might be the result of conscious choices to be unique. It might be the decision to integrate aspects of other artists’ styles and combine them into something distinct. It might be that we are very good at certain things and emphasize them over other things we aren’t so good at. But ultimately, style can be thought of as a series of decisions — conscious or unconscious — about what we make and how we make it.

We can use Magritte as an example. Magritte famously painted the image at the top right, titled The Treachery of Images: a painting of a pipe, with words beneath it that read, roughly, “This is not a pipe.” We can look at the dataset and see that this pipe is pretty strongly associated with Magritte. Even more so if you just search “Magritte Pipe.”

Images in the training data for “Magritte Pipe” show variations on Magritte’s painting (seen above).

And yet, if we look for pipes, Magritte’s pipe is actually the only smoking pipe in the first results. The first page of results only scratches the surface of the image data; it’s not meant to be complete. The point is that an artist’s name is strongly correlated with that artist’s style, and may sometimes be part of other categories. Why does this happen? Remember, we are talking about systems that scrape images from the World Wide Web and then look for any associated text. “Pipe” is an interesting example, because there are two kinds of pipes: pipes we smoke from, and pipes that handle our plumbing. It seems that online, the word “pipe” most often means plumbing. We might guess that this is because commercial websites selling plumbing supplies are plentiful, and have publicly available images with clear labels.

But these categories aren’t cleaned up by people, and when the data is collected, the images aren’t evaluated for meaning or specificity. A pipe is, really, any shape, any arrangement of pixels, that appears next to the letters p-i-p-e often enough.
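To make this concrete, here is a minimal sketch of that logic in Python, with a made-up handful of caption-and-URL pairs standing in for a web-scale scrape. The data and variable names here are hypothetical, but the principle matches how caption-matched datasets are assembled: a string match, with no sense of which kind of pipe is meant.

# Hypothetical caption/image pairs, standing in for billions of scraped examples.
pairs = [
    ("copper pipe fittings, 1/2 inch", "plumbing-store.example/fitting.jpg"),
    ("PVC pipe for drainage projects", "hardware.example/pvc.jpg"),
    ("The Treachery of Images, Magritte's pipe", "museum.example/magritte.jpg"),
    ("briar smoking pipe, hand carved", "tobacconist.example/briar.jpg"),
]

# "Collecting" the pipe category is just matching letters in the caption.
pipe_images = [url for caption, url in pairs if "pipe" in caption.lower()]
print(pipe_images)  # all four URLs: plumbing and smoking pipes, undifferentiated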

So now let’s see what happens if we prompt “Magritte pipe” in Stable Diffusion. The results are below.
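(If you want to reproduce this kind of experiment yourself, here is a minimal sketch using Hugging Face’s diffusers library. The model checkpoint and settings are one plausible setup, not the exact configuration used for the images below, and it assumes a machine with a CUDA GPU.)

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt from the text: an artist's name plus a subject.
image = pipe("Magritte pipe").images[0]
image.save("magritte_pipe.png")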

We get images in the style of Magritte, but barely any smoking pipes. So the style associated with “Magritte” is actually very strong: Magritte’s name is associated with many images in a particular style of painting, so the model has a strong inclination toward rendering that style, because it has so much more information about what those images should look like. Style here seems to mean an emphasis on a constellation of subject matter (clouds, men in hats) and colors (browns, blues, whites). The model is only loosely integrating pipes, though they are present.

In summary, using the word Magritte says pretty clearly to the model that you want something in the style of a Magritte painting. Whereas pipe images are all over the place, it’s the style attached to a specific artist’s name that comes through most boldly in the images above.

Meanwhile, if we prompt “Paintings of a Pipe” — with no particular artist — we get paintings of plumbing that look like they were made with paint, but in a generic style. It would be hard to say who painted them. They have a kind of aggregated style: an average, built on examples of everything in the training data that is associated with the word painting.

Just for fun, we can also ask for renderings of the phrase “This painting is not a pipe,” or the original French. Then we’re literally asking the model to give us something that is not a pipe. Ironically, it still gives us paintings of pipes. Do you have a theory of why that is?

It’s a reminder that Diffusion models don’t really understand the meaning of language. They only understand keywords and phrases. CLIP receives the words painting and pipe from your prompt, and these are words with much clearer representations in the dataset than “not” or “a.” These models work by finding information associated with your prompt, not by thinking about what you are trying to say. If keywords are often lumped together, that pair of words might form its own category. But CLIP examines the image being proposed by the model and determines how well it matches the keywords that matter most in your prompt. If the image matches those keywords, it passes through.
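You can see why “not” carries so little weight by scoring captions against an image with the openly released CLIP model. This is a minimal sketch using the transformers library; the image file is a hypothetical stand-in for any generated candidate.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("candidate.png")  # hypothetical: a generated painting of a pipe
texts = ["a painting of a pipe", "a painting that is not a pipe"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # one similarity score per caption

# Expect the two captions to score close together: "painting" and "pipe"
# dominate the match, while "not" barely moves the score.
print(logits.softmax(dim=1))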

This is why knowing what’s in the dataset can help you steer the model toward the images you want to make. But it also helps explain why artists are so upset about being in the dataset — now anyone can make an image in their style. Indeed, many artists have found that AI generated works associated with their names are overwhelming Google search results, making it harder to find the real work they’ve made — and want to sell, or use to attract paid work. Some companies have restricted the use of artists’ names in their prompt windows, but many have not.

That’s what happened to Greg Rutkowski, a fantasy art illustrator whose work was used in a default prompt for an early diffusion model called Disco Diffusion. Rutkowski’s name became associated with so many images as a result that his own art was buried under AI generated copies. This wasn’t necessarily an issue of the images copying his style — it was that his name became synonymous with AI generated fantasy art. As a result, Stability AI removed his images from the dataset, and in December of 2022 it introduced a method for individual artists to remove their work from the training data.

Part Two: The Synthesizer

One of the big ethical questions around AI art is whether you should be able to use artists’ names in prompts. On the one hand, some say that combining elements of artists’ styles is how culture works: we see things, we blend things, we draw inspiration from art and make our own, with new evolutions and mutations emerging from the translation.

It’s important to think about the quote we started the class with — that good artists borrow and great artists steal — and compare it to what the actual quote says: “bad poets deface what they take, and good poets make it into something better, or at least something different.”

So what is the difference between defacing someone’s art, and what might it mean to make it different? The answer is going to vary from person to person. But as a class that is thinking critically about these issues, we want to think about this question as a question of power. Because good and bad art is in the eye of the beholder: what is tacky and cheap to one person may be beautiful and moving to another. 

We mentioned the art critic Jerry Saltz at the start of this class, and there are other critics who have weighed in, too.

Mike Pepi, quoted in an interview, says: “I guess I’m just very frustrated with these tech people coming in and willy-nilly trying to use these interesting GAN networks to spit out something that just sort of looks surrealist or abstract. I definitely feel like there are some artists who do that, and the results just aren’t very good.”

So this is not an art criticism class — it’s not a class about deciding what’s good and bad. It’s a critical thinking class. So instead of thinking in terms of good or bad, let’s think about the ways that image synthesizing tools can create power and take power away. 

We’re talking about image synthesis. But we have somewhere else to look for insight: music synthesis.

The RCA Corporation’s Mark I Synthesizer.

The picture above is the RCA Corporation’s Mark I synthesizer: “a three-ton mashup of digital data devices, mechanical and electromagnetic transduction circuits, tuning forks and vacuum tubes, punched paper rolls, wire brushes, relays, resonator chains, amplifiers, speakers, and disc recorders” (source). It was developed by RCA engineers Harry F. Olson and Herbert Belar; the composer Milton Babbitt became one of its best-known users.

Describing the process of testing the equipment, one of its designers writes:

[W]e analyzed piano recordings of ‘Polonaise’ by Chopin and ‘Clair de Lune’ by Debussy played by Iturbi, Rubinstein and Horowitz. Also ‘The Old Refrain’ by Kreisler played [by] Kreisler. The analysis was then synthesized and recorded and we intermixed short excerpts of the synthesized and original recordings for a test. We had fourteen excerpts, seven original and seven synthesized. Professional musicians and laymen were unable to detect the original from the synthesized versions. This proved that the electronic music synthesizer could produce great music. (Heyer 1975)

In a 1961 essay, Babbitt writes:

Present-day electronic media for the total production of sound, providing precise measurability and specifiability of frequency, intensity, spectrum, envelope, duration, and mode of succession, remove completely those musical limits imposed by the physical limitations of the performer and conventional musical instruments. The region of limitation is now located entirely in the human perceptual and conceptual apparatus, and the discovery and formulation of these constraints fall in the province of the psychoacoustician. (1961, 83–84)

This kind of rhetoric probably sounds familiar: it describes the movement of art making away from musicians and into the hands of machines, which are said to do it more perfectly and with greater freedom and flexibility, unconstrained by the burden of mastering a skill or craft. No missed notes. The synthesizer was also tested — not trained, but tested — on a variety of existing musical pieces to see how it would perform. Later, it would be used to produce new sounds and new styles of composition that didn’t sound like existing music. But at the start, it was designed to replicate what was already out there.

We should think about this when we think about image synthesizers like DALL-E and Stable Diffusion. Right now, much of what people make is replication: they can take an artist’s name and make something that looks like that artist’s work. The effect is likely to be similar to what happened with the music synthesizer: its replications of existing music didn’t sweep the charts, and eventually, people became interested in what else you could do with it.

That meant experimentation.

Here’s Henri Pousseur’s Scambi, from 1957: an attempt to make something that sounded like a synthesizer, rather than like a computer playing a piano.

Now, this may not be your cup of tea, and that’s fine. But this experimentation is what led us to a new path for music: away from the RCA Mark I and its reproduction of classical music, and into an era where electronic sounds could create new genres. Another experimental musician, Delia Derbyshire, was also intrigued by the potential of electronic sounds. Here’s a track made in 1963 — note the similarities to house music, IDM, and techno.

So there are two ways of looking at the music synthesizer. On the one hand, it was designed in a way that was an explicit threat to the music makers of its time. In the 1980s, the UK Musicians’ Union moved to ban the use of synthesizers by its members, fearing that using them to replace traditional orchestras and drummers would lead to unemployment among musicians. We often say these fears were overblown — but it’s also true that many bands and recording studios no longer needed to hire orchestras, leading to diminished employment among session musicians.

On the other hand, musicians could use synthesizers to make something new, and the styles of music the synthesizer made possible allowed more individuals to create and record music without needing to hire other professional musicians. Few people would listen to a jazz performance written and performed by a computer: they want to hear musicians. But other genres can blend the synthesizer into their sound, or build their music around synthesizers exclusively.

The difference is a matter of taste. But behind the scenes, there is still the matter of power. If you look at the technology as a way of copying and replacing humans, it can be read as a tool of power against artists. If you look at the technology as a way of making something that hasn’t been heard before, it can be thought of as a way of creating new forms of power: power of expression, creativity, and ideas. 

But a key point to make is that these things are not mutually exclusive. They are both happening at the same time. So it’s important to think about the potential of new technologies as well as their consequences. 

With image synthesis, people have often thought about and engaged with the question of appropriation. Is AI art appropriation?

Part Three: A Brief History of Art Theft

We can look to the way artists tackled this question when confronted by photography. From the 1960s to the 1980s, artists began to consciously appropriate photographic images. They did it by shifting the way they thought about images: they stopped seeing photographs as documents that represented the world, per se, and started thinking about how images could be understood as concepts.

Cover of Cosmopolitan’s “AI Issue.”

Images stopped being about what the images showed us, and started being about images as ideas. There’s an argument that our culture is overflowing with images, and that it’s only natural to approach images as the source material for new images. Some artists put forward the idea that images are part of the landscape, and ask why images need to be original in the first place.

Diffusion models are based on billions of uncategorized images and their captions. Cosmopolitan put out its first AI generated cover, which you can see to the right. It has a bit of aged clumsiness about it — declaring that the cover, itself, is artificially intelligent is almost endearingly wrong.

If you ask for a photograph of an astronaut, you aren’t necessarily getting anybody’s individual photograph of an astronaut. You’re getting the model’s idea of what an astronaut is, based on the patterns of pixels common to images labeled “astronaut.” The Cosmopolitan image was then retouched and remixed, reshaped to fit the size of the magazine, and composited. The mask, by the way, was a way to get around the poor rendering of human faces that plagues these systems.

But this changes the meaning of the astronaut image as an image of an astronaut — it’s not like a photoshoot, where a single photographer directs the image making process. It isn’t anybody’s expression of something unique about a specific astronaut. It blends together images made by people critical of astronauts and by people who love astronauts. It gives you, in a sense, the absolute average of all astronauts.

Robert Rauschenberg, Hot Shot, 1983. MoMA.

Consider another piece about astronauts — Hot Shot, by Robert Rauschenberg, from 1983. Rauschenberg is taking other images, whole, and putting them into a new arrangement, suggesting a new work.

Pretty much nothing that Rauschenberg has assembled here is keeping its original meaning, and none of the images are original. He’s using these images to tell a new story through new arrangements: to steer the viewer into some new way of making sense of these images.

So even though this is literally a series of other people’s photographs, their selection and arrangement is used to tell a new story. We can ask: what made Rauschenberg pick these images? It’s a question we can ask of this piece in a way we can’t quite ask of Stable Diffusion, because Stable Diffusion’s answer is always statistics.

Is the artist who writes a prompt doing the work that Rauschenberg did, in selecting these images? Is there a difference between pulling pictures from an archive to tell a new story, and pulling pixel arrangements out of an image model to tell a new story?

Possibly! An artist could very well use Stable Diffusion or some other AI generation system to think about what the datasets contain, and how to make use of the meaning of these categories of images. It’s unlikely that the designers of the Cosmopolitan cover were asking questions about our collective understanding of astronauts, and certainly, nothing in the cover encourages the viewer to engage in that conversation. But it seems possible for artists to do so.

The director Carme Puche More, for example, uses diffusion models to create images of people that speak to the distance between what they see of themselves and how the machine presents them. For example, she notes, it rarely offers up images of women astronauts.

More writes:

If we look at big data and imagine the world as a big sheet made up of billions of images, all of them organized by the law of the logarithm of proximity (words near an image define it), and take this sheet as a reference for the creation of new images... what dress will it give us? When society has not yet been able to create an imaginary that approaches reality, responsible, rich and inclusive with all the people who inhabit the planet, an Artificial Intelligence tool reaches the hands of everyone to be able to create new images from these same biases. The result is sometimes devastating. It is true that the result is sometimes terrifying, but at the same time, we have found some hopeful results. We are investigating how the machine interacts with diverse and inclusive imaginaries. What we are discovering is that the machine tells us, in some way, that through art we have the power to generate a new imaginary that at the same time transforms the way we describe the world, even through the same words.

We should remember that even when a tool suggests a certain way of using it, we can still use the tool to interrogate it and ask questions. This is also where appropriation comes in as a strategy for artists, in ways that resonate with how AI might be used.

Appropriation is the use of another artist’s style as a way of commenting on that style. You can appropriate an artist as an homage, as a critique, or as satire. You can also appropriate an image as outright theft. Remember what T.S. Eliot said about theft: not that great artists steal, but that “the good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion.”

Doing so can also be messy and harmful. To differentiate cultural appropriation from general “cultural exchange,” scholars point to power dynamics. The lawyer Susan Scafidi defines appropriation as “taking intellectual property, traditional knowledge, cultural expressions, or artifacts from someone else’s culture without permission” (quoted in Sembe 2023).

Appropriation in art has been around for a while - Duchamp, the pioneer of Dada, famously drew a moustache on a reproduction of the Mona Lisa. But the appropriation artists arrived as a movement in the late 1970s and early 1980s. In an essay for The Met, Douglas Eklund writes:

“What these fledgling artists [had] fully to themselves was the sea of images into which they were born—the media culture of movies and television, popular music, and magazines that to them constituted a sort of fifth element or a prevailing kind of weather. Their relationship to such material was productively schizophrenic: while they were first and foremost consumers, they also learned to adopt a cool, critical attitude toward the very same mechanisms of seduction and desire that played upon them from the highly influential writings of French philosophers and cultural critics such as Michel Foucault, Roland Barthes, and Julia Kristeva that were just beginning to be made available in translation. Among these thinkers’ central ideas was that identity was not organic and innate, but manufactured and learned through highly refined social constructions of gender, race, sexuality, and citizenship. These constructions were embedded within society’s institutions and achieved their effects through the myriad expressions of the mass media. Barthes infamously extended this concept to question the very possibility of originality and authenticity in his 1967 manifesto “The Death of the Author,” in which he stated that any text (or image), rather than emitting a fixed meaning from a singular voice, was but a tissue of quotations that were themselves references to yet other texts, and so on.”

Following this line of thought, images are freed from the author’s intent and take on a life of their own. The consumers of images — which is all of us — are shaped by these images, but we also shape the ways they’re interpreted. We negotiate our relationship with images: we use them to tell our own stories. When images seek to assert control, artists can assert counternarratives. 

Sherrie Levine and Walker Evans.

One of the artists of this generation is the photographer Sherrie Levine, who took the iconic photo above. Well, one of them. On the left is Levine’s photograph of the photo on the right: a picture of Allie Mae Burroughs taken by Walker Evans. Levine photographed the photograph and presented it with an open and explicit acknowledgement, calling the work “After Walker Evans.” This was a way of posing a question about photography and images, now that we live in a sea of them.

Levine explained the work this way:

Originality was always something I was thinking about, but there's also the idea of ownership and property... It's not that I'm trying to deny that people own things. That isn't even the point. The point is that people want to own things, which is more interesting to me. What does it mean to own something, and stranger still, what does it mean to own an image?

A key distinction here may depend on how you interpret this work. On the one hand, Levine has said that taking Evans’ work and placing it in this new context creates a new work, because it starts a new conversation that creates a new relationship with the image. Thus the title, “after” Walker Evans, as in: Evans had his time with this story, now it’s time for someone else. (And what about the subject of the photograph, Allie Mae Burroughs?)

Or you could look at this as a feminist critique of power structure: an intervention by a female photographer in the white, male dominated space of the 1980s New York art world and beyond. One form of “defacement,” examined from the lens of power, can be to take someone’s artwork and make it something else. That’s an act of power. 

Result from “sherrie levine walker evans portrait,” Midjourney 6.1.

When we are talking about data and images — and let’s be clear that I am speaking only about the use of data and images — appropriation can change meanings when you consider who is doing the taking and who is being taken from. Levine wasn’t stealing Walker Evans’ work in ways that would impact Evans, who was already dead. But even if she had, what was the balance of power between Evans — a famous photographer for decades by the time Levine made this work — and Levine? Who had more power, and how did Levine shift that power? Does that matter?

Today you might think: OK, well, I can appropriate Walker Evans too, by asking Midjourney to create a Walker Evans portrait. On the right is an example: a result for the prompt “sherrie levine walker evans portrait,” from Midjourney 6.1. What does this image say about power?

We might think of Diffusion models as appropriation machines: systems that look at the same sea of images that we do, and impose their own logic on them. If you take Sherrie Levine’s perspective, that appropriation is a way of reclaiming power, then we open up a pretty big question: who has power over the images in the training data, and over the outcomes of the prompts? What is our relationship with the power of images when we rely on these systems to produce them?

Because Midjourney, DALL-E 2, and Stable Diffusion are learning from our data - my data, your data - and building tools that people pay for to make new things. Is a Diffusion model in a position to take power from human image makers? Perhaps: if there’s a sea of cheap images made in someone else’s style, without their permission or consent, those images seem likely to leave audiences bored with that style. In that case, we could make a hundred images of an artist’s work in ways that harm the artist. That would be like using the synthesizer to replace a performer.

Or is there something that we, as users of these tools, can do to change the balance of power with the machine and the companies that build and train them on our data? Can we use them to make something new and different, something that doesn’t look like anything else? In that case, it’s like using the synthesizer to invent a new style of music altogether, like Delia Derbyshire.

These are interesting questions, and they don’t always have clear answers. Combining the names of different artists may lead to a fusion of styles that seems interesting and unique, as in collage. Using a single artist’s name to create a commentary about that artist, or to put them in a new conceptual light as Sherrie Levine did, might be another interesting strategy — and note that Levine looked to photographers who already had a great deal of power, not obscure illustrators trying to make a living.

You may think it doesn’t really matter, that ultimately, you want to use these tools to make cool stuff. But it’s worth thinking about where this tool came from, who benefits from it, and who loses out. This is part of the craft of any artist: understanding the ethics of the field, and making decisions about why you might transgress them, and whose interests those decisions serve.

As Douglas Crimp explained in his essay for the exhibition of appropriation artists called Pictures: “We are not in search of sources or origins, but of structures of signification… underneath each picture there is always another picture.”

Part Four: Warhol, Of Course

Another artist who responded to the images around him was Andy Warhol. Warhol famously painted the images he saw in the world, including Coca-Cola bottles and Campbell’s soup cans.

As for the paintings, the images I’ve used have all been seen before via the media. I guess they’re media images. Always from reportage photographs or from old books, or from four for a quarter photo machines. No, I don’t change the media, nor do I distinguish between my art and the media. I just repeat the media by utilizing the media for my work. I believe media is art. (Warhol, 1964)

So I want to trace an interesting timeline here. 

In 1964, Patricia Caulfield publishes this photograph of hibiscus blossoms in an issue of Modern Photography magazine. Andy Warhol picks up a copy of the magazine, sees the image, and cuts it out. He blows up the photo and uses an industrial process - silk screening - to make a kind of stamp, so he can create as many copies of this image as he wants. He paints underneath them or over them in a variety of colors, and then he starts to sell these artworks.

Warhol’s Flowers.

Caulfield sued Warhol over image rights, and they settled out of court. The agreement was that Caulfield would get one of Warhol’s paintings and, amusingly, so would her lawyer.

The legal terrain of Diffusion models is still an open question. But when you create artworks using an artist’s name in the prompt, it’s very clear that you are asking the machine to produce a derivative work. If you get an image of a cartoon character that looks like an existing cartoon, it’s a derivative work. That’s true whether an AI makes it or you draw it yourself, and it’s true whether you do it on purpose or by mistake.

So the legal questions around using work that looks like someone else’s work are still open because the technology is new. But that doesn’t mean these questions are particularly different or difficult. They just haven’t been tested yet.

There are more complex questions at play. The lawsuits will resolve questions of legal authority - who owns what. But we can also think about the questions in terms of agency: what you do with what the machine gives you. This class, as I have said before, is oriented around the idea that these outputs are products for you to use, rather than ends in and of themselves. And that using them thoughtfully, with attention to your responsibilities to others, is probably the best course of action. 

An AI generated image is an image produced by a machine using a specific process, one that abstracts millions of images into the constraints you give it. AI is not an artist - it isn’t filtering the world through any kind of experience, or sense of curiosity or wonder. It has no stories to tell. But you are an artist, you are a storyteller, because you are human. When you work with these images or tools, the most important question is not whether or not you made it, but what you do with it to tell a story that tells your story, or raises your questions. 

Warhol was an artist making a new artwork out of an existing artwork. The image was radically transformed, and in the end, the legal side of the question remained open. But then in 1990, an artist named Elaine Sturtevant created a series of her own, called Repetitions. For it, she took the original silk screens Warhol used to make his Flowers paintings and used them to make more of them, in a series called Warhol’s Flowers.

Sturtevant didn’t invent the image herself; it’s an appropriation of Warhol’s appropriation of Caulfield. She didn’t make the screen print either, because it had already been made by Warhol. Instead, Sturtevant simply took Warhol’s existing process and ran it herself. Now, at this point we might say: enough’s enough. One copy is art; two copies is theft. But like Sherrie Levine, I would suggest that the question of originality is separate from the question of what’s interesting. Because Sturtevant had a logic to this image that was completely distinct from Warhol’s, and completely distinct from Caulfield’s. Sturtevant was raising a question: why was Warhol’s work a Warhol in the first place?

Many of Warhol’s images were made by studio assistants, and the process was mechanical. And Warhol wasn’t the only artist who worked this way: you could go back to Rodin and Camille Claudel, the studio apprentice who sculpted many of Rodin’s works. So Sturtevant is using this work to comment on the artwork, and on the question of originality that surrounds the art world.

Which brings us back to the question: is AI art “original?” Is it art at all? If we take the product of a machine, can we present it as our own? And the answer, I think, is of course you can. This question was settled over a hundred years ago, when Duchamp put a urinal into an exhibition. To bring back Barthes: a photograph says, “Look at this,” and we can point at art and say, “Look at this,” as well. We point at a thing and we say, “This is art.”

But the real question might be: why do we care about the art you made with this machine? What question does your AI art raise, what experience does it produce in the viewer, or for you? Ultimately the question around art isn’t just does this look cool or who made it first. And questions about appropriation are more than is this legally permitted?

Instead I would ask: is the thing you’re engaging with saying something interesting? Is it expressing something that you’ve felt, or experienced, or a way of seeing the world that comes from within you? Has the image made you see something, triggered a new idea, given rise to the start of a conversation that needs having? Does it connect to the viewer in a way that makes them say, “tell me about this?” Do they want to spend time in your world, do they want to connect with your characters, lose themselves in your images, engage with your music or soundtrack? Are you making work that says something about your context, your values, your thoughts?

The question “is AI art theft,” or “is AI art original,” is really hard to answer because every image will have a different story. It’s like asking, back in 1955, if electronic music was copying music or capable of making original music.  It comes down to specific images and individual practices. It’s an unsatisfying answer, but if you ask me if AI art is theft or a new art form, my answer is: it depends.  

You may have a stronger sense of clarity, and of course, that’s fine. I think AI art is in a unique position to raise all kinds of interesting questions. Just as the appropriation artists swam in a sea of images, AI artists are swimming in today’s modern world: a sea of data, surveillance, automation, user-generated content, memes, misinformation, deep fakes. There are so many important questions that we can raise through the use of AI. 

Part Five: Engaging

Agnieszka Kurant, composite images of online click workers — collaborators in Assembly Line, which turned this image into an industrial sculpture.

I want to show one more art project today, from the Polish artist Agnieszka Kurant, who made a series called Assembly Line.

For Assembly Line, Kurant made use of the same online click-work sites used to curate training data for artificial intelligence systems. But instead of paying workers to edit a collection of images, she asked them to take self-portraits. Kurant then used an algorithm to generate a kind of composite of these images — and used the resulting pixel information to 3D print a sculpture.
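We don’t know the exact algorithm Kurant used, but the simplest version of a composite is a per-pixel average of the portraits. Here is a minimal sketch of that idea; the filenames and image size are hypothetical.

import numpy as np
from PIL import Image

# Hypothetical self-portrait files collected from click workers.
paths = ["portrait_01.jpg", "portrait_02.jpg", "portrait_03.jpg"]

# Load each portrait, normalize to a common size, and stack into one array.
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize((256, 256)), dtype=np.float64)
    for p in paths
])

# The composite: every pixel is the average of that pixel across all portraits.
composite = Image.fromarray(stack.mean(axis=0).astype(np.uint8))
composite.save("composite.png")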

We’ve talked a lot today about the appropriation of artists by artists, and while there is some conceptual territory to cover there, it can be more interesting to think about how we might appropriate the technology, or the systems around it. Kurant uses Mechanical Turk not to curate a training set in a way that erases the workers who make it, as many image training sets do. Instead, the work highlights the human labor that goes into building these datasets — by building a dataset of, and with, those workers. In the end, the transformation into an object comments on the industrial assembly line: a suggestion that the people who make things are literally a part of the things they make. The work is a reflection on, and a comment about, systems of power, because it uses Amazon Mechanical Turk in ways the platform never intended. Along the way, it uses the systems of AI art and image making to raise critical questions about those same systems.

Taking photos of photos, and making images of images, was important because it introduced questions about power and the meaning of those images. Today we live in a world of data, even more than a world of images, and artists are asking similar questions about data, digital infrastructure, invisible labor, and algorithms. Rather than taking pictures of pictures, they make algorithms about algorithms.

Another example is James Bridle, whose work Autonomous Trap 001 was a clever comment on the limits of automated driving, which relies on AI vision systems to read the road. By painting a circle of salt lines on the pavement, with a dashed line the car was allowed to cross and a solid line it was not, Bridle created a situation where the car could enter the circle but not leave it, forbidden by the solid line. It’s funny, it’s thoughtful, and it makes you think about the limits of AI vision. It tells us a little bit about how machine learning systems see the world. And it does it by playing with the systems themselves.

What these artists have in common with artists like Levine is that they see these systems not as finished products, but as something we can work with. The same goes for the images that come out of our computers: if we can reimagine them, we can make them into something new.

Right now, the technology of images is new and bold, and it inspires a lot of awe and wonder. So did the camera when it was introduced, and there was real controversy over whether photographers were making art at all. Eventually the mystery of photographic images faded, and we saw people working to create a new visual vocabulary with their cameras. Cinema did the same thing: people didn’t initially think to cut from one frame to another to tell a story; they had to figure it out through experimentation.

This is an exciting time to be making work with these tools because we haven’t figured that out yet. But there’s some interesting work out there that is built around using AI for something new, the way people used synthesizers to make weird new forms of music — and maybe these will lead to new popular art forms. 

So this class is meant to encourage you to be experimenters. I’m asking you to define your own relationship to the work you make, and to think about what your intentions are. What do you want to say with the images you take from the AI’s output? How are you going to decide whether it’s yours or the machine’s? But I also want to suggest that, in the end, the task is to take the technology of these machines and think about what you can do with it.

Looking for Something Else to Do?

Here’s a talk on datasets and identity from artist Heather Dewey-Hagborg. In the video version of this class, I show another work to discuss consent and data. This talk focuses on datasets from the perspective of another engaged and critical artist.

Works Referenced

Babbitt, Milton (1961) “Past and Present Concepts of the Nature and Limits of Music.” In Collected Essays of Milton Babbitt, edited by Stephen Peles, 78–85. Princeton, NJ: Princeton University Press.

Brody, Martin. “The Enabling Instrument: Milton Babbitt and the RCA Synthesizer.” Contemporary Music Review, vol. 39, no. 6, 2020, pp. 776–794, doi:10.1080/07494467.2020.1863011.

Crimp, Douglas (1977). Pictures. Exhibition catalog. New York: Artists Space.

Eklund, Douglas. “The Pictures Generation.” In The Met’s Heilbrunn Timeline of Art History. https://www.metmuseum.org/toah/hd/pcgn/hd_pcgn.htm.

Heyer, Mark. 1975. Harry F. Olson, an Oral History. Hoboken, NJ: IEEE History Center. Accessed October 14, 2018. https://ethw.org/Oral-History:Harry_F._Olson

Scafidi, Susan (2005). Who Owns Culture?: Appropriation and Authenticity in American Law. Rutgers University Press.

Sembe, Karina (2023). “How Appropriation Works for Those Who Practice It and Those Who Fight It.” EastEast, https://easteast.world/en/posts/218. Accessed 20 Feb. 2023.

Siegel, Jeanne (1991). “After Sherrie Levine.” In Art Theory and Criticism: An Anthology of Formalist, Avant-Garde, Contextualist and Post-Modernist Thought, edited by Sally Everett, 264–272. Jefferson: McFarland and Company Inc. Publishers.

Stead, Chloe (2019). “Is AI Art Any Good?” Art Basel, 12 Dec. 2019. https://www.artbasel.com/news/artificial-intelligence-art-artist-boundary.