Critical Topics: AI Images,
Class Twelve

Your AI is a Human (But Not Like That)

In 1770, a civil servant and scientist, Wolfgang von Kempelen, went to the court of the Habsburg Empress Maria Theresa with a remarkable automaton. The Mechanical Turk, as it was called, was a wooden sculpture of a man, as you see here, seated behind a chess board. Kempelen opened the doors to show that the cabinet was empty, and then set about showing off what this remarkable chess-playing machine could do. Closing the doors, he wound it up with a key, a standard way of cranking up a mechanical clockwork system at the time. Then he challenged any audience member to a game of chess. 

The robot would move pieces around and, quite often, it would win. It even played Benjamin Franklin during a trip to Paris, as well as some of the most famous chess players of the era. In fact, the Mechanical Turk went on playing chess for 84 years without anyone figuring out the precise arrangement of gears and clockwork that gave it such intelligence. 

Reproduction of the Mechanical Turk (Bharat Rao, 2019).

Here you see a reproduction of that machine built in 2019. You can see that the doors are cleverly designed so that when opened, it’s actually quite difficult to see what’s inside. 

But it wasn’t a super complex machine. It was actually a magic trick, or a hoax: a person hidden inside the box used clever compartments to control the robot’s movements like a puppet, and to conceal their body whenever the cabinet doors were opened. 

Perhaps some people understood this was an elaborate trick, but many reached for other explanations: that it was possessed, or that it was indeed a brilliantly engineered intelligent machine. Notably, Charles Babbage, a pioneer of complex calculating machines, saw the Turk play, and the encounter fed into his work on mechanical calculators. So in essence, the history of computers traces back to a hoax: a machine that pretended it wasn’t a human. 

Today, artificial intelligence systems often continue to have people hidden behind cabinet doors. Amazon even offers a service named for the Mechanical Turk. With Amazon’s modern version, you don’t hide someone in a cabinet; you hide them behind a user interface. Amazon’s Mechanical Turk allows you to pay humans pennies for tiny tasks, and these tasks are often inserted into the data pipeline, disguised as an autonomous system. 
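
To make that concrete, here is a minimal sketch of how a requester might slot human labor into a data pipeline through Mechanical Turk’s programming interface, using the boto3 library. The task, reward, and URL below are invented for illustration; this is a sketch of the idea, not a production setup.

```python
import boto3

# Connect to MTurk's API; to a downstream system, the humans behind
# this endpoint look like any other automated pipeline stage.
mturk = boto3.client("mturk", region_name="us-east-1")

# A hypothetical external task page where a worker labels one image.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/label?image=42</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Draw a circle around every tomato in the image",
    Description="Label objects in a single photograph",
    Reward="0.02",                    # two cents per completed task
    MaxAssignments=3,                 # three workers label each image
    AssignmentDurationInSeconds=300,
    LifetimeInSeconds=86400,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

A few pennies per label, multiplied across thousands of workers, is how many “automated” classification datasets actually get built.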

We’re going to watch a video produced by Amazon to highlight this Mechanical Turk system. As we do, remember that this is Amazon presenting its own version of the service, so be mindful of its claims. But also, as you look at the jobs being described, notice how they mirror the structures of artificial intelligence: in particular, think about the types of tasks they describe and how those map onto some of the artificial intelligence systems we’ve been looking at. Sometimes this is called “artificial artificial intelligence.” See if you can see why. 

So the reality of these systems is that they’re often used to establish the training data that drives AI. These workers often experience low wages and no job security. There is also the risk of unclear instructions from their contractors, who might not describe the needs or use case of the overall project in ways that help these micro-workers make sense of the tasks they are meant to do. They also can’t organize for workers’ rights because, as the video explained, they are scattered across the world and often have very little connection to one another beyond the platform. 

These human workers have built the training data for a wide range of classification tasks, including labeling images and audio examples for training. But on average they make about $2 an hour, so there’s little chance of people doing this for a living in places like the United States or Europe. In fact, the majority of the workers are based in India, Kenya, and Venezuela. 

One of the things these workers might do is impersonate an actual AI. A study in 2019 found that 40% of machine learning startups in Europe didn’t use machine learning at all. One example was a startup marketing an AI for travel recommendations. In fact, when you prompted the “AI,” your query was routed to Mechanical Turk workers, who would work up an ideal itinerary for your travel plans. (Ram 2019)

In other words, this is AI impersonation, which Tubaro and colleagues jokingly call “humans stealing the jobs of computers.” Of course, what’s actually happening is that AI is an appealing technology that people want to invest in, while paying people pennies to find luxury hotels is not. 

The experience of the human on the other end of these requests is often completely disconnected from the overall project. They may get arbitrary instructions with no idea why they are being asked to carry them out at all. We’ve talked in this class about the importance of context: why we should understand, for example, that an image needs to look a certain way, or how to check that it’s appropriate for its intended use. That’s very hard to do when you are relying on people who have no idea what they are actually categorizing or why. 

Here’s testimony of the type of random tasks assigned to these workers:

“They tell you: draw a circle around a tomato. We don’t know why. I think everyone knows what a tomato is, I hope […]. Then I think to myself: if it’s there, it must be useful to someone, for something, but … Why, I don’t know.”

Another area where humans are often doing tasks we assume are automated is content moderation. We’re going to watch a video by Casey Newton of The Verge about content moderation work, describing how these jobs are set up and the effect they have on the workers. The video acknowledges the existence of child abuse and other crimes that may be difficult to hear about; if you’re not up for it at the moment, feel free to skip ahead. 

So this makes the case for building AI for these kinds of tasks. But to build an AI, you have to train an AI, which means humans have to do these tasks to begin with. To build a system that can label violent content, people would have to watch and label hours of violent content in order to teach the machine which types of images or content fit the description. As Mar Hicks writes: “the major social media platforms are incapable of governing their properties with computational, AI-based means alone.”

The very idea that this is a solution for these platforms speaks to something interesting about AI and the nature of power. If companies set out to automate labor but ultimately just ship the work to developing countries at extremely low wages, there’s something strange about that. It seems to suggest that the low-wage workers hired to do this work aren’t human: we are literally asked to pretend they don’t exist. Automation implies that no human is doing the work. So if you “automate” by sending these jobs off to poorer people and poorer countries, what does that say about one’s view of those people? 

The question of technology and politics goes back a long way, of course, but it’s worth flagging a particular passage by Langdon Winner, written in 1980, in which he argues that artifacts — any technology — have a political dimension, too. He writes: 

“Artifacts can contain political properties. First are instances in which the invention, design, or arrangement of a specific technical device or system becomes a way of settling an issue in a particular community. Seen in the proper light, examples of this kind are fairly straightforward and easily understood. Second are cases of what can be called inherently political technologies, man-made systems that appear to require, or to be strongly compatible with, particular kinds of political relationships. Arguments about cases of this kind are much more troublesome and closer to the heart of the matter. By "politics," I mean arrangements of power and authority in human associations as well as the activities that take place within those arrangements.”

We can think about this quite broadly when we consider the politics of artificial intelligence. Remember that Winner’s definition of politics is not just about political parties, regulation, and that kind of thing. Rather, it is about the question of who has power and how they use it. The promise of automating content moderation systems is political in the first sense: it identifies a problem and proposes a technology to settle it. But when you look at what that technology actually does, you realize you’re in a different set of political problems, in which the power disparity between workers in developing countries and the tech companies of rich countries is fully on display. The only reason sending “automation” work to developing countries is even possible is that power disparity.

In general, automation of anything — from shoe manufacturing to automatically creating illustrations — is political. There is a familiar understanding, especially in the tech world, that automation might ultimately cost a lot of jobs, but that it creates new opportunities. If you can save money producing shoes, for example, by automating the shoemaker, you might be able to hire many more people to ship, package, and design those shoes. The cost of making shoes goes down, and the former shoemakers go get retrained to do new jobs. 

Economists Daron Acemoglu and Pascual Restrepo describe a process called “so-so automation,” which is useful to consider when we talk about AI and automation. For Acemoglu and Restrepo, the idea of the automated shoe factory holds up only under certain conditions. In fact, whenever a job is automated, a variety of things happen. I bring this up because it’s central to a big discussion about AI images: whether they put artists out of work. 

Automation, Acemoglu says, does a few different things. One of them is displacement, the elimination of jobs. But there’s also a productivity boost: companies can make things more cheaply, consumers can buy them more cheaply, and so companies make more and customers buy more, which means more people get jobs. That’s called “the reinstatement effect.” And from 1947 to 1987, they argue, that’s exactly what happened: “there was plenty of automation, but this was accompanied by the introduction of new tasks... in both manufacturing and the rest of the economy that counterbalanced the adverse labor demand consequences of automation.”

So the idea that new technology leads to new jobs took hold, because it had been more or less true for 40 years. But then something changed. After 1987, automation created fewer new jobs, because more companies started using automation simply to reduce labor costs. “So-so automation” refers to automation technologies that are focused on eliminating labor rather than increasing productivity. (A toy numerical sketch after the two examples below makes the difference concrete.) 

Example 1: Self-checkout kiosks at supermarkets are not more productive. They simply replace workers, and are therefore cheaper for the store. But those savings are not passed on to customers in the form of cheaper (or more) groceries. 

Example 2: Automated customer service menus, which make it harder to get the information customers actually want.
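
Here is that toy sketch: a minimal numerical illustration, in Python, of the displacement and reinstatement effects described above. Every number in it is invented for the example; none come from Acemoglu and Restrepo.

```python
# Toy model of automation's two effects (all parameters are made up):
# displacement (jobs eliminated) vs. reinstatement (jobs created when
# cheaper goods expand demand and output).

def net_jobs(displaced, cost_saving_pct, demand_elasticity, jobs_per_pct_output):
    """Net employment change under assumed, illustrative parameters."""
    output_growth_pct = cost_saving_pct * demand_elasticity  # cheaper -> more demand
    reinstated = output_growth_pct * jobs_per_pct_output     # more output -> new tasks
    return reinstated - displaced

# Transformative automation: big cost savings drive a big expansion.
print(net_jobs(displaced=100, cost_saving_pct=30,
               demand_elasticity=1.5, jobs_per_pct_output=4))   # => +80.0

# "So-so" automation: barely cheaper than the workers it replaced.
print(net_jobs(displaced=100, cost_saving_pct=3,
               demand_elasticity=1.5, jobs_per_pct_output=4))   # => -82.0
```

The structure is identical in both runs; only the size of the productivity gain changes, and with it the sign of the employment effect.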

We discussed questions of automating art-making in terms of conceptual and artistic practice. But there’s more to the question than aesthetics and concepts. Artists work with images that carry particular meanings, and these systems scramble them into new arrangements. The use of AI itself is a choice, a decision that inscribes a certain kind of meaning and position into the work you make with it. Langdon Winner asked if artifacts have politics. We might ask if AI images are artifacts of a particular technological system with its own political priorities. And for many, using these tools signals an alignment with those priorities.

At the very least, we ought to be aware of the politics in AI-based work, because it influences the reception of the images we make.

Some of these political questions have direct, tangible stakes for the lives of other artists. For example:

  • Will image automation create new opportunities or work for illustrators, photographers and designers?

  • Could AI image tools increase the productivity of illustrators, photographers and designers, or simply replace them? 

  • Do these tools allow for innovation that creates new industries? 

We can speculate, but we don’t know. 

Again, we don’t know what this is going to do to people and the jobs they do. Just as with ChatGPT and other AI technologies, we need to know what we’re doing when these things give us answers: the text is often wrong or manipulative, and the code may not work. And if we over-rely on these tools, we may lose the expertise we’ve acquired that helps us see their mistakes. But will GPT-4 take jobs away from coders? Will DALL-E take jobs away from artists? We still don’t know. 

Part Two: Your Artist is a Human

Art is also a challenging case to compare directly to automation, because so much of what an artist does is their own expression, done for themselves. Training an AI specifically to take that work is, at best, dubious. Take the case of Hollie Mengert, a professional illustrator who has worked with Disney and also makes her own line of illustrations. 

Hollie Mengert’s illustrations (left) and those produced by an AI model trained on her work (right).

In 2022, a Reddit user took just 32 illustrations from Hollie’s website and built a model that could transfer any image into her personal style. We’ve talked about the factor of scale before in things like LAION-5B, where billions of images and hundreds of thousands of artworks are swept into the dataset regardless of copyright or consent. We’ve seen that this information gets into the pictures we generate, sometimes directly, sometimes not. But one of the more egregious cases is when people take the backbone of an open-source diffusion model or GAN and train it specifically to imitate someone else’s style without any permission. 

Mengert said of the results: “As far as the characters, I didn’t see myself in it. I didn’t personally see the AI making decisions that I would make, so I did feel distance from the results. Some of that frustrated me because it feels like it isn’t actually mimicking my style, and yet my name is still part of the tool.”

She wondered if the model’s creator simply didn’t think of her as a person. “I kind of feel like when they created the tool, they were thinking of me as more of a brand or something, rather than a person who worked on their art and tried to hone things, and that certain things that I illustrate are a reflection of my life and experiences that I’ve had. Because I don’t think if a person was thinking about it that way that they would have done it. I think it’s much easier to just convince yourself that you’re training it to be like an art style, but there’s like a person behind that art style.”

HaveIBeenTrained is a website that lets anyone check whether their artwork is part of the dataset used to train diffusion models, at least models that use LAION-5B. This tool, from Holly Herndon and Mat Dryhurst’s Spawning project, also allows artists to opt their work out of that training data, and Stability AI, maker of Stable Diffusion, has agreed to honor these opt-outs when training future models. But as we’ve seen with open-source models, people can create all kinds of malicious tools that ignore these requests. So what can artists do then? 

A tool called Glaze has been created for artists who want to share their work online but are worried about these types of incidents. The authors of the paper and the tool describe these incidents as “style mimicry attacks,” in which “a bad actor uses an AI art model to create art in a particular artist’s style without their consent. More than 67% of art pieces showcased on a popular AI-art sharing website leverage style mimicry.”

  • Glaze is a tool that lets artists inject human-imperceptible noise into their images, noise designed to prevent a machine from learning their style. 

  • “The cloaks make calculated changes to pixels within the images. The changes vary for each image, and while they are not necessarily noticeable to the human eye, they significantly distort the image for AI models during the training process. A screenshot of any of these images would retain the underlying alterations, and the AI model would still be unable to recognize the artist’s style in the same way humans do.” 

Here’s how the tool works. Diffusion models don’t see pictures; they see clusters of pixels. They understand the properties of individual pixels, how they fit together, and what they represent through their links to CLIP. Glaze uses this same machinery to analyze an artwork and see exactly what the machine would read in that image as it is broken down during training.

So if a flower would break down in a certain way, Glaze subtly shifts the pixels in your image — in ways humans can’t see — so that when it breaks down, it reads as something else. You can think of it as superimposing another image’s style onto the existing image. In the paper, the authors describe using Van Gogh as a “cloak,” inserting the style of Vincent van Gogh’s paintings into anyone else’s images. As a result, when a Glazed image is broken down into noise, the clusters of pixels related to style are the ones that resemble Van Gogh, not the original artwork. Your illustration or photograph becomes illegible to the system, which sees similar content but interprets all of the style elements as Van Gogh. 
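
To make the mechanism concrete, here is a minimal sketch of the general idea in Python (PyTorch). This is not Glaze’s actual code: the toy encoder, loss, optimizer settings, and pixel budget are all stand-in assumptions. The core move is the real one, though: optimize a tiny perturbation so the image’s features match a style-shifted target while the pixels barely change.

```python
import torch

# Stand-in feature extractor: in practice this would be the image encoder
# of a text-to-image model; here, a tiny frozen CNN keeps the sketch runnable.
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad_(False)  # we optimize the image, never the network

def cloak(artwork, style_target, budget=0.05, steps=200, lr=0.01):
    """Add a small perturbation to `artwork` so its *features* resemble
    `style_target` (e.g., a Van Gogh-styled copy of the same artwork)."""
    delta = torch.zeros_like(artwork, requires_grad=True)
    target_features = encoder(style_target).detach()
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Pull the perturbed image's features toward the style target...
        loss = torch.nn.functional.mse_loss(
            encoder(artwork + delta), target_features)
        loss.backward()
        optimizer.step()
        # ...while keeping the pixel changes too small for humans to notice.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (artwork + delta).detach().clamp(0, 1)

# Toy usage with random tensors standing in for real images.
art = torch.rand(1, 3, 64, 64)
styled = torch.rand(1, 3, 64, 64)  # in Glaze: a style-transferred copy of `art`
protected = cloak(art, styled)
```

In the real system, the style target comes from running style transfer over the artist’s own image, and the paper bounds the perturbation with a perceptual distance metric rather than the simple per-pixel clamp used here.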

This is a technological solution and, as has already been pointed out, it has technological vulnerabilities: bad actors can work to undo the cloaking. But from a legal perspective, it represents something important. A big defense these companies offer for taking copyrighted illustration and photographic work is that nobody is supervising it: the model is just gulping down all of this data. If a company inserts code to unmask cloaked images, that is essentially an admission that it wants to take data from artists who have not given permission. 

And this poses a really interesting question: if we can embed these masks into images, isn’t that a clear assertion that you want to opt out of training? And if a company undoes it, isn’t it very clearly ignoring the intent of that artist? 

Part Three: Copyright Law

All of this comes down to copyright. Some say that an artist actually has no legal right to exclude themselves from AI scraping. They argue that the machines learn from these images exactly the way people do, and that as long as the images produced aren’t directly copying an actual piece an artist made, copyright doesn’t apply. It is true that you can’t copyright a style, and you can’t copyright subject matter: if I photograph trees, I can’t sue anyone else for photographing trees. If I paint trees blue, I can’t stop other people from painting trees blue. But if you take my exact photo of blue trees and share it, that, of course, is where copyright comes in. 

So what rights do you have, under copyright, to images created by an AI? The answer: not many. Right now, the US Copyright Office does not consider you the author of an AI-generated image, nor is the AI itself an author, so the image is not protected. Let’s dig into this for the final piece of our talk today: when the AI is NOT a human. This matters because, according to the US Copyright Office:

“In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term ‘author,’ which is used in both the Constitution and the Copyright Act, excludes non-humans.”

In this first round of guidance, the Copyright Office doesn’t say anything particularly concrete about AI-generated artworks and whether the work you make with an AI can be protected. But it does point to some interesting case studies that suggest where the office might land. 

The first comes from 1884 (Burrow-Giles Lithographic Co. v. Sarony) and centers on the question of whether a camera was the author of a photograph — which would have made all photographs ineligible for copyright. The Office notes that the case was settled by the courts’ finding that photographs are the work of the author, not the machine, even though the argument was made that one can simply point a camera at anything and make an image. That was actually key: the fact that one has limitless possibilities suggested, in some way, that human agency was involved in choosing what became an image. 

The monkey “selfie.”

Another case came to a close in 2018, after a photographer left a camera on the ground and a monkey approached it and photographed itself while playing with it. This is the actual image — it’s not AI-generated; this is the actual macaque and the picture it took. The photographer published the photo and was sued by an animal rights group, which argued that the monkey had taken the photo and was therefore the copyright holder. Had the suit succeeded, copyright would have been extended to a non-human, but the question was never decided legally. Instead, the case was dismissed because the animal rights group had no legal authority to represent the monkey. Basically, the monkey couldn’t sign any forms handing its legal representation over to the group, so there were no legal grounds for the organization to sue on its behalf. 

The guidance concludes that “to qualify as a work of ‘authorship’ a work must be created by a human being” and that the Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

You could read this as saying that AI is not a creative tool. But it’s not really saying that. The guidance goes on to say that, on a case-by-case basis, some of these works might involve greater degrees of human authorship than others. 

And it offers some of the questions it might ask as AI-generated art comes in to be considered:

  • “Whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”

  • In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.”

So we’ve just looked at two ways in which your AI is a human. First, there are cases when what is called an AI is literally a person, just a person far away from you and hidden behind the interfaces you use. That includes everything from content moderation to startups that simply haven’t figured out their machine learning technology yet and use Mechanical Turk to process your requests. We also looked at cases where machines are replacing humans, and at the history of automation and the claim that automation leads to new jobs — which, we’ve seen, is not a universal truth but a holdover from the pre-1987 economy. Then we looked at the automation of artists: there are artists in the dataset, and many tools are built as if these artists were not people at all but simply brands. We saw a few tools, like Glaze and HaveIBeenTrained, that help protect artists from having their work stolen. And we talked about the first wave of copyright guidance from the US Copyright Office, which makes it clear that AI is not a human and has no rights to its output — but that neither does anyone else, at least not yet. 

And there’s another element of Your AI Being Human that might be worth mentioning: the question of AI and sentience. Is your AI a sentient being? It’s my hope that seeing how these systems work — understanding the fully mechanized processes behind them, of breaking down and drawing images according to information, data, and algorithms — makes it clear that these tools aren’t creative agents. They don’t really make art the way people do, not out of any sense of experience or personal reflection. That doesn’t mean that artists can’t make compelling things with them, and it doesn’t diminish the remarkable leaps forward that these tools represent for visual storytelling and human creativity. But they require humans to make them make things, and in my opinion, we shouldn’t be so eager to take the human out of the AI. In that sense, the AI is human in that we are the ones who do creative things with it. 

Works Referenced:

Tubaro, P., Casilli, A. A., & Coville, M. (2020). The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720919776

Ram, A. (2019). Europe’s AI start-ups often do not use AI, study finds. Financial Times. https://www.ft.com/content/21b19010-3e9f-11e9-b896-fe36ec32aece

Newton, C. (2019, December 16). Google and YouTube moderators speak out on the work that’s giving them PTSD. The Verge. https://www.theverge.com/2019/12/16/21021005/google-youtube-moderators-ptsd-accenture-violent-disturbing-content-interviews-video

Roberts, S. T. (2021). Your AI is a human. In Your Computer Is on Fire (pp. 51–70). The MIT Press.

Winner, L. (1980). Do artifacts have politics? Reprinted in Computer Ethics (pp. 177–192). Routledge, 2017.