Critical Topics: AI Images, Class 15

The Opposite of Information

This is an additional class, added from a lecture delivered to Masters Students at Elisava Barcelona.
It has been integrated into the AI Images class as of August 2024. No video is available yet.

This class is going to look at misinformation, photojournalism, and AI. We're going to focus on a particular photograph, from a particular time, and trace the rise of misinformation and Photoshop. Unfortunately, this means we are going to talk about war, and in particular, a war between Israel and Lebanon.

I understand if, with recent events, this is a hard thing to talk about dispassionately. In this conversation, I really don’t want to imply anything about current events. If anything, the conclusion I am going to come to here is: all of this is deeply specific and contextual.

That said, if you don’t want to dwell in the headspace of a war zone, I understand if you want to skip this talk. Otherwise, let’s go back to 2006.

Ben Curtis, AP, 2006.

The image to the right was taken by the AP photographer Ben Curtis in 2006, during Israel’s war against Hezbollah in Lebanon. This is the image and the caption:

“A child’s toy lies amidst broken glass from the shattered windows of an apartment block near those that were demolished by Israeli air strikes in Tyre, southern Lebanon, Monday, Aug. 7 2006. Israeli bombs slammed into a complex of buildings flattening four multi-storied apartment blocks, including the one that had been the target of Saturday’s Israeli commando raid, whilst a civil defense ambulance was hit in the rear and slightly damaged with emergency workers who had gone to the bomb site to search for bodies being forced to flee.” (AP Photo/Ben Curtis)

Curtis was a war reporter and this image was one of nine he transmitted that day. He'd traveled with a number of other reporters in a press pool as a way of ensuring collective safety, and had limited time on the ground. He described the city as mostly empty, and the apartment building that had just been bombed as having been evacuated.

Soon after that, the photo's popularity led other photographers to start seeking out similar images of toys discarded beside exploded apartments. As more of these images were published, many in the US began to ask whether the photos were being staged: had the photographers placed the toys into the frame?

It’s definitely true that, after this image was published, there were a large number of similar photos shared in the press pool. To the right, you see an assortment of those, mostly from Reuters.

The documentary filmmaker Errol Morris talked to Curtis at great length and wrote about the controversy surrounding the photo.

Morris raises the point that the photo Curtis submitted didn't claim anything about victims. Seeing an image of a toy amongst debris, readers could infer that children were killed in the building. Curtis notes that the caption describes only the known facts: it doesn't say who the toy belonged to, and it doesn't say that there were bodies. The image of the toy, and how it was contextualized, was extremely slippery.

Later, the image would be paired with commentary condemning both Israel and Hezbollah. Some showed it as evidence of Israeli war crimes; others suggested it was evidence of Hezbollah's use of human shields.

Again, I will acknowledge that we are talking about disinformation in the midst of a disinformation crisis. I don’t select this example to make any kind of commentary about the current situation, and there are certainly people who could address that situation better than I could. 

But 2006 marked an important turning point in the history of disinformation and digital manipulation, because it was also the conflict in which images published in major newspapers were found to have been heavily photoshopped in misrepresentative ways.

An image of smoke pillars over Beirut, and an image submitted to the press pool by Adnan Hajj in 2006.

Reuters photographer Adnan Hajj used the Photoshop clone stamp tool to create additional plumes of smoke in the image and to darken them. He also submitted images in which he copied and pasted multiple fighter jets where only one had been, and showed multiple missile trails where only one missile had been fired. Hajj maintained that he was merely cleaning dust from the images, but the degree of alteration went far beyond dust spots, with entire buildings and planes copied and pasted to show more dramatic damage than actually took place.

One of the telling things about the photograph Adnan Hajj submitted is how obvious the editing appears to us today. That it made its way through editorial oversight, photo selection, and page design says something about the newness of the technology and our inability, at the time, to perceive its artifacts. Nobody was looking for the things we now know are hallmarks of photo editing: the repetition of plumes at the top left, for example. These are jarring to us today because we know what to look for; we know the signals of Photoshop usage.

Below is an image being sold by Adobe's Stock Photo service. It is clearly an image generated by AI, likely with Midjourney. And it, too, echoes the photograph Ben Curtis took in 2006.

This time, the image is being sold in the context of Israel’s war in Gaza, with the image of a teddy bear on the streets of a bombed city being presented when you search Adobe stock photographs for pictures of Palestinians. A number of AI generated images are for sale — here’s one purporting to be a Palestinian refugee. 

Misinformation and disinformation are so often associated with AI images and deepfakes. But misinformation didn't start with AI. A good place to start, then, is to ask what, exactly, information is.

An AI-generated photograph of a Palestinian refugee being sold on Adobe’s Stock Photo service.

Communication in the Presence of Noise

Claude Shannon is a safe bet if you are looking to identify the founder of information science as we know it today. Shannon was the one who initially pitched the idea of text scanning and prediction. He also had a model of how communication worked: an element of his job at Bell Labs, the research arm of the American telephone network, where he did much of his work in the 1940s.

Shannon devised a model of what communication was, and he articulated it in a paper, Communication in the Presence of Noise.

Information starts from a source. It moves from that source into a transmitter. Shannon was looking at telephones, so this was quite literal. You have something you want to say to your friend. You are the information source. You pick up a device (the telephone, an email, a carrier pigeon) and you use that device to transmit the message. Along the way the signal moves into the ether between the transmitter and the receiver. That's when noise intervenes.

For Shannon, noise is the probability that one symbol will change to another symbol while being transmitted across a channel.

Noise is the opposite of information, and there are a lot of ways that noise can be introduced to a signal. For Shannon, this went beyond telephones: it could be fog obscuring a flashing light meant to guide an airline pilot, or a degradation of signal, such as a glitched image corrupted somewhere between a digital camera and our hard drives. So noise started as hiss over the telephone line, but the concept soon expanded to mean basically anything that interferes with the message from the information source arriving intact at its destination.
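To make that definition concrete, here is a minimal sketch of my own (not anything taken from Shannon's paper) of a binary channel, where every symbol is a 0 or a 1 and noise is simply the probability that a transmitted symbol arrives as the other one. At a flip probability of 0.5, the received message tells you nothing about what was sent: the channel carries no information at all.

```python
import random

def transmit(message: str, flip_probability: float) -> str:
    """Pass a string of 0s and 1s through a noisy channel.

    Each symbol independently arrives changed into the other symbol
    with the given probability: Shannon's "noise."
    """
    received = []
    for symbol in message:
        if random.random() < flip_probability:
            received.append('1' if symbol == '0' else '0')  # noise: the symbol changed
        else:
            received.append(symbol)                          # the symbol arrived intact
    return ''.join(received)

source = '1010110010101100'        # the information source's message
print(transmit(source, 0.0))       # no noise: the message arrives intact
print(transmit(source, 0.1))       # light noise: a few symbols flip
print(transmit(source, 0.5))       # maximum noise: the output is pure chance
```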

Shannon generalizes this idea as one symbol being changed into another. And, at the end of this talk, I am hoping you'll watch a video by Charles and Ray Eames, made for Bell Labs to share Claude Shannon's ideas of information with a wider audience. It's linked to the right. It's interesting for two reasons: first, it shows this really foundational idea directly as it was intended by Claude Shannon, for the lab that paid him to come up with it. Second, it's Charles and Ray Eames, and so, for designers and artists, it's a really fascinating bit of history.

For now, the diagram is, I think, a useful way for thinking about dis- and mis-information. You have a message that arrives in the head of an observer: someone has seen something and wants to communicate it. It goes into the transmitter, and this has to re-encode that idea in order to be transmitted. It goes into the space between the transmitter and receiver, and then on to its destination. But in between, things can happen. 

Let’s look at this through the lens of social media platforms. 

You see an event happen. You want to share it. This could be a major news event, or a simple life event. You send it to the transmitter: your phone is the transmitter, and Facebook is the receiver. But Facebook is then going to interpret it in all kinds of ways. It's an extra step in that communication, where noise can be introduced.

So arguably, applying this lens to algorithms, we can see how the Facebook filter is already acting upon your transmission: it is modifying that signal, assigning it weights and deciding who will see it and in what order of priority. Facebook would tell you this is an attempt to separate signal from noise: to make sure audiences see only what they want to see. But the same filter also serves to block or suppress certain kinds of content. On Twitter, we now see this applied when people post links to certain domains: a post is deprioritized and its reach diminished if it links to the Twitter competitor Substack, for example.
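None of the platforms publish their actual ranking systems, so the following is a purely hypothetical sketch, with invented field names and weights, of the kind of filter the diagram implies: a step that sits between your transmission and its receivers and reweights the signal before anyone sees it.

```python
# A purely hypothetical feed filter: not any platform's real algorithm.
# It illustrates the ranking step that sits between transmitter and receiver.

COMPETITOR_DOMAINS = {"substack.com"}   # example drawn from the Twitter/Substack case

def rank_post(post: dict) -> float:
    """Assign a visibility score to a post before any audience sees it."""
    # Predicted engagement stands in for "what audiences want to see."
    score = 1.0 * post.get("predicted_comments", 0) + 0.5 * post.get("predicted_likes", 0)

    # Heated, argumentative content predicts engagement, so it floats upward;
    # quiet, mundane posts sink.
    if post.get("predicted_argument", False):
        score *= 2.0

    # Links to competing platforms are deprioritized and their reach diminished.
    if post.get("link_domain") in COMPETITOR_DOMAINS:
        score *= 0.1

    return score

posts = [
    {"text": "photos from my walk", "predicted_likes": 3, "predicted_comments": 0},
    {"text": "furious hot take", "predicted_likes": 40, "predicted_comments": 25,
     "predicted_argument": True},
    {"text": "new newsletter post", "predicted_likes": 40, "predicted_comments": 25,
     "link_domain": "substack.com"},
]
for post in sorted(posts, key=rank_post, reverse=True):
    print(f"{rank_post(post):6.1f}  {post['text']}")
```

Even in a toy version like this, the mundane post sinks, the argument rises, and the competitor's link all but disappears: the reweighting itself is the noise.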

Meanwhile, synthetic images operate in this space. An AI-generated photo of Donald Trump's arrest is an example of false information, a false depiction of an event. This is part of the noise that we encounter in the visual space. In one category are images that depict false events but are not presented as true: the Trump arrest images were shared with full transparency, for example. But of course, as they moved through the network, noise was introduced: the captions were removed.

It isn't just deepfakes that create noise in the channel. Labeling real images as deepfakes introduces noise, too. An early definition from Joshua Tucker and colleagues in 2018 described disinformation as:

“the types of information that one could encounter online that could possibly lead to misperceptions about the actual state of the world.”

It’s noise — and every AI generated image fits that category.

Deepfakes and Epistemic Trust

A paper published at the outset of the Russian invasion of Ukraine explored the question of deepfakes in the war.

AI generated images are the opposite of information: they're noise. Even benign examples contain no shred of truth. The danger they pose isn't so much what they depict. It's that their existence has created a thin layer of noise over everything, because any image can now be dismissed as an AI-generated fraud. To achieve that effect (and for some actors, it is a goal) these images need the social media ecosystem to do their work.

Another example, from early in the Ukraine war, was the use of a synthetic image, in this case not AI but video game footage: the "Ghost of Kyiv," a clip that purported to show a legendary Ukrainian fighter pilot shooting down Russian planes. The footage was lifted from a video game.

We also saw footage of Zelensky surrendering circulated online. What's interesting here is that Ukrainian officials knew this video would be shared and pre-bunked the misinformation campaign: essentially, they announced in advance that Russia would produce a fake video of Zelensky surrendering. Then, when the video appeared, many people already knew to expect it. There's some interesting analysis of this in the paper, too, and something that comes up often. Media reports will often say, "the fake image was shared x million times." And yet, if you look at those shares, many millions of them come from people pointing out that it is fake. In other words, these images get circulated, but not always at face value.

The problem with this is that anyone can then dismiss an image as faked, even if it isn't. We can think about this, again, in terms of introducing noise into the signal of media. It isn't just deepfakes that create that noise. The recontextualization of real images as deepfakes adds significant layers of noise to the signal, too.

The response can be more dangerous than a fake image. Some may leverage the threat of deepfakes to justify taking over or restricting access to social media, or even traditional media channels, as a way of reducing noise. More broadly, the entire idea of documentary evidence is being systematically undermined. Everything could be a deepfake, and so nothing should be trusted. In essence, the response to deepfakes, paired with poor moderation strategies on social media sites, has been to dismiss everything as noise. This makes it impossible for signals to break through in meaningful ways.

In the absence of clear signals, in the absence of clarity and reliable information, speculation fills the gaps. And so you see how very quickly conspiracy theories start to spread: that the entire war itself is a deepfake, being shown on social media. 

The epistemic threat of deepfakes is really pretty simply put by Twomey et al — 

“Deepfakes reduce the amount of information that videos carry to viewers.”

“As deepfakes become more prevalent, it may be epistemically irresponsible to simply believe that what is depicted in a video actually occurred. Thus, even if one watches a genuine video of a well-known politician taking a bribe and comes to believe that she is corrupt, one might not know that she is.”

Essentially, the idea is that anything transmitted from any source is not reliably carried through to you. The entire apparatus of communication then becomes challenged. 

Discourse Hacking

For about two years in San Francisco my research agenda included the rise of disinformation and misinformation: fake news. I came across the phrase “discourse hacking” out in the ether of policy discussions, but I can’t trace it back to a source. So, with apologies, here’s my attempt to define it.

Discourse Hacking is an arsenal of techniques that can be applied to disturb, or render impossible, the kind of meaningful political discourse and dialogue essential to resolving political disagreements. When even the possibility of dialogue is undermined, the population becomes more alienated, unable to resolve its conflicts through democratic means. That population is then more likely to withdraw from politics, toward apathy or toward radicalization.

As an amplifying feedback loop, the more radicals you have, the harder politics becomes. The apathetic withdraw, the radicals drift deeper into entrenched positions, and dialogue becomes increasingly constrained. At its extreme, the feedback loop metastasizes into political violence or, in the case of vulnerable democracies, collapse.

Fake news isn't just lies; it's lies with true contexts. It was real news clustered together alongside stories produced by propaganda outlets, until eventually any reporting could be labeled fake news and cast immediately into doubt. Another technique (and this is perhaps where the term comes from) was seeding fake documents into leaked archives of stolen documents, as happened with the Clinton campaign.

The intent of the misinformation campaigns that were studied in 2016 was often misunderstood as a concentrated effort to move one side or another politically. But money flowed to both right- and left-wing groups, and the goal was to create conflict between those groups, perhaps even violent conflict.

It was discourse hacking. Russian money and bot networks didn't help, but they weren't necessary. The infrastructure of social media ("social mediation") is oriented toward the amplification of conflict. We do it to ourselves. The algorithm is the noise, amplifying controversial and engaging content and minimizing nuance.

Expanding the Chasm

Our communication channels can only do so much, in the best of times, to address cycles of trauma and the politics they provoke. Whenever we have the sensation that “there’s just no reasoning with these people,” we dehumanize them. We may find ourselves tempted to withdraw from dialogue. That withdrawal can lead to disempowerment or radicalization: either way, it’s a victory for the accelerationist politics of radical groups. Because even if they radicalize you against them, they’ve sped up the collapse. Diplomacy ends and wars begin when we convince ourselves that reasoning-with is impossible.

To be very clear, sometimes reasoning-with is impossible, and oftentimes that comes along with guns and fists or bombs. Violence comes when reason has been extinguished. For some, that’s the point — that’s the goal. But that point isn’t at the stage when you feel frustrated by an online argument.

The goal of these efforts is not to spread lies. It's to amplify noise. Social media is a very narrow channel: the bandwidth available to us is far too small for the burden of information we task it with carrying. Too often, we act as though the entire world should move through its wires. But the world cannot fit into these fiber optic networks. The systems reduce and compress the signal into something they can manage. In that reduction, information is lost. The world is compressed into symbols of yes or no: the possibly-maybe gets filtered, the hoping-for gets lost.

Social media is uniquely suited to produce this collapse of politics and to shave down our capacity for empathy. In minimizing the "boring" and mundane realities of our lives that bind us, in favor of the heated and exclamatory, the absurd and the frustrating, the orientation of these systems is closely aligned with the goals of discourse hacking. It's baked in through technical means. It hardly matters whether this is intentional or not: The Purpose of a System is What it Does.

Deepfakes are powerful not only because they can represent things that did not occur, but because they complicate events which almost certainly did. We don't even need to believe that a video is fake. If we decide its authenticity is beyond determination, it is lost as a shared point of reference for understanding the world and working toward a better one. It means one less thing we can agree on.

But people use images to tell the stories they want to tell, and they always have. Images — fake or real — don’t have to be believed as true in order to be believed. They simply have to suggest a truth, help us deny a truth, or allow a truth to be simplified.

Pictures do not have to be true to do this work. They only have to be useful.