Architectures of Violence: an Interview with Caroline Sinders
After the insurrection, I was looking at the online discussion platform of a radical pro-Trump group. If you only looked at the website’s design — rendering the text into gibberish, say — you’d think it was all very normal. The code was stolen from Reddit, a site ostensibly designed for facilitating a good conversation. But once exiled from Reddit for extremist views, the group simply took its architecture somewhere else, and used it to facilitate a whole different idea of what “a good conversation” was meant to be.
The rhetoric was stomach-churning in its violent fantasies, but there was something banal about it all. People were so comfortable planning these things, earning upvotes — small symbolic rewards — for contributing more extreme, more vile, and more violent rhetoric to the frenzy. The architecture, designed to reward contributions, had normalized a race to harm. It wrapped a coup in the simple, everyday architectures of social media interfaces. And that’s what struck me: these sites work just as well for sharing ideas with your local beekeeping group as for plotting to overthrow a government.
In contrast, Caroline Sinders’ new work, Architectures of Violence, is uncomfortable and full of friction. The show “ruminates on harassment, harm and white supremacy in digital spaces and how those spaces are designed.” One piece, the net art work “within the terms and conditions,” is a series of 15-minute films that gather streaming videos from extremist online media sources, presented as layered collages of simultaneous broadcasts on nine topics — white supremacy, gender-based violence, anti-trans rhetoric, covid conspiracy theories, anti-vax content, and more. The films are the results of Sinders’ online harassment research, all found on mainstream platforms through search and recommendations.
That work is deep because it rises from a rigorous research approach: she has held fellowships with both research and art institutions, including Harvard’s Kennedy School, the Mozilla Foundation, Yerba Buena Center for the Arts, and Eyebeam, while her work has been exhibited at the Tate Exchange in Tate Modern, the Victoria and Albert Museum, MoMA PS1, LABoral, and Ars Electronica, and covered in publications including Slate, Quartz, and Wired.
I came across her work in 2017, when she was bringing together art and research to expand the dialogue around the ethical development of AI. “Feminist Data Set” is an ongoing multi-year series of workshops-as-art, created with communities to slow down the pipeline of machine learning systems. That project asked deep questions about each phase of development — such as the gathering and labeling of data, or the selection or production of an algorithm — through a feminist lens for reducing harm.
This interview with Caroline took place in advance of the opening of Architectures of Violence at Telematic Media Arts in San Francisco. The piece, ‘within the terms and conditions,’ was commissioned by The Photographers’ Gallery in London in May 2021.
This conversation has been edited for space, clarity, and flow.
Images of “Architectures of Violence” at Telematic Media Arts in San Francisco, May 2021. Courtesy the artist.
Caroline Sinders: I’ve been thinking about digital and net art as a form of photojournalism and witnessing. As a white woman who studies white supremacy and online harassment, it’s important for me to have a conversation about the really hard parts of the internet.
The work may be more pedantic and direct than traditional fine art, but when you’re trying to combat harm you have to be pretty direct or you run the risk of endorsing that harm. But it’s necessary to have these conversations around the politics of platforms. Art is a good place to do that, because why not have these conversations in the museum? Why not confront politics in the white cube space?
My work is trying to do that and play with the idea of “expanded documentary,” taking a journalism or research ethos to another kind of art form. For me, that’s using ripped or found media, or placing actual data in a place for conversation.
Eryk Salvaggio: To be very literal about this idea of expanded photojournalism, Architectures of Violence feels like you’ve walked into the Internet with a camera and documented what you found there. Memes, screen grabs, and ripped YouTube videos are unusual topics for “photojournalism,” partly because the Web is already a flat space — it’s already images.
You’re taking the Web seriously as a place where we live and things happen. That’s at odds with the common excuses for online abuse — “it’s just the internet, it’s not real.”
So in this show, you’re putting that in a three-dimensional space, and suddenly what we’re looking at isn’t a private world but our shared, social one. People can see a visitor’s reaction. It introduces a kind of negotiation between those two worlds — the one we imagine is private and the one in which we carry out our lives, where we can be shamed for looking without reacting. Does that change the way people see these images?
CS:
I’m definitely interested in uneasiness and making other white people uncomfortable about this. People with dominance ought to feel uncomfortable and sit with their discomfort and challenge what causes that. Some of these ideas come out of my experience in human rights research. That isn’t comfortable research. You’re looking at atrocities out in the world, and so you have to learn how to sit with your own discomfort. In my space of human rights technology, we’re often having a lot of hard, personal conversations about working through inequity and pain.
It also feels like there’s an open sea of content promoting white supremacy in spaces like YouTube or Facebook. It’s there, but with filter bubbles and algorithmic search, it’s often hard to see or find if you aren’t looking for it. Imagine having a map and not being able to see massive regions where something is happening. This work is about saying “it’s here” — not on some hidden corner of the internet, but on the same platform you give your kids to watch videos, where you watch TED Talks or upload makeup tutorials. It’s not the darknet, it’s right here.
ES:
That’s where you got the source material for your video installations, then — mostly from mainstream platforms?
CS:
Some of the videos came from other people’s research, such as Joan Donovan and Brandi Collins-Dexter, who had done work looking at disinformation after the publication of the 1619 Project. Some of the videos follow a stricter methodology than others; the “anti-” videos all share a common rhetoric. But you don’t have to watch much of these videos to understand what they’re talking about.
I was assembling a lot of videos on anti-trans rhetoric when the Substack conversation started happening — where many of these people who had been deplatformed in spaces like Twitter were migrating to Substack, like Graham Linehan. When you watch the videos it’s very clear what the agenda is. They are using words like “gender critical” to hide the extent of their hate speech, but then purposefully misgender people, or spread unsubstantiated claims about transitioning. So I was grabbing those videos as this happened, and it illustrates just how many videos YouTube has about everything.
For the way the videos are presented at the gallery, it made sense to layer them over one another so that they’re almost impossible to see. That was a choice that made sense to me — because you could very easily, accidentally, look like you’re endorsing something when you call it out.
In the Gallery, I want people to think about things like that in relation to terms and conditions. Who decides what is harmful? What fits within those terms and conditions? How does stuff get taken down?
ES:
Right, the videos become a kind of smeary, abstract bombardment of talking heads and fascist chyrons, but what became clear to me is that there is a connection, a homogeneity, between them. There’s a real normalization of a pretty extreme ideology going on out there. But then there’s also an invisibility to it. Which, I imagine, is exactly how newcomers might get lured into white supremacy through a yoga anti-vax video, right? You can’t see those connections when you get to the first one; you don’t know the path you’re setting out on. So platforms can say “we’re going to take down this genre of video, but not that genre,” but they’re all permeating one another, reinforcing one another…
CS:
One of the things Emily Gorcenski and I — she’s an anti-neo-nazi, anti-white-supremacy activist — are trying to do is show how all of these varying forms of harm are so closely related. It’s centered on the idea of white supremacy, even if it doesn’t seem like it at first. So there are other things, like misogyny or homophobia, that maybe get “excused,” because they’re more normalized for some people. Some people draw lines in different places. What we’re trying to talk about is how these things are related, and online they’re especially related. The anti-vax movement is a very particular racial demographic, for example.
But the downstream effects of harm in those videos, where they’re propagandizing their message, are similar to how someone might spread misinformation about COVID. And the policy on these platforms is that they can’t tell the difference between someone expressing concerns about vaccines and someone who is spreading misinformation. But one is seen as a more elevated harm. So you have to look at the power of the person who’s saying it.
They might look at things without that context — without paying attention to how many followers a person has, or the weight that person has in a culture. Thinking about these harms as more interlinked can be helpful. JK Rowling’s opinions are dangerous because of the platform she has and the space she occupies in culture. I’m interested in a conversation about those harms, and in using art to prompt reflection in a way that debating someone really can’t.
ES:
Debate doesn’t cause reflection?
CS:
I mean, it can, it can — but I’m thinking about how you debate with someone you don’t quite know or will never see. Studies have shown that if someone holds a belief, you can’t logic them out of it. You can’t use a logical argument or even an emotional one. So I think of small moments of reflection, which is where art can be really helpful in opening that space. I would hope someone would sit with it and be taken aback, ask themselves what it feels like to have that discomfort, and think about that metaphor and whether there is anything in their life that is too close to it. I think the beauty of the work is that the artist is rarely there — it sits with people differently than how you’d read an article. That’s the power of presenting these ideas, especially really uncomfortable ones.
ES:
What did compiling this work show you?
CS:
There’s a casualness to this brutality, and that casualness is a result of platforms. It’s very easy to upload videos, it’s very easy to post a meme. I’m not saying to limit that access. But it’s important to reflect on the idea of feeds.
So the work became informed by friction — the videos are meant to be overwhelming. Nothing should feel easy — it’s my observation on the current ease of platforms. There’s so much data on big tech platforms. It was easy to get. I’m a UX designer, a researcher and an artist, so I’m looking at functionality, and what I wanted to create was a space where you can’t “zoom out” of this. I like inserting that friction. If you get on these platforms, the biggest source of friction is logging in. After that you can get anywhere.
ES:
So what’s the theory of change that drives this idea of photojournalism, and the way you’re presenting things here as a kind of expanded documentary?
CS:
I think the gallery is a space in people’s lives, a space for interacting with things differently than they do on TV or in journalism. Putting these kinds of conversations into the white cube allows some distance for different kinds of relationships with the subject matter.
ES:
Yesterday I went up to Richmond, Virginia, and saw this work by Paul Rucker, Storm in the Time of Shelter: oversized mannequins wearing Klan uniforms made from a variety of fabric prints and materials. I left thinking about the role of art in the imagination, and how this was a challenging way to have a conversation, in the best sense. The logic of these ideologies, and this propaganda, is that it’s so anti-imagination. Fascist iconography, Klan uniforms, right-wing rhetoric: they’re all designed to restrict our imagination. It has to be a real challenge to pull this out of the gutters and into an artistic environment?
CS:
With white supremacy in particular, we’re not in a place to be “imaginative” with it. There are ways to open it up and explore different slices, centering different kinds of imagination, like Afrofuturism or Indigenous futures. Those are the people most affected by white supremacy, so it makes the most sense to center those groups and those voices, and not white people, in conversations about creating a more equitable future.
Sometimes we hear “white supremacy” and think it’s a specific thing, and it is, but it’s also not. It encompasses so many actions and things because it’s a modality of oppression, something a person can be committed to that takes many forms.
With Feminist Data Set, I would explain to people that we’re going to go out and look for intersectional, feminist data, and people would think that was going to be easy. But it’s really complex and complicated, especially because the data doesn’t have to be about intersectional feminism itself. So then, what is intersectional feminism in practice when it comes to data? It’s the structure and descriptors of how things are gathered.
It’s like a subliminal thing, and it’s harder to focus on. You have to pick a thing — something in your community, for example, gentrification or corruption. You have to pick something specific and contain it, and then you can read and see if the data you find is intersectional or not, and how you might change it.
So back to white supremacy: white supremacy is everywhere and in everything. So how do you offer a modality that is the opposite of that? You need a theory that is as big and as wide. I don’t think I’m the person to imagine a future without white supremacy, because that isn’t a conversation that should center my voice. But I do think it’s important for art to provide a space for conversations that make us uncomfortable. I want to make art that makes white people uncomfortable and makes them sit with that discomfort, a mirror that reflects the ubiquity and the casualness of it.
That piece you describe, I’d want to see, but it sounds like we’re looking at something created from the mundane. KKK uniforms are made out of fabric; someone gets an order every year and makes the emblems. Some company is very okay with making hundreds of patches and flags and sending them to the KKK’s office. Somebody knows their addresses — and those people know exactly what they are making and who they are making it for.
In the Third Reich, you had blueprints. Architects sat down and made all these buildings: barracks, cafeterias, and gas chambers. That’s something I want people to think about. There are many hands involved in the dissemination of white supremacy, and some of those hands are at YouTube and Facebook. Some of those hands are shaping the way we talk about this, and some of those hands are afraid to talk about this.
I think it’s important to sit in that ubiquity and acknowledge it. But I can’t be part of imagining that alternative future. I want to make space for it, but I’m not the artist leading that conversation. But people forget how broad white supremacy is. It sounds like something so specific, and it is, but it’s also subliminal and contextual and hidden in many other actions, and I think people don’t think about this enough.
ES:
I want to talk about how wrapped up irony has become in forms of harm — five-minute-long rants on YouTube about beating women followed by “I’m just kidding,” or the common refrain of online harassers that people just need to take a joke. The culture of the Web, for a long time, seems to have been positioned in opposition to what someone might label earnestness or sincerity...
CS:
I want to put more earnestness on the walls. Anything you put out there, people are going to take as reality. Retweets kind of are endorsements; a retweet is a nod toward something you are thinking about. So when people make jokes like that, there is something telling in the fact that you think it’s funny. You probably don’t think it’s a reality because you don’t talk to women, or you view the suffering as less valid because you don’t respect women. And both are warped views of reality.
ES:
There was this phase of ironic racism, where you could wave a flag that said you knew better than to say the thing you just said. It seemed like a really clumsy epoch in all this; even mainstream things like The Office seemed to be doing it, this sort of cringe comedy that drew its punchlines from the idea that we all knew better. And the irony defense came out of that...
CS:
A lot of that got learned in the culture of 4chan. It’s not just the far right, it’s part of internet trolling. Whitney Phillips talks about this... and it’s hard to say “where it came from.” But my read is that we’re constantly shifting social norms. I remember when people would use words for the mentally handicapped, real casually. And then it started to come up that you shouldn’t say that — don’t stigmatize or “other” people. And we update the language we use, learn to use new words, and stop using the old ones. We had identity politics back in the ‘90s. But in early internet culture, we didn’t have the language we have now to talk about power. So people would just say outrageous things to be funny. That was a style. This humor came out of recognizing these things at a time when dominant white culture didn’t talk about privilege and power. We should have, we could have, but we didn’t. And so this humor was where people went — escalating, escalating, escalating the outrageousness, but at the same time you’re also decontextualizing, decontextualizing, decontextualizing. And when it gets away from you, you come back to saying it was all just a joke.
ES:
So how do platforms deal with this irony defense, provide a platform for varieties of speech, and handle alt-right misogynist YouTubers advocating for violence against women? We talked about the Web as a real place, so where are the architects?
CS:
Platforms do think about satire, and in the US they take a very First Amendment interpretation. We don’t have antisemitism laws in the US, so it’s allowed to persist. After World War II, Europe got these laws — they still have freedom of speech, right? — but the US doesn’t have them. And platforms don’t provide moderators with training or support to do their job well. They have to hit numbers, they’re looking at traumatic stuff, they don’t have counselors, they’re not paid well. And they have to think about free speech and intentionality.
For example, legally, to prove harm, you have to demonstrate to a court how likely a threat is to be real. And if you don’t know who the person is, this is considered a lower likelihood. Now, platforms could create policies against this stuff, but policies come from laws.
Take swatting [pranking the police into raiding a house with a SWAT team]. There was a death from prank swatting, and that changed the conversation. And police — who are in a position of power and able to influence policy when they are being harassed — were able to take steps against doxing and work toward criminalizing prank swatting.
ES:
The people experiencing harassment and discomfort there were in a position of power, so policy got changed. But that shows it can be done.
CS:
Right. And some politicians are taking this stuff seriously. There are good ways to acknowledge downstream effects, and these can be written into bills. If prank swatting becomes a felony, then eventually you’d see platforms start to change or at least acknowledge it. So how do you leave space for politicized free speech, which may sound like a call for harm against a politician? Nobody handles this very well, and it all compounds and adds to the problem.
Content moderators are not the problem; they’re part of an equation of problems: the rules of regulation, the nature of support, the languages they speak. We don’t know how YouTube is prepping people internationally on political movements. But look at Facebook’s involvement in the genocide in Myanmar — we have clear proof of harm there from the platform. We have to look at all of these pieces as a part of the same equation; the “mathematics” here is so many complexities of interlocking and intertwined harm.
What I often reflect on is: what does it mean to exist in a platform that is larger than any nation-state that has ever existed?
An edited version of this piece appeared in the Cybernetic Forests newsletter.