Dance Like (2025)

In my early 30s I came to realize I had lived through severe bouts of depersonalization growing up. As a child and a teenager, I never felt that my body was mine. I somehow never knew or trusted what my body looked like. If I looked in a mirror, the person I saw would be more or less identical day to day, but my perception of that body – how I made sense of it, the story I told about it to myself – would vary wildly. I was radically disconnected from whatever my body was, and constantly seeking confirmation – not compliments or insults, though I would take them. Mostly I just needed someone to tell me: "this is what you look like."

I came to that realization after starting to run and losing nearly 140 pounds in my late 20s. But this depersonalization – this failure to grapple with the reality of having a body at all – is part of why I had those 140 pounds to lose in the first place. I didn't know I was overweight. Looking at my body never made sense to me, never felt like who I was. Unlike other forms of depersonalization, I was firmly situated in my mind and my thoughts. Everything else attached to me was weird.

I think this has cultivated a heightened empathic response to bodies. I feel it when I watch other people dance, I feel it in crowds when I am dancing. I feel it when I am in the water. In my time in hot springs, I learned to focus on the sense of my body's submersion into that larger body, the water's temperature matched to mine so that my perception of my body seemed to reach beyond the surface boundary of my skin to the edges of the stone bath.

Dance-Like is a set of lyrics I wrote in May 2024. I produced a version of this song back then, but recently picked it up again to try out a new sound synthesis tool. The resulting song is the basis for the piece below, alongside shots of bodies rendered by OpenAI's Sora using my free test credits as I sought to understand how Sora works.

When I watch these videos of bodies in water – a subject that I am, for whatever reason, clearly drawn toward – I realize that not everyone will see them as I do. I am more fascinated than repelled, because in some ways they represent the abstract perceptions of my own body. They are an abstract, body-like mass, ever-changing, indiscernible. They don't feel alien to me; they feel familiar.

The lyrics, positioned as a command and response from the imagined AI-generated vocalist – a verbalization of how optimized data "sees" the world – are a call to the listener to embrace the disembodiment of that system: to depersonalize yourself from the experience of music, to orient yourself to music as the system did in its pattern-finding.

It was meant as a way to think through what, exactly, we are listening to when we listen to AI music. But I think it is also quite personal: it’s a song about me, when I only allowed myself to be a mind, a kind of girding against the anxiety of childhood trauma. It reminds me that AI is, in many ways, a model of a traumatized mind.

Still from "Dance-Like," Music Video, Eryk Salvaggio 2025: Rough shape of legs against a backdrop of blue water with "Dance Like You Were Never Born" written on the screen in all caps.

Dance-Like
Dance like you don’t have a body
Dance like your heart can’t race
to the speed of the beat
Dance like you were never born
Dance like you could never feel
Unaware of all sensation
Unaware of all emotion
Dance-like, you don’t have a body
Dance-like, you don’t have a body
Dance-like, you could never feel
Anything at all

Having a Body

I think of the scenes in this video not as a condemnation of any system's inability to produce a "correct body," but as an optimistic reminder that there is no universal definition of a body to be produced. I empathize with these bodies. Somehow they feel liberating.

There are many individual experiences of having and being a body. While this video is a critique of AI systems and their logic, I also want to critique the critique. AI's distance and disembodiment lead to a total breakdown in how it predicts human behavior. In some sense, this video is a visualization of that breakdown, rendered as impossible movements.

AI treats bodies as supplemental to data. What it renders is a failure to align with corporate, technocentric visions of bodies. Instead, it produces something radical: unconstrained, unconfined bodies, capable of being anything – not because the system is liberating but because the system's attempt at constraint has failed.

The implicit condemnation of these outcomes is not that the companies are failing in their tasks and should be mocked for it – though any failure of AI to meet corporate goals is fine with me. The failure, as far as I am concerned, is that these companies are trying to "fix the bodies" to more perfectly align with... what, exactly?

Joanna Zylinska's "Diffused Seeing" challenges the critique of AI's lack of normative bodies – that is, the arguments where people point out the "wrongness" of hands or faces to show AI's lack of discernment. In it, she uses my own hand and kissing examples (while acknowledging that the aim of my critique is away from normativity, i.e., I don't want the models to create endless stereotypes of socially revered bodies). The issue of AI "wrongness" is indeed deeper and more complex.

As she writes:

Castigating a generative model for getting things ‘wrong’ thus only ever makes sense if we are to assume that the primary function of generative AI technology is to produce verisimilitude, i.e., to create more of what we already have. Yet what if, rather than seeing this technology as premised on delivering an accurate response to a natural-language-defined prompt, we saw them as conversation pieces, provocations – or, indeed, prompts – opening up a dialogue, with us and for us, on the fundamental incommensurability between the word and the image, between the world and its representation?

Admittedly, I do assume this goal of verisimilitude, because we operate in an environment where diffusion models are run by companies clearly targeting a corporate ideal: verisimilitude to the most socially valued forms of bodies.

But I take Zylinska’s point, because I often encounter the idea that the glitch is inherently a critique of the system's capacities, rather than the system designer's goals. Glitches, because they are rooted in technical specificities, are often read as a critique of technology rather than the social aims of that technology. But the glitch can also point to those aims.

Zylinska and I likely agree here. In my work, the glitch is a source of hope, and noise is a space of overwhelming possibility. The glitch is not just an "error"; it is a crack in the wall, opening up a sliver of access to some other side of things, just as the Freudian slip is not a "mistake" but an insight into the underlying thought. To me, that is what the glitch is: a moment where the guardrail that protects our assumptions slips, and something under the surface is momentarily made accessible.

Still from "Dance-Like," Music Video, Eryk Salvaggio 2025. Rough shape of blue legs against an orange-tan backdrop with "Dance Like You Don't Have a Body" written on the screen in all caps.

It's not radical to suggest that the failures of generative AI systems are the most interesting things about them. Because these are machines aimed at averaging the vast sea of corporate and personal images, new vocabularies don't emerge unless we bend the system against its will, like poets working within and against the constraints of language. Failures and glitches in these systems reveal the logic embedded into their processes.

They are also problematic: the failure to generate bodies of color, for example, is not a glitch per se; it's a bias of the data manifesting in the output of the system, a visualization of the bodies prioritized by that dataset and, therefore, by the people who built the system this way. The images you see here are what came out of Sora: when it comes to race, Sora is working in perfect alignment with what its designers prioritized. These bodies take a radical variety of forms, but all of them are white.

I don't want to pretend, by deliberately prompting for diversity, that Sora is not producing biased content. This is a document of those biases – hopefully for an informed audience that understands the absence as a self-inflicted critique by the system rather than as my "direction." But I admit that's a nebulous hope. It's a reminder not to overstate, or overvalue, the disparate representation of bodies by AI systems. There is no such thing as a fully liberatory corporate technology.

Yet, I would argue that the distorted bodies we see as evidence of failing systems should be reconsidered: are they failures, and if so, what, or who, are they failing? If a hand or face is distorted, are we not relying on our human sense of "averaging" in order to declare it so? In other words, how do we know what a hand looks like, and when we look at hands that do not match that ideal, are we mocking the AI system – or are we mocking the real-world, human hands that do not look like ours?

If we understand that the constraint of language is what we are up against when we write expressively, then we should take note: the constraint of the AI model, the averages it has arrived at, is what the AI artist ought to be working against. When there are ways to escape those constraints, they are worth exploring.

I have worked with hands in AI since the beginning of my interest in this field: in late 2019, I trained a GAN on thousands of photographs of my partner's hands, unaware that the resulting images would be distorted. In early 2023, I started sharing glitched diffusion outputs in a series called "Gaussian Noise, Human Hands," which celebrated the glitches of these systems as ways out of the constraints of "mean images" rather than condemning them.

None of that critique should be confused with a suggestion that I want AI to generate super-median stereotypes of bodies, or that meeting such a goal is a criterion for a "successful" system.

I think the point of this video could easily be misunderstood as "Ha, ha, look at these terrible bodies!" I don't want to lean into that, though admittedly, that may be what happens here. But I believe there is something more interesting to say about these bodies.

One of the things that has captivated me about the conversations I've been able to have about dance, robotics, and AI has been the awareness of bodies: of mirror neurons and their failures to fire, of which bodies are met with empathy and which are not, and of the uncanny valley effect – where we connect with, and where we distance ourselves from, other bodies.



Eryk Salvaggio