Critical Image Synthesis:
Glitching the AI Imaginary

Thanks for attending the Glitching the AI Imaginary: Critical Image Synthesis workshop! This page contains a list of additional resources worth checking out, including some of the tools and readings we discussed. If you want to explore more you can also check out my free, online Critical Topics in AI Images course, or subscribe to my newsletter (cybernetic forests).

 

Resources

Tools

Use these tools thoughtfully! They are listed not because they’re particularly ethical but because they give artists useful control for finding new workflows beyond the prompt-and-response format.

  • Image generation: ComfyUI is an open-source tool suite that you can run in Google Colab or through a paid browser app called RunComfy. It lets you use a variety of open-source models and train your own LoRAs to fine-tune them. Most derive from Stable Diffusion, which was trained on images taken from the web without consent. However, as more palatable models become available, they can be swapped into the Comfy pipeline simply by selecting them from a list. Let’s hope someone makes them.

  • Sound generation: The data sourcing on every sound and music generation platform is dubious. Proceed with caution. Most of my noise experiments were done through Udio, which allows you to access training data directly. That means you can experiment with the model in ways you can’t on other companies’ platforms, which sort your prompts into more easily recognizable genres. Udio also exposes quite powerful parameters in its advanced mode.

  • Video generation: Runway’s Gen-3 models are accessible, though control is somewhat limited. Runway remains ethically dubious in its data sourcing, but its platform is oriented toward digital artists and offers a variety of additional AI tools for experimentation.

  • Data sourcing: No perfectly trained AI model for images, sound, or video exists. Spawning’s Source.Plus is an exciting project built with artists Holly Herndon and Matt Dryhurst that aims to create more ethical, transparent datasets and potentially even a royalties system for artists who share their data on their terms. Worth keeping an eye on.

More Workshops!

  • For in-depth image synthesis workshops from a technical, hands-on perspective, you can’t beat Derrick Schultz. If there’s an open class coming up, take it! (Link)

  • If you’re more interested in hacking physical objects, some of the best introductory electronics-hacking courses come from Dogbotic Labs, which teaches online from its base in Berkeley, CA.

More Questions?

  • You can send me an email anytime you want to chat. Just use the contact form.


Readings

Critical Image Synthesis was coined by UK academic Richard Carter to describe the use of generative AI as an interrogator rather than a producer: that is, using the tools to discover and reveal the biases, system infrastructures, and cultural logics embedded in these systems.

That paper cites my own paper, How to Read an AI Image, which is an introduction to examining and interpreting AI-generated images.

In terms of theory and “the unimagining of AI,” I would point you to my essay in Tech Policy Press, Challenging the Myths of Generative AI. It discusses some of my thinking about the “AI imaginary,” why these myths are pervasive, and why artists ought to challenge them.


Artwork

My work referenced in the talk:

Artists worth following: